Administration and Deployment Guide | SUSE Enterprise Storage 4

In this part of the manual you will learn how to start or stop Ceph services, how to determine cluster state, how to work with CRUSH maps, and how to manage storage pools. It also includes advanced topics, for example how to manage users and authentication, and how to manage RADOS Block Device snapshots.

Operating Ceph Services

Ceph-related services are operated with the systemctl command. The operation takes place on the node you are currently logged in to. You need to have root privileges to be able to operate on Ceph services.

Starting, Stopping, and Restarting Services Using Targets

To facilitate starting, stopping, and restarting all the services of a particular type on a node (for example all Ceph services, or all MONs, or all OSDs), Ceph provides the following systemd unit files:

  ceph.target
  ceph-osd.target
  ceph-mon.target

To start/stop/restart all Ceph services on the node, run:

  systemctl start ceph.target
  systemctl stop ceph.target
  systemctl restart ceph.target

To start/stop/restart all OSDs on the node, run:

  systemctl start ceph-osd.target
  systemctl stop ceph-osd.target
  systemctl restart ceph-osd.target

Commands for the other targets are analogous.

Starting, Stopping, and Restarting Individual Services

You can operate individual services using parameterized systemd unit files, for example:

  ceph-osd@.service
  ceph-mon@.service

To use these commands, you first need to identify the name of the service. See Section 9.3, "Identifying Individual Services" to learn more about service identification. To start/stop/restart the osd.1 service, run:

  systemctl start ceph-osd@1.service
  systemctl stop ceph-osd@1.service
  systemctl restart ceph-osd@1.service

Commands for the other service types are analogous.

Identifying Individual Services
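The target and instance naming conventions above can be captured in a small helper. This is a dry-run sketch: it only echoes the systemctl commands it would run, so it can be tried anywhere; on a Ceph node you would replace the echo with the real systemctl invocation.

```shell
#!/bin/sh
# Dry-run sketch of the systemctl commands for Ceph targets and
# parameterized services. Unit names follow the systemd unit files
# described above (ceph-osd.target, ceph-osd@1.service, and so on).

ceph_target_cmd() {        # usage: ceph_target_cmd restart ceph-osd
    echo "systemctl $1 $2.target"
}

ceph_service_cmd() {       # usage: ceph_service_cmd stop osd 1
    echo "systemctl $1 ceph-$2@$3.service"
}

ceph_target_cmd restart ceph-osd     # prints: systemctl restart ceph-osd.target
ceph_service_cmd stop osd 1          # prints: systemctl stop ceph-osd@1.service
```

The parameterized form mirrors systemd template units: the part after `@` is the instance name, so every OSD or monitor on a node gets its own addressable service.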
You can find out the names/numbers of a particular type of service by listing the relevant systemd units. For example:

  systemctl | grep -i 'ceph-osd.*service'

You can query systemd for the status of services. For example:

  systemctl status ceph-osd@1.service

If you do not know the exact name/number of the service, see Section 9.3, "Identifying Individual Services".

Determining Cluster State

Once you have a running cluster, you may use the ceph tool to monitor it. Determining cluster state typically involves checking OSD status, monitor status, placement group status, and metadata server status.

Tip: Interactive Mode
To run the ceph tool in an interactive mode, type ceph at the command line with no arguments. The interactive mode is more convenient if you are going to enter more ceph commands in a row. For example:

  ceph> health

Checking Cluster Health

After you start your cluster, and before you start reading and/or writing data, check your cluster's health first. You can check the health of the Ceph cluster with the following:

  ceph health

The command may return a warning, for example:

  HEALTH_WARN 10 pgs degraded; ...

If you specified non-default locations for your configuration or keyring, you may need to specify their locations on the command line.

Upon starting the Ceph cluster, you will likely encounter a health warning such as HEALTH_WARN XXX num placement groups stale. Wait a few moments and check it again. When your cluster is ready, ceph health should return a message such as HEALTH_OK. At that point, it is okay to begin using the cluster.

To watch the cluster's ongoing events, open a new terminal and enter:

  ceph -w

Ceph will print each event. For example, a tiny Ceph cluster consisting of one monitor and two OSDs may print the following:

  cluster ...
  health HEALTH_OK
  monmap e1: 1 mons at {...}
  osdmap ...: 2 osds: 2 up, 2 in
  pgmap v41...: 952 pgs, ... pools, ... MB data, 2199 objects
        115 GB used, 167 GB / 297 GB avail
        952 active+clean

  ... osd.0 [INF] 17.71 deep-scrub ok
  ... osd.1 [INF] 1.0 scrub ok
  ... osd.1 [INF] 1.3 scrub ok
  ... osd.1 [INF] 1.4 scrub ok
  ... mon.0 [INF] pgmap v41...: 952 pgs: 952 active+clean; ... MB data, 115 GB used, \
        167 GB / 297 GB avail

The output provides the following information:

  Cluster ID
  Cluster health status
  The monitor map epoch and the status of the monitor quorum
  The OSD map epoch and the status of OSDs
  The placement group map version
  The number of placement groups and pools
  The notional amount of data stored and the number of objects stored
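The advice to wait for HEALTH_OK before using the cluster can be scripted as a polling loop. This is a sketch: the `ceph` function below is a stub standing in for the real binary so the loop can be exercised anywhere; remove it on an actual cluster node, where `ceph health` prints the real status.

```shell
#!/bin/sh
# Sketch: poll `ceph health` until it reports HEALTH_OK, as recommended
# before starting to read or write data.
ceph() { echo "HEALTH_OK"; }   # stub for illustration only; delete on a real node

wait_for_health_ok() {
    tries=0
    # The first word of `ceph health` output is the status keyword.
    while [ "$(ceph health | cut -d' ' -f1)" != "HEALTH_OK" ]; do
        tries=$((tries + 1))
        [ "$tries" -ge 30 ] && return 1   # give up after 30 attempts
        sleep 2
    done
    return 0
}

wait_for_health_ok && echo "cluster ready"
```

Bounding the retries keeps a boot-time script from hanging forever on a cluster that never converges.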
The total amount of data stored and the total storage capacity.

Tip: How Ceph Calculates Data Usage
The used value reflects the actual amount of raw storage used. The xxx GB / xxx GB value means the amount available out of the overall storage capacity of the cluster. The notional number reflects the size of the stored data before it is replicated, cloned, or snapshotted. Therefore, the amount of data actually stored typically exceeds the notional amount, because Ceph creates replicas of the data and may also use storage capacity for cloning and snapshotting.

Checking a Cluster's Usage Stats

To check a cluster's data usage and data distribution among pools, you can use the ceph df command. It is similar to Linux df. Execute the following:

  ceph df

  GLOBAL:
      SIZE     AVAIL      RAW USED     %RAW USED
      ...M     27304M     ...M         0.97
  POOLS:
      NAME           ID     USED     %USED     MAX AVAIL     OBJECTS
      data           0      ...      ...       ...M          4
      metadata       1      0        0         5...M         0
      rbd            2      0        0         5...M         0
      hot-storage    4      1...M    ...       ...M          2
      cold-storage   5      2...M    ...       ...M          1
      pool1          6      0        0         5...M         0

The GLOBAL section of the output provides an overview of the amount of storage your cluster uses for your data. The POOLS section of the output provides a list of pools and the notional usage of each pool. The output from this section does not reflect replicas, clones, or snapshots. For example, if you store an object with 1 MB of data, the notional usage will be 1 MB, but the actual usage may be 2 MB or more depending on the number of replicas, clones, and snapshots.

  NAME: The name of the pool.
  ID: The pool ID.
  USED: The notional amount of data stored in kilobytes, unless the number appends M for megabytes or G for gigabytes.
  %USED: The notional percentage of storage used per pool.
  OBJECTS: The notional number of objects stored per pool.

The numbers in the POOLS section are notional. They are not inclusive of replicas, snapshots, or clones. As a result, the sum of the USED and %USED amounts will not add up to the RAW USED and %RAW USED amounts in the GLOBAL section of the output.

Checking a Cluster's Status

To check a cluster's status, execute the following:

  ceph status

In interactive mode, type status and press Enter:

  ceph> status

Ceph will print the cluster status. For example, a tiny Ceph cluster consisting of one monitor and two OSDs may print the following:

  cluster ...
  health HEALTH_OK
  monmap e1: 1 mons at {...}
  osdmap ...: 2 osds: 2 up, 2 in
  pgmap ...: ... pgs, ... pools, ... MB data, 2199 objects
        115 GB used, 167 GB / 297 GB avail
        ... active+clean
        1 active+clean+scrubbing+deep

Checking OSD Status

You can check OSDs to ensure they are up and in by executing:

  ceph osd stat

You can also view OSDs according to their position in the CRUSH map:

  ceph osd tree
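Because `ceph df` output is column-oriented, the POOLS section is easy to post-process with awk. The here-document below is a made-up sample in the layout shown above (the pool names and numbers are illustrative only); on a real cluster you would pipe the output of `ceph df` in instead.

```shell
#!/bin/sh
# Sketch: extract "pool %USED" pairs from `ceph df`-style output.

sample_df() {        # hypothetical sample output, for illustration only
cat <<'EOF'
GLOBAL:
    SIZE     AVAIL    RAW USED  %RAW USED
    27570M   27304M   266M      0.97
POOLS:
    NAME     ID  USED  %USED  MAX AVAIL  OBJECTS
    data     0   120   0      5064M      4
    rbd      2   0     0      5064M      0
EOF
}

# Skip everything up to the POOLS: marker, drop the header row, then
# print column 1 (NAME) and column 4 (%USED) for each pool.
sample_df | awk '/^POOLS:/ {in_pools=1; next}
                 in_pools && $1 != "NAME" {print $1, $4}'
```

On a live system the first line would simply be `ceph df | awk ...`; the parsing logic is unchanged.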
Ceph will print out a CRUSH tree with a host, its OSDs, whether they are up, and their weight.

Checking Monitor Status

If your cluster has multiple monitors (likely), you should check the monitor quorum status after you start the cluster and before reading and/or writing data. A quorum must be present when multiple monitors are running. You should also check monitor status periodically to ensure that the monitors are running. To display the monitor map, execute the following:

  ceph mon stat

To check the quorum status for the monitor cluster, execute the following:

  ceph quorum_status

Ceph will return the quorum status. For example, a Ceph cluster consisting of three monitors returns the quorum members, the election epoch, and the monitor map.

Using the Admin Socket

The Ceph admin socket allows you to query a daemon via a socket interface. By default, Ceph sockets reside under /var/run/ceph. To access a daemon via the admin socket, log in to the host running the daemon and use the ceph --admin-daemon command. To view the available admin socket commands, execute the following command:

  ceph --admin-daemon /var/run/ceph/socket-name help

The admin socket command enables you to show and set your configuration at runtime. Refer to Viewing Configuration at Runtime (http://docs. ...) for details. Additionally, you can set configuration values at runtime directly; the admin socket bypasses the monitor.

Authentication with cephx

To identify clients and protect against man-in-the-middle attacks, Ceph provides its cephx authentication system. Clients in this context are either human users, such as the admin user, or Ceph-related services/daemons, for example OSDs, monitors, or RADOS Gateways.

Note: The cephx protocol does not address data encryption in transport, such as TLS/SSL.

Authentication Architecture

cephx uses shared secret keys for authentication, meaning both the client and the monitor cluster have a copy of the client's secret key. The authentication protocol enables both parties to prove to each other that they have a copy of the key without actually revealing it. This provides mutual authentication, which means the cluster is sure the user possesses the secret key, and the user is sure the cluster has a copy of it as well.

A key scalability feature of Ceph is to avoid a centralized interface to the Ceph object store. This means that Ceph clients can interact with OSDs directly. To protect data, Ceph provides its cephx authentication system, which authenticates Ceph clients.

Each monitor can authenticate clients and distribute keys, so there is no single point of failure or bottleneck when using cephx. The monitor
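A periodic monitoring script usually reduces the quorum check to one number: how many monitors are in quorum. This sketch stubs out `ceph quorum_status` with a made-up three-monitor JSON reply (the monitor names are hypothetical) so it can be run anywhere; on a cluster node, delete the stub and use the real command.

```shell
#!/bin/sh
# Sketch: count the members of the "quorum_names" array in the JSON
# returned by `ceph quorum_status`.
ceph() {   # stub reply for illustration only; delete on a real node
    echo '{"election_epoch":8,"quorum":[0,1,2],"quorum_names":["mon-a","mon-b","mon-c"]}'
}

quorum_size() {
    # Crude JSON handling, good enough for the sketch: split on commas
    # and count entries that look like monitor names. Prefer jq
    # (`ceph quorum_status | jq '.quorum_names | length'`) on a real system.
    ceph quorum_status | tr ',' '\n' | grep -c '"mon-'
}

quorum_size    # prints 3 for the stubbed reply
```

Alerting when this count drops below a majority of the configured monitors catches quorum loss before clients start failing.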
returns an authentication data structure that contains a session key for use in obtaining Ceph services. This session key is itself encrypted with the user's permanent secret key, so that only the user can request services from the Ceph monitors. The client then uses the session key to request its desired services from the monitor, and the monitor provides the client with a ticket that authenticates the client to the OSDs that actually handle data. Ceph monitors and OSDs share a secret, so the client can use the ticket provided by the monitor with any OSD or metadata server in the cluster. This form of authentication prevents attackers with access to the communications medium from creating bogus messages under another user's identity, as long as the user's secret key is not divulged before the ticket expires.

To use cephx, an administrator must set up clients/users first. In the following diagram, the admin user invokes a command from the command line to generate a user name and secret key. Ceph's auth subsystem generates the user name and key and stores a copy with the monitor(s). This means that the client and the monitor share a secret key.

To authenticate with the monitor, the client passes the user name to the monitor. The monitor generates a session key and encrypts it with the secret key associated with the user name, then transmits the encrypted data back to the client. The client then decrypts the data with the shared secret key to retrieve the session key. The session key identifies the user for the current session. The client then requests a ticket related to the user, signed by the session key. The monitor generates a ticket, encrypts it with the user's secret key, and transmits it back to the client. The client decrypts the ticket and uses it to sign requests to OSDs and metadata servers throughout the cluster.

The cephx protocol authenticates ongoing communications between the client machine and the Ceph servers. Each message sent between a client and a server after the initial authentication is signed using a ticket that the monitors, OSDs, and metadata servers can verify with their shared secret.

The protection offered by this authentication is between the Ceph client and the Ceph cluster hosts. The authentication is not extended beyond the Ceph client.
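The administrator-side setup step described above amounts to a single `ceph auth` invocation per user. This is a dry-run sketch: it echoes the command it would run rather than executing it, and the user name and capability string are illustrative, not prescriptive.

```shell
#!/bin/sh
# Dry-run sketch of creating a cephx user with monitor capabilities.
# On a real node, drop the echo and run the command directly.
create_cephx_user() {   # usage: create_cephx_user client.example 'allow r'
    echo "ceph auth get-or-create $1 mon '$2'"
}

create_cephx_user client.example 'allow r'
```

`ceph auth get-or-create` is idempotent: rerunning it for an existing user returns the existing key instead of generating a new one, which makes it safe in provisioning scripts.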