Ceph 1.3 Lab Installation and Configuration Guide
Overview
In this article we will set up a Ceph 1.3 cluster for learning or lab purposes.
Ceph Lab Environment
For this environment you will need three VMs (ceph1, ceph2 and ceph3). Each should have a 20GB root disk and a 100GB data disk. Ceph has three main components: Admin console, Monitors and OSDs.
Admin console - UI and CLI used for managing the Ceph cluster. In this environment we will install it on ceph1.
Monitors - Monitor the health of the Ceph cluster. One or more monitors form a Paxos part-time parliament, providing extreme reliability and durability of cluster membership. Monitors maintain the various maps: monitor, OSD, placement group (pg) and CRUSH. Monitors will be installed on ceph1, ceph2 and ceph3.
OSDs - The object storage daemon handles storing data, recovery, backfilling, rebalancing and replication. OSDs sit on top of a disk/filesystem. BlueStore enables OSDs to bypass the filesystem, but it is not an option in Ceph 1.3. An OSD will be installed on ceph1, ceph2 and ceph3.
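Once the cluster is up (later in this guide), you can see what each component tracks by dumping the maps the monitors maintain. A few read-only commands, shown here purely as a sketch:
[ceph@ceph1 ~]$sudo ceph mon dump          # monitor map
[ceph@ceph1 ~]$sudo ceph osd dump          # OSD map
[ceph@ceph1 ~]$sudo ceph pg dump | head    # placement group map (verbose, so trimmed here)
[ceph@ceph1 ~]$sudo ceph osd crush dump    # CRUSH map in JSON form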
On all Ceph nodes.
#subscription-manager repos --disable=*
#subscription-manager repos --enable=rhel-7-server-rpms
#subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rhceph-1.3-calamari-rpms --enable=rhel-7-server-rhceph-1.3-installer-rpms --enable=rhel-7-server-rhceph-1.3-tools-rpms
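As a quick sanity check (optional), you can confirm that only the intended repositories are enabled:
#subscription-manager repos --list-enabled
#yum repolist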
Configure firewalld.
sudo systemctl start firewalld
sudo systemctl enable firewalld
sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=2003/tcp --permanent
sudo firewall-cmd --zone=public --add-port=4505-4506/tcp --permanent
sudo firewall-cmd --zone=public --add-port=6789/tcp --permanent
sudo firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
sudo firewall-cmd --reload
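After the reload you can verify that the ports were opened; the list should include 80/tcp, 2003/tcp, 4505-4506/tcp, 6789/tcp and 6800-7300/tcp:
sudo firewall-cmd --zone=public --list-ports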
Configure NTP.
yum -y install ntp
systemctl enable ntpd.service
systemctl start ntpd
Ensure NTP is synchronizing.
ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+privatewolke.co 131.188.3.222    2 u   12   64    1   26.380   -2.334   2.374
+ridcully.episod 148.251.68.100   3 u   11   64    1   26.626   -2.425   0.534
*s1.kelker.info  213.172.96.14    2 u   12   64    1   26.433   -6.116   1.030
 sircabirus.von- .STEP.          16 u    -   64    0    0.000    0.000   0.000
Create a ceph user for ceph-deploy.
#useradd ceph
#passwd ceph
#cat << EOF >/etc/sudoers.d/ceph
ceph ALL = (root) NOPASSWD:ALL
Defaults:ceph !requiretty
EOF
#chmod 0440 /etc/sudoers.d/ceph
#su - ceph
#ssh-keygen
#ssh-copy-id ceph@ceph1
#ssh-copy-id ceph@ceph2
#ssh-copy-id ceph@ceph3
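Optionally, an ~/.ssh/config on the admin node lets ceph-deploy connect as the ceph user without specifying a username each time. A minimal sketch, assuming the hostnames ceph1-ceph3 resolve as above; run as the ceph user:
#cat << EOF >> ~/.ssh/config
Host ceph1 ceph2 ceph3
    User ceph
EOF
#chmod 600 ~/.ssh/config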
Set SELinux to permissive. Ceph 2.0 now supports SELinux, but in 1.3 it was not possible out of the box.
#vi /etc/selinux/config
SELINUX=permissive
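The change in /etc/selinux/config only takes effect after a reboot; to switch to permissive mode immediately and verify it, you can also run:
#setenforce 0
#getenforce
Permissive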
Create ceph-config dir.
mkdir ~/ceph-config
cd ~/ceph-config
On Monitors.
#subscription-manager repos --enable=rhel-7-server-rhceph-1.3-mon-rpms
#yum update -y
On OSD Nodes.
#subscription-manager repos --enable=rhel-7-server-rhceph-1.3-osd-rpms
#yum update -y
On admin node (ceph1).
Setup Admin Console and Calamari.
#sudo yum -y install ceph-deploy calamari-server calamari-clients
#sudo calamari-ctl initialize
#su - ceph
[ceph@ceph1 ceph-config]$cd ~/ceph-config
Create Ceph Cluster.
[ceph@ceph1 ceph-config]$ceph-deploy new ceph1 ceph2 ceph3
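ceph-deploy new writes an initial ceph.conf and a monitor keyring into ~/ceph-config. As a sketch only (your fsid and addresses will differ; the IPs below assume the 192.168.0.3x lab addresses seen later in this guide), the generated ceph.conf looks roughly like:
[global]
fsid = 188aff9b-7da5-46f3-8eb8-465e014a472e
mon_initial_members = ceph1, ceph2, ceph3
mon_host = 192.168.0.31,192.168.0.32,192.168.0.33
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
If your VMs have more than one network, you may also want to add a public_network entry (for example public_network = 192.168.0.0/24) under [global] before deploying the monitors.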
Deploy Ceph monitors and OSDs.
[ceph@ceph1 ceph-config]$sudo ceph-deploy install --mon ceph1 ceph2 ceph3
[ceph@ceph1 ceph-config]$sudo ceph-deploy install --osd ceph1 ceph2 ceph3
[ceph@ceph1 ceph-config]$sudo ceph-deploy mon create ceph1 ceph2 ceph3
[ceph@ceph1 ceph-config]$sudo ceph-deploy gatherkeys ceph1
Connect Ceph monitors to Calamari.
[ceph@ceph1 ceph-config]$sudo ceph-deploy calamari connect --master ceph1.lab ceph1 ceph2 ceph3
[ceph@ceph1 ceph-config]$sudo ceph-deploy install --cli ceph1
[ceph@ceph1 ceph-config]$sudo ceph-deploy admin ceph1
Check Ceph quorum status.
[ceph@ceph1 ceph-config]$sudo ceph quorum_status --format json-pretty
{
    "election_epoch": 6,
    "quorum": [
        0,
        1,
        2
    ],
    "quorum_names": [
        "ceph1",
        "ceph2",
        "ceph3"
    ],
    "quorum_leader_name": "ceph1",
    "monmap": {
        "epoch": 1,
        "fsid": "188aff9b-7da5-46f3-8eb8-465e014a472e",
        "modified": "0.000000",
        "created": "0.000000",
        "mons": [
            {
                "rank": 0,
                "name": "ceph1",
                "addr": "192.168.0.31:6789\/0"
            },
            {
                "rank": 1,
                "name": "ceph2",
                "addr": "192.168.0.32:6789\/0"
            },
            {
                "rank": 2,
                "name": "ceph3",
                "addr": "192.168.0.33:6789\/0"
            }
        ]
    }
}
Set CRUSH tunables to optimal.
[ceph@ceph1 ceph-config]$sudo ceph osd crush tunables optimal
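At this point it is worth checking the overall cluster status. Until OSDs are added in the next step, health will typically show a warning; a quick check:
[ceph@ceph1 ceph-config]$sudo ceph -s
[ceph@ceph1 ceph-config]$sudo ceph health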
Configure OSDs
Prepare and activate OSDs together
[ceph@ceph1 ceph-config]$sudo ceph-deploy osd create ceph1:vdb ceph2:vdb ceph3:vdb
OR
[ceph@ceph1 ceph-config]$sudo ceph-deploy disk zap ceph1:vdb ceph2:vdb ceph3:vdb
[ceph@ceph1 ceph-config]$sudo ceph-deploy osd prepare ceph1:vdb ceph2:vdb ceph3:vdb
[ceph@ceph1 ceph-config]$sudo ceph-deploy osd activate ceph1:vdb1 ceph2:vdb1 ceph3:vdb1
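Either way, once the OSDs are created you can verify that all three joined the cluster and that the 100GB data disks are reflected in the capacity:
[ceph@ceph1 ceph-config]$sudo ceph osd tree
[ceph@ceph1 ceph-config]$sudo ceph df
[ceph@ceph1 ceph-config]$sudo ceph -s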
Connect Calamari to Ceph nodes.
[ceph@ceph1 ceph-config]$sudo ceph-deploy calamari connect --master ceph1.lab ceph1 ceph2 ceph3
Tips and Tricks
Remove OSD from Ceph
[ceph@ceph1 ~]$sudo ceph osd out osd.0
[ceph@ceph1 ~]$sudo ceph osd crush remove osd.0
[ceph@ceph1 ~]$sudo ceph auth del osd.0
[ceph@ceph1 ~]$sudo ceph osd down 0
[ceph@ceph1 ~]$sudo ceph osd rm 0
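In addition to the commands above, the OSD daemon itself should be stopped on the node hosting it before the final ceph osd rm. A sketch for osd.0, using the same sysvinit service style used elsewhere in this guide:
[ceph@ceph1 ~]$sudo service ceph stop osd.0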
Ceph Placement Group Calculation for Pool
- (OSDs * 100) / Replicas
- PGs should always be rounded up to a power of two: 64, 128, 256, etc (a worked example follows)
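For this three-node lab: 3 OSDs * 100 / 3 replicas = 100, which rounds up to 128 PGs. Creating a pool with that value (the pool name testpool is just an example):
[ceph@ceph1 ~]$sudo ceph osd pool create testpool 128 128
[ceph@ceph1 ~]$sudo ceph osd pool get testpool pg_num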
Restart an OSD
[ceph@ceph1 ~]$sudo service ceph restart osd.3
Re-deploy Ceph
If at any time you want to start over, you can run the commands below to uninstall Ceph. This of course deletes any data, so be careful.
[ceph@ceph1 ~]$sudo ceph-deploy purge <ceph-node> [<ceph-node>]
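A full ceph-deploy teardown usually also removes the data directories and the keys gathered on the admin node; run from the admin node:
[ceph@ceph1 ~]$sudo ceph-deploy purgedata <ceph-node> [<ceph-node>]
[ceph@ceph1 ~]$sudo ceph-deploy forgetkeys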
Summary
In this article we installed a Ceph cluster on virtual machines. We deployed the cluster, set up monitors and configured OSDs. This environment should provide the basis for a journey into software-defined storage and Ceph. The economics of scale have brought down barriers and paved the way for a software-defined world. Storage is only the next logical boundary. Ceph, being an open source project, is already the de facto software-defined storage standard and is in a position to become the key beneficiary of software-defined storage. I hope you found the information in this article useful; please share your experiences.
Happy Cephing!
(c) 2016 Keith Tenzer