Ceph Cluster Deployment


Source: http://blog.csdn.net/chinagissoft/article/details/50491429



The foundation of Ceph is RADOS, the "Reliable, Autonomic, Distributed Object Store".
RADOS consists of two components:
• OSD: Object Storage Device, which provides the storage resources (the osd daemon).
• Monitor: maintains the global state of the whole Ceph cluster (the mon daemon).
RADOS is highly scalable and programmable; on top of it Ceph provides Object Storage, Block Storage, and a File System (see the small example after this list).
Ceph has two further components:
• MDS: stores the metadata for CephFS (the mds daemon).
• RADOS Gateway: exposes a REST interface compatible with the S3 and Swift APIs (the rgw daemon).
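As a quick illustration of the object-store layer, the rados CLI can read and write raw objects directly once the cluster described below is running (a minimal sketch; the pool name data-test and the file names are made up for this example):

rados mkpool data-test                                  # create a throw-away pool
echo "hello rados" > /tmp/hello.txt
rados put hello-object /tmp/hello.txt -p data-test      # store the file as an object
rados get hello-object /tmp/hello-copy.txt -p data-test # read it back
rados ls -p data-test                                   # list objects in the pool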

Environment preparation

1. Prepare four Ubuntu 14.04 servers

192.168.3.106   client (runs ceph-deploy)

192.168.3.8     node1  (runs mon and mds)

192.168.3.9     node2  (runs an osd)

192.168.3.10    node3  (runs an osd)

2. Configure /etc/hosts on every machine

root@node2:~# cat /etc/hosts
127.0.0.1       localhost
192.168.3.8     node1
192.168.3.9     node2
192.168.3.10    node3
192.168.3.106   client
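Rather than editing each file by hand, the entries can be appended on one machine and pushed to the others (a sketch; it relies on root SSH access, which is only set up in steps 3 and 4, so you may prefer to run it afterwards):

cat >> /etc/hosts <<'EOF'
192.168.3.8     node1
192.168.3.9     node2
192.168.3.10    node3
192.168.3.106   client
EOF
for h in node1 node2 node3; do scp /etc/hosts root@$h:/etc/hosts; done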

3. Enable SSH login for the root user

vi /etc/ssh/sshd_config

Change the PermitRootLogin setting to yes

Run service ssh restart to restart the SSH service
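The same change can be made non-interactively (a sketch, assuming the stock Ubuntu 14.04 sshd_config where the directive already exists on a single line):

sed -i 's/^PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
service ssh restart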

4. Set up passwordless SSH from client to each node

On node1, node2 and node3, run:

ssh-keygen

On client, run:

ssh-keygen

ssh-copy-id node1

ssh-copy-id node2

ssh-copy-id node3

root@client:~# ssh-copy-id node1
The authenticity of host 'node1 (192.168.3.8)' can't be established.
ECDSA key fingerprint is 1b:16:32:16:e8:92:4e:f2:8b:98:1f:1a:9b:e4:27:e9.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@osd's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'node1'"
and check to make sure that only the key(s) you wanted were added.
Test the passwordless login

Run ssh node1; the login succeeds without a password prompt.

root@client:~# ssh node1
Welcome to Ubuntu 14.04.3 LTS (GNU/Linux 3.19.0-25-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Sun Jan 10 11:09:47 CST 2016

  System load:  0.01              Processes:           93
  Usage of /:   19.7% of 5.51GB   Users logged in:     1
  Memory usage: 5%                IP address for eth0: 192.168.3.8
  Swap usage:   0%

  Graph this data and manage this system at:
    https://landscape.canonical.com/

Last login: Sun Jan 10 10:34:52 2016
root@node1:~#
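All three nodes can be verified in one go from client (a sketch; BatchMode makes ssh fail instead of prompting if a key is missing):

for h in node1 node2 node3; do ssh -o BatchMode=yes root@$h hostname; done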

Installing the Ceph cluster

1. Install ceph-deploy

On client, run: apt-get install ceph-deploy

2. Configure a mon node (node1)

Run ceph-deploy new node1 (the hostname, not the IP address, must be used here)

root@client:~/ceph# ceph-deploy new node1
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy new node1
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 192.168.3.8
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: client
[node1][INFO  ] Running command: ssh -CT -o BatchMode=yes node1
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.3.8']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...

After the command completes, the following three files are generated in the working directory:

ceph.mon.keyring, ceph.conf, ceph.log

Edit ceph.conf and add osd_pool_default_size = 2. With only two OSD hosts, the default replica count of 3 could never be satisfied and pools would stay degraded, so the default is lowered to 2.

root@client:~/ceph# vi ceph.conf

[global]
fsid = 3a562301-bd64-45c5-aaa0-ef57e3dfd76f
mon_initial_members = node1
mon_host = 192.168.3.8
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2
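A couple of related defaults can be set in the same [global] section if desired (a sketch; these lines are not from the original article and the values are only suggestions for a small test cluster):

# allow I/O to continue with a single surviving replica
osd_pool_default_min_size = 1
# default placement-group counts for new pools; small values suit a 2-OSD test cluster
osd_pool_default_pg_num = 64
osd_pool_default_pgp_num = 64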

3. Install Ceph on the client and the three nodes

Run ceph-deploy install client node1 node2 node3

root@client:~/ceph# ceph-deploy install client node1 node2 node3
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy install client node1 node2 node3
[ceph_deploy.install][DEBUG ] Installing stable version emperor on cluster ceph hosts client node1 node2 node3
[ceph_deploy.install][DEBUG ] Detecting platform for host client ...
[client][DEBUG ] connected to host: client
[client][DEBUG ] detect platform information from remote host
[client][DEBUG ] detect machine type
[ceph_deploy.install][INFO  ] Distro info: Ubuntu 14.04 trusty
[client][INFO  ] installing ceph on client
[client][INFO  ] Running command: env DEBIAN_FRONTEND=noninteractive apt-get -q install --assume-yes ca-certificates
[client][DEBUG ] Reading package lists...
[client][DEBUG ] Building dependency tree...
[node3][DEBUG ] Unpacking ceph-mds (0.80.10-0ubuntu1.14.04.3) ...
[node3][DEBUG ] Processing triggers for man-db (2.6.7.1-1ubuntu1) ...
[node3][DEBUG ] Processing triggers for ureadahead (0.100.0-16) ...
[node3][DEBUG ] ureadahead will be reprofiled on next reboot
[node3][DEBUG ] Setting up ceph-fs-common (0.80.10-0ubuntu1.14.04.3) ...
[node3][DEBUG ] Setting up ceph-mds (0.80.10-0ubuntu1.14.04.3) ...
[node3][DEBUG ] ceph-mds-all start/running
[node3][DEBUG ] Processing triggers for ureadahead (0.100.0-16) ...
[node3][INFO  ] Running command: ceph --version
[node3][DEBUG ] ceph version 0.80.10 (ea6c958c38df1216bf95c927f143d8b13c4a9e70)
Unhandled exception in thread started by
sys.excepthook is missing
lost sys.stderr

On each node, run ceph --version to verify the installation.

root@node2:~# ceph --version
ceph version 0.80.10 (ea6c958c38df1216bf95c927f143d8b13c4a9e70)
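The same check can be run for all hosts in one pass from client (a sketch):

for h in node1 node2 node3; do echo "== $h =="; ssh root@$h ceph --version; done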

Activate the monitor node

Run ceph-deploy mon create-initial

root@client:~/ceph# ceph-deploy mon create-initial
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1
[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ...
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.mon][INFO  ] distro info: Ubuntu 14.04 trusty
[node1][DEBUG ] determining if provided host has same hostname in remote
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] deploying mon to node1
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] remote hostname: node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][DEBUG ] create the mon path if it does not exist
[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done
[node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node1/done
[node1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] create the monitor keyring file
[node1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i node1 --keyring /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] ceph-mon: mon.noname-a 192.168.3.8:6789/0 is local, renaming to mon.node1
[node1][DEBUG ] ceph-mon: set fsid to 3a562301-bd64-45c5-aaa0-ef57e3dfd76f
[node1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1
[node1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] create a done file to avoid re-doing the mon deployment
[node1][DEBUG ] create the init path if it does not exist
[node1][DEBUG ] locating the `service` executable...
[node1][INFO  ] Running command: initctl emit ceph-mon cluster=ceph id=node1
[node1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[node1][DEBUG ] ********************************************************************************
[node1][DEBUG ] status for monitor: mon.node1
[node1][DEBUG ] {
[node1][DEBUG ]   "election_epoch": 2,
[node1][DEBUG ]   "extra_probe_peers": [],
[node1][DEBUG ]   "monmap": {
[node1][DEBUG ]     "created": "0.000000",
[node1][DEBUG ]     "epoch": 1,
[node1][DEBUG ]     "fsid": "3a562301-bd64-45c5-aaa0-ef57e3dfd76f",
[node1][DEBUG ]     "modified": "0.000000",
[node1][DEBUG ]     "mons": [
[node1][DEBUG ]       {
[node1][DEBUG ]         "addr": "192.168.3.8:6789/0",
[node1][DEBUG ]         "name": "node1",
[node1][DEBUG ]         "rank": 0
[node1][DEBUG ]       }
[node1][DEBUG ]     ]
[node1][DEBUG ]   },
[node1][DEBUG ]   "name": "node1",
[node1][DEBUG ]   "outside_quorum": [],
[node1][DEBUG ]   "quorum": [
[node1][DEBUG ]     0
[node1][DEBUG ]   ],
[node1][DEBUG ]   "rank": 0,
[node1][DEBUG ]   "state": "leader",
[node1][DEBUG ]   "sync_provider": []
[node1][DEBUG ] }
[node1][DEBUG ] ********************************************************************************
[node1][INFO  ] monitor: mon.node1 is running
[node1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][INFO  ] processing monitor mon.node1
[node1][DEBUG ] connected to host: node1
[node1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][INFO  ] mon.node1 monitor has reached quorum!
[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO  ] Running gatherkeys...
[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /etc/ceph/ceph.client.admin.keyring
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from node1.
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /var/lib/ceph/bootstrap-osd/ceph.keyring
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from node1.
[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /var/lib/ceph/bootstrap-mds/ceph.keyring
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from node1.
After this succeeds, three key files are generated locally:

ceph.bootstrap-mds.keyring, ceph.bootstrap-osd.keyring, ceph.client.admin.keyring

root@client:~/ceph# ll
total 100
drwxr-xr-x 2 root root  4096 Jan 12 14:07 ./
drwx------ 5 root root  4096 Jan 12 13:53 ../
-rw-r--r-- 1 root root    71 Jan 12 14:07 ceph.bootstrap-mds.keyring
-rw-r--r-- 1 root root    71 Jan 12 14:07 ceph.bootstrap-osd.keyring
-rw-r--r-- 1 root root    63 Jan 12 14:07 ceph.client.admin.keyring
-rw-r--r-- 1 root root   251 Jan 12 13:53 ceph.conf
-rw-r--r-- 1 root root 66652 Jan 12 14:07 ceph.log
-rw-r--r-- 1 root root    73 Jan 12 13:52 ceph.mon.keyring
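The keyrings are plain text, and the keys registered in the cluster can be listed from node1, which already holds the admin keyring at this point (a sketch; output varies per cluster):

cat ceph.client.admin.keyring
ssh node1 ceph auth list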

and the mon node node1 now has a running ceph-mon process:

root@node1:~# ps aux|grep ceph
root      1518  0.2  1.8 154700 19284 ?        Ssl  14:48   0:00 /usr/bin/ceph-mon --cluster=ceph -i node1 -f
root      1846  0.0  0.2  11748  2208 pts/0    S+   14:49   0:00 grep --color=auto ceph


Configure the OSD nodes

Add an 8 GB disk to each of node2 and node3, create a /dev/sdb1 partition on it, and format it as XFS (a scripted alternative is sketched after the fdisk transcript below).

root@node3:~# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xe5f14ca9.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe5f14ca9

   Device Boot      Start         End      Blocks   Id  System

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): p
Partition number (1-4, default 1):
Using default value 1
First sector (2048-16777215, default 2048): 2048
Last sector, +sectors or +size{K,M,G} (2048-16777215, default 16777215): 16000000

Command (m for help): p

Disk /dev/sdb: 8589 MB, 8589934592 bytes
255 heads, 63 sectors/track, 1044 cylinders, total 16777216 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xe5f14ca9

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048    16000000     7998976+  83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
root@node3:~# mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1              isize=256    agcount=4, agsize=499936 blks
         =                       sectsz=512   attr=2, projid32bit=0
data     =                       bsize=4096   blocks=1999744, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
root@node3:~#
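A scripted, non-interactive way to do the same on both OSD hosts from client (a sketch, assuming /dev/sdb is a blank disk that may be wiped):

for h in node2 node3; do
  ssh root@$h "parted -s /dev/sdb mklabel msdos mkpart primary xfs 1MiB 100% && mkfs.xfs -f /dev/sdb1"
done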

Add the OSD nodes

Run ceph-deploy osd prepare node2:/dev/sdb1 node3:/dev/sdb1

root@client:~/ceph# ceph-deploy osd prepare node2:/dev/sdb1
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy osd prepare node2:/dev/sdb1
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node2:/dev/sdb1:
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] Deploying osd to node2
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node2][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add
[ceph_deploy.osd][DEBUG ] Preparing host node2 disk /dev/sdb1 journal None activate False
[node2][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb1
[node2][WARNIN] Error: Partition(s) 1 on /dev/sdb1 have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use.  As a result, the old partition(s) will remain in use.  You should reboot now before making further changes.
[node2][DEBUG ] meta-data=/dev/sdb1              isize=2048   agcount=4, agsize=499936 blks
[node2][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0
[node2][DEBUG ] data     =                       bsize=4096   blocks=1999744, imaxpct=25
[node2][DEBUG ]          =                       sunit=0      swidth=0 blks
[node2][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0
[node2][DEBUG ] log      =internal log           bsize=4096   blocks=2560, version=2
[node2][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1
[node2][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0
[ceph_deploy.osd][DEBUG ] Host node2 is now ready for osd use.
Unhandled exception in thread started by
sys.excepthook is missing
lost sys.stderr
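The "unable to inform the kernel of the change" warning usually means the old partition table is still cached by the kernel; asking it to re-read the table may be enough instead of rebooting (a sketch):

ssh root@node2 partprobe /dev/sdb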

Activate the OSD nodes

Run ceph-deploy osd activate node2:/dev/sdb1 node3:/dev/sdb1

root@client:~/ceph# ceph-deploy osd activate node2:/dev/sdb1 node3:/dev/sdb1
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy osd activate node2:/dev/sdb1 node3:/dev/sdb1
[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node2:/dev/sdb1: node3:/dev/sdb1:
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host node2 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[node2][INFO  ] Running command: ceph-disk-activate --mark-init upstart --mount /dev/sdb1
[node2][WARNIN] got monmap epoch 1
[node2][WARNIN] 2016-01-12 15:26:22.781005 7f1967a62800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[node2][WARNIN] 2016-01-12 15:26:23.132088 7f1967a62800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[node2][WARNIN] 2016-01-12 15:26:23.136228 7f1967a62800 -1 filestore(/var/lib/ceph/tmp/mnt.LRhdRc) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
[node2][WARNIN] 2016-01-12 15:26:23.386713 7f1967a62800 -1 created object store /var/lib/ceph/tmp/mnt.LRhdRc journal /var/lib/ceph/tmp/mnt.LRhdRc/journal for osd.0 fsid 3a562301-bd64-45c5-aaa0-ef57e3dfd76f
[node2][WARNIN] 2016-01-12 15:26:23.386942 7f1967a62800 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.LRhdRc/keyring: can't open /var/lib/ceph/tmp/mnt.LRhdRc/keyring: (2) No such file or directory
[node2][WARNIN] 2016-01-12 15:26:23.387056 7f1967a62800 -1 created new key in keyring /var/lib/ceph/tmp/mnt.LRhdRc/keyring
[node2][WARNIN] added key for osd.0
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[ceph_deploy.osd][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.osd][DEBUG ] activating host node3 disk /dev/sdb1
[ceph_deploy.osd][DEBUG ] will use init type: upstart
[node3][INFO  ] Running command: ceph-disk-activate --mark-init upstart --mount /dev/sdb1
[node3][WARNIN] got monmap epoch 1
[node3][WARNIN] 2016-01-12 15:26:27.482403 7f5efe4de800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[node3][WARNIN] 2016-01-12 15:26:28.286329 7f5efe4de800 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway
[node3][WARNIN] 2016-01-12 15:26:28.416139 7f5efe4de800 -1 filestore(/var/lib/ceph/tmp/mnt.l2aD7V) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory
[node3][WARNIN] 2016-01-12 15:26:29.444301 7f5efe4de800 -1 created object store /var/lib/ceph/tmp/mnt.l2aD7V journal /var/lib/ceph/tmp/mnt.l2aD7V/journal for osd.1 fsid 3a562301-bd64-45c5-aaa0-ef57e3dfd76f
[node3][WARNIN] 2016-01-12 15:26:29.444558 7f5efe4de800 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.l2aD7V/keyring: can't open /var/lib/ceph/tmp/mnt.l2aD7V/keyring: (2) No such file or directory
[node3][WARNIN] 2016-01-12 15:26:29.444832 7f5efe4de800 -1 created new key in keyring /var/lib/ceph/tmp/mnt.l2aD7V/keyring
[node3][WARNIN] added key for osd.1
Unhandled exception in thread started by
sys.excepthook is missing
lost sys.stderr

Verify the activation on the OSD nodes

Run ps aux | grep ceph-osd | grep -v grep

root@node2:~# ps aux | grep ceph-osd|grep -v grep
root      1752  3.9  2.9 518216 29572 ?        Ssl  15:26   0:03 /usr/bin/ceph-osd --cluster=ceph -i 0 -f
root@node2:~#
Check that the /dev/sdb1 partition has been mounted:
root@node2:~# df -TH
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu1404--vg-root ext4      6.0G  1.4G  4.3G  24% /
none                            tmpfs     4.1k     0  4.1k   0% /sys/fs/cgroup
udev                            devtmpfs  510M  4.1k  510M   1% /dev
tmpfs                           tmpfs     105M  476k  104M   1% /run
none                            tmpfs     5.3M     0  5.3M   0% /run/lock
none                            tmpfs     521M     0  521M   0% /run/shm
none                            tmpfs     105M     0  105M   0% /run/user
/dev/sda1                       ext2      247M   40M  195M  17% /boot
/dev/sdb1                       xfs       8.2G  5.5G  2.8G  67% /var/lib/ceph/osd/ceph-0
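From the cluster side, both OSDs should now be reported as up and in (a sketch, run on node1; the exact tree and status output depend on the cluster):

ceph osd tree
ceph -s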

On each of the two OSD nodes, add the /dev/sdb1 partition to /etc/fstab so that it is mounted automatically at boot.

Add the following line (the mount point matches the OSD id on that host: ceph-0 on node2 and ceph-1 on node3, as shown in the activate output above):

/dev/sdb1              /var/lib/ceph/osd/ceph-0                 xfs    defaults        0 0

root@node2:~# vi /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
/dev/mapper/ubuntu1404--vg-root /               ext4    errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
UUID=54b04b5b-52ce-485f-95ef-70c667cfd8b3 /boot           ext2    defaults        0       2
/dev/mapper/ubuntu1404--vg-swap_1 none            swap    sw              0       0
/dev/sdb1               /var/lib/ceph/osd/ceph-0                   xfs    defaults        0 0
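As the comment in fstab itself suggests, referencing the filesystem by UUID is more robust than /dev/sdb1, since device names can change when disks are added (a sketch; the UUID placeholder must be replaced with whatever blkid reports on your host):

blkid /dev/sdb1
# then use the reported UUID in fstab, e.g.:
# UUID=<uuid-from-blkid>  /var/lib/ceph/osd/ceph-0  xfs  defaults  0 0
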
Configure the MDS service

Add a metadata server on node1

Run ceph-deploy mds create node1

root@client:~/ceph# ceph-deploy mds create node1
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy mds create node1
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts node1:node1
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: Ubuntu 14.04 trusty
[ceph_deploy.mds][DEBUG ] remote host will use upstart
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][DEBUG ] create path if it doesn't exist
[node1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node1 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-node1/keyring
[node1][INFO  ] Running command: initctl emit ceph-mds cluster=ceph id=node1
Unhandled exception in thread started by
sys.excepthook is missing
lost sys.stderr

Check the mds process

root@node1:~# ps aux|grep mds
root      2025  0.1  1.6 165288 16280 ?        Ssl  15:34   0:00 /usr/bin/ceph-mds --cluster=ceph -i node1 -f
root      2046  0.0  0.2  11748  2216 pts/0    S+   15:36   0:00 grep --color=auto mds

Create a Ceph file system

Note: the ceph fs new and ceph fs ls subcommands were introduced in releases after 0.80, which is why this Firefly cluster rejects them below as invalid commands; on Firefly the MDS simply goes active against the cluster's default data/metadata pools, as ceph mds stat confirms.

root@node1:~# ceph osd pool create cephfs_data 10
pool 'cephfs_data' created
root@node1:~# ceph osd pool create cephfs_metadata 10
pool 'cephfs_metadata' created
root@node1:~# ceph fs new cephfs cephfs_metadata cephfs_data
no valid command found; 10 closest matches:
fsid
Error EINVAL: invalid command
root@node1:~# ceph fs ls
no valid command found; 10 closest matches:
fsid
Error EINVAL: invalid command
root@node1:~# ceph mds stat
e4: 1/1/1 up {0=node1=up:active}
root@node1:~#
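Once ceph mds stat reports up:active, the file system can be mounted with the kernel client (a sketch; the secret is the "key = ..." value from ceph.client.admin.keyring and the mount point is arbitrary):

mkdir -p /mnt/cephfs
mount -t ceph 192.168.3.8:6789:/ /mnt/cephfs -o name=admin,secret=<key-from-keyring>
df -h /mnt/cephfs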


Distribute the configuration files

Run ceph-deploy admin node1 node2 node3

root@client:~/ceph# ceph-deploy admin node1 node2 node3
[ceph_deploy.cli][INFO  ] Invoked (1.4.0): /usr/bin/ceph-deploy admin node1 node2 node3
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node1
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node2
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] get remote short hostname
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node3
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] get remote short hostname
[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
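If ceph commands run as a non-root user on a node later fail with a permission error on the admin keyring, the pushed keyring may simply not be readable by that user (a sketch; everything in this walkthrough runs as root, so it is usually unnecessary here):

chmod +r /etc/ceph/ceph.client.admin.keyring
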
Check the cluster health

root@node1:~# ceph health
HEALTH_OK

Using RBD

Create a new Ceph pool:
rados mkpool test
Create an image in the pool (the -m address is the mon node):
rbd create test-1 --size 4096 -p test -m 192.168.3.8
Map the image to a kernel block device:
rbd map test-1 -p test --name client.admin
Show the rbd mappings:
rbd showmapped

root@node1:~# rados mkpool test
successfully created pool test
root@node1:~# lsmod |grep rbd
root@node1:~# rbd create test-1 --size 4096 -p test -m 192.168.3.8
root@node1:~# rbd map test-1 -p test --name client.admin
root@node1:~# rbd showmapped
id pool image  snap device
0  test test-1 -    /dev/rbd0

Format the newly created RBD block device

root@node1:~# mkfs.ext4 -m0 /dev/rbd0
mke2fs 1.42.9 (4-Feb-2014)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
262144 inodes, 1048576 blocks
0 blocks (0.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Mount it

root@node1:~# mount /dev/rbd0 /mnt

root@node1:~# df -TH
Filesystem                      Type      Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu1404--vg-root ext4      6.0G  1.4G  4.3G  24% /
none                            tmpfs     4.1k     0  4.1k   0% /sys/fs/cgroup
udev                            devtmpfs  510M  4.1k  510M   1% /dev
tmpfs                           tmpfs     105M  472k  104M   1% /run
none                            tmpfs     5.3M     0  5.3M   0% /run/lock
none                            tmpfs     521M     0  521M   0% /run/shm
none                            tmpfs     105M     0  105M   0% /run/user
/dev/sda1                       ext2      247M   40M  195M  17% /boot
/dev/rbd0                       ext4      4.1G  8.4M  4.1G   1% /mnt
The /mnt directory is now ready to use.
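To tear the test mapping down again later (a sketch; the pool-removal syntax deliberately requires the pool name twice as a safety check):

umount /mnt
rbd unmap /dev/rbd0
rbd rm test-1 -p test
rados rmpool test test --yes-i-really-really-mean-it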