Installing Kubernetes 1.0.3 on CentOS 7.2
lieee
As of September 1, 2015, CentOS has added Kubernetes to its official repositories, so installing Kubernetes is now much more convenient.
Component versions: kubernetes-1.0.3, docker-1.8.2, flannel-0.5.3, etcd-2.1.1.

The deployment roles are as follows: three virtual machines running 64-bit CentOS 7.2:

master:  192.168.32.15
minion1: 192.168.32.16
minion2: 192.168.32.17
1. Preparation
On every machine, disable the firewall to avoid iptables conflicts with Docker:
systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:
vim /etc/selinux/config
#SELINUX=enforcing
SELINUX=disabled

Install Docker on the two minions:
yum -y install docker
yum -y update
reboot

On CentOS, Docker uses devicemapper as its storage backend. A fresh Docker install falls back to loopback devices, which makes the docker service fail to start, so run the update (and reboot) before starting it.
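Whether Docker has ended up on loopback devices shows up in `docker info`; a minimal check, sketched here against an assumed sample of that output (on a real minion, pipe `docker info` itself into the grep):

```shell
# Assumed sample of `docker info` output on a fresh CentOS install;
# replace with `docker info` on a real minion.
SAMPLE='Storage Driver: devicemapper
 Data file: /dev/loop0
 Metadata file: /dev/loop1'

# /dev/loop* backing files indicate the problematic loopback setup
if echo "$SAMPLE" | grep -q '/dev/loop'; then
  echo "devicemapper is on loopback - run yum update and reboot first"
fi
```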
2. Installing and configuring the master node
Install etcd and kubernetes-master:
yum -y install etcd kubernetes-master

Edit the etcd configuration file:
# egrep -v '^#' /etc/etcd/etcd.conf
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.32.15:2379"

Edit the kube-master configuration files:
# egrep -v '^#' /etc/kubernetes/apiserver | grep -v '^$'
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://192.168.32.15:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
KUBE_API_ARGS=""
# egrep -v '^#' /etc/kubernetes/controller-manager | grep -v '^$'
KUBE_CONTROLLER_MANAGER_ARGS="--node-monitor-grace-period=10s --pod-eviction-timeout=10s"
[root@localhost ~]# egrep -v '^#' /etc/kubernetes/config | egrep -v '^$'
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.32.15:8080"

Start the services:
systemctl enable etcd kube-apiserver kube-scheduler kube-controller-manager
systemctl start etcd kube-apiserver kube-scheduler kube-controller-manager
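The next step stores a JSON network definition in etcd, and a malformed value there breaks flannel on every minion, so it can be worth validating the string locally before writing it. A sketch (assumes a python interpreter is present, as on a stock CentOS 7 system):

```shell
FLANNEL_CONFIG='{"Network":"172.17.0.0/16"}'

# Use whichever python interpreter is installed (CentOS 7 ships python 2.7)
PY=$(command -v python || command -v python3)

# Fail fast if the JSON is malformed, before it ever reaches etcd
if echo "$FLANNEL_CONFIG" | "$PY" -c 'import json,sys; json.load(sys.stdin)'; then
  echo "flannel config is valid JSON"
fi
```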
Define the flannel network configuration in etcd; this configuration will be pushed to the flannel service on each minion:
etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'

3. Installing and configuring the minion nodes
yum -y install kubernetes-node flannel

Edit the kube-node and flannel configuration files:
# egrep -v '^#' /etc/kubernetes/config | grep -v '^$'
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.32.15:8080"
# egrep -v '^#' /etc/kubernetes/kubelet | grep -v '^$'
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname_override=192.168.32.16"
KUBELET_API_SERVER="--api_servers=http://192.168.32.15:8080"
KUBELET_ARGS="--pod-infra-container-image=kubernetes/pause"

Point flannel at the etcd service by editing /etc/sysconfig/flanneld:
FLANNEL_ETCD="http://192.168.32.15:2379"
FLANNEL_ETCD_KEY="/coreos.com/network"

Start the services:
systemctl enable flanneld kubelet kube-proxy
systemctl restart flanneld docker
systemctl start kubelet kube-proxy

On each minion you can now see two network interfaces, docker0 and flannel0; their IP addresses differ from machine to machine:
#minion1
4: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none
    inet 172.17.98.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:9a:01:ca:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.98.1/24 scope global docker0
       valid_lft forever preferred_lft forever
#minion2
4: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc pfifo_fast state UNKNOWN qlen 500
    link/none
    inet 172.17.67.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
5: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:25:be:ba:64 brd ff:ff:ff:ff:ff:ff
    inet 172.17.67.1/24 scope global docker0
       valid_lft forever preferred_lft forever

4. Checking status
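A first check: each minion's docker0 address should fall inside the 172.17.0.0/16 overlay range pushed to etcd earlier, with flannel handing each host its own /24. A pure-shell sanity check of the addresses shown above (the helper names `ip2int` and `in_network` are my own, not part of any tool):

```shell
# Convert a dotted-quad IPv4 address to a 32-bit integer
ip2int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# in_network <ip> <network> <prefixlen>: succeed if <ip> lies inside <network>
in_network() {
  local ip net mask
  ip=$(ip2int "$1")
  net=$(ip2int "$2")
  mask=$(( (0xffffffff << (32 - $3)) & 0xffffffff ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

in_network 172.17.98.1 172.17.0.0 16 && echo "minion1 docker0 inside overlay"
in_network 172.17.67.1 172.17.0.0 16 && echo "minion2 docker0 inside overlay"
```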
Log in to the master and confirm the minions' status:
[root@master ~]# kubectl get nodes
NAME            LABELS                                 STATUS
192.168.32.16   kubernetes.io/hostname=192.168.32.16   Ready
192.168.32.17   kubernetes.io/hostname=192.168.32.17   Ready

The Kubernetes cluster is now configured. The next step is working with pods, which I will try out in a follow-up post.
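As a preview of that next step, a minimal pod definition could look like the following; this is only a sketch (the name `nginx-test` and the nginx image are my own choices), using the v1 API that Kubernetes 1.0 serves:

```yaml
# nginx-pod.yaml - a minimal single-container pod
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
```

It would be created from the master with `kubectl create -f nginx-pod.yaml` and checked with `kubectl get pods`.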