

7. [docker] Building Orchestration (Docker Swarm): Manager and Worker Node Setup


1. Test Setup
- Three VMs, configured identically, serving as Manager + Worker nodes

## Three hosts with the hostnames docker1, docker2, docker3
[root@docker1 bin]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
 
192.168.6.134   docker1.plo.plo docker1
192.168.6.135   docker2.plo.plo docker2
192.168.6.136   docker3.plo.plo docker3
 
 
## Three identical CentOS 7 hosts
 
[root@docker1 bin]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)
[root@docker1 bin]# uname -a
Linux docker1 3.10.0-1127.el7.x86_64 #1 SMP Tue Mar 31 23:36:51 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
 
 
## docker 1.13 on all three VMs, identically configured
## Installed via yum install docker
[root@docker1 bin]# docker version
Client:
Version: 1.13.1
API version: 1.26
Package version: docker-1.13.1-162.git64e9980.el7.centos.x86_64
Go version: go1.10.3
Git commit: 64e9980/1.13.1
Built: Wed Jul 1 14:56:42 2020
OS/Arch: linux/amd64
 
Server:
Version: 1.13.1
API version: 1.26 (minimum version 1.12)
Package version: docker-1.13.1-162.git64e9980.el7.centos.x86_64
Go version: go1.10.3
Git commit: 64e9980/1.13.1
Built: Wed Jul 1 14:56:42 2020
OS/Arch: linux/amd64
Experimental: false
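Before initializing the swarm, every node must be able to reach the others on the swarm ports. A minimal firewalld sketch for this CentOS 7 setup (port numbers are the documented swarm defaults; adjust if your hosts use a different firewall):

```shell
## Run on every node: open the swarm ports in firewalld
## 2377/tcp: cluster management, 7946/tcp+udp: node-to-node gossip, 4789/udp: overlay network (VXLAN)
firewall-cmd --permanent --add-port=2377/tcp
firewall-cmd --permanent --add-port=7946/tcp
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp
firewall-cmd --reload
```

If these ports are blocked, `docker swarm join` will time out rather than fail with a clear error, so it is worth checking up front.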

 

2. Configuring Manager and Worker Nodes

## Initialize the swarm on docker1, making it a manager node
## The manager's advertise address is set to docker1's IP
## Note: swarm mode ships with the docker engine itself, so no additional installation is needed.
[root@docker1 bin]# docker swarm init --advertise-addr 192.168.6.134
Swarm initialized: current node (b7mvawujkbh9wv11rypk5z6k3) is now a manager.
 
To add a worker to this swarm, run the following command:
 
docker swarm join \
--token SWMTKN-1-329iw3wouyvp3rlxvxwkkv5uu5hemia8y8p8y1ppybsb7atctm-bqiopme5khbiy4xjovx1f0vzb \
192.168.6.134:2377
 
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
 
 
## Initialization generates a token for adding Worker nodes, along with a ready-to-paste join command
## Add docker2 and docker3 as worker nodes
[root@docker2 ~]# docker swarm join \
>     --token SWMTKN-1-329iw3wouyvp3rlxvxwkkv5uu5hemia8y8p8y1ppybsb7atctm-bqiopme5khbiy4xjovx1f0vzb \
>     192.168.6.134:2377
This node joined a swarm as a worker.
 
 
[root@docker3 ~]# docker swarm join \
>     --token SWMTKN-1-329iw3wouyvp3rlxvxwkkv5uu5hemia8y8p8y1ppybsb7atctm-bqiopme5khbiy4xjovx1f0vzb \
>     192.168.6.134:2377
This node joined a swarm as a worker.
 
 
## Verify with docker node ls, which lists the nodes currently joined to the cluster.
## At this point only docker1 is a manager node, and it has been elected "Leader".
## Manager nodes show a value in the "MANAGER STATUS" column; if it is empty, the node is not a manager.
## The "AVAILABILITY" column shows whether the node is active, i.e. eligible to receive tasks as a worker.
[root@docker1 bin]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
b7mvawujkbh9wv11rypk5z6k3 *  docker1   Ready   Active        Leader
uh0oje50u4ye5wefng70g2b9d    docker2   Ready   Active       
z0x4c075acmmc8h1sfhwqy3ag    docker3   Ready   Active       
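The AVAILABILITY column can be changed per node: draining a node tells the scheduler to stop placing tasks on it, which is useful before maintenance. A sketch using the real `docker node update` flags (run on a manager):

```shell
## Drain docker2 so the scheduler stops assigning tasks to it
docker node update --availability drain docker2

## Put docker2 back into service
docker node update --availability active docker2
```

After draining, `docker node ls` shows the node's AVAILABILITY as Drain while its STATUS stays Ready.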
 
## How to re-check the tokens later, when adding more worker or manager nodes
## To join a node as a manager, copy-paste the command below.
[root@docker1 bin]# docker swarm join-token manager
To add a manager to this swarm, run the following command:
 
    docker swarm join \
    --token SWMTKN-1-329iw3wouyvp3rlxvxwkkv5uu5hemia8y8p8y1ppybsb7atctm-7fkcbdf0sr79hned0ozhnhwfx \
    192.168.6.134:2377
 
## Check the worker node token
[root@docker1 bin]# docker swarm join-token worker
To add a worker to this swarm, run the following command:
 
    docker swarm join \
    --token SWMTKN-1-329iw3wouyvp3rlxvxwkkv5uu5hemia8y8p8y1ppybsb7atctm-bqiopme5khbiy4xjovx1f0vzb \
    192.168.6.134:2377
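Because these tokens are the only credential needed to join the cluster, a leaked token should be invalidated. The join-token subcommand can rotate it without disturbing nodes that have already joined; a sketch (run on a manager):

```shell
## Rotate the worker join token; the old token stops being accepted
docker swarm join-token --rotate worker

## Print only the current token, e.g. for scripting
docker swarm join-token -q worker
```

The manager token can be rotated the same way by substituting `manager` for `worker`.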

 

3. Checking Swarm Nodes

## Check the Manager Node's info
[root@docker1 bin]# docker info
-- snip --
Swarm: active
 NodeID: b7mvawujkbh9wv11rypk5z6k3
 Is Manager: true ## configured as a Manager node
 ClusterID: zn7zyn9fsl1obxkblkks0zj5u
 Managers: 1 ## one Manager node
 Nodes: 3 ## three nodes in total (1 manager + 2 workers)
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 192.168.6.134
 Manager Addresses:
  192.168.6.134:2377 ## Manager Node address
-- snip --
 
## Check a Worker Node's info
[root@docker2 ~]# docker info
-- snip --
Swarm: active
 NodeID: uh0oje50u4ye5wefng70g2b9d
 Is Manager: false  ## not a Manager node
 Node Address: 192.168.6.135
 Manager Addresses:
  192.168.6.134:2377
-- snip --
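Instead of scanning the full `docker info` output, individual fields can be queried with `docker node inspect` and a Go template; `self` refers to the node the command runs on. A sketch (run on a manager, since only managers hold the cluster state):

```shell
## Is this node currently the Raft leader?
docker node inspect self --format '{{ .ManagerStatus.Leader }}'

## Availability and state of a specific node
docker node inspect docker2 --format '{{ .Spec.Availability }} {{ .Status.State }}'
```

The `.ManagerStatus` section only exists for manager nodes, so the first query is meaningful only against a manager.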

 

4. Promoting a Worker Node to Manager

## Running the manager join token on docker2, which is already a Worker node, produces the following error.
## It means the node is already part of a swarm; to join a different swarm it must leave the current one first.
[root@docker2 ~]# docker swarm join \
>     --token SWMTKN-1-329iw3wouyvp3rlxvxwkkv5uu5hemia8y8p8y1ppybsb7atctm-7fkcbdf0sr79hned0ozhnhwfx \
>     192.168.6.134:2377
Error response from daemon: This node is already part of a swarm. Use "docker swarm leave" to leave this swarm and join another one.
 
 
## A node's role is fixed as Worker or Manager when it first joins; promoting a Worker to Manager requires the promote command.
## The Manager -> Worker change is done with the demote command, run from a manager node (here, the Leader).
 
 
## docker3 leaves the swarm
[root@docker3 ~]# docker swarm leave
Node left the swarm.
 
 
## Check the node list from docker1
## docker3 is not removed from the swarm; instead its STATUS changes to Down.
[root@docker1 bin]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
b7mvawujkbh9wv11rypk5z6k3 *  docker1   Ready   Active        Leader
uh0oje50u4ye5wefng70g2b9d    docker2   Ready   Active       
z0x4c075acmmc8h1sfhwqy3ag    docker3   Down    Active       
 
## Rejoin docker3, this time as a Manager
[root@docker3 ~]#  docker swarm join     --token SWMTKN-1-329iw3wouyvp3rlxvxwkkv5uu5hemia8y8p8y1ppybsb7atctm-7fkcbdf0sr79hned0ozhnhwfx     192.168.6.134:2377
This node joined a swarm as a manager.
 
 
## Check the node list from docker1
## A new ID is created and the MANAGER STATUS column shows Reachable.
## This means the node is not the Leader, but is able to act as a Manager.
[root@docker1 bin]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
b7mvawujkbh9wv11rypk5z6k3 *  docker1   Ready   Active        Leader
miuah480u64gb1ppxzuiullun    docker3   Ready   Active        Reachable
uh0oje50u4ye5wefng70g2b9d    docker2   Ready   Active       
z0x4c075acmmc8h1sfhwqy3ag    docker3   Down    Active       
 
 
## Now let's promote docker2 to Manager
## Run on the Leader
[root@docker1 bin]# docker node promote docker2
Node docker2 promoted to a manager in the swarm.
[root@docker1 bin]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
b7mvawujkbh9wv11rypk5z6k3 *  docker1   Ready   Active        Leader
miuah480u64gb1ppxzuiullun    docker3   Ready   Active        Reachable
uh0oje50u4ye5wefng70g2b9d    docker2   Ready   Active        Reachable
z0x4c075acmmc8h1sfhwqy3ag    docker3   Down    Active       
 
## Check Info on docker2
[root@docker2 ~]# docker info
-- snip --
Swarm: active
 NodeID: uh0oje50u4ye5wefng70g2b9d
 Is Manager: true ## promoted to Manager
 ClusterID: zn7zyn9fsl1obxkblkks0zj5u
 Managers: 3
 Nodes: 4 ## includes the stale entry for docker3, which left the swarm
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 192.168.6.135
 Manager Addresses:
  0.0.0.0:2377
  192.168.6.134:2377
  192.168.6.135:2377
  192.168.6.136:2377
 
## Demote docker2 back to Worker and delete docker3's Down record
[root@docker1 bin]# docker node demote docker2
Manager docker2 demoted in the swarm.
 
[root@docker1 bin]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
b7mvawujkbh9wv11rypk5z6k3 *  docker1   Ready   Active        Leader
miuah480u64gb1ppxzuiullun    docker3   Ready   Active        Reachable
uh0oje50u4ye5wefng70g2b9d    docker2   Ready   Active       
z0x4c075acmmc8h1sfhwqy3ag    docker3   Down    Active       
[root@docker1 bin]# docker node rm z0x4c075acmmc8h1sfhwqy3ag
z0x4c075acmmc8h1sfhwqy3ag
 
[root@docker1 bin]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
b7mvawujkbh9wv11rypk5z6k3 *  docker1   Ready   Active        Leader
miuah480u64gb1ppxzuiullun    docker3   Ready   Active        Reachable
uh0oje50u4ye5wefng70g2b9d    docker2   Ready   Active       
 
 
## An odd number of Manager nodes is recommended.
## The cluster can only make decisions while a quorum, a strict majority of managers, is reachable; 3 managers is the smallest count that can tolerate a failure.
## As with zookeeper, going from 3 managers down to 2 leaves the managers without enough peers to establish a trustworthy majority on the next failure.
## With 2 managers, you might expect the survivor to become Leader when the Leader dies, but the election fails (1 of 2 is not a majority) and the swarm loses quorum.
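The quorum rule behind these notes: Raft needs a strict majority of managers, so a cluster of N managers has a quorum of floor(N/2)+1 and tolerates N minus quorum failures. A quick sketch of the arithmetic:

```shell
## quorum = floor(n/2) + 1; tolerated failures = n - quorum
for n in 1 2 3 4 5 7; do
  quorum=$(( n / 2 + 1 ))
  tolerance=$(( n - quorum ))
  echo "$n managers: quorum=$quorum, tolerates $tolerance failure(s)"
done
```

Note that 2 managers tolerate zero failures (quorum is 2 of 2), which is exactly the broken-election case above, and 4 managers tolerate no more failures than 3 do, which is why odd counts are recommended.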
 
 
## Promote docker2 to Manager again and look at Info
[root@docker1 bin]# docker node promote docker2
Node docker2 promoted to a manager in the swarm.
[root@docker1 bin]# docker node ls
ID                           HOSTNAME  STATUS  AVAILABILITY  MANAGER STATUS
b7mvawujkbh9wv11rypk5z6k3 *  docker1   Ready   Active        Leader
miuah480u64gb1ppxzuiullun    docker3   Ready   Active        Reachable
uh0oje50u4ye5wefng70g2b9d    docker2   Ready   Active        Reachable
  
[root@docker2 ~]# docker info
-- snip --
Swarm: active
 NodeID: uh0oje50u4ye5wefng70g2b9d
 Is Manager: true
 ClusterID: zn7zyn9fsl1obxkblkks0zj5u
 Managers: 3
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 192.168.6.135
 Manager Addresses:
  192.168.6.134:2377
  192.168.6.135:2377
  192.168.6.136:2377
