
Porting Manual

Software Versions

The versions of the software used in this project:
JVM : 11
Spring Boot : 2.7.10
InfluxDB : 2.7
Kafka : 3.2
Gradle : 7.6
Portainer : 2.16.2
NginxProxyManager : 2.29.2
Terraform : 1.4.6
React : 18.2.0
yarn : 1.22.19

Deployment Environment

The code used to build the server configuration is collected below.

Server Configuration (AWS)

DataInstance : x4 (t3.medium)

InfluxDB 8086:8086
SpringbootData1 8084:8084
SpringbootData2 8085:8085

KafkaInstance : x5 (c5.xlarge x3, t3.xlarge x2)

kafka 9092:9092
zookeeper (client communication) 2181:2181
zookeeper (inter-node communication) 2888:2888
zookeeper (leader election) 3888:3888

MainInstance : x1 (t3.xlarge)

api 8091:8091
dataDivision 9999:9999, 9998:9998, 9997:9997, 8090:8090
react 3000:3000

JenkinsInstance : x1 (t3.large)

nginx_proxy_manager 80:80, 443:443, 8000:81
portainer 9000:9000
grafana 7000:7000
prometheus 9090:9090
jenkins 8080:8080

InfluxInstance : x1 (c5.xlarge)

influxDB 8086:8086
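Once the instances are up, a quick way to confirm that each mapped port is actually reachable is a TCP check such as the sketch below; the host addresses are placeholders for the real instance IPs.

# nc -z exits 0 when the TCP port accepts connections (hosts are placeholders).
nc -zv <influx-instance-ip> 8086    # InfluxDB
nc -zv <kafka-instance-ip>  9092    # Kafka broker
nc -zv <main-instance-ip>   8091    # API
nc -zv <main-instance-ip>   3000    # React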

Server Provisioning (Terraform)

main.tf

# AWS provider and region
provider "aws" {
  region = "ap-northeast-2"
}

# Local variables
locals {
  data_instance_count  = 6
  kafka_instance_count = 3
  ami_id               = "ami-04cebc8d6c4f297a3"
  pem_key              = "semsekey2"
  common_tags = {
    Terraform = "true"
  }
}

# VPC module
# Builds the VPC from the module at the given path.
module "vpc" {
  source = "./modules/vpc"
}

# Security group module
# Creates the security group from the module at the given path.
module "security_group" {
  source = "./modules/security_group"
  vpc_id = module.vpc.vpc_id
}

# Data-generation instance module
# Instance count, image, key name, subnet ID, and security group ID are passed in.
module "data_instances" {
  source               = "./modules/ec2_instances"
  instance_count       = local.data_instance_count
  ami_id               = local.ami_id
  key_name             = local.pem_key
  subnet_id            = module.vpc.subnet_id
  security_group_id    = module.security_group.security_group_id
  instance_name_prefix = "data-instance"
}

# Kafka instance module
# Instance count, image, key name, subnet ID, and security group ID are passed in.
module "kafka_instances" {
  source               = "./modules/ec2_instances"
  instance_count       = local.kafka_instance_count
  ami_id               = local.ami_id
  key_name             = local.pem_key
  subnet_id            = module.vpc.subnet_id
  security_group_id    = module.security_group.security_group_id
  instance_name_prefix = "kafka-instance"
}

# Jenkins instance
# Jenkins + Ansible handle deployment to the 9 servers.
resource "aws_instance" "jenkins" {
  ami           = local.ami_id
  instance_type = "t3.large"
  key_name      = local.pem_key
  subnet_id     = module.vpc.subnet_id
  user_data     = ""

  root_block_device {
    volume_type = "gp3"
    volume_size = 200
  }

  vpc_security_group_ids = [module.security_group.security_group_id]

  tags = {
    Name = "jenkins-instance"
  }
}

# Data division server
# Used for DataDivision.
resource "aws_instance" "DataDivision" {
  ami           = local.ami_id
  instance_type = "t3.large"
  key_name      = local.pem_key
  subnet_id     = module.vpc.subnet_id
  user_data     = ""

  root_block_device {
    volume_type = "gp3"
    volume_size = 200
  }

  vpc_security_group_ids = [module.security_group.security_group_id]

  tags = {
    Name = "data_division-instance"
  }
}

# Elastic IP for Jenkins instance
resource "aws_eip" "jenkins" {
  instance = aws_instance.jenkins.id
  tags = {
    Name = "jenkins-instance-eip"
  }
}

# Elastic IP for DataDivision instance
resource "aws_eip" "DataDivision" {
  instance = aws_instance.DataDivision.id
  tags = {
    Name = "data_division-instance-eip"
  }
}

output "jenkins_instance_public_ip" {
  value = aws_instance.jenkins.public_ip
}
main.tf (module definitions omitted; they are shown below)
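A typical Terraform workflow for this configuration, assuming AWS credentials are already configured for ap-northeast-2 and the referenced modules exist under ./modules, is sketched below.

# Run from the directory that contains main.tf.
terraform init                                 # download the AWS provider, load ./modules
terraform plan                                 # preview the VPC, security group, and instances
terraform apply                                # create the resources (asks for confirmation)
terraform output jenkins_instance_public_ip    # public IP exported by the output block above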

ec2_instances

# Input variables
# Callers of this module provide values for these variables.
variable "instance_count" {
  type = number
}
variable "ami_id" {
  type = string
}
variable "key_name" {
  type = string
}
variable "subnet_id" {
  type = string
}
variable "security_group_id" {
  type = string
}
variable "instance_name_prefix" {
  type = string
}

# Instance creation
# A for_each loop creates the instances with the AMI ID, instance type, key name, and subnet ID.
resource "aws_instance" "this" {
  for_each = toset([for idx in range(var.instance_count) : tostring(idx)])

  ami                    = var.ami_id
  instance_type          = "t3.medium"
  key_name               = var.key_name
  subnet_id              = var.subnet_id
  vpc_security_group_ids = [var.security_group_id]
  user_data              = ""

  tags = {
    Name      = "${var.instance_name_prefix}-${each.key + 1}"
    Terraform = "true"
  }

  root_block_device {
    volume_type = "gp3"
    volume_size = 50
  }
}

vpc

# VPC
# The CIDR block "10.0.0.0/16" covers the IP range 10.0.0.0 to 10.0.255.255.
resource "aws_vpc" "this" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
  tags = {
    Name = "ssafy-semse-vpc"
  }
}

# Subnet
# The CIDR block "10.0.1.0/24" covers the IP range 10.0.1.0 to 10.0.1.255.
resource "aws_subnet" "this" {
  vpc_id                  = aws_vpc.this.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true # assign a public IP automatically
  tags = {
    Name = "ssafy-semse-subnet"
  }
}

# Internet Gateway
resource "aws_internet_gateway" "this" {
  vpc_id = aws_vpc.this.id
  tags = {
    Name = "ssafy-semse-igw"
  }
}

# Route table
resource "aws_route_table" "this" {
  vpc_id = aws_vpc.this.id
  tags = {
    Name = "ssafy-semse-route-table"
  }
}

# Route for internet-bound traffic
resource "aws_route" "internet_access" {
  route_table_id         = aws_route_table.this.id
  destination_cidr_block = "0.0.0.0/0"
  gateway_id             = aws_internet_gateway.this.id
}

# Associate the route table with the subnet
resource "aws_route_table_association" "this" {
  subnet_id      = aws_subnet.this.id
  route_table_id = aws_route_table.this.id
}

# Output variables
# Exported so other configurations or modules can reference them.
output "vpc_id" {
  value = aws_vpc.this.id
}
output "subnet_id" {
  value = aws_subnet.this.id
}

SecurityGroup

# Variable definition
variable "vpc_id" {
  type = string
}

# Security group
resource "aws_security_group" "this" {
  name        = "ssafy-semse"
  description = "Security group for ssafy-semse instances"
  vpc_id      = var.vpc_id
}

# Allow SSH
resource "aws_security_group_rule" "ssh_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 22
  to_port           = 22
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Allow 80
resource "aws_security_group_rule" "nginx_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 80
  to_port           = 80
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Allow 443
resource "aws_security_group_rule" "https_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 443
  to_port           = 443
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Allow 3000 (React)
resource "aws_security_group_rule" "react_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 3000
  to_port           = 3000
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Grafana inbound
resource "aws_security_group_rule" "grafana_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 7000
  to_port           = 7000
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Allow Nginx Proxy Manager admin UI
resource "aws_security_group_rule" "proxyManager_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 8000
  to_port           = 8000
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Allow cAdvisor
resource "aws_security_group_rule" "cadvisor_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 8080
  to_port           = 8080
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Allow Jenkins
resource "aws_security_group_rule" "jenkins_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 8100
  to_port           = 8100
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Allow InfluxDB
resource "aws_security_group_rule" "influxdb_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 8086
  to_port           = 8086
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Data division
resource "aws_security_group_rule" "data_division" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 8090
  to_port           = 8090
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Data API
resource "aws_security_group_rule" "data_api" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 8091
  to_port           = 8091
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Allow Portainer
resource "aws_security_group_rule" "portainer_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 9000
  to_port           = 9000
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Prometheus inbound
resource "aws_security_group_rule" "prometheus_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 9090
  to_port           = 9090
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Kafka inbound (test)
resource "aws_security_group_rule" "kafka_text_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 9092
  to_port           = 9092
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Docker config connection
resource "aws_security_group_rule" "docker_config_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 9323
  to_port           = 9323
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Kafka JMX communication
resource "aws_security_group_rule" "jmx_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 9404
  to_port           = 9404
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# ZooKeeper communication with Kafka clients
resource "aws_security_group_rule" "zookeeper_inbound_2181" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 2181
  to_port           = 2181
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Communication between ZooKeeper nodes
resource "aws_security_group_rule" "zookeeper_inbound_2888" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 2888
  to_port           = 2888
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Port used for leader election
resource "aws_security_group_rule" "zookeeper_inbound_3888" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 3888
  to_port           = 3888
  protocol          = "tcp"
  cidr_blocks       = ["0.0.0.0/0"]
}

# Docker API (Portainer)
# Allow tcp traffic for 2375.
resource "aws_security_group_rule" "docker_api_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = 2375
  to_port           = 2375
  protocol          = "tcp"
  cidr_blocks       = ["10.0.1.0/24", "43.201.55.255/32", "3.36.77.155/32"]
}

# Allow ICMP for ping tests
resource "aws_security_group_rule" "icmp_inbound" {
  security_group_id = aws_security_group.this.id
  type              = "ingress"
  from_port         = -1
  to_port           = -1
  protocol          = "icmp"
  cidr_blocks       = ["10.0.1.0/24"]
}

# Allow all outbound traffic
resource "aws_security_group_rule" "all_outbound" {
  security_group_id = aws_security_group.this.id
  type              = "egress"
  from_port         = 0
  to_port           = 0
  protocol          = "-1"
  cidr_blocks       = ["0.0.0.0/0"]
}

output "security_group_id" {
  value = aws_security_group.this.id
}
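As a sanity check after terraform apply, the ingress rules created above can be listed with the AWS CLI; this is only a verification sketch and assumes the CLI is configured for the same account and region.

# List the inbound port ranges opened on the ssafy-semse security group.
aws ec2 describe-security-groups \
  --filters "Name=group-name,Values=ssafy-semse" \
  --query "SecurityGroups[0].IpPermissions[].{From:FromPort,To:ToPort,Proto:IpProtocol}" \
  --output table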

Server Environment Setup

Server Firewall Setup

# Install ufw
sudo apt-get install ufw

# Check ufw status
sudo ufw status verbose
sudo ufw status

# Allow SSH (do this BEFORE enabling ufw, or you will lose the session!)
sudo ufw allow 22

# Enable the firewall
sudo ufw enable

# ...
sudo ufw status
ufw setup (applied only to externally exposed servers)
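Besides SSH, each externally exposed instance also needs its own service ports opened in ufw. The commands below are an illustrative example for the Jenkins instance, following the port layout above; adjust the list per server.

# Example ufw rules for the Jenkins instance (adjust per server).
sudo ufw allow 80/tcp      # nginx_proxy_manager (HTTP)
sudo ufw allow 443/tcp     # nginx_proxy_manager (HTTPS)
sudo ufw allow 8000/tcp    # proxy manager admin UI
sudo ufw allow 8080/tcp    # jenkins
sudo ufw allow 9000/tcp    # portainer
sudo ufw status numbered   # verify the resulting rule set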

Full Server Environment Setup

# A playbook can contain one or more plays.
- name: Install Java 11
  hosts: main_server   # target hosts for this play (defined in the inventory file)
  become: yes          # escalate privileges where needed (sudo)
  tasks:               # list of tasks to run for this play
    - name: apt update
      apt:
        update_cache: yes

    - name: Java OpenJDK 11 Install
      apt:
        name: openjdk-11-jdk
        state: present

    - name: Java version check
      ansible.builtin.command:
        _raw_params: java --version

- name: Install and configure Docker & Docker Compose on the remote hosts
  hosts: main_server
  become: yes
  tasks:
    - name: Install prerequisite packages for Docker
      apt:
        name:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg-agent
          - software-properties-common
        state: present

    - name: Add Docker's official GPG key
      ansible.builtin.apt_key:
        url: https://download.docker.com/linux/ubuntu/gpg
        state: present

    - name: Add the Docker APT repository
      ansible.builtin.apt_repository:
        repo: deb [arch=amd64] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable
        state: present

    - name: Install Docker
      ansible.builtin.apt:
        name: docker-ce
        state: present
        update_cache: yes

    - name: Enable and start the Docker service
      ansible.builtin.service:
        name: docker
        state: started
        enabled: yes

    - name: Update Docker service configuration file
      ansible.builtin.lineinfile:
        path: /lib/systemd/system/docker.service
        regexp: "^ExecStart="
        line: "ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375"
      become: yes

    - name: Reload the systemd daemon
      ansible.builtin.systemd:
        daemon_reload: yes
      become: yes

    - name: Restart Docker
      ansible.builtin.service:
        name: docker
        state: restarted
      become: yes

    - name: Build the Docker Compose download URL
      ansible.builtin.shell: echo "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)"
      register: docker_compose_url

    - name: Install Docker Compose
      ansible.builtin.get_url:
        url: "{{ docker_compose_url.stdout }}"
        dest: /usr/local/bin/docker-compose
        mode: "0755"

    - name: Check docker-compose version
      ansible.builtin.command:
        cmd: docker-compose --version

- name: Additional Docker configuration
  hosts: main_server
  become: yes
  tasks:
    - name: Check whether the Docker network exists
      ansible.builtin.command:
        cmd: docker network ls --filter name=br_app
      register: br_app_network
      changed_when: false

    - name: Create the Docker network
      ansible.builtin.command:
        cmd: docker network create br_app
      when: "'br_app' not in br_app_network.stdout"

    - name: Create the docker group
      become: yes
      group:
        name: docker
        state: present

    - name: Add the current user to the docker group
      ansible.builtin.user:
        name: "{{ ansible_user }}"
        group: docker
        append: yes

    - name: Adjust permissions on the Docker socket
      become: yes
      file:
        path: /var/run/docker.sock
        mode: "0666"

    - name: Restart Docker
      ansible.builtin.systemd:
        name: docker
        state: restarted

- name: Install Python libraries
  hosts: main_server
  become: yes
  tasks:
    - name: Install python3-apt
      apt:
        name: python3-apt
        update_cache: yes

    - name: Install python3-pip
      ansible.builtin.package:
        name: python3-pip
        state: present

    - name: Install the docker Python package
      ansible.builtin.pip:
        name: docker
        state: present

    - name: Install the docker-compose Python library
      ansible.builtin.pip:
        name: docker-compose
        state: present

- name: Switch the timezone to Korean time
  hosts: all
  become: yes
  tasks:
    - name: Set timezone
      command: timedatectl set-timezone Asia/Seoul
      args:
        warn: false
Full server setup using Ansible
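Running this playbook requires an inventory that defines the host groups it targets (main_server, data_instance_server, data_division_server, and so on). The sketch below is a hypothetical example; the file name, IP addresses, user, and key path are placeholders, not values from the project.

# hosts.ini -- hypothetical inventory (replace IPs, user, and key path).
cat > hosts.ini <<'EOF'
[main_server]
10.0.1.10 ansible_user=ubuntu ansible_ssh_private_key_file=~/semsekey2.pem

[data_instance_server]
10.0.1.21 ansible_user=ubuntu
10.0.1.22 ansible_user=ubuntu
EOF

# Run the environment-setup playbook (the playbook file name is an assumption).
ansible-playbook -i hosts.ini setup_playbook.yaml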

Jenkins Installation

# 1. Install Java
sudo apt install openjdk-11-jre -y
java -version

# 2. Download the Jenkins repository key
wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -

# 3. Add the repository to sources.list
echo deb http://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list

# 4. Register the key
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 5BA31D57EF5975CA

# 5. Update & install
sudo apt-get update
sudo apt install jenkins -y

# 6. Confirm Jenkins is running
sudo systemctl status jenkins

# 7. Confirm Jenkins starts on boot
sudo systemctl is-enabled jenkins

# 8. Check the initial Jenkins admin password
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Jenkins installation

Automated Deployment

Data Generation Server

- name: Deploy the Spring Boot application in Docker containers
  hosts: data_instance_server
  become: yes
  vars:
    app_image: scofe/data_generator
    github_token: "{{ token }}"   # take care not to expose this token!
  tasks:
    - name: Install required packages
      apt:
        name:
          - git
        state: present

    - name: Clone the repository
      git:
        repo: "https://scofe97:{{ github_token }}@github.com/Projcet-E201/DataGenerator.git"
        dest: "/home/ubuntu/DataGenerator"
        clone: yes
        update: yes
        force: yes

    - name: Copy application-secret.yaml on the remote host
      command:
        cmd: cp /home/ubuntu/secret/application-secret.yaml /home/ubuntu/DataGenerator/src/main/resources

    - name: Grant execution permissions to Gradle wrapper
      ansible.builtin.file:
        path: /home/ubuntu/DataGenerator/gradlew
        mode: "0755"

    - name: Build the project with Gradle
      command: ./gradlew clean bootjar
      args:
        chdir: /home/ubuntu/DataGenerator

    - name: Build Docker image
      command: docker build -t {{ app_image }} /home/ubuntu/DataGenerator

    # Remove existing containers
    - name: Remove existing container 1 if it's running
      ansible.builtin.docker_container:
        name: spring_boot_app_container1
        state: absent
      ignore_errors: yes

    - name: Remove existing container 2 if it's running
      ansible.builtin.docker_container:
        name: spring_boot_app_container2
        state: absent
      ignore_errors: yes

    - name: Deploy container 1
      include_tasks: deploy_container.yml
      vars:
        app_container_name: spring_boot_app_container1
        app_profile: "{{ spring_profiles.container1[inventory_hostname] }}"
        app_port: 8084

    - name: Deploy container 2
      include_tasks: deploy_container.yml
      vars:
        app_container_name: spring_boot_app_container2
        app_profile: "{{ spring_profiles.container2[inventory_hostname] }}"
        app_port: 8085
data_generator_playbook.yaml
- name: Remove existing container if it's running
  ansible.builtin.docker_container:
    name: "{{ app_container_name }}"
    state: absent
  ignore_errors: yes

- name: Ensure network exists
  ansible.builtin.docker_network:
    name: br_app
    state: present

- name: Run and configure Spring Boot Docker container
  ansible.builtin.docker_container:
    name: "{{ app_container_name }}"
    image: "{{ app_image }}"
    state: started
    recreate: yes
    exposed_ports:
      - "{{ app_port }}"
    ports:
      - "127.0.0.1:{{ app_port }}:{{ app_port }}"
    restart_policy: always
    networks:
      - name: br_app
    networks_cli_compatible: yes
    env:
      PROFILE: "{{ app_profile }}"
deploy_container.yml
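These two files work together: data_generator_playbook.yaml pulls in deploy_container.yml via include_tasks and expects the GitHub token as an extra variable at run time. A sketch of the invocation is shown below; the inventory file name is a placeholder, and the token should come from an environment variable or a vault rather than the shell history.

# Run the data-generator deployment; hosts.ini is a placeholder inventory.
# GITHUB_TOKEN is assumed to be exported in the environment beforehand.
ansible-playbook -i hosts.ini data_generator_playbook.yaml \
  -e "token=${GITHUB_TOKEN}"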

Data Processing

- name: Deploy data_division
  hosts: data_division_server
  vars:
    compose_file: "./data_division"
    container_name: "data_division"
    image_name: "scofe/data_division"
  tasks:
    - name: Create data_division directory
      file:
        path: ~/data_division
        state: directory

    - name: Copy docker-compose.yml
      copy:
        src: "{{ compose_file }}/docker-compose.yaml"
        dest: ~/data_division/docker-compose.yaml

    - name: Stop and remove existing Docker container
      docker_container:
        name: "{{ container_name }}"
        state: absent
        force_kill: yes
      register: stop_result
      ignore_errors: yes

    - name: Ensure Docker is running
      service:
        name: docker
        state: started

    - name: Deploy data_division
      docker_compose:
        project_src: "~/data_division"
        state: present
        pull: yes
        remove_orphans: yes
        restarted: yes
data_division_playbook.yaml
version: "3" services: dev-back-blue: container_name: dataDivision image: scofe/data_division:latest ports: - "8090:8090" - "9997:9997" - "9998:9998" - "9999:9999" environment: PROFILE: "prod" networks: - br_app networks: br_app: external: true
docker-compose.yaml
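The playbook drives this compose file through the docker_compose module, but it can also be run by hand when debugging; the commands below assume the file has been copied to ~/data_division on the target host, as the playbook does.

# Manual equivalent of the playbook's deploy step (for debugging).
cd ~/data_division
docker-compose pull           # fetch scofe/data_division:latest
docker-compose up -d          # start the dataDivision container in the background
docker logs -f dataDivision   # container_name from the compose file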

Frontend

- name: Build and deploy the React app
  hosts: main_server
  vars:
    container_name: "react"
    image_tag: "scofe/react:latest"
    docker_network: br_app
  tasks:
    - name: Ensure Docker is running
      service:
        name: docker
        state: started

    - name: Deploy container
      docker_container:
        name: "{{ container_name }}"
        image: "{{ image_tag }}"
        state: started
        recreate: yes
        pull: yes
        published_ports:
          - 3000:3000
        networks:
          - name: "{{ docker_network }}"
react_playbook.yaml
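After the play has run, the container can be checked directly on the main instance; a minimal sketch:

# Confirm the React container is running and answering on port 3000.
docker ps --filter name=react
curl -I http://localhost:3000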

Deployment Notes

None.

DB Connection Properties

version: '3'
services:
  influxdb:
    image: influxdb:2.7.0
    container_name: influxdb
    ports:
      - "8086:8086"
    volumes:
      - ./config.yml:/etc/influxdb2/config.yml
      - ./influxdb-data:/var/lib/influxdb2
    networks:
      - br_app

networks:
  br_app:
    external: true
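Once the container is up, InfluxDB 2.x exposes a /health endpoint that can be used to confirm the database is ready before the Spring Boot services connect to it:

# Start InfluxDB (run next to the compose file) and check its health endpoint.
docker-compose up -d
curl http://localhost:8086/health   # expects a JSON body with "status": "pass"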