I hit an error while provisioning nodes on GKE following the example from the official Terraform site, so I wrote this up while sorting it out.

Related to my previous post: https://jjongguet.tistory.com/170

Official tutorial: https://developer.hashicorp.com/terraform/tutorials/kubernetes/gke

0. Create a GCP account

  • A GCP account and project are assumed to already exist
  • The Project ID used in this example is jjongjjong

1. Install the gcloud SDK & kubectl

brew install --cask google-cloud-sdk
gcloud init # if you already have a gcloud account in use, register the new account first
gcloud auth application-default login
brew install kubernetes-cli
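
If you want to sanity-check the installs before moving on, a quick version check looks like this (output will vary by machine):

gcloud --version
kubectl version --client

# if gcloud init didn't already select it, point gcloud at the project from step 0
gcloud config set project jjongjjong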

2. Clone the repo

git clone https://github.com/hashicorp/learn-terraform-provision-gke-cluster
cd learn-terraform-provision-gke-cluster
gcloud config get-value project
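# should print: jjongjjong (the Project ID from step 0)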

3. Edit the code

terraform.tfvars

# The tutorial says to set it like this:
#project_id = "REPLACE_ME"
#region     = "us-central1"

# but I set it like this:
project_id = "jjongjjong"
region     = "asia-northeast3"
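
If you're unsure which region or zone to pick, the available ones can be listed with gcloud (asia-northeast3 is the Seoul region):

gcloud compute regions list
gcloud compute zones list --filter="region:asia-northeast3"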

4. terraform init

terraform init
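
Before applying, it's worth previewing what Terraform is about to create; for this repo the plan should include a VPC, a subnet, the GKE cluster, and the node pool:

terraform plan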

5. terraform apply (error)

terraform apply

# when the confirmation prompt appears, type
yes

6-1. An error occurs

google_container_node_pool.primary_nodes: Creating...
╷
│ Error: error creating NodePool: googleapi: Error 403: Insufficient regional quota to satisfy request: resource "SSD_TOTAL_GB": request requires '600.0' and is short '100.0'. project has a quota of '500.0' with '500.0' available. View and manage quotas at https://console.cloud.google.com/iam-admin/quotas?usage=USED&project=jjongjjong.
│ Details:
│ [
│   {
│     "@type": "type.googleapis.com/google.rpc.RequestInfo",
│     "requestId": "0x79f9257787439f87"
│   }
│ ]
│ , forbidden
│
│   with google_container_node_pool.primary_nodes,
│   on gke.tf line 40, in resource "google_container_node_pool" "primary_nodes":
│   40: resource "google_container_node_pool" "primary_nodes" {
│
╵

6-2. Error cause and fix

Why the error occurred: the region's SSD quota (SSD_TOTAL_GB) is 500 GB, but a regional cluster replicates the node pool across every zone in the region, so 3 zones × 2 nodes × 100 GB of default disk = 600 GB gets requested, 100 GB more than the quota allows. (Maybe because it's a free-tier account...)
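
You can check the region's quota yourself with gcloud (the YAML output includes an SSD_TOTAL_GB entry; the values below are illustrative):

gcloud compute regions describe asia-northeast3 | grep -B1 -A1 SSD_TOTAL_GB
# - limit: 500.0
#   metric: SSD_TOTAL_GB
#   usage: 0.0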

 

Fix 1: set the location explicitly in the config files (pin the cluster to a single zone).

Fix 2: raise the region's resource quota.

Of the two, I'll go with Fix 1.

region stays "asia-northeast3",

and location is set directly to "asia-northeast3-b". With a single zone there are only 2 nodes × 100 GB = 200 GB of SSD to provision, well under the 500 GB quota.

#terraform.tfvars
project_id = "jjongjjong"
region     = "asia-northeast3"
#gke.tf
variable "gke_username" {
  default     = ""
  description = "gke username"
}

variable "gke_password" {
  default     = ""
  description = "gke password"
}

variable "gke_num_nodes" {
  default     = 2
  description = "number of gke nodes"
}

# GKE cluster
data "google_container_engine_versions" "gke_version" {
  location       = "asia-northeast3-b"
  version_prefix = "1.27."
}

resource "google_container_cluster" "primary" {
  name     = "${var.project_id}-gke"
  location = "asia-northeast3-b"

  # We can't create a cluster with no node pool defined, but we want to only use
  # separately managed node pools. So we create the smallest possible default
  # node pool and immediately delete it.
  remove_default_node_pool = true
  initial_node_count       = 1

  network    = google_compute_network.vpc.name
  subnetwork = google_compute_subnetwork.subnet.name
}

# Separately Managed Node Pool
resource "google_container_node_pool" "primary_nodes" {
  name     = google_container_cluster.primary.name
  location = "asia-northeast3-b"
  cluster  = google_container_cluster.primary.name

  version    = data.google_container_engine_versions.gke_version.release_channel_latest_version["STABLE"]
  node_count = var.gke_num_nodes

  node_config {
    oauth_scopes = [
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
    ]

    labels = {
      env = var.project_id
    }

    # preemptible  = true
    machine_type = "n1-standard-1"
    tags         = ["gke-node", "${var.project_id}-gke"]
    metadata = {
      disable-legacy-endpoints = "true"
    }
  }
}

Reference: https://stackoverflow.com/questions/74836213/error-403-insufficient-regional-quota-to-satisfy-request-resource-ssd-total?rq=3

6-3. Run again

terraform apply

# when prompted, type
yes

If everything proceeds smoothly, you'll see logs like the following:

google_container_cluster.primary: Creation complete after 8m30s [id=projects/jjongjjong/locations/asia-northeast3-b/clusters/jjongjjong-gke]
google_container_node_pool.primary_nodes: Creating...
google_container_node_pool.primary_nodes: Still creating... [10s elapsed]
google_container_node_pool.primary_nodes: Still creating... [20s elapsed]
google_container_node_pool.primary_nodes: Still creating... [30s elapsed]
google_container_node_pool.primary_nodes: Still creating... [40s elapsed]
google_container_node_pool.primary_nodes: Still creating... [50s elapsed]
google_container_node_pool.primary_nodes: Still creating... [1m0s elapsed]
google_container_node_pool.primary_nodes: Still creating... [1m10s elapsed]
google_container_node_pool.primary_nodes: Creation complete after 1m13s [id=projects/jjongjjong/locations/asia-northeast3-b/clusters/jjongjjong-gke/nodePools/jjongjjong-gke]

Apply complete! Resources: 2 added, 0 changed, 1 destroyed.

Outputs:

kubernetes_cluster_host = "34.64.43.199"
kubernetes_cluster_name = "jjongjjong-gke"
project_id = "jjongjjong"
region = "asia-northeast3"

7. Install the GKE cluster auth plugin

gcloud components install gke-gcloud-auth-plugin

# when prompted, type
y
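
Depending on your gcloud and kubectl versions you may also need to opt in to the plugin explicitly; verifying the install and (if needed) setting the opt-in variable looks like this:

gke-gcloud-auth-plugin --version

# only needed on older gcloud releases where the plugin isn't used by default
export USE_GKE_GCLOUD_AUTH_PLUGIN=True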

8-1. Fetch the GKE cluster credentials

gcloud container clusters get-credentials $(terraform output -raw kubernetes_cluster_name) --region $(terraform output -raw region)

8-2. Error: wrong location passed

gcloud container clusters get-credentials $(terraform output -raw kubernetes_cluster_name) --region $(terraform output -raw region)
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials) ResponseError: code=404, message=Not found: projects/jjongjjong/locations/asia-northeast3/clusters/jjongjjong-gke.
Could not find [jjongjjong-gke] in [asia-northeast3].
Did you mean [jjongjjong-gke] in [asia-northeast3-b]?

Cause: since the cluster is now zonal, it seems it has to be looked up by its location (zone), not the region.
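
You can confirm where the cluster actually lives before retrying (output trimmed):

gcloud container clusters list
# NAME            LOCATION           ...
# jjongjjong-gke  asia-northeast3-b  ...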

8-3. Fix: specify the values manually

Since we need to check what the $(terraform output -raw kubernetes_cluster_name) part actually expands to, use echo to see the values:

echo $(terraform output -raw kubernetes_cluster_name)
#jjongjjong-gke

echo $(terraform output -raw region)
#asia-northeast3

Using those values, put the cluster name after get-credentials and specify the zone directly:

gcloud container clusters get-credentials jjongjjong-gke --region asia-northeast3-b
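
If this succeeds, kubectl's current context should point at the cluster (GKE context names follow gke_<project>_<location>_<cluster>):

kubectl config current-context
# gke_jjongjjong_asia-northeast3-b_jjongjjong-gke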

9. Done!

To make sure the nodes are up, check with kubectl:

kubectl get nodes

NAME                                                STATUS   ROLES    AGE     VERSION
gke-jjongjjong-gke-jjongjjong-gke-21ca5907-np7j   Ready    <none>   8m27s   v1.27.4-gke.900
gke-jjongjjong-gke-jjongjjong-gke-21ca5907-pxnz   Ready    <none>   8m25s   v1.27.4-gke.900
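
As an extra sanity check, the system pods should be running too:

kubectl get pods -n kube-system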

10. Extras (K9s)

For easier cluster management, install K9s:

brew install k9s

k9s
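
K9s picks up the current kubeconfig context; it can also be scoped to a namespace at launch, and typing :nodes inside jumps to the node view:

# optional: start scoped to kube-system
k9s -n kube-system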

11. Teardown

terraform destroy

# when prompted, type
yes
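
Once the destroy finishes, you can double-check that nothing was left behind:

gcloud container clusters list
# (should list no clusters)

terraform show
# (the state should be empty)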