tjun's monthly diary

I want to write something every month, whatever it is.

Assigning a static egress IP address to GKE

As the title says. I don't think many people need to do this: making the ingress IP static is easy, but for egress I struggled to find any information even after searching.

In this case, I was building a worker on GKE that, partway through its processing, needs to connect to an external server to fetch data, and that external server restricts access by IP address. I therefore had to register the requesting IP address in advance, which meant the egress IP address had to stay fixed no matter which node the request came from.

In a word, the approach is: stand up a NAT instance.

I used johnlabarge/gke-nat-example as a reference.

You have to create quite a few resources (a static IP, a network, subnets, the NAT instance, and so on), which is tedious to do by hand, so I wrote the configuration with deployment-manager.

The main YAML file, myapp.yaml:

imports:
- path: myapp-with-nat.jinja

resources:
- name: myapp-with-nat
  type: myapp-with-nat.jinja
  properties:
    region: asia-northeast1
    zone: asia-northeast1-a
    cluster_name: myapp
    num_nodes: 3

myapp-with-nat.jinja.schema

info:
  title: MyApp GKE cluster with NAT  
  description: Creates a MyApp GKE Cluster with a nat route

required:
  - zone
  - cluster_name
  - num_nodes

properties:
  region:
    type: string
    description: GCP region
    default: asia-northeast1

  zone:
    type: string
    description: GCP zone
    default: asia-northeast1-a

  cluster_name:
    type: string
    description: Cluster Name
    default: "myapp"

  num_nodes:
    type: integer
    description: Number of nodes
    default: 3

myapp-with-nat.jinja

resources:
######## Static IP ########
- name: {{ properties["cluster_name"] }}-static-address
  type: compute.v1.address
  properties:
    region: {{ properties["region"] }}

######## Network ############
- name: {{ properties["cluster_name"] }}-nat-network
  type: compute.v1.network
  properties: 
    autoCreateSubnetworks: false
######### Subnets ##########
######### For Cluster #########
- name: {{ properties["cluster_name"] }}-cluster-subnet 
  type: compute.v1.subnetwork
  properties:
    network: $(ref.{{ properties["cluster_name"] }}-nat-network.selfLink)
    ipCidrRange: 172.16.0.0/12
    region: {{ properties["region"] }}
########## NAT Subnet ##########
- name: nat-subnet
  type: compute.v1.subnetwork
  properties: 
    network: $(ref.{{ properties["cluster_name"] }}-nat-network.selfLink)
    ipCidrRange: 10.1.1.0/24
    region: {{ properties["region"] }}
########## NAT VM ##########
- name: nat-vm
  type: compute.v1.instance 
  properties:
    zone: {{ properties["zone"] }}
    canIpForward: true
    tags:
      items:
      - nat-to-internet
    machineType: https://www.googleapis.com/compute/v1/projects/{{ env["project"] }}/zones/{{ properties["zone"] }}/machineTypes/f1-micro
    disks:
      - deviceName: boot
        type: PERSISTENT
        boot: true
        autoDelete: true
        initializeParams:
          sourceImage: https://www.googleapis.com/compute/v1/projects/debian-cloud/global/images/debian-7-wheezy-v20150423
    networkInterfaces:
    - network: projects/{{ env["project"] }}/global/networks/{{ properties["cluster_name"] }}-nat-network
      subnetwork: $(ref.nat-subnet.selfLink)
      accessConfigs:
      - name: External NAT
        type: ONE_TO_ONE_NAT
        natIP: $(ref.{{ properties["cluster_name"] }}-static-address.address)
    metadata:
      items:
      - key: startup-script
        value: |
          #!/bin/sh
          # --
          # ---------------------------
          # Install TCP DUMP
          # Start nat; start dump
          # ---------------------------
          apt-get update
          apt-get install -y tcpdump
          apt-get install -y tcpick 
          iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
          nohup tcpdump -e -l -i eth0 -w /tmp/nat.pcap &
          nohup tcpdump -e -l -i eth0 > /tmp/nat.txt &
          echo 1 | tee /proc/sys/net/ipv4/ip_forward
########## FIREWALL RULES FOR NAT VM ##########
- name: nat-vm-firewall 
  type: compute.v1.firewall
  properties: 
    allowed:
    - IPProtocol: tcp
      ports: []
    sourceTags: 
    - route-through-nat
    network: $(ref.{{ properties["cluster_name"] }}-nat-network.selfLink)
- name: nat-vm-ssh
  type: compute.v1.firewall
  properties: 
    allowed:
    - IPProtocol: tcp
      ports: [22]
    sourceRanges: 
    - 0.0.0.0/0
    network: $(ref.{{ properties["cluster_name"] }}-nat-network.selfLink)
########## GKE CLUSTER CREATION ##########
- name: {{ properties["cluster_name"] }}
  type: container.v1.cluster
  metadata: 
   dependsOn:
   - {{ properties["cluster_name"] }}-nat-network 
   - {{ properties["cluster_name"] }}-cluster-subnet
  properties: 
    cluster: 
      name: {{ properties["cluster_name"] }}
      initialNodeCount: {{ properties["num_nodes"] }}
      network: {{ properties["cluster_name"] }}-nat-network
      subnetwork: {{ properties["cluster_name"] }}-cluster-subnet
      nodeConfig:
        oauthScopes:
        - https://www.googleapis.com/auth/compute
        - https://www.googleapis.com/auth/devstorage.read_write
        - https://www.googleapis.com/auth/logging.write
        - https://www.googleapis.com/auth/monitoring
        - https://www.googleapis.com/auth/bigquery
        tags:
        - route-through-nat
    zone: {{ properties["zone"] }}
########## GKE MASTER ROUTE ##########
- name: master-route
  type: compute.v1.route
  properties:
    destRange: $(ref.{{ properties["cluster_name"] }}.endpoint)
    network: $(ref.{{ properties["cluster_name"] }}-nat-network.selfLink)
    nextHopGateway: projects/{{ env["project"] }}/global/gateways/default-internet-gateway
    priority: 100
    tags:
    - route-through-nat
########## NAT ROUTE ##########
- name: {{ properties["cluster_name"] }}-route-through-nat
  metadata: 
    dependsOn:
    - {{ properties["cluster_name"] }}
    - {{ properties["cluster_name"] }}-nat-network
  type: compute.v1.route
  properties: 
    network: $(ref.{{ properties["cluster_name"] }}-nat-network.selfLink)
    destRange: 0.0.0.0/0
    description: "route all other traffic through nat"
    nextHopInstance: $(ref.nat-vm.selfLink)
    tags:
    - route-through-nat
    priority: 800

It's long, so I'll skip a line-by-line explanation, but roughly: it reserves a static IP, creates a dedicated network with one subnet for the cluster and one for the NAT VM, boots an f1-micro VM whose startup script enables IP forwarding and adds an iptables MASQUERADE rule, and creates a GKE cluster whose nodes carry the route-through-nat tag. Two routes finish the job: traffic to the GKE master goes straight out the default internet gateway (so the control plane stays reachable), and everything else (0.0.0.0/0) is routed through the NAT VM.

With that in place,

gcloud deployment-manager deployments create myapp --config myapp.yaml

creates a GKE cluster that sends its outbound requests through the NAT, i.e. from the reserved static IP.
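To verify, you can run a throwaway pod that prints its apparent source address. This is only a sketch: the pod name, the curlimages/curl image, and the ifconfig.me echo service are illustrative choices, not part of the original setup.

```yaml
# egress-check.yaml -- throwaway pod to confirm the egress IP (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: egress-check
spec:
  restartPolicy: Never
  containers:
  - name: curl
    image: curlimages/curl        # any image with curl works
    command: ["curl", "-s", "https://ifconfig.me"]
```

If the routing works, `kubectl apply -f egress-check.yaml` followed by `kubectl logs egress-check` should print the address reserved as the cluster's static-address resource, regardless of which node the pod lands on.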