
Merge pull request #2486 from ahhda/deploy-storage-node

DevOps - Storage node deployment on EKS with Kubernetes and Pulumi
Mokhtar Naamani 3 years ago
parent
commit
b33f198d98

+ 1 - 0
.dockerignore

@@ -6,3 +6,4 @@ query-node/**/dist
 query-node/lib
 cli/
 tests/
+devops/

+ 16 - 0
colossus.Dockerfile

@@ -0,0 +1,16 @@
+FROM --platform=linux/x86-64 node:14 as builder
+
+WORKDIR /joystream
+COPY . /joystream
+RUN rm -fr /joystream/pioneer
+
+EXPOSE 3001
+
+RUN yarn --frozen-lockfile
+
+RUN yarn workspace @joystream/types build
+RUN yarn workspace storage-node build
+
+RUN yarn
+
+ENTRYPOINT yarn colossus --dev --ws-provider $WS_PROVIDER_ENDPOINT_URI

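For a quick local smoke test of this image before deploying, a build-and-run sketch along these lines should work (the `joystream/colossus:dev` tag and the endpoint value are illustrative, not part of this PR):

```bash
# Build from the repo root, where colossus.Dockerfile lives
docker build -f colossus.Dockerfile -t joystream/colossus:dev .

# The ENTRYPOINT reads WS_PROVIDER_ENDPOINT_URI, so pass it at run time
docker run --rm -p 3001:3001 \
  -e WS_PROVIDER_ENDPOINT_URI='wss://rome-rpc-endpoint.joystream.org:9944/' \
  joystream/colossus:dev
```
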
+ 5 - 0
devops/infrastructure/storage-node/.gitignore

@@ -0,0 +1,5 @@
+/bin/
+/node_modules/
+kubeconfig.yml
+package-lock.json
+Pulumi.*.yaml

+ 33 - 0
devops/infrastructure/storage-node/Pulumi.yaml

@@ -0,0 +1,33 @@
+name: eks-cluster
+runtime: nodejs
+description: A Pulumi program to deploy a storage node to a cloud environment
+template:
+  config:
+    aws:profile:
+      default: joystream-user
+    aws:region:
+      default: us-east-1
+    wsProviderEndpointURI:
+      description: Chain RPC endpoint
+      default: 'wss://rome-rpc-endpoint.joystream.org:9944/'
+    isAnonymous:
+      description: Whether you are deploying an anonymous storage node
+      default: true
+    isLoadBalancerReady:
+      description: Whether the load balancer service is ready and has been assigned an IP
+      default: false
+    colossusPort:
+      description: Port that is exposed for the colossus container
+      default: 3000
+    storage:
+      description: Amount of storage in gigabytes for ipfs volume
+      default: 40
+    providerId:
+      description: StorageProviderId assigned to you in the working group
+    keyFile:
+      description: Path to JSON key export file to use as the storage provider (role account)
+    publicURL:
+      description: API Public URL to announce
+    passphrase:
+      description: Optional passphrase to use to decrypt the key-file
+      secret: true

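Each of these template values can be overridden per stack with the Pulumi CLI; a minimal sketch (the values are illustrative):

```bash
# Override defaults declared in Pulumi.yaml
pulumi config set storage 100
pulumi config set colossusPort 3000

# passphrase is declared `secret: true`, so store it encrypted
pulumi config set --secret passphrase 'my-key-passphrase'

# Review the resulting stack configuration; secrets are masked
pulumi config
```
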
+ 120 - 0
devops/infrastructure/storage-node/README.md

@@ -0,0 +1,120 @@
+# Amazon EKS Cluster: Storage Node
+
+This example deploys an EKS Kubernetes cluster with a custom IPFS image and the Colossus storage node
+
+## Deploying the App
+
+To deploy your infrastructure, follow the steps below.
+
+### Prerequisites
+
+1. [Install Pulumi](https://www.pulumi.com/docs/get-started/install/)
+1. [Install Node.js](https://nodejs.org/en/download/)
+1. Install a package manager for Node.js, such as [npm](https://www.npmjs.com/get-npm) or [Yarn](https://yarnpkg.com/en/docs/install).
+1. [Configure AWS Credentials](https://www.pulumi.com/docs/intro/cloud-providers/aws/setup/)
+1. Optional (for debugging): [Install kubectl](https://kubernetes.io/docs/tasks/tools/)
+
+### Steps
+
+After cloning this repo, from this working directory, run these commands:
+
+1. Install the required Node.js packages:
+
+   This installs the dependent packages [needed](https://www.pulumi.com/docs/intro/concepts/how-pulumi-works/) for our Pulumi program.
+
+   ```bash
+   $ npm install
+   ```
+
+1. Create a new stack, which is an isolated deployment target for this example:
+
+   This will initialize the Pulumi program in TypeScript.
+
+   ```bash
+   $ pulumi stack init
+   ```
+
+1. Set the required configuration variables in `Pulumi.<stack>.yaml`
+
+   ```bash
+   $ pulumi config set-all --plaintext aws:region=us-east-1 --plaintext aws:profile=joystream-user \
+    --plaintext wsProviderEndpointURI='wss://rome-rpc-endpoint.joystream.org:9944/' \
+    --plaintext isAnonymous=true
+   ```
+
+   If running in production, use the config below instead:
+
+   ```bash
+   $ pulumi config set-all --plaintext aws:region=us-east-1 --plaintext aws:profile=joystream-user \
+    --plaintext wsProviderEndpointURI='wss://rome-rpc-endpoint.joystream.org:9944/' --plaintext isAnonymous=false \
+    --plaintext providerId=<ID> --plaintext keyFile=<PATH> --plaintext publicURL=<DOMAIN> --secret passphrase=<PASSPHRASE>
+   ```
+
+   You can also set the `storage` and `colossusPort` config parameters if required.
+
+1. Stand up the EKS cluster:
+
+   Running `pulumi up -y` will deploy the EKS cluster. Note that provisioning a
+   new EKS cluster takes 10 to 15 minutes.
+
+1. Once the stack is up and running, we will modify the Caddy config to get an SSL certificate for the load balancer
+
+   Modify the config variable `isLoadBalancerReady`
+
+   ```bash
+   $ pulumi config set isLoadBalancerReady true
+   ```
+
+   Run `pulumi up -y` to update the Caddy config
+
+1. Access the Kubernetes Cluster using `kubectl`
+
+   To access your new Kubernetes cluster using `kubectl`, we need to set up the
+   `kubeconfig` file. We can leverage the Pulumi stack output in the CLI, as
+   Pulumi facilitates exporting these objects for us.
+
+   ```bash
+   $ pulumi stack output kubeconfig --show-secrets > kubeconfig
+   $ export KUBECONFIG=$PWD/kubeconfig
+   $ kubectl get nodes
+   ```
+
+   We can also use the stack output to query the cluster for our newly created Deployment:
+
+   ```bash
+   $ kubectl get deployment $(pulumi stack output deploymentName) --namespace=$(pulumi stack output namespaceName)
+   $ kubectl get service $(pulumi stack output serviceName) --namespace=$(pulumi stack output namespaceName)
+   ```
+
+   To get logs:
+
+   ```bash
+   $ kubectl config set-context --current --namespace=$(pulumi stack output namespaceName)
+   $ kubectl get pods
+   $ kubectl logs <PODNAME> --all-containers
+   ```
+
+   To run a command on a pod:
+
+   ```bash
+   $ kubectl exec ${POD_NAME} -c ${CONTAINER_NAME} -- ${CMD} ${ARG1}
+   ```
+
+   To see the complete Pulumi stack output:
+
+   ```bash
+   $ pulumi stack output
+   ```
+
+   To open an interactive shell in the `colossus` container:
+
+   ```bash
+   $ kubectl exec --stdin --tty <PODNAME> -c colossus -- /bin/bash
+   ```
+
+1. Once you've finished experimenting, tear down your stack's resources by destroying and removing it:
+
+   ```bash
+   $ pulumi destroy --yes
+   $ pulumi stack rm --yes
+   ```

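Beyond `kubeconfig`, `index.ts` exports `serviceHostname` and `appLink`, so a finished deployment can be sanity-checked from the stack outputs alone; a sketch (the `curl` probe is only an illustrative liveness check):

```bash
# Hostname assigned to the LoadBalancer service by AWS
pulumi stack output serviceHostname

# Populated once isLoadBalancerReady=true and a second `pulumi up` has run
pulumi stack output appLink

# Illustrative liveness probe against the Caddy-terminated HTTPS endpoint
curl -I "$(pulumi stack output appLink)"
```
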
+ 283 - 0
devops/infrastructure/storage-node/index.ts

@@ -0,0 +1,283 @@
+import * as awsx from '@pulumi/awsx'
+import * as aws from '@pulumi/aws'
+import * as eks from '@pulumi/eks'
+import * as k8s from '@pulumi/kubernetes'
+import * as pulumi from '@pulumi/pulumi'
+import * as fs from 'fs'
+
+import * as dns from 'dns'
+
+const awsConfig = new pulumi.Config('aws')
+const config = new pulumi.Config()
+
+const wsProviderEndpointURI = config.require('wsProviderEndpointURI')
+const isAnonymous = config.require('isAnonymous') === 'true'
+const lbReady = config.get('isLoadBalancerReady') === 'true'
+const name = 'storage-node'
+const colossusPort = parseInt(config.get('colossusPort') || '3000')
+const storage = parseInt(config.get('storage') || '40')
+
+let additionalParams: string[] | pulumi.Input<string>[] = []
+let volumeMounts: pulumi.Input<pulumi.Input<k8s.types.input.core.v1.VolumeMount>[]> = []
+let caddyVolumeMounts: pulumi.Input<pulumi.Input<k8s.types.input.core.v1.VolumeMount>[]> = []
+let volumes: pulumi.Input<pulumi.Input<k8s.types.input.core.v1.Volume>[]> = []
+
+// Create a VPC for our cluster.
+const vpc = new awsx.ec2.Vpc('vpc', { numberOfAvailabilityZones: 2 })
+
+// Create an EKS cluster with the default configuration.
+const cluster = new eks.Cluster('eksctl-my-cluster', {
+  vpcId: vpc.id,
+  subnetIds: vpc.publicSubnetIds,
+  instanceType: 't2.micro',
+  providerCredentialOpts: {
+    profileName: awsConfig.get('profile'),
+  },
+})
+
+// Export the cluster's kubeconfig.
+export const kubeconfig = cluster.kubeconfig
+
+// Create an ECR repository
+const repo = new awsx.ecr.Repository('colossus-image')
+
+// Build an image and publish it to our ECR repository.
+export const colossusImage = repo.buildAndPushImage({
+  dockerfile: '../../../colossus.Dockerfile',
+  context: '../../../',
+})
+
+// Create a Kubernetes Namespace
+const ns = new k8s.core.v1.Namespace(name, {}, { provider: cluster.provider })
+
+// Export the Namespace name
+export const namespaceName = ns.metadata.name
+
+const appLabels = { appClass: name }
+
+const pvc = new k8s.core.v1.PersistentVolumeClaim(
+  `${name}-pvc`,
+  {
+    metadata: {
+      labels: appLabels,
+      namespace: namespaceName,
+      name: `${name}-pvc`,
+    },
+    spec: {
+      accessModes: ['ReadWriteOnce'],
+      resources: {
+        requests: {
+          storage: `${storage}Gi`,
+        },
+      },
+    },
+  },
+  { provider: cluster.provider }
+)
+
+volumes.push({
+  name: 'ipfs-data',
+  persistentVolumeClaim: {
+    claimName: `${name}-pvc`,
+  },
+})
+
+// Create a LoadBalancer Service for the Deployment
+const service = new k8s.core.v1.Service(
+  name,
+  {
+    metadata: {
+      labels: appLabels,
+      namespace: namespaceName,
+    },
+    spec: {
+      type: 'LoadBalancer',
+      ports: [
+        { name: 'http', port: 80 },
+        { name: 'https', port: 443 },
+      ],
+      selector: appLabels,
+    },
+  },
+  {
+    provider: cluster.provider,
+  }
+)
+
+// Export the Service name and public LoadBalancer Endpoint
+export const serviceName = service.metadata.name
+// When "done", this will print the hostname
+export let serviceHostname: pulumi.Output<string>
+serviceHostname = service.status.loadBalancer.ingress[0].hostname
+
+export let appLink: pulumi.Output<string>
+
+if (lbReady) {
+  async function lookupPromise(url: string) {
+    return new Promise((resolve, reject) => {
+      dns.lookup(url, (err: any, address: any) => {
+        if (err) reject(err)
+        resolve(address)
+      })
+    })
+  }
+
+  const lbIp = serviceHostname.apply((dnsName) => {
+    return lookupPromise(dnsName)
+  })
+
+  const caddyConfig = pulumi.interpolate`${lbIp}.nip.io {
+  reverse_proxy localhost:${colossusPort}
+}`
+
+  const keyConfig = new k8s.core.v1.ConfigMap(name, {
+    metadata: { namespace: namespaceName, labels: appLabels },
+    data: { 'fileData': caddyConfig },
+  })
+  const keyConfigName = keyConfig.metadata.apply((m) => m.name)
+
+  caddyVolumeMounts.push({
+    mountPath: '/etc/caddy/Caddyfile',
+    name: 'caddy-volume',
+    subPath: 'fileData',
+  })
+
+  volumes.push({
+    name: 'caddy-volume',
+    configMap: {
+      name: keyConfigName,
+    },
+  })
+
+  appLink = pulumi.interpolate`https://${lbIp}.nip.io`
+
+  lbIp.apply((value) => console.log(`You can now access the app at: ${value}.nip.io`))
+
+  if (!isAnonymous) {
+    const remoteKeyFilePath = '/joystream/key-file.json'
+    const providerId = config.require('providerId')
+    const keyFile = config.require('keyFile')
+    const publicUrl = config.get('publicURL') ? config.get('publicURL')! : appLink
+
+    const keyConfig = new k8s.core.v1.ConfigMap('key-config', {
+      metadata: { namespace: namespaceName, labels: appLabels },
+      data: { 'fileData': fs.readFileSync(keyFile).toString() },
+    })
+    const keyConfigName = keyConfig.metadata.apply((m) => m.name)
+
+    additionalParams = ['--provider-id', providerId, '--key-file', remoteKeyFilePath, '--public-url', publicUrl]
+
+    volumeMounts.push({
+      mountPath: remoteKeyFilePath,
+      name: 'keyfile-volume',
+      subPath: 'fileData',
+    })
+
+    volumes.push({
+      name: 'keyfile-volume',
+      configMap: {
+        name: keyConfigName,
+      },
+    })
+
+    const passphrase = config.get('passphrase')
+    if (passphrase) {
+      additionalParams.push('--passphrase', passphrase)
+    }
+  }
+}
+
+if (isAnonymous) {
+  additionalParams.push('--anonymous')
+}
+
+// Create a Deployment
+const deployment = new k8s.apps.v1.Deployment(
+  name,
+  {
+    metadata: {
+      namespace: namespaceName,
+      labels: appLabels,
+    },
+    spec: {
+      replicas: 1,
+      selector: { matchLabels: appLabels },
+      template: {
+        metadata: {
+          labels: appLabels,
+        },
+        spec: {
+          hostname: 'ipfs',
+          containers: [
+            {
+              name: 'ipfs',
+              image: 'ipfs/go-ipfs:latest',
+              ports: [{ containerPort: 5001 }, { containerPort: 8080 }],
+              command: ['/bin/sh', '-c'],
+              args: [
+                'set -e; \
+                /usr/local/bin/start_ipfs config profile apply lowpower; \
+                /usr/local/bin/start_ipfs config --json Gateway.PublicGateways \'{"localhost": null }\'; \
+                /usr/local/bin/start_ipfs config Datastore.StorageMax 200GB; \
+                /sbin/tini -- /usr/local/bin/start_ipfs daemon --migrate=true',
+              ],
+              volumeMounts: [
+                {
+                  name: 'ipfs-data',
+                  mountPath: '/data/ipfs',
+                },
+              ],
+            },
+            {
+              name: 'caddy',
+              image: 'caddy',
+              ports: [
+                { name: 'caddy-http', containerPort: 80 },
+                { name: 'caddy-https', containerPort: 443 },
+              ],
+              volumeMounts: caddyVolumeMounts,
+            },
+            {
+              name: 'colossus',
+              image: colossusImage,
+              env: [
+                {
+                  name: 'WS_PROVIDER_ENDPOINT_URI',
+                  // example 'wss://18.209.241.63.nip.io/'
+                  value: wsProviderEndpointURI,
+                },
+                {
+                  name: 'DEBUG',
+                  value: 'joystream:*',
+                },
+              ],
+              volumeMounts,
+              command: [
+                'yarn',
+                'colossus',
+                '--ws-provider',
+                wsProviderEndpointURI,
+                '--ipfs-host',
+                'ipfs',
+                ...additionalParams,
+              ],
+              ports: [{ containerPort: colossusPort }],
+            },
+          ],
+          volumes,
+        },
+      },
+    },
+  },
+  {
+    provider: cluster.provider,
+  }
+)
+
+// Export the Deployment name
+export const deploymentName = deployment.metadata.name

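The `lbReady` branch relies on nip.io wildcard DNS: the program resolves the ELB hostname to its IP and points Caddy at `<ip>.nip.io`, so a TLS certificate can be issued without owning a domain. A sketch of the two lookups involved (203.0.113.10 is a documentation IP, used purely for illustration):

```bash
# nip.io answers any <ip>.nip.io query with that IP itself
dig +short 203.0.113.10.nip.io
# -> 203.0.113.10

# The same ELB hostname the program resolves, fetched via kubectl
kubectl get service "$(pulumi stack output serviceName)" \
  --namespace "$(pulumi stack output namespaceName)" \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```
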
+ 13 - 0
devops/infrastructure/storage-node/package.json

@@ -0,0 +1,13 @@
+{
+  "name": "eks-cluster",
+  "devDependencies": {
+    "@types/node": "^10.0.0"
+  },
+  "dependencies": {
+    "@pulumi/aws": "^4.0.0",
+    "@pulumi/awsx": "^0.30.0",
+    "@pulumi/eks": "^0.31.0",
+    "@pulumi/kubernetes": "^3.0.0",
+    "@pulumi/pulumi": "^3.0.0"
+  }
+}

+ 18 - 0
devops/infrastructure/storage-node/tsconfig.json

@@ -0,0 +1,18 @@
+{
+    "compilerOptions": {
+        "strict": true,
+        "outDir": "bin",
+        "target": "es2016",
+        "module": "commonjs",
+        "moduleResolution": "node",
+        "sourceMap": true,
+        "experimentalDecorators": true,
+        "pretty": true,
+        "noFallthroughCasesInSwitch": true,
+        "noImplicitReturns": true,
+        "forceConsistentCasingInFileNames": true
+    },
+    "files": [
+        "index.ts"
+    ]
+}