
Query Node automated deployment

Deploys an EKS Kubernetes cluster with a query node

Deploying the App

To deploy your infrastructure, follow the steps below.

Prerequisites

  1. Install Pulumi
  2. Install Node.js
  3. Install a package manager for Node.js, such as npm or Yarn.
  4. Configure AWS Credentials
  5. Optional (for debugging): Install kubectl

Steps

After cloning this repo, from this working directory, run these commands:

  1. Install the required Node.js packages:

This installs the dependent packages needed for our Pulumi program.

   $ npm install
  2. Create a new stack, which is an isolated deployment target for this example:

This will initialize the Pulumi program in TypeScript.

   $ pulumi stack init
  3. Set the required configuration variables in Pulumi.<stack>.yaml:

    $ pulumi config set-all --plaintext aws:region=us-east-1 --plaintext aws:profile=joystream-user \
    --plaintext dbPassword=password --plaintext blockHeight=0 \
    --plaintext joystreamWsEndpoint=ws://endpoint.somewhere.net:9944 \
    --plaintext isMinikube=true --plaintext skipProcessor=false
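
For reference, pulumi config set-all writes these values into Pulumi.<stack>.yaml. Assuming the Pulumi project is named query-node (check the name field in Pulumi.yaml for the actual prefix), the resulting file would look roughly like this with the example values above:

    config:
      aws:profile: joystream-user
      aws:region: us-east-1
      query-node:blockHeight: "0"
      query-node:dbPassword: password
      query-node:isMinikube: "true"
      query-node:joystreamWsEndpoint: ws://endpoint.somewhere.net:9944
      query-node:skipProcessor: "false"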
    

If you want to build the stack on AWS, set the isMinikube config to false:

   $ pulumi config set isMinikube false

If you want to use an existing Indexer and not deploy a new one, set externalIndexerUrl:

   $ pulumi config set externalIndexerUrl <URL>

To deploy the infrastructure you must have a valid docker image of joystream/apps, either on Docker Hub or locally. If the image exists locally and you are running on minikube, run:

   $ pulumi config set-all --plaintext useLocalRepo=true --plaintext appsImage=<IMAGE_NAME>

NOTE: The docker daemon for minikube is different from that of Docker Desktop. To connect your Docker CLI to the docker daemon inside the VM, you need to run eval $(minikube docker-env). To copy the image from your local daemon to minikube, run minikube image load joystream/apps:latest --daemon. These commands are shown below.
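
For convenience, the two commands from the note above (the tag joystream/apps:latest is just an example; use the image you set as appsImage):

   # Option 1: point your Docker CLI at minikube's docker daemon and build/pull there
   $ eval $(minikube docker-env)

   # Option 2: copy an existing image from your local daemon into minikube
   $ minikube image load joystream/apps:latest --daemon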

If not using minikube, just specify the appsImage config.

  4. Stand up the Kubernetes cluster:

Running pulumi up -y, as shown below, will deploy the EKS cluster. Note: provisioning a new EKS cluster takes 10-15 minutes.
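
   $ pulumi up -y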

  5. Once the stack is up and running, we will modify the Caddy config to get an SSL certificate for the load balancer

Modify the config variable isLoadBalancerReady

   $ pulumi config set isLoadBalancerReady true

Run pulumi up -y to update the Caddy config

  6. You can now access the endpoints using pulumi stack output endpoint1 or pulumi stack output endpoint2, as shown below

The GraphQL server is accessible at https://<ENDPOINT>/server/graphql and the indexer at https://<ENDPOINT>/indexer/graphql
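
For example:

   $ pulumi stack output endpoint1
   $ pulumi stack output endpoint2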

  7. If you are using Minikube, run minikube service graphql-server -n $(pulumi stack output namespaceName), as shown below

This will set up a proxy for your query-node service, which can then be accessed at the URL given in the output
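
That is:

   $ minikube service graphql-server -n $(pulumi stack output namespaceName)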

  8. Access the Kubernetes cluster using kubectl

To access your new Kubernetes cluster using kubectl, we need to set up the kubeconfig file and have kubectl installed (see Prerequisites). We can leverage the Pulumi stack output in the CLI, as Pulumi facilitates exporting these objects for us.

   $ pulumi stack output kubeconfig --show-secrets > kubeconfig
   $ export KUBECONFIG=$PWD/kubeconfig
   $ kubectl get nodes

We can also use the stack output to query the cluster for our newly created Deployment:

   $ kubectl get deployment $(pulumi stack output deploymentName) --namespace=$(pulumi stack output namespaceName)
   $ kubectl get service $(pulumi stack output serviceName) --namespace=$(pulumi stack output namespaceName)

To get logs:

   $ kubectl config set-context --current --namespace=$(pulumi stack output namespaceName)
   $ kubectl get pods
   $ kubectl logs <PODNAME> --all-containers

To see the complete pulumi stack output:

   $ pulumi stack output

To execute a command in a pod container:

   $ kubectl exec --stdin --tty <PODNAME> -c colossus -- /bin/bash
  9. Once you've finished experimenting, tear down your stack's resources by destroying and removing it:

    $ pulumi destroy --yes
    $ pulumi stack rm --yes