Installing and configuring Torus for secret key management


root@ip-172-31-25-91:~/.ssh# DISTRO=$(lsb_release -i | awk '{print tolower($3)}')
root@ip-172-31-25-91:~/.ssh# CODENAME=$(lsb_release -c | awk '{print $2}')
root@ip-172-31-25-91:~/.ssh# sudo tee /etc/apt/sources.list.d/torus.list <<< "deb https://get.torus.sh/$DISTRO/ $CODENAME main"
deb https://get.torus.sh/ubuntu/ xenial main
root@ip-172-31-25-91:~/.ssh# apt-get update
Hit:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-backports InRelease [102 kB]
Ign:4 http://pkg.jenkins.io/debian-stable binary/ InRelease
Hit:5 http://pkg.jenkins.io/debian-stable binary/ Release
Get:6 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:7 https://download.docker.com/linux/ubuntu xenial InRelease
Ign:8 https://get.torus.sh/ubuntu xenial InRelease
Get:9 https://get.torus.sh/ubuntu xenial Release [864 B]
Ign:11 https://get.torus.sh/ubuntu xenial Release.gpg
Get:12 https://get.torus.sh/ubuntu xenial/main amd64 Packages [324 B]
Fetched 308 kB in 0s (661 kB/s)
Reading package lists... Done
W: The repository 'https://get.torus.sh/ubuntu xenial Release' is not signed.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.
root@ip-172-31-25-91:~/.ssh# apt-get install torus
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  torus
0 upgraded, 1 newly installed, 0 to remove and 28 not upgraded.
Need to get 2,124 kB of archives.
After this operation, 0 B of additional disk space will be used.
WARNING: The following packages cannot be authenticated!
  torus
Install these packages without verification? [y/N] y
Get:1 https://get.torus.sh/ubuntu xenial/main amd64 torus amd64 0.24.1 [2,124 kB]
Fetched 2,124 kB in 0s (7,829 kB/s)
Selecting previously unselected package torus.
(Reading database ... 124496 files and directories currently installed.)
Preparing to unpack .../torus_0.24.1_amd64.deb ...
Unpacking torus (0.24.1) ...
Setting up torus (0.24.1) ...
root@ip-172-31-25-91:~/.ssh# torus link
You must be logged in to run 'link'.
Login using 'login' or create an account using 'signup'.
root@ip-172-31-25-91:~/.ssh# torus link signup
You must be logged in to run 'link'.
Login using 'login' or create an account using 'signup'.
root@ip-172-31-25-91:~/.ssh# torus signup
By completing sign up, you agree to our terms of use (found at https://torus.sh/terms)
and our privacy policy (found at https://torus.sh/privacy)

✔ Full Name: satheesh kumar
✔ Username: jskcbe
✔ Email: satheeshj@soldatinc.com
✔ Password: ●●●●●●●●●
✔ Confirm Password: ●●●●●●●●●
Would you like to enable hints on how to use Torus?
They can be disabled at any time using `torus prefs`.
✔ Enable hints? [Y/n] y
Preferences updated.

You are now authenticated.
Keypairs generated
Signing keys signed
Signing keys uploaded
Encryption keys signed
Encryption keys uploaded

Your account has been created!

We have emailed you a verification code.
Please verify your email address by entering the code below.

✔ Verification code: CFKN6AAWG

Your email is now verified.
root@ip-172-31-25-91:~/.ssh# torus link
✔ Create a new organization: david
✔ Create a new project: scaleway
Keypairs generated
Signing keys signed
Signing keys uploaded
Encryption keys signed
Encryption keys uploaded
Org david created.
Project scaleway created.

This directory and its subdirectories have been linked to:
Org:     david
Project: scaleway

Use 'torus status' to view your full working context.
root@ip-172-31-25-91:~/.ssh# torus status
Org:         david
Project:     scaleway
Environment: dev-jskcbe
Service:     default
Identity:    jskcbe
Instance:    1

Credential path: /david/scaleway/dev-jskcbe/default/jskcbe/1
root@ip-172-31-25-91:~/.ssh# tourus services list
No command 'tourus' found, did you mean:
 Command 'torrus' from package 'torrus-common' (universe)
tourus: command not found
root@ip-172-31-25-91:~/.ssh# torus services list

scaleway (1)
------------
default

root@ip-172-31-25-91:~/.ssh# torus set organization
A secret name and value must be supplied.
Usage:
    torus set [command options] <name|path> <value> or <name|path>=<value>
root@ip-172-31-25-91:~/.ssh# torus set organization ac2680a1-df3f-4ca8-91e2-fb7e0e746ba6
Credentials retrieved
Keypairs retrieved
Encrypting key retrieved
Credential encrypted
Completed Operation

Credential organization has been set at /david/scaleway/dev-jskcbe/default/*/*/organization

Protip: See the exact path for each secret set using `torus view -v`
root@ip-172-31-25-91:~/.ssh# torus set token a272e8cd-5ac3-4a92-a82c-972f579e93c0
Credentials retrieved
Keypairs retrieved
Encrypting key retrieved
Credential encrypted
Completed Operation

Credential token has been set at /david/scaleway/dev-jskcbe/default/*/*/token

Protip: Start your process with your decrypted secrets using `torus run`
root@ip-172-31-25-91:~/.ssh# torus view
ORGANIZATION=ac2680a1-df3f-4ca8-91e2-fb7e0e746ba6
TOKEN=a272e8cd-5ac3-4a92-a82c-972f579e93c0

Protip: Start your process with your decrypted secrets using `torus run`
root@ip-172-31-25-91:~/.ssh#
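
Both Protips above point at torus run, which decrypts the stored secrets and exposes them to a child process as environment variables. A minimal sketch, assuming a hypothetical deploy.sh that reads the ORGANIZATION and TOKEN variables set above (the -- separator is assumed to end torus's own options):

torus run -- ./deploy.sh
# or check what the child process will see:
torus run -- env | grep -E 'ORGANIZATION|TOKEN'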

Installing Kubernetes on AWS EC2 using kops

Overview

This quickstart shows you how to install a Kubernetes cluster on AWS using a tool called kops.
kops is an opinionated provisioning system:
  • Fully automated installation
  • Uses DNS to identify clusters
  • Self-healing: everything runs in Auto-Scaling Groups
  • Limited OS support (Debian preferred, Ubuntu 16.04 supported, early support for CentOS & RHEL)
  • High-Availability support
  • Can directly provision, or generate terraform manifests
If your opinions differ from these you may prefer to build your own cluster using kubeadm as a building block. kops builds on the kubeadm work.

Creating a cluster

(1/5) Install kops

Requirements

You must have kubectl installed in order for kops to work.

Installation

Download kops from the releases page (it is also easy to build from source):
On macOS:
curl -OL https://github.com/kubernetes/kops/releases/download/1.8.0/kops-darwin-amd64
chmod +x kops-darwin-amd64
mv kops-darwin-amd64 /usr/local/bin/kops
# you can also install using Homebrew
brew update && brew install kops
On Linux:
wget https://github.com/kubernetes/kops/releases/download/1.8.0/kops-linux-amd64
chmod +x kops-linux-amd64
mv kops-linux-amd64 /usr/local/bin/kops
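
Whichever platform you are on, a quick sanity check confirms the binary is executable and on your PATH:

kops version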

(2/5) Create a Route53 domain for your cluster

kops uses DNS for discovery, both inside the cluster and so that you can reach the Kubernetes API server from clients.
kops has a strong opinion on the cluster name: it should be a valid DNS name. By doing so you will no longer get your clusters confused, you can share clusters with your colleagues unambiguously, and you can reach them without relying on remembering an IP address.
You can, and probably should, use subdomains to divide your clusters. As our example we will use useast1.dev.example.com. The API server endpoint will then be api.useast1.dev.example.com.
A Route53 hosted zone can serve subdomains. Your hosted zone could be useast1.dev.example.com, but also dev.example.com or even example.com. kops works with any of these, so typically you choose for organization reasons (e.g. you are allowed to create records under dev.example.com, but not under example.com).
Let’s assume you’re using dev.example.com as your hosted zone. You create that hosted zone using the normal process, or with a command such as aws route53 create-hosted-zone --name dev.example.com --caller-reference 1.
You must then set up your NS records in the parent domain, so that records in the domain will resolve. Here, you would create NS records in example.com for dev. If it is a root domain name you would configure the NS records at your domain registrar (e.g. example.com would need to be configured where you bought example.com).
This step is easy to mess up (it is the #1 cause of problems!). You can double-check that the delegation is configured correctly, if you have the dig tool, by running:
dig NS dev.example.com
You should see the 4 NS records that Route53 assigned to your hosted zone.
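
As a hedged sketch of this whole step, using the placeholder domain from the text (the awsdns hostnames shown are purely illustrative):

aws route53 create-hosted-zone --name dev.example.com --caller-reference 1
dig NS dev.example.com +short
# should print the four Route53 name servers, along the lines of:
# ns-1234.awsdns-56.org.
# ns-789.awsdns-12.co.uk.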

(3/5) Create an S3 bucket to store your cluster state

kops lets you manage your clusters even after installation. To do this, it must keep track of the clusters that you have created, along with their configuration, the keys they are using, etc. This information is stored in an S3 bucket, and S3 permissions are used to control access to it.
Multiple clusters can use the same S3 bucket, and you can share an S3 bucket between your colleagues that administer the same clusters - this is much easier than passing around kubecfg files. But anyone with access to the S3 bucket will have administrative access to all your clusters, so you don’t want to share it beyond the operations team.
So typically you have one S3 bucket for each ops team (and often the name will correspond to the name of the hosted zone above!).
In our example, we chose dev.example.com as our hosted zone, so let’s pick clusters.dev.example.com as the S3 bucket name.
  • Export AWS_PROFILE (if you need to select a profile for the AWS CLI to work)
  • Create the S3 bucket using aws s3 mb s3://clusters.dev.example.com
  • You can export KOPS_STATE_STORE=s3://clusters.dev.example.com and then kops will use this location by default. We suggest putting this in your bash profile or similar. (The three bullets are combined in the sketch below.)
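
The three bullets as one sketch (ops-team is a hypothetical profile name; skip AWS_PROFILE if you use default credentials):

export AWS_PROFILE=ops-team
aws s3 mb s3://clusters.dev.example.com
export KOPS_STATE_STORE=s3://clusters.dev.example.com
echo 'export KOPS_STATE_STORE=s3://clusters.dev.example.com' >> ~/.bashrc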

(4/5) Build your cluster configuration

Run “kops create cluster” to create your cluster configuration:
kops create cluster --zones=us-east-1c useast1.dev.example.com
kops will create the configuration for your cluster. Note that it only creates the configuration; it does not actually create the cloud resources - you'll do that in the next step with kops update cluster. This gives you an opportunity to review the configuration or change it.
It prints commands you can use to explore further:
  • List your clusters with: kops get cluster
  • Edit this cluster with: kops edit cluster useast1.dev.example.com
  • Edit your node instance group: kops edit ig --name=useast1.dev.example.com nodes
  • Edit your master instance group: kops edit ig --name=useast1.dev.example.com master-us-east-1c
If this is your first time using kops, do spend a few minutes trying those out! An instance group is a set of instances which will be registered as Kubernetes nodes. On AWS this is implemented via Auto Scaling groups. You can have several instance groups, for example if you want a mix of spot and on-demand instances, or GPU and non-GPU instances.
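
For reference, here is a sketch of the same create command with a few commonly used sizing flags; the values are illustrative and the available flags vary by kops version, so check kops create cluster --help:

kops create cluster \
  --zones=us-east-1c \
  --node-count=2 \
  --node-size=t2.medium \
  --master-size=m3.medium \
  useast1.dev.example.com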

(5/5) Create the cluster in AWS

Run “kops update cluster” to create your cluster in AWS:
kops update cluster useast1.dev.example.com --yes
That takes a few seconds to run, but your cluster will likely take a few minutes to actually be ready. kops update cluster will be the tool you use whenever you change the configuration of your cluster; it applies the changes you have made to your cluster, reconfiguring AWS or Kubernetes as needed.
For example, after you kops edit ig nodes, run kops update cluster --yes to apply your configuration, and sometimes you will also have to run kops rolling-update cluster to roll out the configuration immediately.
Without --yes, kops update cluster will show you a preview of what it is going to do. This is handy for production clusters.
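
The typical apply loop then looks like this sketch (kops validate cluster assumes a kops release that ships the validate command; kubectl get nodes only answers once the API server is reachable):

kops update cluster useast1.dev.example.com          # preview
kops update cluster useast1.dev.example.com --yes    # apply
kops validate cluster                                # wait until validation passes
kubectl get nodes                                    # nodes should report Ready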

==================================================================
root@ip-172-31-25-91:~# uname -a
Linux ip-172-31-25-91 4.4.0-1022-aws #31-Ubuntu SMP Tue Jun 27 11:27:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
root@ip-172-31-25-91:~# cat /etc/*release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.2 LTS"
NAME="Ubuntu"
VERSION="16.04.2 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.2 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
root@ip-172-31-25-91:~#

root@ip-172-31-25-91:~#
root@ip-172-31-25-91:~# kubectl
kubectl: command not found
root@ip-172-31-25-91:~#
root@ip-172-31-25-91:~#
root@ip-172-31-25-91:~#
root@ip-172-31-25-91:~#
root@ip-172-31-25-91:~# curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 68.9M  100 68.9M    0     0  41.9M      0  0:00:01  0:00:01 --:--:-- 41.9M
root@ip-172-31-25-91:~# ll
total 169384
drwx------  3 root root      4096 Sep  2 02:38 ./
drwxr-xr-x 23 root root      4096 Aug 28 17:05 ../
-rw-r--r--  1 root root      1082 Aug 21 05:36 alert1.yaml
-rw-r--r--  1 root root       927 Aug 21 05:47 alert2.yaml
-rw-r--r--  1 root root      1082 Aug 21 03:10 alert.yaml
-rw-------  1 root root      2896 Aug 24 23:40 .bash_history
-rw-r--r--  1 root root      3106 Oct 22  2015 .bashrc
-rw-r--r--  1 root root     15413 Aug 17 18:38 index.html
-rw-r--r--  1 root root         4 Aug 21 03:14 infile
-rw-r--r--  1 root root  72337373 Sep  2 02:38 kubectl
-rw-r--r--  1 root root       102 Aug 21 05:31 move.sh
-rw-r--r--  1 root root        33 Aug 21 05:45 new.yaml
-rw-r--r--  1 root root         4 Aug 21 03:15 outfile
-rw-r--r--  1 root root       148 Aug 17  2015 .profile
-rw-r--r--  1 root root 101028776 Apr 29  2016 rundeck-2.6.7-1-GA.deb
drwx------  2 root root      4096 Aug 20 17:57 .ssh/
-rw-------  1 root root      4194 Aug 24 20:16 .viminfo
root@ip-172-31-25-91:~# chmod +x ./kubectl
root@ip-172-31-25-91:~# mv ./kubectl /usr/local/bin/kubectl
root@ip-172-31-25-91:~# kubectl
kubectl controls the Kubernetes cluster manager.

Find more information at https://github.com/kubernetes/kubernetes.

Basic Commands (Beginner):
  create         Create a resource by filename or stdin
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  run-container  Run a particular image on the cluster
  set            Set specific features on objects

Basic Commands (Intermediate):
  get            Display one or many resources
  explain        Documentation of resources
  edit           Edit a resource on the server
  delete         Delete resources by filenames, stdin, resources and names, or by resources and label selector

Deploy Commands:
  rollout        Manage the rollout of a resource
  rolling-update Perform a rolling update of the given ReplicationController
  rollingupdate  Perform a rolling update of the given ReplicationController
  scale          Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  resize         Set a new size for a Deployment, ReplicaSet, Replication Controller, or Job
  autoscale      Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
  certificate    Modify certificate resources.
  cluster-info   Display cluster info
  clusterinfo    Display cluster info
  top            Display Resource (CPU/Memory/Storage) usage.
  cordon         Mark node as unschedulable
  uncordon       Mark node as schedulable
  drain          Drain node in preparation for maintenance
  taint          Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
  describe       Show details of a specific resource or group of resources
  logs           Print the logs for a container in a pod
  attach         Attach to a running container
  exec           Execute a command in a container
  port-forward   Forward one or more local ports to a pod
  proxy          Run a proxy to the Kubernetes API server
  cp             Copy files and directories to and from containers.
  auth           Inspect authorization

Advanced Commands:
  apply          Apply a configuration to a resource by filename or stdin
  patch          Update field(s) of a resource using strategic merge patch
  replace        Replace a resource by filename or stdin
  update         Replace a resource by filename or stdin
  convert        Convert config files between different API versions

Settings Commands:
  label          Update the labels on a resource
  annotate       Update the annotations on a resource
  completion     Output shell completion code for the specified shell (bash or zsh)

Other Commands:
  api-versions   Print the supported API versions on the server, in the form of "group/version"
  config         Modify kubeconfig files
  help           Help about any command
  plugin         Runs a command-line plugin
  version        Print the client and server version information

Use "kubectl --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
root@ip-172-31-25-91:~#

root@ip-172-31-25-91:~# wget https://github.com/kubernetes/kops/releases/download/1.6.1/kops-linux-amd64
--2017-09-02 02:41:15--  https://github.com/kubernetes/kops/releases/download/1.6.1/kops-linux-amd64
Resolving github.com (github.com)... 192.30.253.112, 192.30.253.113
Connecting to github.com (github.com)|192.30.253.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://github-production-release-asset-2e65be.s3.amazonaws.com/62091339/4e48c984-4eca-11e7-9a28-7434a10577bd?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20170902%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20170902T024115Z&X-Amz-Expires=300&X-Amz-Signature=7d8388da1352d2b24afee18a752d7fd04bf6cd89c64872662b325709d1321c37&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dkops-linux-amd64&response-content-type=application%2Foctet-stream [following]
--2017-09-02 02:41:15--  https://github-production-release-asset-2e65be.s3.amazonaws.com/62091339/4e48c984-4eca-11e7-9a28-7434a10577bd?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20170902%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20170902T024115Z&X-Amz-Expires=300&X-Amz-Signature=7d8388da1352d2b24afee18a752d7fd04bf6cd89c64872662b325709d1321c37&X-Amz-SignedHeaders=host&actor_id=0&response-content-disposition=attachment%3B%20filename%3Dkops-linux-amd64&response-content-type=application%2Foctet-stream
Resolving github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)... 52.216.64.0
Connecting to github-production-release-asset-2e65be.s3.amazonaws.com (github-production-release-asset-2e65be.s3.amazonaws.com)|52.216.64.0|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 72731008 (69M) [application/octet-stream]
Saving to: ‘kops-linux-amd64’

kops-linux-amd64                 100%[========================================================>]  69.36M  46.6MB/s    in 1.5s

2017-09-02 02:41:17 (46.6 MB/s) - ‘kops-linux-amd64’ saved [72731008/72731008]

root@ip-172-31-25-91:~# chmod +x kops-linux-amd64
root@ip-172-31-25-91:~# mv kops-linux-amd64 /usr/local/bin/kops
root@ip-172-31-25-91:~# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
tunlr/pybase        latest              cbf0a5328b87        8 days ago          700MB
ubuntu              xenial              ccc7a11d65b1        3 weeks ago         120MB
root@ip-172-31-25-91:~#
root@ip-172-31-25-91:~#




root@ip-172-31-25-91:~# kops create cluster --zones=us-east-1c useast1.soldatinc.net

error reading cluster configuration "useast1.soldatinc.net": error reading s3://clusters.soldatinc.net/useast1.soldatinc.net/config: Unable to list AWS regions: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
root@ip-172-31-25-91:~# kops create cluster --zones=us-east-1c useast1.soldatinc.net

error reading cluster configuration "useast1.soldatinc.net": error reading s3://clusters.soldatinc.net/useast1.soldatinc.net/config: Unable to list AWS regions: NoCredentialProviders: no valid providers in chain. Deprecated.
        For verbose messaging see aws.Config.CredentialsChainVerboseErrors
root@ip-172-31-25-91:~# vi aws-secrets
root@ip-172-31-25-91:~#
root@ip-172-31-25-91:~#
root@ip-172-31-25-91:~#
root@ip-172-31-25-91:~#
root@ip-172-31-25-91:~#  export AWS_ACCESS_KEY_ID="AKIAJxxxxxxQIBFQQ"
root@ip-172-31-25-91:~#    24  export AWS_SECRET_ACCESS_KEY="fm4MrQxxxxxSafVr80TNe3/1JuV"
24: command not found
root@ip-172-31-25-91:~# export AWS_SECRET_ACCESS_KEY="fm4BsKxxxVr80TNe3/1JuV"
root@ip-172-31-25-91:~# env |grep -i aws
AWS_SECRET_ACCESS_KEY=fm4adfBsK0fSxfP6+IafVr80TNe3/1JuV
AWS_ACCESS_KEY_ID=AKIOQNYVOxxxxxxxxx
root@ip-172-31-25-91:~#
root@ip-172-31-25-91:~#
root@ip-172-31-25-91:~# kops create cluster --zones=us-east-1c useast1.soldatinc.net
I0905 04:03:56.699709   15807 create_cluster.go:654] Inferred --cloud=aws from zone "us-east-1c"
I0905 04:03:56.701279   15807 create_cluster.go:833] Using SSH public key: /root/.ssh/id_rsa.pub
I0905 04:03:56.748782   15807 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet us-east-1c
Previewing changes that will be made:

I0905 04:03:57.446637   15807 executor.go:91] Tasks: 0 done / 63 total; 34 can run
I0905 04:03:57.653417   15807 executor.go:91] Tasks: 34 done / 63 total; 12 can run
I0905 04:03:57.737212   15807 executor.go:91] Tasks: 46 done / 63 total; 15 can run
I0905 04:03:57.797994   15807 executor.go:91] Tasks: 61 done / 63 total; 2 can run
I0905 04:03:57.832546   15807 executor.go:91] Tasks: 63 done / 63 total; 0 can run
Will create resources:
  AutoscalingGroup/master-us-east-1c.masters.useast1.soldatinc.net
        MinSize                 1
        MaxSize                 1
        Subnets                 [name:us-east-1c.useast1.soldatinc.net]
        Tags                    {Name: master-us-east-1c.masters.useast1.soldatinc.net, KubernetesCluster: useast1.soldatinc.net, k8s.io/role/master: 1}
        LaunchConfiguration     name:master-us-east-1c.masters.useast1.soldatinc.net

  AutoscalingGroup/nodes.useast1.soldatinc.net
        MinSize                 2
        MaxSize                 2
        Subnets                 [name:us-east-1c.useast1.soldatinc.net]
        Tags                    {k8s.io/role/node: 1, Name: nodes.useast1.soldatinc.net, KubernetesCluster: useast1.soldatinc.net}
        LaunchConfiguration     name:nodes.useast1.soldatinc.net

  DHCPOptions/useast1.soldatinc.net
        DomainName              ec2.internal
        DomainNameServers       AmazonProvidedDNS

  EBSVolume/c.etcd-events.useast1.soldatinc.net
        AvailabilityZone        us-east-1c
        VolumeType              gp2
        SizeGB                  20
        Encrypted               false
        Tags                    {k8s.io/etcd/events: c/c, k8s.io/role/master: 1, Name: c.etcd-events.useast1.soldatinc.net, KubernetesCluster: useast1.soldatinc.net}

  EBSVolume/c.etcd-main.useast1.soldatinc.net
        AvailabilityZone        us-east-1c
        VolumeType              gp2
        SizeGB                  20
        Encrypted               false
        Tags                    {k8s.io/etcd/main: c/c, k8s.io/role/master: 1, Name: c.etcd-main.useast1.soldatinc.net, KubernetesCluster: useast1.soldatinc.net}

  IAMInstanceProfile/masters.useast1.soldatinc.net

  IAMInstanceProfile/nodes.useast1.soldatinc.net

  IAMInstanceProfileRole/masters.useast1.soldatinc.net
        InstanceProfile         name:masters.useast1.soldatinc.net id:masters.useast1.soldatinc.net
        Role                    name:masters.useast1.soldatinc.net

  IAMInstanceProfileRole/nodes.useast1.soldatinc.net
        InstanceProfile         name:nodes.useast1.soldatinc.net id:nodes.useast1.soldatinc.net
        Role                    name:nodes.useast1.soldatinc.net

  IAMRole/masters.useast1.soldatinc.net
        ExportWithID            masters

  IAMRole/nodes.useast1.soldatinc.net
        ExportWithID            nodes

  IAMRolePolicy/masters.useast1.soldatinc.net
        Role                    name:masters.useast1.soldatinc.net

  IAMRolePolicy/nodes.useast1.soldatinc.net
        Role                    name:nodes.useast1.soldatinc.net

  InternetGateway/useast1.soldatinc.net
        VPC                     name:useast1.soldatinc.net
        Shared                  false

  Keypair/kops
        Subject                 o=system:masters,cn=kops
        Type                    client

  Keypair/kube-controller-manager
        Subject                 cn=system:kube-controller-manager
        Type                    client

  Keypair/kube-proxy
        Subject                 cn=system:kube-proxy
        Type                    client

  Keypair/kube-scheduler
        Subject                 cn=system:kube-scheduler
        Type                    client

  Keypair/kubecfg
        Subject                 o=system:masters,cn=kubecfg
        Type                    client

  Keypair/kubelet
        Subject                 o=system:nodes,cn=kubelet
        Type                    client

  Keypair/master
        Subject                 cn=kubernetes-master
        Type                    server
        AlternateNames          [100.64.0.1, 127.0.0.1, api.internal.useast1.soldatinc.net, api.useast1.soldatinc.net, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local]

  LaunchConfiguration/master-us-east-1c.masters.useast1.soldatinc.net
        ImageID                 kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-07-28
        InstanceType            m3.medium
        SSHKey                  name:kubernetes.useast1.soldatinc.net-0a:51:d7:9b:cf:5f:7e:23:6d:fa:bd:38:bf:c8:90:53 id:kubernetes.useast1.soldatinc.net-0a:51:d7:9b:cf:5f:7e:23:6d:fa:bd:38:bf:c8:90:53
        SecurityGroups          [name:masters.useast1.soldatinc.net]
        AssociatePublicIP       true
        IAMInstanceProfile      name:masters.useast1.soldatinc.net id:masters.useast1.soldatinc.net
        RootVolumeSize          20
        RootVolumeType          gp2
        SpotPrice

  LaunchConfiguration/nodes.useast1.soldatinc.net
        ImageID                 kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-07-28
        InstanceType            t2.medium
        SSHKey                  name:kubernetes.useast1.soldatinc.net-0a:51:d7:9b:cf:5f:7e:23:6d:fa:bd:38:bf:c8:90:53 id:kubernetes.useast1.soldatinc.net-0a:51:d7:9b:cf:5f:7e:23:6d:fa:bd:38:bf:c8:90:53
        SecurityGroups          [name:nodes.useast1.soldatinc.net]
        AssociatePublicIP       true
        IAMInstanceProfile      name:nodes.useast1.soldatinc.net id:nodes.useast1.soldatinc.net
        RootVolumeSize          20
        RootVolumeType          gp2
        SpotPrice

  ManagedFile/useast1.soldatinc.net-addons-bootstrap
        Location                addons/bootstrap-channel.yaml

  ManagedFile/useast1.soldatinc.net-addons-core.addons.k8s.io
        Location                addons/core.addons.k8s.io/v1.4.0.yaml

  ManagedFile/useast1.soldatinc.net-addons-dns-controller.addons.k8s.io-k8s-1.6
        Location                addons/dns-controller.addons.k8s.io/k8s-1.6.yaml

  ManagedFile/useast1.soldatinc.net-addons-dns-controller.addons.k8s.io-pre-k8s-1.6
        Location                addons/dns-controller.addons.k8s.io/pre-k8s-1.6.yaml

  ManagedFile/useast1.soldatinc.net-addons-kube-dns.addons.k8s.io-k8s-1.6
        Location                addons/kube-dns.addons.k8s.io/k8s-1.6.yaml

  ManagedFile/useast1.soldatinc.net-addons-kube-dns.addons.k8s.io-pre-k8s-1.6
        Location                addons/kube-dns.addons.k8s.io/pre-k8s-1.6.yaml

  ManagedFile/useast1.soldatinc.net-addons-limit-range.addons.k8s.io
        Location                addons/limit-range.addons.k8s.io/v1.5.0.yaml

  ManagedFile/useast1.soldatinc.net-addons-storage-aws.addons.k8s.io
        Location                addons/storage-aws.addons.k8s.io/v1.6.0.yaml

  Route/0.0.0.0/0
        RouteTable              name:useast1.soldatinc.net
        CIDR                    0.0.0.0/0
        InternetGateway         name:useast1.soldatinc.net

  RouteTable/useast1.soldatinc.net
        VPC                     name:useast1.soldatinc.net

  RouteTableAssociation/us-east-1c.useast1.soldatinc.net
        RouteTable              name:useast1.soldatinc.net
        Subnet                  name:us-east-1c.useast1.soldatinc.net

  SSHKey/kubernetes.useast1.soldatinc.net-0a:51:d7:9b:cf:5f:7e:23:6d:fa:bd:38:bf:c8:90:53
        KeyFingerprint          13:03:9e:35:87:bc:64:2b:e9:f5:92:81:3b:16:95:61

  Secret/admin

  Secret/kube

  Secret/kube-proxy

  Secret/kubelet

  Secret/system-controller_manager

  Secret/system-dns

  Secret/system-logging

  Secret/system-monitoring

  Secret/system-scheduler

  SecurityGroup/masters.useast1.soldatinc.net
        Description             Security group for masters
        VPC                     name:useast1.soldatinc.net
        RemoveExtraRules        [port=22, port=443, port=4001, port=4789, port=179]

  SecurityGroup/nodes.useast1.soldatinc.net
        Description             Security group for nodes
        VPC                     name:useast1.soldatinc.net
        RemoveExtraRules        [port=22]

  SecurityGroupRule/all-master-to-master
        SecurityGroup           name:masters.useast1.soldatinc.net
        SourceGroup             name:masters.useast1.soldatinc.net

  SecurityGroupRule/all-master-to-node
        SecurityGroup           name:nodes.useast1.soldatinc.net
        SourceGroup             name:masters.useast1.soldatinc.net

  SecurityGroupRule/all-node-to-node
        SecurityGroup           name:nodes.useast1.soldatinc.net
        SourceGroup             name:nodes.useast1.soldatinc.net

  SecurityGroupRule/https-external-to-master-0.0.0.0/0
        SecurityGroup           name:masters.useast1.soldatinc.net
        CIDR                    0.0.0.0/0
        Protocol                tcp
        FromPort                443
        ToPort                  443

  SecurityGroupRule/master-egress
        SecurityGroup           name:masters.useast1.soldatinc.net
        CIDR                    0.0.0.0/0
        Egress                  true

  SecurityGroupRule/node-egress
        SecurityGroup           name:nodes.useast1.soldatinc.net
        CIDR                    0.0.0.0/0
        Egress                  true

  SecurityGroupRule/node-to-master-tcp-1-4000
        SecurityGroup           name:masters.useast1.soldatinc.net
        Protocol                tcp
        FromPort                1
        ToPort                  4000
        SourceGroup             name:nodes.useast1.soldatinc.net

  SecurityGroupRule/node-to-master-tcp-4003-65535
        SecurityGroup           name:masters.useast1.soldatinc.net
        Protocol                tcp
        FromPort                4003
        ToPort                  65535
        SourceGroup             name:nodes.useast1.soldatinc.net

  SecurityGroupRule/node-to-master-udp-1-65535
        SecurityGroup           name:masters.useast1.soldatinc.net
        Protocol                udp
        FromPort                1
        ToPort                  65535
        SourceGroup             name:nodes.useast1.soldatinc.net

  SecurityGroupRule/ssh-external-to-master-0.0.0.0/0
        SecurityGroup           name:masters.useast1.soldatinc.net
        CIDR                    0.0.0.0/0
        Protocol                tcp
        FromPort                22
        ToPort                  22

  SecurityGroupRule/ssh-external-to-node-0.0.0.0/0
        SecurityGroup           name:nodes.useast1.soldatinc.net
        CIDR                    0.0.0.0/0
        Protocol                tcp
        FromPort                22
        ToPort                  22

  Subnet/us-east-1c.useast1.soldatinc.net
        VPC                     name:useast1.soldatinc.net
        AvailabilityZone        us-east-1c
        CIDR                    172.20.32.0/19
        Shared                  false
        Tags                    {KubernetesCluster: useast1.soldatinc.net, Name: us-east-1c.useast1.soldatinc.net, kubernetes.io/cluster/useast1.soldatinc.net: owned}

  VPC/useast1.soldatinc.net
        CIDR                    172.20.0.0/16
        EnableDNSHostnames      true
        EnableDNSSupport        true
        Shared                  false
        Tags                    {Name: useast1.soldatinc.net, kubernetes.io/cluster/useast1.soldatinc.net: owned, KubernetesCluster: useast1.soldatinc.net}

  VPCDHCPOptionsAssociation/useast1.soldatinc.net
        VPC                     name:useast1.soldatinc.net
        DHCPOptions             name:useast1.soldatinc.net

Must specify --yes to apply changes

Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster useast1.soldatinc.net
 * edit your node instance group: kops edit ig --name=useast1.soldatinc.net nodes
 * edit your master instance group: kops edit ig --name=useast1.soldatinc.net master-us-east-1c

Finally configure your cluster with: kops update cluster useast1.soldatinc.net --yes

root@ip-172-31-25-91:~#

To access the Dashboard

First-time login: use "admin" as the username, and get the password from the kops node with:

# kops get secrets kube --type secret -oplaintext
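
Then point a browser at the API server and log in as "admin" with that password. The /ui path is an assumption based on the dashboard add-on of this Kubernetes generation; adjust for your dashboard version:

# https://api.useast1.soldatinc.net/ui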



Installing Ingress Controller - Kubernetes

Installing the Ingress Controller: Prerequisites. Make sure you have access to the Ingress controller image. For the NGINX Ingress controll...