Terraform Error - remote-exec SSH connection

aws_instance.web (remote-exec): Connecting to remote host via SSH...
aws_instance.web (remote-exec):   Host: 54.196.20.124
aws_instance.web (remote-exec):   User: root
aws_instance.web (remote-exec):   Password: false
aws_instance.web (remote-exec):   Private key: false
aws_instance.web (remote-exec):   SSH Agent: false
(The same connection attempt, still with no password, private key, or SSH agent, repeated until the run was interrupted.)
Interrupt received. Gracefully shutting down...
Interrupt received. Gracefully shutting down...

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
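
The log above shows the root cause in plain sight: the remote-exec provisioner reports Password: false, Private key: false, SSH Agent: false, so Terraform has no credentials at all for the SSH session and simply retries until interrupted (Ubuntu AMIs also reject root logins in favor of the ubuntu user). A minimal sketch of the missing connection block, in the 0.x-era syntax used below; the .pem path is a hypothetical local copy of the newkeyaug2017 key:

```hcl
resource "aws_instance" "web" {
  # ami, instance_type, key_name as in the sample config that follows

  provisioner "remote-exec" {
    inline = ["echo connected"]

    connection {
      type        = "ssh"
      user        = "ubuntu"                                # not root on Ubuntu AMIs
      private_key = "${file("~/.ssh/newkeyaug2017.pem")}"   # hypothetical path
    }
  }
}
```

The instance also needs a public IP and a security group that allows inbound port 22, or the connection will hang just as silently.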

Terraform code for AWS EC2 instance creation

Sample Terraform Code:

provider "aws" {
  region     = "us-east-1"
  access_key = "xxxxxxxxxxxxxxFQQ"
  secret_key = "fmxxxxxxxxxxxxxxxxxxxxx1JuV"
}
data "aws_ami" "ubuntu" {
  most_recent = true
  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-trusty-14.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
  owners = ["099720109477"] # Canonical
}
resource "aws_instance" "web" {
  ami           = "${data.aws_ami.ubuntu.id}"
  instance_type = "t2.micro"
  key_name      = "newkeyaug2017"

  tags {
    Name = "HelloWorld"
  }
}
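
One caution about the provider block above: hardcoding access_key and secret_key means the credentials end up in version control. The AWS provider also reads the standard AWS environment variables, so both lines can simply be removed from the .tf file (the values below are placeholders, not the masked keys above):

```shell
# Placeholder credentials -- substitute real values. With these exported, the
# provider block only needs the region (or AWS_DEFAULT_REGION).
export AWS_ACCESS_KEY_ID="AKIAEXAMPLE"
export AWS_SECRET_ACCESS_KEY="example-secret"
export AWS_DEFAULT_REGION="us-east-1"
echo "AWS credentials exported for $AWS_DEFAULT_REGION"
```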





C:\Users\sjaganathan\Documents\terraform-aws>terraform.exe validate

C:\Users\sjaganathan\Documents\terraform-aws>terraform.exe plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

data.aws_ami.ubuntu: Refreshing state...
The Terraform execution plan has been generated and is shown below.
Resources are shown in alphabetical order for quick scanning. Green resources
will be created (or destroyed and then created if an existing resource
exists), yellow resources are being changed in-place, and red resources
will be destroyed. Cyan entries are data sources to be read.

Note: You didn't specify an "-out" parameter to save this plan, so when
"apply" is called, Terraform can't guarantee this is what will execute.

+ aws_instance.web
    ami:                         "ami-7dce6507"
    associate_public_ip_address: ""
    availability_zone:           ""
    ebs_block_device.#:          ""
    ephemeral_block_device.#:    ""
    instance_state:              ""
    instance_type:               "t2.micro"
    ipv6_addresses.#:            ""
    key_name:                    "newkeyaug2017"
    network_interface_id:        ""
    placement_group:             ""
    private_dns:                 ""
    private_ip:                  ""
    public_dns:                  ""
    public_ip:                   ""
    root_block_device.#:         ""
    security_groups.#:           ""
    source_dest_check:           "true"
    subnet_id:                   ""
    tags.%:                      "1"
    tags.Name:                   "HelloWorld"
    tenancy:                     ""
    vpc_security_group_ids.#:    ""


Plan: 1 to add, 0 to change, 0 to destroy.
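
The note in the plan output is worth heeding: without -out, the later apply re-plans from scratch and can act on a different diff than the one just reviewed. A sketch of the safer two-step flow (same working directory; these commands were not run in this session):

```shell
terraform.exe plan -out=web.tfplan    # save exactly the reviewed plan
terraform.exe apply web.tfplan        # apply that saved plan, nothing else
```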

C:\Users\sjaganathan\Documents\terraform-aws>terraform.exe apply
data.aws_ami.ubuntu: Refreshing state...
aws_instance.web: Creating...
  ami:                         "" => "ami-7dce6507"
  associate_public_ip_address: "" => ""
  availability_zone:           "" => ""
  ebs_block_device.#:          "" => ""
  ephemeral_block_device.#:    "" => ""
  instance_state:              "" => ""
  instance_type:               "" => "t2.micro"
  ipv6_addresses.#:            "" => ""
  key_name:                    "" => "newkeyaug2017"
  network_interface_id:        "" => ""
  placement_group:             "" => ""
  private_dns:                 "" => ""
  private_ip:                  "" => ""
  public_dns:                  "" => ""
  public_ip:                   "" => ""
  root_block_device.#:         "" => ""
  security_groups.#:           "" => ""
  source_dest_check:           "" => "true"
  subnet_id:                   "" => ""
  tags.%:                      "" => "1"
  tags.Name:                   "" => "HelloWorld"
  tenancy:                     "" => ""
  vpc_security_group_ids.#:    "" => ""
aws_instance.web: Still creating... (10s elapsed)
aws_instance.web: Creation complete (ID: i-00a8c11897b8f9532)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path:

C:\Users\sjaganathan\Documents\terraform-aws>

Vault UI Docker container installation and errors


[root@ip-172-31-39-86 vault-ui]# docker images
REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
&lt;none&gt;                &lt;none&gt;              8fa3fd9b95f7        8 hours ago         68.63 MB
djenriquez/vault-ui   latest              09077b84b73c        21 hours ago        78.61 MB
vault                 latest              6f550e834e5a        2 days ago          80.51 MB
tomcat                latest              1269f3761db5        2 weeks ago         560.2 MB
node                  8.6-alpine          b7e15c83cdaf        4 weeks ago         67.18 MB
[root@ip-172-31-39-86 vault-ui]# ./run-docker-compose-dev
------------- yarn install -------------
./run-docker-compose-dev: line 3: yarn: command not found
------------- docker-compose up -d -------------
Building webpack
Step 1 : FROM node:8.6-alpine
 ---> b7e15c83cdaf
Step 2 : LABEL maintainer "Vault-UI Contributors"
 ---> Using cache
 ---> 9b7de5e4265c
Step 3 : WORKDIR /app
 ---> Using cache
 ---> 300cebc9574d
Step 4 : COPY . .
 ---> eb2e4519e6f6
Removing intermediate container 5e1ce4a5fda2
Step 5 : RUN yarn install --pure-lockfile --silent &&     yarn run build-web &&     yarn install --silent --production &&     yarn check --verify-tree --production &&     yarn global add nodemon &&     yarn cache clean &&     rm -f /root/.electron/*
 ---> Running in 9c2cc4237125
yarn run v1.1.0
$ webpack -p --env.target=web --hide-modules
[BABEL] Note: The code generator has deoptimised the styling of "/app/node_modules/lodash/lodash.js" as it exceeds the max of "500KB".
[BABEL] Note: The code generator has deoptimised the styling of "/app/node_modules/brace/index.js" as it exceeds the max of "500KB".
Killed
error Command failed with exit code 137.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
ERROR: Service 'webpack' failed to build: The command '/bin/sh -c yarn install --pure-lockfile --silent &&     yarn run build-web &&     yarn install --silent --production &&     yarn check --verify-tree --production &&     yarn global add nodemon &&     yarn cache clean &&     rm -f /root/.electron/*' returned a non-zero code: 1
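
The key detail is exit code 137: that is 128 + 9, the shell's convention for a process killed by SIGKILL, which during a heavy webpack -p build on a small instance almost always means the kernel OOM killer, not a code error. The convention itself is easy to verify:

```shell
# A process terminated by SIGKILL (signal 9) exits with status 128 + 9 = 137 --
# the same status yarn reports here when the OOM killer ends webpack.
code=$(sh -c 'kill -KILL $$'; echo $?)
echo "exit code: $code"
```

So the fix is memory (a bigger instance or swap on the Docker host), not the Dockerfile.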

------------- docker-compose ps -------------
Name   Command   State   Ports
------------------------------

------------- vault auth  -------------
ERROR: No container found for vault_1

------------- vault status -------------
ERROR: No container found for vault_1

------------- vault auth-enable userpass -------------
ERROR: No container found for vault_1

------------- vault auth-enable -path=userpass2 userpass -------------
ERROR: No container found for vault_1

------------- vault auth-enable github -------------
ERROR: No container found for vault_1

------------- vault auth-enable radius -------------
ERROR: No container found for vault_1

------------- vault auth-enable -path=awsaccount1 aws-ec2 -------------
ERROR: No container found for vault_1

------------- vault auth-enable okta -------------
ERROR: No container found for vault_1

------------- vault auth-enable approle -------------
ERROR: No container found for vault_1

------------- vault policy-write admin /misc/admin.hcl -------------
ERROR: No container found for vault_1

------------- vault write auth/userpass/users/test password=test policies=admin -------------
ERROR: No container found for vault_1

------------- vault write auth/userpass2/users/john password=doe policies=admin -------------
ERROR: No container found for vault_1

------------- vault write auth/userpass/users/lame password=lame policies=default -------------
ERROR: No container found for vault_1

------------- vault write auth/radius/users/test password=test policies=admin -------------
ERROR: No container found for vault_1

------------- vault write secret/test somekey=somedata -------------
ERROR: No container found for vault_1

------------- vault mount -path=ultrasecret generic -------------
ERROR: No container found for vault_1

------------- vault write ultrasecret/moretest somekey=somedata -------------
ERROR: No container found for vault_1

------------- vault write ultrasecret/dir1/secret somekey=somedata -------------
ERROR: No container found for vault_1

------------- vault write ultrasecret/dir2/secret somekey=somedata -------------
ERROR: No container found for vault_1

------------- vault write ultrasecret/dir2/secret2 somekey=somedata -------------
ERROR: No container found for vault_1

------------- vault write ultrasecret/admincantlistthis/butcanreadthis somekey=somedata -------------
ERROR: No container found for vault_1

------------- vault write ultrasecret/admincantreadthis somekey=somedata -------------
ERROR: No container found for vault_1

------------- Vault Root Token -------------
[root@ip-172-31-39-86 vault-ui]# pip install yarn
-bash: pip: command not found
[root@ip-172-31-39-86 vault-ui]#
[root@ip-172-31-39-86 vault-ui]#
[root@ip-172-31-39-86 vault-ui]# yum install yarn
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
No package yarn available.
Error: Nothing to do
[root@ip-172-31-39-86 vault-ui]# yum provides yarn
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
No matches found
[root@ip-172-31-39-86 vault-ui]# sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo
--2017-10-28 03:29:11--  https://dl.yarnpkg.com/rpm/yarn.repo
Resolving dl.yarnpkg.com (dl.yarnpkg.com)... 104.16.62.173, 104.16.63.173, 104.16.59.173, ...
Connecting to dl.yarnpkg.com (dl.yarnpkg.com)|104.16.62.173|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 130 [application/octet-stream]
Saving to: ‘/etc/yum.repos.d/yarn.repo’

100%[================================================================================================================>] 130         --.-K/s   in 0s

2017-10-28 03:29:11 (33.7 MB/s) - ‘/etc/yum.repos.d/yarn.repo’ saved [130/130]

[root@ip-172-31-39-86 vault-ui]# curl --silent --location https://rpm.nodesource.com/setup_6.x | sudo bash -

## Installing the NodeSource Node.js 6.x repo...


## Inspecting system...

+ rpm -q --whatprovides redhat-release || rpm -q --whatprovides centos-release || rpm -q --whatprovides cloudlinux-release || rpm -q --whatprovides sl-release
+ uname -m

## Confirming "el7-x86_64" is supported...

+ curl -sLf -o /dev/null 'https://rpm.nodesource.com/pub_6.x/el/7/x86_64/nodesource-release-el7-1.noarch.rpm'

## Downloading release setup RPM...

+ mktemp
+ curl -sL -o '/tmp/tmp.z8cUdKyEYL' 'https://rpm.nodesource.com/pub_6.x/el/7/x86_64/nodesource-release-el7-1.noarch.rpm'

## Installing release setup RPM...

+ rpm -i --nosignature --force '/tmp/tmp.z8cUdKyEYL'

## Cleaning up...

+ rm -f '/tmp/tmp.z8cUdKyEYL'

## Checking for existing installations...

+ rpm -qa 'node|npm' | grep -v nodesource

## Run `yum install -y nodejs` (as root) to install Node.js 6.x and npm.
## You may also need development tools to build native addons:
##   `yum install -y gcc-c++ make`

[root@ip-172-31-39-86 vault-ui]# yum install yarn
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
nodesource                                                                                                                         | 2.5 kB  00:00:00
yarn                                                                                                                               | 2.9 kB  00:00:00
(1/2): nodesource/x86_64/primary_db                                                                                                |  46 kB  00:00:00
(2/2): yarn/primary_db                                                                                                             |  16 kB  00:00:00
Resolving Dependencies
--> Running transaction check
---> Package yarn.noarch 0:1.2.1-1 will be installed
--> Processing Dependency: nodejs for package: yarn-1.2.1-1.noarch
--> Running transaction check
---> Package nodejs.x86_64 2:6.11.5-1nodesource will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==========================================================================================================================================================
 Package                         Arch                            Version                                        Repository                           Size
==========================================================================================================================================================
Installing:
 yarn                            noarch                          1.2.1-1                                        yarn                                860 k
Installing for dependencies:
 nodejs                          x86_64                          2:6.11.5-1nodesource                           nodesource                           13 M

Transaction Summary
==========================================================================================================================================================
Install  1 Package (+1 Dependent package)

Total download size: 14 M
Installed size: 41 M
Is this ok [y/d/N]: y
Downloading packages:
warning: /var/cache/yum/x86_64/7Server/yarn/packages/yarn-1.2.1-1.noarch.rpm: Header V4 RSA/SHA256 Signature, key ID 6963f07f: NOKEY
Public key for yarn-1.2.1-1.noarch.rpm is not installed
(1/2): yarn-1.2.1-1.noarch.rpm                                                                                                     | 860 kB  00:00:00
warning: /var/cache/yum/x86_64/7Server/nodesource/packages/nodejs-6.11.5-1nodesource.x86_64.rpm: Header V4 RSA/SHA256 Signature, key ID 34fa74dd: NOKEY
Public key for nodejs-6.11.5-1nodesource.x86_64.rpm is not installed
(2/2): nodejs-6.11.5-1nodesource.x86_64.rpm                                                                                        |  13 MB  00:00:01
----------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                      13 MB/s |  14 MB  00:00:01
Retrieving key from file:///etc/pki/rpm-gpg/NODESOURCE-GPG-SIGNING-KEY-EL
Importing GPG key 0x34FA74DD:
 Userid     : "NodeSource "
 Fingerprint: 2e55 207a 95d9 944b 0cc9 3261 5ddb e8d4 34fa 74dd
 Package    : nodesource-release-el7-1.noarch (installed)
 From       : /etc/pki/rpm-gpg/NODESOURCE-GPG-SIGNING-KEY-EL
Is this ok [y/N]: y
Retrieving key from https://dl.yarnpkg.com/rpm/pubkey.gpg
Importing GPG key 0x6963F07F:
 Userid     : "Yarn RPM Packaging "
 Fingerprint: 9a6f 73f3 4beb 7473 4d8c 6914 9cbb b558 6963 f07f
 From       : https://dl.yarnpkg.com/rpm/pubkey.gpg
Is this ok [y/N]: y
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Warning: RPMDB altered outside of yum.
  Installing : 2:nodejs-6.11.5-1nodesource.x86_64                                                                                                     1/2
  Installing : yarn-1.2.1-1.noarch                                                                                                                    2/2
  Verifying  : 2:nodejs-6.11.5-1nodesource.x86_64                                                                                                     1/2
  Verifying  : yarn-1.2.1-1.noarch                                                                                                                    2/2

Installed:
  yarn.noarch 0:1.2.1-1

Dependency Installed:
  nodejs.x86_64 2:6.11.5-1nodesource

Complete!
[root@ip-172-31-39-86 vault-ui]#
[root@ip-172-31-39-86 vault-ui]# ./run-docker-compose-dev
------------- yarn install -------------

yarn install v1.2.1
[1/4] Resolving packages...
[2/4] Fetching packages...
info fsevents@1.1.2: The platform "linux" is incompatible with this module.
info "fsevents@1.1.2" is an optional dependency and failed compatibility check. Excluding it from installation.
info 7zip-bin-mac@1.0.1: The platform "linux" is incompatible with this module.
info "7zip-bin-mac@1.0.1" is an optional dependency and failed compatibility check. Excluding it from installation.
info 7zip-bin-win@2.1.1: The platform "linux" is incompatible with this module.
info "7zip-bin-win@2.1.1" is an optional dependency and failed compatibility check. Excluding it from installation.
[3/4] Linking dependencies...
[4/4] Building fresh packages...
Done in 47.52s.
------------- docker-compose up -d -------------
Building webpack
Step 1 : FROM node:8.6-alpine
 ---> b7e15c83cdaf
Step 2 : LABEL maintainer "Vault-UI Contributors"
 ---> Using cache
 ---> 9b7de5e4265c
Step 3 : WORKDIR /app
 ---> Using cache
 ---> 300cebc9574d
Step 4 : COPY . .
 ---> Using cache
 ---> eb2e4519e6f6
Step 5 : RUN yarn install --pure-lockfile --silent &&     yarn run build-web &&     yarn install --silent --production &&     yarn check --verify-tree --production &&     yarn global add nodemon &&     yarn cache clean &&     rm -f /root/.electron/*
 ---> Running in fdd9822cc9d4
yarn run v1.1.0
$ webpack -p --env.target=web --hide-modules
[BABEL] Note: The code generator has deoptimised the styling of "/app/node_modules/lodash/lodash.js" as it exceeds the max of "500KB".
Killed
error Command failed with exit code 137.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
ERROR: Service 'webpack' failed to build: The command '/bin/sh -c yarn install --pure-lockfile --silent &&     yarn run build-web &&     yarn install --silent --production &&     yarn check --verify-tree --production &&     yarn global add nodemon &&     yarn cache clean &&     rm -f /root/.electron/*' returned a non-zero code: 1
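
The rebuild dies the same way: installing yarn on the host fixed the wrapper script's local `yarn install`, but the webpack step still runs inside the Docker build with the same constrained host memory. A quick check of what the host actually has (MemTotal is reported in kB):

```shell
# /proc/meminfo exists on any Linux host; a t2.micro-class instance shows
# roughly 1000 MB, commonly too little for webpack -p on a project this size.
awk '/^MemTotal/ {printf "total: %d MB\n", $2/1024}' /proc/meminfo
```

If the total is about 1 GB, a larger instance type or temporary swap on the host is the usual remedy.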

------------- docker-compose ps -------------
Name   Command   State   Ports
------------------------------

(Every vault setup command that followed failed with "ERROR: No container found for vault_1", exactly as in the first attempt.)

------------- Vault Root Token -------------
[root@ip-172-31-39-86 vault-ui]#

Chef Cookbooks - Upload




[root@chef-automate-server-qgr4glxhllrewxvs cookbooks]# cd ..
[root@chef-automate-server-qgr4glxhllrewxvs chef-repo]# ll
total 120
drwxr-xr-x 6 root root  4096 Sep 20 23:25 apache2
-rw-r--r-- 1 root root    79 Sep 20 17:37 Berksfile
-rw------- 1 root root   299 Sep 20 17:37 Berksfile.lock
-rw-r--r-- 1 root root   156 Sep 19 20:04 chefignore
drwxr-xr-x 3 root root  4096 Sep 20 23:30 cookbooks
drwxr-xr-x 2 root root  4096 Sep 19 20:04 environments
drwxrwxr-x 4 1000 1000  4096 Apr 22  2016 lamp
-rw------- 1 root root 71680 Sep 20 21:39 lamp-1.0.4.tar
-rw-r--r-- 1 root root  3929 Sep 19 20:04 README.md
drwxr-xr-x 2 root root  4096 Sep 19 20:04 roles
-rw-r--r-- 1 root root  5625 Sep 19 20:04 userdata.ps1
-rw-r--r-- 1 root root  3106 Sep 19 20:04 userdata.sh
[root@chef-automate-server-qgr4glxhllrewxvs chef-repo]# mv lamp /root/chef-repo/cookbooks/
[root@chef-automate-server-qgr4glxhllrewxvs chef-repo]# knife cookbook upload lamp
Uploading lamp           [1.0.4]
Uploaded 1 cookbook.
[root@chef-automate-server-qgr4glxhllrewxvs chef-repo]#
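
The mv is the actual fix here: knife cookbook upload resolves cookbook names against the cookbook_path configured in knife.rb (conventionally chef-repo/cookbooks), so the lamp cookbook had to live there before the upload could succeed. Two commands to confirm the result, assuming the same knife.rb (not run in this session):

```shell
knife cookbook list         # lamp should be listed at 1.0.4
knife cookbook show lamp    # versions of lamp known to the Chef server
```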

Grunt Build Output

[Docker] INFO: BuildResponseItem[stream=Successfully built 9f82ec61892c
,status=,progressDetail=,progress=,id=,from=,time=,errorDetail=,error=,aux=]
[Docker] INFO: Build image id:9f82ec61892c
[docker7] $ /bin/sh -xe /tmp/jenkins2237505636879407858.sh
+ docker run -d -p 9000:9000 mportalui:latest sh -c 'sleep 3000 & wait'
f4e06850c795da753b90185f29ffff180ff7c16ecbcee931d1a46f808ee04b49
+ sudo /bin/bash /root/newsh.sh
Running "clean:dist" (clean) task
>> 19 paths cleaned.

Running "wiredep:app" (wiredep) task

Running "wiredep:test" (wiredep) task

Running "wiredep:sass" (wiredep) task

Running "useminPrepare:html" (useminPrepare) task
Configuration changed for concat, uglify, cssmin

Running "concurrent:dist" (concurrent) task
>> Warning: There are more tasks than your concurrency limit. After this limit
>> is reached no further tasks will be run until the current tasks are
>> completed. You can adjust the limit in the concurrent task options
    
    Running "compass:dist" (compass) task
    Warning: not found: compass  Used --force, continuing.
    
    Done, but with warnings.
    
    
    Execution Time (2017-09-28 21:52:04 UTC-0)
    loading tasks                  408ms  ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 80%
    loading grunt-contrib-compass   66ms  ▇▇▇▇▇ 13%
    compass:dist                    32ms  ▇▇ 6%
    Total 507ms
        
    Running "svgmin:dist" (svgmin) task
    ✔ app/images/ic_email_black_24px.svg (saved 26 B 10%)
    ✔ app/images/ic_lock_black_24px.svg (saved 26 B 7%)
    ✔ app/images/ic_perm_identity_black_24px.svg (saved 26 B 6%)
    Total saved: 78 B
    
    Done.
    
    
    Execution Time (2017-09-28 21:52:06 UTC-0)
    loading tasks         378ms  ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 53%
    loading grunt-svgmin  108ms  ▇▇▇▇▇▇▇ 15%
    svgmin:dist           215ms  ▇▇▇▇▇▇▇▇▇▇▇▇▇ 30%
    Total 716ms
        
    Running "imagemin:dist" (imagemin) task
    Minified 14 images (saved 17.51 kB)
    
    Done.
    
    
    Execution Time (2017-09-28 21:52:04 UTC-0)
    loading tasks                   406ms  ▇▇▇▇▇ 16%
    loading grunt-contrib-imagemin  680ms  ▇▇▇▇▇▇▇▇▇ 27%
    imagemin:dist                    1.4s  ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 57%
    Total 2.5s
    
Running "postcss:server" (postcss) task

Running "postcss:dist" (postcss) task

Running "ngtemplates:dist" (ngtemplates) task
File .tmp/templateCache.js created.
Added .tmp/templateCache.js to 

Running "concat:generated" (concat) task
File .tmp/concat/scripts/vendor.js created.
File .tmp/concat/scripts/scripts.js created.

Running "ngAnnotate:dist" (ngAnnotate) task
>> 2 files successfully generated.

Running "copy:dist" (copy) task
Copied 9 files
Running "cdnify:dist" (cdnify) task
Going through dist/404.html, dist/index.html to update script refs
(node:6) [DEP0022] DeprecationWarning: os.tmpDir() is deprecated. Use os.tmpdir() instead.
✔ bower_components/angular/angular.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular.min.js
✔ bower_components/angular-animate/angular-animate.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular-animate.min.js
✔ bower_components/angular-cookies/angular-cookies.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular-cookies.min.js
✔ bower_components/angular-resource/angular-resource.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular-resource.min.js
✔ bower_components/angular-route/angular-route.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular-route.min.js
✔ bower_components/angular-sanitize/angular-sanitize.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular-sanitize.min.js
✔ bower_components/angular-touch/angular-touch.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular-touch.min.js
✔ bower_components/angular/angular.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular.min.js
✔ bower_components/angular-animate/angular-animate.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular-animate.min.js
✔ bower_components/angular-cookies/angular-cookies.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular-cookies.min.js
✔ bower_components/angular-resource/angular-resource.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular-resource.min.js
✔ bower_components/angular-route/angular-route.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular-route.min.js
✔ bower_components/angular-sanitize/angular-sanitize.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular-sanitize.min.js
✔ bower_components/angular-touch/angular-touch.js changed to //ajax.googleapis.com/ajax/libs/angularjs/1.6.1/angular-touch.min.js

Running "cssmin:generated" (cssmin) task
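
One warning buried in this output matters: "not found: compass" means grunt-contrib-compass could not find the compass binary on PATH, so the Sass compile was skipped and only --force let the build continue. Compass is a Ruby gem, so a likely fix on the build host (assuming Ruby and RubyGems are installed; not run here) is:

```shell
gem install compass
which compass    # should resolve before re-running grunt
```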

ChefDK and Knife

[root@ip-172-31-16-75 ~]# wget https://packages.chef.io/files/stable/chefdk/2.3.1/el/7/chefdk-2.3.1-1.el7.x86_64.rpm
--2017-09-20 15:35:14--  https://packages.chef.io/files/stable/chefdk/2.3.1/el/7/chefdk-2.3.1-1.el7.x86_64.rpm
Resolving packages.chef.io (packages.chef.io)... 151.101.34.110
Connecting to packages.chef.io (packages.chef.io)|151.101.34.110|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 104886580 (100M) [application/x-rpm]
Saving to: ‘chefdk-2.3.1-1.el7.x86_64.rpm’

chefdk-2.3.1-1.el7. 100%[===================>] 100.03M  64.4MB/s    in 1.6s

2017-09-20 15:35:16 (64.4 MB/s) - ‘chefdk-2.3.1-1.el7.x86_64.rpm’ saved [104886580/104886580]

[root@ip-172-31-16-75 ~]# rpm -ivh https://packages.chef.io/files/stable/chefdk/2.3.1/el/7/chefdk-2.3.1-1.el7.x86_64.rpm^C
[root@ip-172-31-16-75 ~]# rpm -ivh chefdk-2.3.1-1.el7.x86_64.rpm
warning: chefdk-2.3.1-1.el7.x86_64.rpm: Header V4 DSA/SHA1 Signature, key ID 83ef826a: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:chefdk-2.3.1-1.el7               ################################# [100%]
Thank you for installing Chef Development Kit!
[root@ip-172-31-16-75 ~]# which ruby
/usr/bin/ruby
[root@ip-172-31-16-75 ~]# chef
Usage:
    chef -h/--help
    chef -v/--version
    chef command [arguments...] [options...]


Available Commands:
    exec                    Runs the command in context of the embedded ruby
    env                     Prints environment variables used by ChefDK
    gem                     Runs the `gem` command in context of the embedded ruby
    generate                Generate a new app, cookbook, or component
    shell-init              Initialize your shell to use ChefDK as your primary ruby
    install                 Install cookbooks from a Policyfile and generate a locked cookbook set
    update                  Updates a Policyfile.lock.json with latest run_list and cookbooks
    push                    Push a local policy lock to a policy group on the server
    push-archive            Push a policy archive to a policy group on the server
    show-policy             Show policyfile objects on your Chef Server
    diff                    Generate an itemized diff of two Policyfile lock documents
    provision               Provision VMs and clusters via cookbook
    export                  Export a policy lock as a Chef Zero code repo
    clean-policy-revisions  Delete unused policy revisions on the server
    clean-policy-cookbooks  Delete unused policyfile cookbooks on the server
    delete-policy-group     Delete a policy group on the server
    delete-policy           Delete all revisions of a policy on the server
    undelete                Undo a delete command
[root@ip-172-31-16-75 ~]# knife
ERROR: You need to pass a sub-command (e.g., knife SUB-COMMAND)

Usage: knife sub-command (options)
    -s, --server-url URL             Chef Server URL
        --chef-zero-host HOST        Host to start chef-zero on
        --chef-zero-port PORT        Port (or port range) to start chef-zero on.  Port ranges like 1000,1010 or 8889-9999 will try all given ports until one works.
    -k, --key KEY                    API Client Key
        --[no-]color                 Use colored output, defaults to enabled
    -c, --config CONFIG              The configuration file to use
        --config-option OPTION=VALUE Override a single configuration option
        --defaults                   Accept default values for all questions
    -d, --disable-editing            Do not open EDITOR, just accept the data as is
    -e, --editor EDITOR              Set the editor to use for interactive commands
    -E, --environment ENVIRONMENT    Set the Chef environment (except for in searches, where this will be flagrantly ignored)
        --[no-]fips                  Enable fips mode
    -F, --format FORMAT              Which format to use for output
        --[no-]listen                Whether a local mode (-z) server binds to a port
    -z, --local-mode                 Point knife commands at local repository instead of server
    -u, --user USER                  API Client Username
        --print-after                Show the data after a destructive operation
    -V, --verbose                    More verbose output. Use twice for max verbosity
    -v, --version                    Show chef version
    -y, --yes                        Say yes to all prompts for confirmation
    -h, --help                       Show this message

Available subcommands: (for details, knife SUB-COMMAND --help)

** OPSCODE PRIVATE CHEF ORGANIZATION MANAGEMENT COMMANDS **
knife opc org create ORG_SHORT_NAME ORG_FULL_NAME (options)
knife opc org delete ORG_NAME
knife opc org edit ORG
knife opc org list
knife opc org show ORGNAME
knife opc org user add ORG_NAME USER_NAME
knife opc org user remove ORG_NAME USER_NAME
knife opc user create USERNAME FIRST_NAME [MIDDLE_NAME] LAST_NAME EMAIL PASSWORD
knife opc user delete USERNAME [-d]
knife opc user edit USERNAME
knife opc user list
knife opc user password USERNAME [PASSWORD | --enable-external-auth]
knife opc user show USERNAME

** BOOTSTRAP COMMANDS **
knife bootstrap [SSH_USER@]FQDN (options)
knife bootstrap windows ssh FQDN (options)
knife bootstrap windows winrm FQDN (options)

** CLIENT COMMANDS **
knife client bulk delete REGEX (options)
knife client create CLIENTNAME (options)
knife client delete [CLIENT[,CLIENT]] (options)
knife client edit CLIENT (options)
knife client key create CLIENT (options)
knife client key delete CLIENT KEYNAME (options)
knife client key edit CLIENT KEYNAME (options)
knife client key list CLIENT (options)
knife client key show CLIENT KEYNAME (options)
knife client list (options)
knife client reregister CLIENT (options)
knife client show CLIENT (options)

** CONFIGURE COMMANDS **
knife configure (options)
knife configure client DIRECTORY

** COOKBOOK COMMANDS **
knife cookbook bulk delete REGEX (options)
knife cookbook delete COOKBOOK VERSION (options)
knife cookbook download COOKBOOK [VERSION] (options)
knife cookbook list (options)
knife cookbook metadata COOKBOOK (options)
knife cookbook metadata from FILE (options)
knife cookbook show COOKBOOK [VERSION] [PART] [FILENAME] (options)
knife cookbook test [COOKBOOKS...] (options)
knife cookbook upload [COOKBOOKS...] (options)

** COOKBOOK SITE COMMANDS **
knife cookbook site download COOKBOOK [VERSION] (options)
knife cookbook site install COOKBOOK [VERSION] (options)
knife cookbook site list (options)
knife cookbook site search QUERY (options)
knife cookbook site share COOKBOOK [CATEGORY] (options)
knife cookbook site show COOKBOOK [VERSION] (options)
knife cookbook site unshare COOKBOOK

** DATA BAG COMMANDS **
knife data bag create BAG [ITEM] (options)
knife data bag delete BAG [ITEM] (options)
knife data bag edit BAG ITEM (options)
knife data bag from file BAG FILE|FOLDER [FILE|FOLDER..] (options)
knife data bag list (options)
knife data bag show BAG [ITEM] (options)

** EC2 COMMANDS **
knife ec2 amis ubuntu DISTRO [TYPE] (options)

** ENVIRONMENT COMMANDS **
knife environment compare [ENVIRONMENT..] (options)
knife environment create ENVIRONMENT (options)
knife environment delete ENVIRONMENT (options)
knife environment edit ENVIRONMENT (options)
knife environment from file FILE [FILE..] (options)
knife environment list (options)
knife environment show ENVIRONMENT (options)

** EXEC COMMANDS **
knife exec [SCRIPT] (options)

** HELP COMMANDS **
knife help [list|TOPIC]

** INDEX COMMANDS **
knife index rebuild (options)

** JOB COMMANDS **
knife job list
knife job output JOB_ID NODE [NODE ...]
knife job start COMMAND [NODE ...]
knife job status JOB_ID

** KNIFE COMMANDS **
Usage: /usr/bin/knife (options)

** NODE COMMANDS **
knife node bulk delete REGEX (options)
knife node create NODE (options)
knife node delete [NODE[,NODE]] (options)
knife node edit NODE (options)
knife node environment set NODE ENVIRONMENT
knife node from file FILE (options)
knife node list (options)
knife node run_list add [NODE] [ENTRY[,ENTRY]] (options)
knife node run_list remove [NODE] [ENTRY[,ENTRY]] (options)
knife node run_list set NODE ENTRIES (options)
knife node show NODE (options)
knife node status [NODE ...]

** NULL COMMANDS **
knife null

** OSC COMMANDS **
knife osc_user create USER (options)
knife osc_user delete USER (options)
knife osc_user edit USER (options)
knife osc_user list (options)
knife osc_user reregister USER (options)
knife osc_user show USER (options)

** PATH-BASED COMMANDS **
knife delete [PATTERN1 ... PATTERNn]
knife deps PATTERN1 [PATTERNn]
knife diff PATTERNS
knife download PATTERNS
knife edit [PATTERN1 ... PATTERNn]
knife list [-dfR1p] [PATTERN1 ... PATTERNn]
knife show [PATTERN1 ... PATTERNn]
knife upload PATTERNS
knife xargs [COMMAND]

** RAW COMMANDS **
knife raw REQUEST_PATH

** RECIPE COMMANDS **
knife recipe list [PATTERN]

** REHASH COMMANDS **
knife rehash

** ROLE COMMANDS **
knife role bulk delete REGEX (options)
knife role create ROLE (options)
knife role delete ROLE (options)
knife role edit ROLE (options)
knife role env_run_list add [ROLE] [ENVIRONMENT] [ENTRY[,ENTRY]] (options)
knife role env_run_list clear [ROLE] [ENVIRONMENT]
knife role env_run_list remove [ROLE] [ENVIRONMENT] [ENTRIES]
knife role env_run_list replace [ROLE] [ENVIRONMENT] [OLD_ENTRY] [NEW_ENTRY]
knife role env_run_list set [ROLE] [ENVIRONMENT] [ENTRIES]
knife role from file FILE [FILE..] (options)
knife role list (options)
knife role run_list add [ROLE] [ENTRY[,ENTRY]] (options)
knife role run_list clear [ROLE]
knife role run_list remove [ROLE] [ENTRY]
knife role run_list replace [ROLE] [OLD_ENTRY] [NEW_ENTRY]
knife role run_list set [ROLE] [ENTRIES]
knife role show ROLE (options)

** SEARCH COMMANDS **
knife search INDEX QUERY (options)

** SERVE COMMANDS **
knife serve (options)

** SPORK COMMANDS **
knife spork bump COOKBOOK [major|minor|patch|manual]
knife spork check COOKBOOK (options)
knife spork data bag create BAG [ITEM] (options)
knife spork data bag delete BAG [ITEM] (options)
knife spork data bag edit BAG ITEM (options)
knife spork data bag from file BAG FILE|FOLDER [FILE|FOLDER..] (options)
knife spork delete [COOKBOOKS...] (options)
knife spork environment check ENVIRONMENT (options)
knife spork environment create ENVIRONMENT (options)
knife spork environment delete ENVIRONMENT (options)
knife spork environment edit ENVIRONMENT (options)
knife spork environment from file FILENAME (options)
knife spork info
knife spork node create NODE (options)
knife spork node delete NODE (options)
knife spork node edit NODE (options)
knife spork node from file FILE (options)
knife spork node run_list add [NODE] [ENTRY[,ENTRY]] (options)
knife spork node run_list set NODE ENTRIES (options)
knife spork omni COOKBOOK (options)
knife spork promote ENVIRONMENT COOKBOOK (options)
knife spork role create ROLE (options)
knife spork role delete ROLENAME (options)
knife spork role edit ROLENAME (options)
knife spork role from file FILENAME (options)
knife spork upload [COOKBOOKS...] (options)

** SSH COMMANDS **
knife ssh QUERY COMMAND (options)

** SSL COMMANDS **
knife ssl check [URL] (options)
knife ssl fetch [URL] (options)

** STATUS COMMANDS **
knife status QUERY (options)

** SUPERMARKET COMMANDS **
knife supermarket download COOKBOOK [VERSION] (options)
knife supermarket install COOKBOOK [VERSION] (options)
knife supermarket list (options)
knife supermarket search QUERY (options)
knife supermarket share COOKBOOK [CATEGORY] (options)
knife supermarket show COOKBOOK [VERSION] (options)
knife supermarket unshare COOKBOOK (options)

** TAG COMMANDS **
knife tag create NODE TAG ...
knife tag delete NODE TAG ...
knife tag list NODE

** USER COMMANDS **
knife user create USERNAME DISPLAY_NAME FIRST_NAME LAST_NAME EMAIL PASSWORD (options)
knife user delete USER (options)
knife user edit USER (options)
knife user key create USER (options)
knife user key delete USER KEYNAME (options)
knife user key edit USER KEYNAME (options)
knife user key list USER (options)
knife user key show USER KEYNAME (options)
knife user list (options)
knife user reregister USER (options)
knife user show USER (options)

** VAULT COMMANDS **
knife vault create VAULT ITEM VALUES (options)
knife vault delete VAULT ITEM (options)
knife vault download VAULT ITEM PATH (options)
knife vault edit VAULT ITEM (options)
knife vault isvault VAULT ITEM (options)
knife vault itemtype VAULT ITEM (options)
knife vault list (options)
knife vault refresh VAULT ITEM
knife vault remove VAULT ITEM VALUES (options)
knife vault rotate all keys
knife vault rotate keys VAULT ITEM (options)
knife vault show VAULT [ITEM] [VALUES] (options)
knife vault update VAULT ITEM VALUES (options)

** WINDOWS COMMANDS **
knife windows cert generate FILE_PATH (options)
knife windows cert install CERT [CERT] (options)
knife bootstrap windows winrm FQDN (options)
knife bootstrap windows ssh FQDN (options)
knife winrm QUERY COMMAND (options)
knife wsman test QUERY (options)
knife windows listener create (options)

** WINRM COMMANDS **
knife winrm QUERY COMMAND (options)

** WSMAN COMMANDS **
knife wsman test QUERY (options)

[root@ip-172-31-16-75 ~]# chef generate app
Usage: chef generate app NAME [options]
    -C, --copyright COPYRIGHT        Name of the copyright holder - defaults to 'The Authors'
    -m, --email EMAIL                Email address of the author - defaults to 'you@example.com'
    -a, --generator-arg KEY=VALUE    Use to set arbitrary attribute KEY to VALUE in the code_generator cookbook
    -h, --help                       Show this message
    -I, --license LICENSE            all_rights, apachev2, mit, gplv2, gplv3 - defaults to all_rights
    -v, --version                    Show chef version
    -g GENERATOR_COOKBOOK_PATH,      Use GENERATOR_COOKBOOK_PATH for the code_generator cookbook
        --generator-cookbook

[root@ip-172-31-16-75 ~]# pwd
/root



[root@ip-172-31-16-75 ~]# which chef
/usr/bin/chef
[root@ip-172-31-16-75 ~]# knife client list
WARNING: No knife configuration file found
WARN: Failed to read the private key /etc/chef/client.pem: #
ERROR: Your private key could not be loaded from /etc/chef/client.pem
Check your configuration file and ensure that your private key is readable
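Every knife invocation above warns "No knife configuration file found". Writing a minimal knife.rb silences that; a sketch follows, in which every value is a placeholder (the node name, key path, and server URL are assumptions and must be replaced with your own):

```shell
# Write a minimal knife.rb. All values below are placeholders --
# adjust node_name, client_key, and chef_server_url to your setup.
mkdir -p "$HOME/.chef"
cat > "$HOME/.chef/knife.rb" <<'EOF'
node_name         "admin"
client_key        "#{ENV['HOME']}/.chef/admin.pem"
chef_server_url   "https://chef-automate-server-qgr4glxhllrewxvs/organizations/default"
trusted_certs_dir "#{ENV['HOME']}/.chef/trusted_certs"
EOF
grep chef_server_url "$HOME/.chef/knife.rb"
```

Note the `#{ENV['HOME']}` interpolation is Ruby, evaluated by knife when it loads the file, which is why the heredoc is single-quoted so the shell leaves it alone.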
[root@ip-172-31-16-75 ~]# ls -la
total 102472
dr-xr-x---  5 root root      4096 Sep 20 15:35 .
dr-xr-xr-x 25 root root      4096 Sep 19 20:14 ..
-rw-r--r--  1 root root        18 Jan 15  2011 .bash_logout
-rw-r--r--  1 root root       176 Jan 15  2011 .bash_profile
-rw-r--r--  1 root root       176 Jan 15  2011 .bashrc
-rw-r--r--  1 root root 104886580 Sep 14 18:53 chefdk-2.3.1-1.el7.x86_64.rpm
-rw-r--r--  1 root root       100 Jan 15  2011 .cshrc
drwxr-----  3 root root      4096 Sep 20 15:35 .pki
drwxr-xr-x  3 root root      4096 Sep 19 20:15 .python-eggs
drwx------  2 root root      4096 Sep 19 20:06 .ssh
-rw-r--r--  1 root root       129 Jan 15  2011 .tcshrc
------------------------------------------------------------------------------------------------------------------------
  • On each workstation, this directory is the location into which SSL certificates are placed after they are downloaded from the Chef server using the knife ssl fetch subcommand
[root@ip-172-31-16-75 ~]# knife ssl fetch
WARNING: No knife configuration file found
WARNING: Certificates from localhost will be fetched and placed in your trusted_cert
directory (/root/.chef/trusted_certs).

Knife has no means to verify these are the correct certificates. You should
verify the authenticity of these certificates after downloading.

Adding certificate for chef-automate-server-qgr4glxhllrewxvs in /root/.chef/trusted_certs/chef-automate-server-qgr4glxhllrewxvs.crt
Adding certificate for AWS_OpsWorks_Intermediate_CA_for_us-east-1_region in /root/.chef/trusted_certs/AWS_OpsWorks_Intermediate_CA_for_us-east-1_region.crt
Adding certificate for AWS_OpsWorks_Root_CA in /root/.chef/trusted_certs/AWS_OpsWorks_Root_CA.crt

--------------------------------------------------------------------------------------------------------------------------
  • On every node, this directory is the location into which SSL certificates are placed when a node has been bootstrapped with the chef-client from a workstation
[root@ip-172-31-16-75 ~]# cd .chef/
[root@ip-172-31-16-75 .chef]# ll
total 4
drwxr-xr-x 2 root root 4096 Sep 20 16:15 trusted_certs
[root@ip-172-31-16-75 .chef]# cd trusted_certs/
[root@ip-172-31-16-75 trusted_certs]# ll
total 12
-rw-r--r-- 1 root root 2155 Sep 20 16:15 AWS_OpsWorks_Intermediate_CA_for_us-east-1_region.crt
-rw-r--r-- 1 root root 2147 Sep 20 16:15 AWS_OpsWorks_Root_CA.crt
-rw-r--r-- 1 root root 1704 Sep 20 16:15 chef-automate-server-qgr4glxhllrewxvs.crt
[root@ip-172-31-16-75 trusted_certs]# ll /opt/chef/embedded/ssl/certs/cacert.pem
ls: cannot access /opt/chef/embedded/ssl/certs/cacert.pem: No such file or directory
[root@ip-172-31-16-75 trusted_certs]# ll /opt/chefdk/embedded/ssl/certs/cacert.pem
-rw-r--r-- 1 root root 256008 Sep 14 18:51 /opt/chefdk/embedded/ssl/certs/cacert.pem
[root@ip-172-31-16-75 trusted_certs]#

--------------------------------------------------------------------------------------------------------------------------




[root@ip-172-31-16-75 trusted_certs]# knife ssl check
WARNING: No knife configuration file found

Configuration Info:

OpenSSL Configuration:
* Version: OpenSSL 1.0.2l  25 May 2017
* Certificate file: /opt/chefdk/embedded/ssl/cert.pem
* Certificate directory: /opt/chefdk/embedded/ssl/certs
Chef SSL Configuration:
* ssl_ca_path: nil
* ssl_ca_file: nil
* trusted_certs_dir: "/root/.chef/trusted_certs"
WARNING: There are invalid certificates in your trusted_certs_dir.
OpenSSL will not use the following certificates when verifying SSL connections:

/root/.chef/trusted_certs/AWS_OpsWorks_Intermediate_CA_for_us-east-1_region.crt: unable to get local issuer certificate
/root/.chef/trusted_certs/chef-automate-server-qgr4glxhllrewxvs.crt: unable to get local issuer certificate


TO FIX THESE WARNINGS:

We are working on documentation for resolving common issues uncovered here.

* If the certificate is generated by the server, you may try redownloading the
server's certificate. By default, the certificate is stored in the following
location on the host where your chef-server runs:

  /var/opt/opscode/nginx/ca/SERVER_HOSTNAME.crt

Copy that file to your trusted_certs_dir (currently: /root/.chef/trusted_certs)
using SSH/SCP or some other secure method, then re-run this command to confirm
that the server's certificate is now trusted.

Connecting to host localhost:443
ERROR: The SSL cert is signed by a trusted authority but is not valid for the given hostname
ERROR: You are attempting to connect to:   'localhost'
ERROR: The server's certificate belongs to 'chef-automate-server-qgr4glxhllrewxvs'

TO FIX THIS ERROR:

The solution for this issue depends on your networking configuration. If you
are able to connect to this server using the hostname chef-automate-server-qgr4glxhllrewxvs
instead of localhost, then you can resolve this issue by updating chef_server_url
in your configuration file.

If you are not able to connect to the server using the hostname chef-automate-server-qgr4glxhllrewxvs
you will have to update the certificate on the server to use the correct hostname.
[root@ip-172-31-16-75 trusted_certs]# ll /var/opt/opscode/nginx/ca/SERVER_HOSTNAME.crt
ls: cannot access /var/opt/opscode/nginx/ca/SERVER_HOSTNAME.crt: No such file or directory
[root@ip-172-31-16-75 trusted_certs]# cd /var/opt/opscode/nginx/ca/
[root@ip-172-31-16-75 ca]# pwd
/var/opt/opscode/nginx/ca
[root@ip-172-31-16-75 ca]# ls
chef-automate-server-qgr4glxhllrewxvs.us-east-1.opsworks-cm.io.crt  chef-automate-server-qgr4glxhllrewxvs.us-east-1.opsworks-cm.io.key  dhparams.pem
[root@ip-172-31-16-75 ca]# ls -la
total 20
drwxr-x--- 2 opscode opscode 4096 Sep 19 20:12 .
drwxr-x--- 8 opscode opscode 4096 Sep 19 20:11 ..
-rw-r--r-- 1 root    root    1562 Sep 19 20:11 chef-automate-server-qgr4glxhllrewxvs.us-east-1.opsworks-cm.io.crt
-rw-r--r-- 1 root    root    1679 Sep 19 20:11 chef-automate-server-qgr4glxhllrewxvs.us-east-1.opsworks-cm.io.key
-rw-r--r-- 1 root    root     424 Sep 19 20:12 dhparams.pem
[root@ip-172-31-16-75 ca]# cp chef-automate-server-qgr4glxhllrewxvs.us-east-1.opsworks-cm.io.crt /root/.chef/trusted_certs/
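The hostname mismatch that knife ssl check reported can be confirmed by reading the certificate's subject with openssl. A sketch, using a throwaway self-signed certificate as a stand-in (on the real server you would point the second command at the .crt just copied into trusted_certs):

```shell
# Create a throwaway self-signed cert whose CN mimics the server's,
# then read back the subject -- the CN is the hostname the cert is valid for.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=chef-automate-server-qgr4glxhllrewxvs.us-east-1.opsworks-cm.io"
openssl x509 -noout -subject -in /tmp/demo.crt
```

If the CN (or a subjectAltName entry) does not match the hostname you connect with, the "not valid for the given hostname" error above is expected.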
[root@ip-172-31-16-75 ca]#




How to install Docker on Red Hat Linux (RHEL 7) using yum

Prerequisites

To install CS Docker Engine, you need root or sudo privileges and access to a command line on the system.

Install using a repository

Install on CentOS 7.1/7.2 & RHEL 7.0/7.1/7.2/7.3 (YUM-based systems)

This section explains how to install CS Docker Engine on CentOS 7.1/7.2 and RHEL 7.0/7.1/7.2/7.3; only these versions are supported (CentOS 7.0 is not). On RHEL, depending on your current level of updates, you may need to reboot the server to update its kernel.
  1. Add the Docker public key for CS Docker Engine packages:
    $ sudo rpm --import "https://sks-keyservers.net/pks/lookup?op=get&search=0xee6d536cf7dc86e2d7d56f59a178ac6c6238f52e"
    
    Note: If the key server above does not respond, you can try one of these:
    • pgp.mit.edu
    • keyserver.ubuntu.com
  2. Install yum-utils if necessary:
    $ sudo yum install -y yum-utils
    
  3. Add the Docker repository:
    $ sudo yum-config-manager --add-repo https://packages.docker.com/1.12/yum/repo/main/centos/7
    
    This adds the repository of the latest version of CS Docker Engine. You can customize the URL to install an older version.
  4. Install Docker CS Engine:
    • Latest version:
      $ sudo yum makecache fast
      
      $ sudo yum install docker-engine
      
    • Specific version:
      On production systems, you should install a specific version rather than relying on the latest.
      1. List the available versions:
        $ yum list docker-engine.x86_64  --showduplicates |sort -r
        
        The second column represents the version.
      2. Install a specific version by adding the version after docker-engine, separated by a hyphen (-):
        $ sudo yum install docker-engine-<VERSION>
        
  5. Configure devicemapper:
    By default, the devicemapper graph driver does not come pre-configured in a production-ready state. Follow the documented step by step instructions to configure devicemapper with direct-lvm for production to achieve the best performance and reliability for your environment.
  6. Configure the Docker daemon to start automatically when the system starts, and start it now.
    $ sudo systemctl enable docker.service
    $ sudo systemctl start docker.service
    
  7. Confirm the Docker daemon is running:
    $ sudo docker info
    
  8. Only users with sudo access will be able to run docker commands. Optionally, add non-sudo access to the Docker socket by adding your user to the docker group.
    $ sudo usermod -a -G docker $USER
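One caveat with step 8: the usermod change only applies to new login sessions. A quick sketch for checking whether the current shell already has the group (the group name docker is the standard one created by the package):

```shell
# usermod -a -G docker only affects *new* logins; `id -nG` lists the
# groups of the current session, so it tells you whether to re-login.
if id -nG | tr ' ' '\n' | grep -qx docker; then
  echo "docker group already active in this session"
else
  echo "log out and back in, or start a subshell with: newgrp docker"
fi
```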

Installing Hashicorp Vault

https://www.vaultproject.io/downloads.html

Download the Vault zip for your platform (Linux 32-bit, 64-bit, or arm/arm64), unzip it, and run the extracted binary directly (it is a single static executable, not a shell script):

./vault

Running it with no arguments lists the available commands; see the official documentation for usage details.

$ vault
usage: vault [-version] [-help]  [args]

Common commands:
    delete           Delete operation on secrets in Vault
    path-help        Look up the help for a path
    read             Read data or secrets from Vault
    renew            Renew the lease of a secret
    revoke           Revoke a secret.
    server           Start a Vault server
    status           Outputs status of whether Vault is sealed and if HA mode is enabled
    unwrap           Unwrap a wrapped secret
    write            Write secrets or configuration into Vault

All other commands:
    audit-disable    Disable an audit backend
    audit-enable     Enable an audit backend
    audit-list       Lists enabled audit backends in Vault
    auth             Prints information about how to authenticate with Vault
    auth-disable     Disable an auth provider
    auth-enable      Enable a new auth provider
    capabilities     Fetch the capabilities of a token on a given path
    generate-root    Generates a new root token
    init             Initialize a new Vault server
    key-status       Provides information about the active encryption key
    list             List data or secrets in Vault
    mount            Mount a logical backend
    mount-tune       Tune mount configuration parameters
    mounts           Lists mounted backends in Vault
    policies         List the policies on the server
    policy-delete    Delete a policy from the server
    policy-write     Write a policy to the server
    rekey            Rekeys Vault to generate new unseal keys
    remount          Remount a secret backend to a new path
    rotate           Rotates the backend encryption key used to persist data
    seal             Seals the vault server
    ssh              Initiate a SSH session
    step-down        Force the Vault node to give up active duty
    token-create     Create a new auth token
    token-lookup     Display information about the specified token
    token-renew      Renew an auth token if there is an associated lease
    token-revoke     Revoke one or more auth tokens
    unmount          Unmount a secret backend
    unseal           Unseals the vault server
    version          Prints the Vault version
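Putting a few of these commands together: the usual first contact with Vault is a dev-mode server plus a write/read round trip. A sketch matching the 0.x CLI shown above (dev mode stores everything in memory and starts unsealed, so it is for experimentation only; `secret/` is the default generic mount):

```
# Terminal 1: start an in-memory dev server (it prints a root token on startup)
$ vault server -dev

# Terminal 2: point the client at it and exercise status/write/read
$ export VAULT_ADDR='http://127.0.0.1:8200'
$ vault status
$ vault write secret/hello value=world
$ vault read secret/hello
```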

Spinning EC2 instance using Ansible

[root@ip-172-31-46-185 ~]# /usr/local/bin/pip install boto
Requirement already satisfied: boto in /usr/lib/python2.7/dist-packages
[root@ip-172-31-46-185 ~]# /usr/local/bin/pip install ansible
Collecting ansible
  Downloading ansible-2.3.2.0.tar.gz (4.3MB)
    100% |████████████████████████████████| 4.3MB 272kB/s
Requirement already satisfied: jinja2 in /usr/lib/python2.7/dist-packages (from ansible)
Requirement already satisfied: PyYAML in /usr/lib64/python2.7/dist-packages (from ansible)
Requirement already satisfied: paramiko in /usr/lib/python2.7/dist-packages (from ansible)
Requirement already satisfied: pycrypto>=2.6 in /usr/lib64/python2.7/dist-packages (from ansible)
Requirement already satisfied: setuptools in /usr/lib/python2.7/dist-packages (from ansible)
Requirement already satisfied: markupsafe in /usr/lib64/python2.7/dist-packages (from jinja2->ansible)
Requirement already satisfied: ecdsa>=0.11 in /usr/lib/python2.7/dist-packages (from paramiko->ansible)
Installing collected packages: ansible
  Running setup.py install for ansible ... done
Successfully installed ansible-2.3.2.0
[root@ip-172-31-46-185 ~]# ansible
Usage: ansible [options]

Options:
  -a MODULE_ARGS, --args=MODULE_ARGS
                        module arguments
  --ask-vault-pass      ask for vault password
  -B SECONDS, --background=SECONDS
                        run asynchronously, failing after X seconds
                        (default=N/A)
  -C, --check           don't make any changes; instead, try to predict some
                        of the changes that may occur
  -D, --diff            when changing (small) files and templates, show the
                        differences in those files; works great with --check
  -e EXTRA_VARS, --extra-vars=EXTRA_VARS
                        set additional variables as key=value or YAML/JSON
  -f FORKS, --forks=FORKS
                        specify number of parallel processes to use
                        (default=5)
  -h, --help            show this help message and exit
  -i INVENTORY, --inventory-file=INVENTORY
                        specify inventory host path
                        (default=/etc/ansible/hosts) or comma separated host
                        list.
  -l SUBSET, --limit=SUBSET
                        further limit selected hosts to an additional pattern
  --list-hosts          outputs a list of matching hosts; does not execute
                        anything else
  -m MODULE_NAME, --module-name=MODULE_NAME
                        module name to execute (default=command)
  -M MODULE_PATH, --module-path=MODULE_PATH
                        specify path(s) to module library (default=None)
  --new-vault-password-file=NEW_VAULT_PASSWORD_FILE
                        new vault password file for rekey
  -o, --one-line        condense output
  --output=OUTPUT_FILE  output file name for encrypt or decrypt; use - for
                        stdout
  -P POLL_INTERVAL, --poll=POLL_INTERVAL
                        set the poll interval if using -B (default=15)
  --syntax-check        perform a syntax check on the playbook, but do not
                        execute it
  -t TREE, --tree=TREE  log output to this directory
  --vault-password-file=VAULT_PASSWORD_FILE
                        vault password file
  -v, --verbose         verbose mode (-vvv for more, -vvvv to enable
                        connection debugging)
  --version             show program's version number and exit

  Connection Options:
    control as whom and how to connect to hosts

    -k, --ask-pass      ask for connection password
    --private-key=PRIVATE_KEY_FILE, --key-file=PRIVATE_KEY_FILE
                        use this file to authenticate the connection
    -u REMOTE_USER, --user=REMOTE_USER
                        connect as this user (default=None)
    -c CONNECTION, --connection=CONNECTION
                        connection type to use (default=smart)
    -T TIMEOUT, --timeout=TIMEOUT
                        override the connection timeout in seconds
                        (default=10)
    --ssh-common-args=SSH_COMMON_ARGS
                        specify common arguments to pass to sftp/scp/ssh (e.g.
                        ProxyCommand)
    --sftp-extra-args=SFTP_EXTRA_ARGS
                        specify extra arguments to pass to sftp only (e.g. -f,
                        -l)
    --scp-extra-args=SCP_EXTRA_ARGS
                        specify extra arguments to pass to scp only (e.g. -l)
    --ssh-extra-args=SSH_EXTRA_ARGS
                        specify extra arguments to pass to ssh only (e.g. -R)

  Privilege Escalation Options:
    control how and which user you become as on target hosts

    -s, --sudo          run operations with sudo (nopasswd) (deprecated, use
                        become)
    -U SUDO_USER, --sudo-user=SUDO_USER
                        desired sudo user (default=root) (deprecated, use
                        become)
    -S, --su            run operations with su (deprecated, use become)
    -R SU_USER, --su-user=SU_USER
                        run operations with su as this user (default=root)
                        (deprecated, use become)
    -b, --become        run operations with become (does not imply password
                        prompting)
    --become-method=BECOME_METHOD
                        privilege escalation method to use (default=sudo),
                        valid choices: [ sudo | su | pbrun | pfexec | doas |
                        dzdo | ksu | runas ]
    --become-user=BECOME_USER
                        run operations as this user (default=root)
    --ask-sudo-pass     ask for sudo password (deprecated, use become)
    --ask-su-pass       ask for su password (deprecated, use become)
    -K, --ask-become-pass
                        ask for privilege escalation password
ERROR! Missing target hosts
[root@ip-172-31-46-185 ~]# pwd
/root
[root@ip-172-31-46-185 ~]# vi aws-secrets
[root@ip-172-31-46-185 ~]#
[root@ip-172-31-46-185 ~]#
[root@ip-172-31-46-185 ~]# cat aws-secrets
Access Key ID:
AKIAJYOQNYVOVBQIBFQQ
Secret Access Key:
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
[root@ip-172-31-46-185 ~]#
[root@ip-172-31-46-185 ~]#
[root@ip-172-31-46-185 ~]#
[root@ip-172-31-46-185 ~]# export AWS_ACCESS_KEY_ID="AKIAJYOQNYVOVBQIBFQQ"
[root@ip-172-31-46-185 ~]# export AWS_SECRET_ACCESS_KEY="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"
[root@ip-172-31-46-185 ~]# vi hosts
[root@ip-172-31-46-185 ~]#
[root@ip-172-31-46-185 ~]#
[root@ip-172-31-46-185 ~]# vi ec2-basic.yml
[root@ip-172-31-46-185 ~]#
[root@ip-172-31-46-185 ~]# ll
total 12
-rw-r--r-- 1 root root   96 Sep  4 19:08 aws-secrets
-rw-r--r-- 1 root root 2458 Sep  4 19:30 ec2-basic.yml
-rw-r--r-- 1 root root   31 Sep  4 19:10 hosts
[root@ip-172-31-46-185 ~]# ansible-playbook -i ./hosts ec2-basic.yml

PLAY [Provision an EC2 Instance] ****************************************************************************

TASK [Create a security group] ******************************************************************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Region us-east-1c does not seem to be available for aws module boto.ec2. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path"}
        to retry, use: --limit @/root/ec2-basic.retry

PLAY RECAP **************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1

[root@ip-172-31-46-185 ~]# vi ec2-basic.yml
[root@ip-172-31-46-185 ~]# ansible-playbook -i ./hosts ec2-basic.yml

PLAY [Provision an EC2 Instance] ****************************************************************************

TASK [Create a security group] ******************************************************************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Region us-east-1b does not seem to be available for aws module boto.ec2. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path"}
        to retry, use: --limit @/root/ec2-basic.retry

PLAY RECAP **************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1

[root@ip-172-31-46-185 ~]# vi ec2-basic.yml

[root@ip-172-31-46-185 ~]# cat ec2-basic.yml
---
  - name: Provision an EC2 Instance
    hosts: local
    connection: local
    gather_facts: False
    tags: provisioning
    # Necessary Variables for creating/provisioning the EC2 Instance
    vars:
      instance_type: t2.micro
      security_group: web-Security-Group-1 # Change the security group name here
      image: ami-a4c7edb2 # This is an AMI i created myself
      keypair:  newkeyaug2017 # This is one of my keys that i already have in AWS
      region: us-east-1 # Change the Region
      count: 2

    # Task that will be used to Launch/Create an EC2 Instance
    tasks:

      - name: Create a security group
        local_action:
          module: ec2_group
          name: "{{ security_group }}"
          description: Security Group for webserver Servers
          region: "{{ region }}"
          rules:
            - proto: tcp
              from_port: 22
              to_port: 22
              cidr_ip: 0.0.0.0/0
            - proto: tcp
              from_port: 80
              to_port: 80
              cidr_ip: 0.0.0.0/0
            - proto: tcp
              from_port: 443
              to_port: 443
              cidr_ip: 0.0.0.0/0
          rules_egress:
            - proto: all
              cidr_ip: 0.0.0.0/0
        register: basic_firewall

      - name: Launch the new EC2 Instance
        local_action: ec2
                      group={{ security_group }}
                      instance_type={{ instance_type}}
                      image={{ image }}
                      wait=true
                      region={{ region }}
                      keypair={{ keypair }}
                      count={{count}}
        register: ec2

      - name: Wait for SSH to come up
        local_action: wait_for
                      host={{ item.public_ip }}
                      port=22
                      state=started
        with_items: ec2.instances

      - name: Add tag to Instance(s)
        local_action: ec2_tag resource={{ item.id }} region={{ region }} state=present
        with_items: ec2.instances
        args:
          tags:
            Name: webserver
[root@ip-172-31-46-185 ~]#
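The "Region us-east-1b does not seem to be available" failure happens when an availability zone is passed where boto expects a region: `us-east-1` is a region, `us-east-1b` is a zone inside it. A sketch of the corrected vars for the playbook above (the `zone` key is an assumption, based on the ec2 module accepting a separate zone parameter):

```yaml
# Sketch of the region fix for ec2-basic.yml -- variable names match the playbook above.
vars:
  region: us-east-1   # must be a region; "us-east-1b" is an availability zone, which boto rejects
  zone: us-east-1b    # if a specific AZ is wanted, pass it separately (the ec2 module takes `zone`)
```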


[root@ip-172-31-46-185 ~]# python
Python 2.7.12 (default, Sep  1 2016, 22:14:00)
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto.ec2
>>> boto.ec2.regions()
[RegionInfo:us-east-1, RegionInfo:us-west-1, RegionInfo:cn-north-1, RegionInfo:ap-northeast-1, RegionInfo:ap-southeast-2, RegionInfo:sa-east-1, RegionInfo:ap-southeast-1, RegionInfo:ap-northeast-2, RegionInfo:us-west-2, RegionInfo:us-gov-west-1, RegionInfo:ap-south-1, RegionInfo:eu-central-1, RegionInfo:eu-west-1]
>>> pip upgrade boto
  File "", line 1
    pip upgrade boto
              ^
SyntaxError: invalid syntax
>>> upgrade boto
  File "", line 1
    upgrade boto
               ^
SyntaxError: invalid syntax
>>>
[root@ip-172-31-46-185 ~]# /usr/local/bin/pip install boto-2.4.5
Collecting boto-2.4.5
  Could not find a version that satisfies the requirement boto-2.4.5 (from versions: )
No matching distribution found for boto-2.4.5
[root@ip-172-31-46-185 ~]# /usr/local/bin/pip install boto*
Invalid requirement: 'boto*'
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/site-packages/pip/req/req_install.py", line 82, in __init__
    req = Requirement(req)
  File "/usr/local/lib/python2.7/site-packages/pip/_vendor/packaging/requirements.py", line 96, in __init__
    requirement_string[e.loc:e.loc + 8]))
InvalidRequirement: Invalid requirement, parse error at "'*'"

[root@ip-172-31-46-185 ~]# /usr/local/bin/pip list boto
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
ansible (2.3.2.0)
aws-cfn-bootstrap (1.4)
awscli (1.11.132)
Babel (0.9.4)
backports.ssl-match-hostname (3.4.0.2)
boto (2.42.0)
botocore (1.5.95)
chardet (2.0.1)
cloud-init (0.7.6)
colorama (0.2.5)
configobj (4.7.2)
docutils (0.11)
ecdsa (0.11)
futures (3.0.3)
iniparse (0.3.1)
Jinja2 (2.7.2)
jmespath (0.9.2)
jsonpatch (1.2)
jsonpointer (1.0)
kitchen (1.1.1)
lockfile (0.8)
MarkupSafe (0.11)
paramiko (1.15.1)
PIL (1.1.6)
pip (9.0.1)
ply (3.4)
pyasn1 (0.1.7)
pycrypto (2.6.1)
pycurl (7.19.0)
pygpgme (0.3)
pyliblzma (0.5.3)
pystache (0.5.3)
python-daemon (1.5.2)
python-dateutil (2.1)
pyxattr (0.5.0)
PyYAML (3.10)
requests (1.2.3)
rsa (3.4.1)
setuptools (12.2)
simplejson (3.6.5)
six (1.8.0)
urlgrabber (3.10)
urllib3 (1.8.2)
virtualenv (12.0.7)
yum-metadata-parser (1.1.4)
[root@ip-172-31-46-185 ~]# /usr/local/bin/pip list |grep boto
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
boto (2.42.0)
botocore (1.5.95)
[root@ip-172-31-46-185 ~]# python -v
[... verbose import/cleanup trace from `python -v` trimmed; `-v` (verbose) was typed instead of `-V` (version) ...]
[root@ip-172-31-46-185 ~]# python -V
Python 2.7.12
[root@ip-172-31-46-185 ~]# /usr/bin/python -m pip install boto
Requirement already satisfied: boto in /usr/lib/python2.7/dist-packages
[root@ip-172-31-46-185 ~]# /usr/bin/python -m pip upgrade boto
ERROR: unknown command "upgrade"
[root@ip-172-31-46-185 ~]# /usr/bin/python -m pip install -U boto
Collecting boto
  Downloading boto-2.48.0-py2.py3-none-any.whl (1.4MB)
    100% |████████████████████████████████| 1.4MB 841kB/s
Installing collected packages: boto
  Found existing installation: boto 2.42.0
    Uninstalling boto-2.42.0:
      Successfully uninstalled boto-2.42.0
Successfully installed boto-2.48.0
[root@ip-172-31-46-185 ~]# ansible-playbook -i ./hosts ec2-basic.yml

PLAY [Provision an EC2 Instance] ****************************************************************************

TASK [Create a security group] ******************************************************************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Region us-east-1b does not seem to be available for aws module boto.ec2. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path"}
        to retry, use: --limit @/root/ec2-basic.retry

PLAY RECAP **************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1

[root@ip-172-31-46-185 ~]# ansible-playbook -i ./hosts ec2-basic.yml

PLAY [Provision an EC2 Instance] *********************************************************************************************

TASK [Create a security group] ***********************************************************************************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Region us-east-1b does not seem to be available for aws module boto.ec2. If the region definitely exists, you may need to upgrade boto or extend with endpoints_path"}
        to retry, use: --limit @/root/ec2-basic.retry

PLAY RECAP *******************************************************************************************************************
localhost                  : ok=0    changed=0    unreachable=0    failed=1

[root@ip-172-31-46-185 ~]# vi ec2-basic.yml
[root@ip-172-31-46-185 ~]# ansible-playbook -i ./hosts ec2-basic.yml

PLAY [Provision an EC2 Instance] *********************************************************************************************

TASK [Create a security group] ***********************************************************************************************
changed: [localhost -> localhost]

TASK [Launch the new EC2 Instance] *******************************************************************************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Instance creation failed => InvalidKeyPair.NotFound: The key pair 'newkeyaug2017# This is one of my keys that i already have in AWS' does not exist"}
        to retry, use: --limit @/root/ec2-basic.retry

PLAY RECAP *******************************************************************************************************************
localhost                  : ok=1    changed=1    unreachable=0    failed=1
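The InvalidKeyPair.NotFound message is telling: the trailing "# ..." comment from the vars file ended up inside the key-pair name that was sent to AWS. Whatever the exact parsing cause, the transcript shows the task succeeding after the file was edited, consistent with moving the comment off the value line. A sketch of the safer form:

```yaml
# Sketch of the keypair fix: keep comments on their own line so no comment text
# can be swallowed into the value that is passed to AWS.
vars:
  # one of the key pairs already uploaded to AWS
  keypair: newkeyaug2017
```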

[root@ip-172-31-46-185 ~]# vi ec2-basic.yml
[root@ip-172-31-46-185 ~]# ansible-playbook -i ./hosts ec2-basic.yml

PLAY [Provision an EC2 Instance] *********************************************************************************************

TASK [Create a security group] ***********************************************************************************************
ok: [localhost -> localhost]

TASK [Launch the new EC2 Instance] *******************************************************************************************
changed: [localhost -> localhost]

TASK [Add the newly created EC2 instance(s) to the local host group (located inside the directory)] **************************
fatal: [localhost]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'ansible.vars.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'public_ip'\n\nThe error appears to have been in '/root/ec2-basic.yml': line 54, column 9, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n      - name: Add the newly created EC2 instance(s) to the local host group (located inside the directory)\n        ^ here\n"}
        to retry, use: --limit @/root/ec2-basic.retry

PLAY RECAP *******************************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=1

[root@ip-172-31-46-185 ~]# vi ec2-basic.yml
[root@ip-172-31-46-185 ~]# ansible-playbook -i ./hosts ec2-basic.yml

PLAY [Provision an EC2 Instance] *********************************************************************************************

TASK [Create a security group] ***********************************************************************************************
ok: [localhost -> localhost]

TASK [Launch the new EC2 Instance] *******************************************************************************************
changed: [localhost -> localhost]

TASK [Add the newly created EC2 instance to the local hosts] *****************************************************************
fatal: [localhost]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'ansible.vars.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'public_ip'\n\nThe error appears to have been in '/root/ec2-basic.yml': line 54, column 9, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n      - name: Add the newly created EC2 instance to the local hosts\n        ^ here\n"}
        to retry, use: --limit @/root/ec2-basic.retry

PLAY RECAP *******************************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=1

[root@ip-172-31-46-185 ~]# vi ec2-basic.yml
[root@ip-172-31-46-185 ~]# ansible-playbook -i ./hosts ec2-basic.yml

PLAY [Provision an EC2 Instance] *********************************************************************************************

TASK [Create a security group] ***********************************************************************************************
ok: [localhost -> localhost]

TASK [Launch the new EC2 Instance] *******************************************************************************************
changed: [localhost -> localhost]

TASK [Add the newly created EC2 instance] ************************************************************************************
fatal: [localhost]: FAILED! => {"failed": true, "msg": "the field 'args' has an invalid value, which appears to include a variable that is undefined. The error was: 'ansible.vars.unsafe_proxy.AnsibleUnsafeText object' has no attribute 'public_ip'\n\nThe error appears to have been in '/root/ec2-basic.yml': line 54, column 9, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n      - name: Add the newly created EC2 instance\n        ^ here\n"}
        to retry, use: --limit @/root/ec2-basic.retry

PLAY RECAP *******************************************************************************************************************
localhost                  : ok=2    changed=1    unreachable=0    failed=1
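Renaming the task did not help because all three failures are the same issue: in newer Ansible releases a bare `with_items: ec2.instances` is treated as a literal string rather than the registered variable, so `item` is a piece of text with no `public_ip` attribute. The fix is to wrap the variable in Jinja2 braces. A sketch of the corrected loops (the `add_host` task is assumed from the task names in the error output; `groupname` is an illustrative value):

```yaml
# Sketch of the loop fix: quote-and-brace the registered variable so each item
# is an instance dict exposing .id and .public_ip.
- name: Add the newly created EC2 instance(s) to the local host group
  local_action: add_host hostname={{ item.public_ip }} groupname=launched
  with_items: "{{ ec2.instances }}"

- name: Wait for SSH to come up
  local_action: wait_for host={{ item.public_ip }} port=22 state=started
  with_items: "{{ ec2.instances }}"
```

The same `"{{ ec2.instances }}"` change applies to the `Wait for SSH to come up` and `Add tag to Instance(s)` tasks shown in the playbook earlier.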

Installing Ingress Controller - Kubernetes

Installing the Ingress Controller

Prerequisites: Make sure you have access to the Ingress controller image. For NGINX Ingress controll...