Cannot connect to the Docker daemon

# yum install docker
...
Installed:
  docker.x86_64 2:1.12.6-28.git1398f24.el7.centos                                                            

Dependency Installed:
  container-selinux.noarch 2:2.12-2.gite7096ce.el7       docker-client.x86_64 2:1.12.6-28.git1398f24.el7.centos
  docker-common.x86_64 2:1.12.6-28.git1398f24.el7.centos oci-register-machine.x86_64 1:0-3.11.gitdd0daef.el7
  oci-systemd-hook.x86_64 1:0.1.7-2.git2788078.el7       skopeo-containers.x86_64 1:0.1.19-1.el7            

Dependency Updated:
  libsemanage.x86_64 0:2.5-5.1.el7_3                  libsemanage-python.x86_64 0:2.5-5.1.el7_3              
  policycoreutils.x86_64 0:2.5-11.el7_3               policycoreutils-python.x86_64 0:2.5-11.el7_3            

Complete!

Tried to run my Docker Swarm on AWS and got this error:

[root@localhost ~]# docker run --rm -ti -v /var/run/docker.sock:/var/run/docker.sock -e DOCKER_HOST dockercloud/client vanithav/soldatinc
/usr/bin/docker-current: Cannot connect to the Docker daemon. Is the docker daemon running on this host?.
See '/usr/bin/docker-current run --help'.

To start the Docker daemon manually in the foreground, run the dockerd command:

[root@localhost ~]# dockerd
INFO[0001] libcontainerd: new containerd process, pid: 32065
WARN[0000] containerd: low RLIMIT_NOFILE changing to max  current=1024 max=4096
WARN[0003] devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section.
INFO[0004] devmapper: Creating filesystem xfs on device docker-253:0-67127513-base
INFO[0004] devmapper: Successfully created filesystem xfs on device docker-253:0-67127513-base
INFO[0005] Graph migration to content-addressability took 0.00 seconds
INFO[0005] Loading containers: start.                
INFO[0006] Firewalld running: true                    
INFO[0008] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address

INFO[0009] Loading containers: done.                  
INFO[0009] Daemon has completed initialization        
INFO[0009] Docker daemon                                 commit=1398f24/1.12.6 graphdriver=devicemapper version=1.12.6
INFO[0009] API listen on /var/run/docker.sock        
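Running dockerd by hand is fine for debugging, but in practice you usually want the service manager to own the daemon. A hedged shell sketch that checks for the daemon socket and suggests the systemd service (the socket path is the default shown in the log above):

```shell
# Check whether the Docker daemon socket exists before falling back to systemd.
if [ -S /var/run/docker.sock ]; then
    status="running"
    echo "Docker daemon socket present"
else
    status="stopped"
    echo "Docker daemon not running; try: sudo systemctl start docker"
fi
```

On CentOS 7 the daemon is managed by systemd, so `systemctl enable docker` also makes it survive reboots.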

Terraform - Infrastructure as Code - beginner's guide

Build Infrastructure

With Terraform installed, let's dive right into it and start creating some infrastructure.
We'll build infrastructure on AWS for the getting started guide since it is popular and generally understood, but Terraform can manage many providers, including multiple providers in a single configuration. Some examples of this are in the use cases section.
If you don't have an AWS account, create one now. For the getting started guide, we'll only be using resources which qualify under the AWS free-tier, meaning it will be free. If you already have an AWS account, you may be charged some amount of money, but it shouldn't be more than a few dollars at most.

»Configuration

The set of files used to describe infrastructure in Terraform is simply known as a Terraform configuration. We're going to write our first configuration now to launch a single AWS EC2 instance.
The format of the configuration files is documented here. Configuration files can also be JSON, but we recommend only using JSON when the configuration is generated by a machine.
The entire configuration is shown below. We'll go over each part after. Save the contents to a file named example.tf. Verify that there are no other *.tf files in your directory, since Terraform loads all of them.
provider "aws" {
  access_key = "ACCESS_KEY_HERE"
  secret_key = "SECRET_KEY_HERE"
  region     = "us-east-1"
}

resource "aws_instance" "example" {
  ami           = "ami-2757f631"
  instance_type = "t2.micro"
}
Replace the ACCESS_KEY_HERE and SECRET_KEY_HERE with your AWS access key and secret key, available from this page. We're hardcoding them for now, but will extract these into variables later in the getting started guide.
This is a complete configuration that Terraform is ready to apply. The general structure should be intuitive and straightforward.
The provider block is used to configure the named provider, in our case "aws." A provider is responsible for creating and managing resources. Multiple provider blocks can exist if a Terraform configuration is composed of multiple providers, which is a common situation.
The resource block defines a resource that exists within the infrastructure. A resource might be a physical component such as an EC2 instance, or it can be a logical resource such as a Heroku application.
The resource block has two strings before opening the block: the resource type and the resource name. In our example, the resource type is "aws_instance" and the name is "example." The prefix of the type maps to the provider. In our case "aws_instance" automatically tells Terraform that it is managed by the "aws" provider.
Within the resource block itself is configuration for that resource. This is dependent on each resource provider and is fully documented within our providers reference. For our EC2 instance, we specify an AMI for Ubuntu, and request a "t2.micro" instance so we qualify under the free tier.
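As noted above, the guide later extracts the hardcoded credentials into input variables; a hedged sketch of what that looks like (variable names follow the guide's convention):

```hcl
# variables.tf -- declare inputs so the keys stay out of example.tf.
# Values can come from a terraform.tfvars file or -var flags.
variable "access_key" {}
variable "secret_key" {}

variable "region" {
  default = "us-east-1"
}
```

The provider block then references them as "${var.access_key}" and so on instead of literal strings.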

»Execution Plan

Next, let's see what Terraform would do if we asked it to apply this configuration. In the same directory as the example.tf file you created, run terraform plan. You should see output similar to what is copied below. We've truncated some of the output to save space.
+ aws_instance.example
    ami:               "ami-2757f631"
    availability_zone: "<computed>"
    instance_type:     "t2.micro"
    [...]

Plan: 1 to add, 0 to change, 0 to destroy.
terraform plan shows what changes Terraform will apply to your infrastructure given the current state of your infrastructure as well as the current contents of your configuration.
If terraform plan failed with an error, read the error message and fix the error that occurred. At this stage, it is probably a syntax error in the configuration.
The output format is similar to the diff format generated by tools such as Git. The output has a "+" next to "aws_instance.example", meaning that Terraform will create this resource. Beneath that, it shows the attributes that will be set. When the value displayed is <computed>, it means that the value won't be known until the resource is created.

»Apply

The plan looks good and our configuration appears valid, so it's time to create real resources. Run terraform apply in the same directory as your example.tf, and watch it go! It will take a few minutes since Terraform waits for the EC2 instance to become available.
$ terraform apply
aws_instance.example: Creating...
  ami:                      "" => "ami-2757f631"
  instance_type:            "" => "t2.micro"
  [...]

aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Creation complete

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

# ...
Done! You can go to the AWS console to prove to yourself that the EC2 instance has been created.
Terraform also stores some state in the terraform.tfstate file by default. This state file is extremely important; it maps resource metadata to actual resource IDs so that Terraform knows what it is managing. The file must be saved and shared with anyone who might run Terraform. It is generally recommended to set up remote state when working with Terraform, so that any secrets stored in the state file are not checked into version control.
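One way to set up remote state in recent Terraform versions is a backend block; a hedged sketch using S3 (bucket and key names are placeholders):

```hcl
# Store state remotely in S3 instead of a local terraform.tfstate file.
terraform {
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "getting-started/terraform.tfstate"
    region = "us-east-1"
  }
}
```

With this in place, run terraform init to migrate the existing local state to the backend.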
You can inspect the state using terraform show:
$ terraform show
aws_instance.example:
  id = i-32cf65a8
  ami = ami-2757f631
  availability_zone = us-east-1a
  instance_state = running
  instance_type = t2.micro
  private_ip = 172.31.30.244
  public_dns = ec2-52-90-212-55.compute-1.amazonaws.com
  public_ip = 52.90.212.55
  subnet_id = subnet-1497024d
  vpc_security_group_ids.# = 1
  vpc_security_group_ids.3348721628 = sg-67652003
You can see that by creating our resource, we've also gathered a lot more metadata about it. This metadata can actually be referenced for other resources or outputs, which will be covered later in the getting started guide.
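For example, one of these attributes can be exported as an output; a sketch in the 0.x interpolation syntax used above:

```hcl
# Expose the instance's public IP after apply.
output "instance_public_ip" {
  value = "${aws_instance.example.public_ip}"
}
```

After the next terraform apply, the value is printed at the end of the run and can be retrieved with terraform output instance_public_ip.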

»Provisioning

The EC2 instance we launched at this point is based on the AMI given, but has no additional software installed. If you're running an image-based infrastructure (perhaps creating images with Packer), then this is all you need.
However, many infrastructures still require some sort of initialization or software provisioning step. Terraform supports provisioners, which we'll cover a little bit later in the getting started guide, in order to do this.
Source : https://www.terraform.io/intro/getting-started/build.html

Go to this link with your AWS login to create the access key and secret key, and save them on your local machine:
https://console.aws.amazon.com/iam/home?#/security_credential
+++++++++++++++++++++++++++++++++++
[root@localhost Downloads]# ./terraform apply
aws_instance.example: Refreshing state... (ID: i-0f6d4ae9e49a6e29f)
aws_instance.example: Creating...
  ami:                          "" => "ami-2757f631"
  associate_public_ip_address:  "" => ""
  availability_zone:            "" => ""
  ebs_block_device.#:           "" => ""
  ephemeral_block_device.#:     "" => ""
  instance_state:               "" => ""
  instance_type:                "" => "t2.micro"
  ipv6_address_count:           "" => ""
  ipv6_addresses.#:             "" => ""
  key_name:                     "" => ""
  network_interface.#:          "" => ""
  network_interface_id:         "" => ""
  placement_group:              "" => ""
  primary_network_interface_id: "" => ""
  private_dns:                  "" => ""
  private_ip:                   "" => ""
  public_dns:                   "" => ""
  public_ip:                    "" => ""
  root_block_device.#:          "" => ""
  security_groups.#:            "" => ""
  source_dest_check:            "" => "true"
  subnet_id:                    "" => ""
  tenancy:                      "" => ""
  volume_tags.%:                "" => ""
  vpc_security_group_ids.#:     "" => ""
aws_instance.example: Still creating... (10s elapsed)
aws_instance.example: Still creating... (20s elapsed)
aws_instance.example: Still creating... (30s elapsed)
aws_instance.example: Still creating... (40s elapsed)
aws_instance.example: Still creating... (50s elapsed)
aws_instance.example: Creation complete (ID: i-05082c270a68066a2)

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: 


Docker Best Practices

Docker way

Docker has some restrictions and requirements depending on the architecture of your system (the applications that you package into containers). You can ignore these requirements or find workarounds, but then you won't get the full benefit of Docker. My strong advice is to follow these recommendations:
  • 1 application = 1 container
  • Run processes in the foreground (don't use systemd, upstart, or similar tools)
  • Keep data out of the container: use volumes
  • Do not use SSH (if you need to step into a container, use the docker exec command)
  • Avoid manual configuration or actions inside the container

Docker Commands Cheatsheet

$ boot2docker
Usage: boot2docker [<options>] {help|init|up|ssh|save|down|poweroff|reset|restart|config|status|info|ip|shellinit|delete|download|upgrade|version} [<args>]

$ boot2docker status
$ boot2docker version
$ boot2docker ip
192.168.59.103
$ docker help
$ docker COMMAND --help
$ docker version
Now, this command will give us a little bit more information than the boot2docker command output, as follows:
Client version: 1.7.0
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 0baf609
OS/Arch (client): darwin/amd64
Server version: 1.7.0
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 0baf609
OS/Arch (server): linux/amd64
$ docker images
REPOSITORY         TAG                IMAGE ID           CREATED             VIRTUAL SIZE
ubuntu                   14.10               ab57dbafeeea       11 days ago         194.5 MB
ubuntu                   trusty               6d4946999d4f       11 days ago         188.3 MB
ubuntu                   latest               6d4946999d4f       11 days ago         188.3 MB
$ docker search ubuntu
We would get back our results:
NAME                       DESCRIPTION                                       STARS   OFFICIAL   AUTOMATED
ubuntu                     Ubuntu is a Debian-based Linux operating s...     1835    [OK]
ubuntu-upstart             Upstart is an event-based replacement for ...     26      [OK]
tutum/ubuntu               Ubuntu image with SSH access. For the root...     25                 [OK]
torusware/speedus-ubuntu   Always updated official Ubuntu docker imag...     25                 [OK]
ubuntu-debootstrap         debootstrap --variant=minbase --components...     10      [OK]
rastasheep/ubuntu-sshd     Dockerized SSH service, built on top of of...     4                  [OK]
maxexcloo/ubuntu           Docker base image built on Ubuntu with Sup...     2                  [OK]
nuagebec/ubuntu            Simple always updated Ubuntu docker images...     2                  [OK]
nimmis/ubuntu              This is a docker images different LTS vers...     1                  [OK]
alsanium/ubuntu            Ubuntu Core image for Docker                      1                  [OK]
$ docker pull tutum/ubuntu
To remove an image, use the rmi option:
$ docker rmi ubuntu:trusty
To run a Docker image, use the following command; you will get a shell prompt:
$ docker run -i -t <image>:<tag> /bin/bash
Note the two switches: -i and -t. The -i switch gives us an interactive shell into the running container; the -t switch allocates a pseudo-TTY and, for interactive processes, must be used together with -i.
To run a container detached in the background, use -d:
$ docker run -d <image>:<tag>
To view all running containers
$ docker ps

CONTAINER ID       IMAGE               COMMAND             CREATED            STATUS             PORTS               NAMES
cc1fefcfa098       ubuntu:14.10       "/bin/bash"         3 seconds ago       Up 3 seconds                           boring_mccarth
Very Important -
You can also expose ports on your containers by using the -p switch as follows:
$ docker run -d -p <host_port>:<container_port> <image>:<tag>
$ docker run -d -p 8080:80 ubuntu:14.10
CONTAINER ID       IMAGE               COMMAND             CREATED             STATUS             PORTS                         NAMES
55cfdcb6beb6       ubuntu:14.10       "/bin/bash"         2 seconds ago       Up 2 seconds       0.0.0.0:8080->80/tcp   babbage 
To check the logs of a container:
$ docker logs 55cfdcb6beb6
Or:
$ docker logs babbage
$ docker rename <old_name> <new_name>
$ docker top <container>
$ docker rm <container>   - to delete or remove a container
$ docker stats <container>

CONTAINER           CPU %               MEM USAGE/LIMIT     MEM %               NET I/O
web1                       0.00%              1.016 MB/2.099 GB   0.05%                 0 B/0 B

Expect script to create a new user on 100+ servers, set the password, and force a change at first login

First create a server list (vi devsrvlist) in your home directory.
Then create the script below and use it.
Run it as your own user ID, which should have sudo access.
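The server list can also be generated with a quick shell loop instead of typing every entry in vi; a hypothetical sketch (the hostname pattern is made up):

```shell
# Write one hostname per line to devsrvlist; adjust the pattern and range
# to match your real server names.
: > devsrvlist
for i in 01 02 03 04 05; do
    echo "devsrv$i" >> devsrvlist
done
wc -l < devsrvlist
```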

#!/usr/bin/expect -f
# Placeholders: replace "password" with your login password and "newpasswd"
# with the new password before running.
set ip_file "/export/home/userid/devsrvlist"
set fid [open $ip_file r]
while {[gets $fid ip] != -1} {
    spawn ssh $ip
    expect "Password:"
    send "password\r"
    expect "~$"
    send "uname -n\r"
    send "sudo passwd userid\r"
    expect "Password:"             ;# sudo asks for your password again
    send "password\r"
    expect "New Password:"
    send "newpasswd\r"
    expect "Password:"             ;# re-enter the new password
    send "newpasswd\r"
    expect "~$"
    send "sudo passwd -f userid\r" ;# force password change at next login
    send "exit\r"
    expect eof
}
close $fid

+++++++++++++++++++++++++++++++++++++++++++++++++++++
https://likegeeks.com/expect-command/

questions is a bash script; spawn will run any script or command.
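A minimal questions script like the one the Expect answerbot below drives (hypothetical; the real one comes from the likegeeks article):

```shell
# Create a simple "questions" bash script that prompts and reads answers.
cat > questions <<'EOF'
#!/bin/bash
echo "Hello, who are you?"
read name
echo "Can I ask you some questions?"
read answer
echo "What is your favorite topic?"
read topic
EOF
chmod +x questions
# Drive it manually with piped input (the Expect script automates this).
printf 'Im Tester\nSure\nProgramming\n' | ./questions
```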



#!/usr/bin/expect -f

set my_name [lindex $argv 0]
set my_favorite [lindex $argv 1]
set timeout -1

spawn ./questions

expect "Hello, who are you?\r"
send -- "Im $my_name\r"
expect "Can I ask you some questions?\r"
send -- "Sure\r"
expect "What is your favorite topic?\r"
send -- "$my_favorite\r"
expect eof

$ ./answerbot SomeName Programming



SomeName is substituted for $my_name, which [lindex $argv 0] reads from the first command-line argument.
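The Tcl lookups [lindex $argv 0] and [lindex $argv 1] correspond to the shell positionals "$1" and "$2"; a quick illustrative sketch:

```shell
# Simulate calling a script as: ./answerbot SomeName Programming
set -- SomeName Programming
my_name="$1"        # like [lindex $argv 0]
my_favorite="$2"    # like [lindex $argv 1]
echo "Im $my_name"
echo "$my_favorite"
```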


Infiniband commands and output

[root@srvrdsw-iba0 ~]# showtopology
DEV4099_02P srvrd-h2-storadm_PCIe_1
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-3A
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-3A
SUNDCS36QDR srvrdsw-iba0
    C-17A -4x-10G-> DEV4099_02P srvrdceladm02_C_10.129.172.102,10.129.172.103_HCA-1 P1
    C-17B -4x-10G-> DEV4099_02P srvrdceladm01_C_10.129.172.100,10.129.172.101_HCA-1 P1
    C-16A -4x-10G-> DEV4099_02P srvrdceladm04_C_10.129.172.106,10.129.172.107_HCA-1 P2
    C-16B -4x-10G-> DEV4099_02P srvrdceladm03_C_10.129.172.104,10.129.172.105_HCA-1 P1
    C-14A -4x-10G-> DEV4099_02P srvrdzadmclient0101_S_10.129.172.80_HCA-1 P1
    C-14B -4x-10G-> DEV4099_02P srvrd-h1-storadm_PCIe_1 P1
    C-13A -4x-10G-> DEV4099_02P srvrdzadmclient0101_S_10.129.172.81_HCA-2 P1
    C-9B -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-9A
    C-9A -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-9B
    C-10B -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-10A
    C-10A -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-10B
    C-11B -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-11A
    C-11A -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-11B
    C-0A -4x-10G-> DEV4099_02P srvrd-app0202_S_10.129.172.158_HCA-1 P2
    C-1A -4x-10G-> DEV4099_02P srvrd-app0202_S_ P2
    C-3A -4x-10G-> DEV4099_02P srvrd-h2-storadm_PCIe_1 P1
    C-4A -4x-10G-> DEV4099_02P srvrdzadmclient0201_S_10.129.172.91_HCA-2 P2
    C-5B -4x-10G-> DEV4099_02P srvrd-app0102_S_10.129.172.152_HCA-1 P1
    C-5A -4x-10G-> DEV4099_02P srvrdzadmclient0201_S_10.129.172.90_HCA-1 P2
    C-8A -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-8A
    C-8B -4x-10G-> SUNDCS36QDR srvrdsw-ibs0 C-1B
    C-6B -4x-10G-> DEV4099_02P srvrd-app0102_S_ P1
SUNDCS36QDR srvrdsw-ibs0
    C-0B -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-8B
    C-1B -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-8B
SUNDCS36QDR srvrdsw-ibb0
    C-17A -4x-10G-> DEV4099_02P srvrdceladm02_C_10.129.172.102,10.129.172.103_HCA-1 P2
    C-17B -4x-10G-> DEV4099_02P srvrdceladm01_C_10.129.172.100,10.129.172.101_HCA-1 P2
    C-16A -4x-10G-> DEV4099_02P srvrdceladm04_C_10.129.172.106,10.129.172.107_HCA-1 P1
    C-16B -4x-10G-> DEV4099_02P srvrdceladm03_C_10.129.172.104,10.129.172.105_HCA-1 P2
    C-14A -4x-10G-> DEV4099_02P srvrdzadmclient0101_S_10.129.172.80_HCA-1 P2
    C-14B -4x-10G-> DEV4099_02P srvrd-h1-storadm_PCIe_1 P2
    C-13A -4x-10G-> DEV4099_02P srvrdzadmclient0101_S_10.129.172.81_HCA-2 P2
    C-9B -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-9A
    C-9A -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-9B
    C-10B -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-10A
    C-10A -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-10B
    C-11B -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-11A
    C-11A -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-11B
    C-0A -4x-10G-> DEV4099_02P srvrd-app0202_S_10.129.172.158_HCA-1 P1
    C-1A -4x-10G-> DEV4099_02P srvrd-app0202_S_ P1
    C-3A -4x-10G-> DEV4099_02P srvrd-h2-storadm_PCIe_1 P2
    C-4A -4x-10G-> DEV4099_02P srvrdzadmclient0201_S_10.129.172.91_HCA-2 P1
    C-5B -4x-10G-> DEV4099_02P srvrd-app0102_S_10.129.172.152_HCA-1 P2
    C-5A -4x-10G-> DEV4099_02P srvrdzadmclient0201_S_10.129.172.90_HCA-1 P1
    C-8A -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-8A
    C-8B -4x-10G-> SUNDCS36QDR srvrdsw-ibs0 C-0B
    C-6B -4x-10G-> DEV4099_02P srvrd-app0102_S_ P2
DEV4099_02P srvrdceladm01_C_10.129.172.100,10.129.172.101_HCA-1
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-17B
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-17B
DEV4099_02P srvrd-h1-storadm_PCIe_1
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-14B
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-14B
DEV4099_02P srvrdceladm02_C_10.129.172.102,10.129.172.103_HCA-1
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-17A
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-17A
DEV4099_02P srvrdceladm03_C_10.129.172.104,10.129.172.105_HCA-1
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-16B
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-16B
DEV4099_02P srvrdceladm04_C_10.129.172.106,10.129.172.107_HCA-1
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-16A
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-16A
DEV4099_02P srvrdzadmclient0201_S_10.129.172.90_HCA-1
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-5A
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-5A
DEV4099_02P srvrdzadmclient0101_S_10.129.172.80_HCA-1
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-14A
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-14A
DEV4099_02P srvrdzadmclient0201_S_10.129.172.91_HCA-2
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-4A
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-4A
DEV4099_02P srvrdzadmclient0101_S_10.129.172.81_HCA-2
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-13A
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-13A
DEV4099_02P srvrd-app0202_S_
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-1A
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-1A
DEV4099_02P srvrd-app0102_S_
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-6B
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-6B
DEV4099_02P srvrd-app0202_S_10.129.172.158_HCA-1
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-0A
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-0A
DEV4099_02P srvrd-app0102_S_10.129.172.152_HCA-1
    P1 -4x-10G-> SUNDCS36QDR srvrdsw-iba0 C-5B
    P2 -4x-10G-> SUNDCS36QDR srvrdsw-ibb0 C-5B
# Created from srvrdsw-iba0 at Tue Jun  6 13:04:47 CDT 2017
[root@srvrdsw-iba0 ~]#

[root@srvrdsw-iba0 ~]# ibstat
Switch 'is4_0'
        Switch type: MT48436
        Number of ports: 0
        Firmware version: 7.4.3002
        Hardware version: a1
        Node GUID: 0x0010e0802288a0a0
        System image GUID: 0x0010e0802288a0a3
        Port 0:
                State: Active
                Physical state: LinkUp
                Rate: 40
                Base lid: 2
                LMC: 0
                SM lid: 3
                Capability mask: 0x4250084a
                Port GUID: 0x0010e0802288a0a0
[root@srvrdsw-iba0 ~]# ibstatus
Infiniband device 'is4_0' port 0 status:
        default gid:     fe80:0000:0000:0000:0010:e080:2288:a0a0
        base lid:        0x2
        sm lid:          0x3
        state:           4: ACTIVE
        phys state:      5: LinkUp
        rate:            40 Gb/sec (4X QDR)
[root@srvrdsw-iba0 ~]# ibnetstatus
Loading IBDIAGNET from: /usr/lib/ibdiagnet1.2
-W- Topology file is not specified.
    Reports regarding cluster links will use direct routes.
Loading IBDM from: /usr/lib/ibdm1.2
-I- Using port 0 as the local port.
-I- Discovering ... 17 nodes (3 Switches & 14 CA-s) discovered.

-I---------------------------------------------------
-I- Bad Guids/LIDs Info
-I---------------------------------------------------
-I- skip option set. no report will be issued
-I---------------------------------------------------
-I- Links With Logical State = INIT
-I---------------------------------------------------
-I- No bad Links (with logical state = INIT) were found
-I---------------------------------------------------
-I- PM Counters Info
-I---------------------------------------------------
-I- No illegal PM counters values were found
-I---------------------------------------------------
-I- Links With links width != 4x (as set by -lw option)
-I---------------------------------------------------
-I- No unmatched Links (with width != 4x) were found
-I---------------------------------------------------
-I- Links With links speed != 10 (as set by -ls option)
-I---------------------------------------------------
-I- No unmatched Links (with speed != 10) were found
-I---------------------------------------------------
-I- Fabric Partitions Report (see ibdiagnet.pkey for a full hosts list)
-I---------------------------------------------------
-I---------------------------------------------------
-I- IPoIB Subnets Check
-I---------------------------------------------------
-I- Subnet: IPv4 PKey:0x0001 QKey:0x00000b1b MTU:2048Byte rate:10Gbps SL:0x00
-W- No members found for group
-I- Subnet: IPv4 PKey:0x7fff QKey:0x00000b1b MTU:2048Byte rate:10Gbps SL:0x00
-W- No members found for group
-I---------------------------------------------------
-I- Bad Links Info
-I- No bad link were found
-I---------------------------------------------------
----------------------------------------------------------------
-I- Stages Status Report:
    STAGE                                    Errors Warnings
    Bad GUIDs/LIDs Check                     0      0
    Link State Active Check                  0      0
    Performance Counters Report              0      0
    Specific Link Width Check                0      0
    Specific Link Speed Check                0      0
    Partitions Check                         0      0
    IPoIB Subnets Check                      0      2
----------------------------------------------------------------
-I- Done. Run time was 23 seconds.
[root@srvrdsw-iba0 ~]#

[root@srvrdsw-iba0 ~]# ibhosts
Ca      : 0x0010e0000188fa04 ports 2 "srvrd-app0102 S "
Ca      : 0x0010e0000188b588 ports 2 "srvrdzadmclient0201 S 10.129.172.90 HCA-1"
Ca      : 0x0010e00001880fc8 ports 2 "srvrd-app0102 S 10.129.172.152 HCA-1"
Ca      : 0x0010e0000188b568 ports 2 "srvrdzadmclient0201 S 10.129.172.91 HCA-2"
Ca      : 0x0010e0000185f9e0 ports 2 "srvrd-h2-storadm PCIe 1"
Ca      : 0x0010e0000188ab68 ports 2 "srvrd-app0202 S "
Ca      : 0x0010e0000188b5a8 ports 2 "srvrd-app0202 S 10.129.172.158 HCA-1"
Ca      : 0x0010e000018823c8 ports 2 "srvrdzadmclient0101 S 10.129.172.81 HCA-2"
Ca      : 0x0010e0000185fee0 ports 2 "srvrd-h1-storadm PCIe 1"
Ca      : 0x0010e0000188f674 ports 2 "srvrdzadmclient0101 S 10.129.172.80 HCA-1"
Ca      : 0x0010e000018ec958 ports 2 "srvrdceladm03 C 10.129.172.104,10.129.172.105 HCA-1"
Ca      : 0x0010e000018ecd08 ports 2 "srvrdceladm04 C 10.129.172.106,10.129.172.107 HCA-1"
Ca      : 0x0010e00001092d30 ports 2 "srvrdceladm01 C 10.129.172.100,10.129.172.101 HCA-1"
Ca      : 0x0010e000018ecbd8 ports 2 "srvrdceladm02 C 10.129.172.102,10.129.172.103 HCA-1"
[root@srvrdsw-iba0 ~]#
[root@srvrdsw-iba0 ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:E0:4B:53:37:2A
          inet addr:10.129.170.130  Bcast:10.129.170.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:119386771 errors:0 dropped:0 overruns:0 frame:0
          TX packets:127273381 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:286239220 (272.9 MiB)  TX bytes:555225108 (529.5 MiB)
          Interrupt:11 Memory:bfde0000-bfe00000
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:3719863206 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3719863206 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2622825189 (2.4 GiB)  TX bytes:2622825189 (2.4 GiB)
[root@srvrdsw-iba0 ~]#

[root@srvrdsw-iba0 ~]# ibcheckstate -v
# Checking Switch: nodeguid 0x0010e0802288a0a0
Node check lid 2:  OK
Port check lid 2 port 36:  OK
Port check lid 2 port 32:  OK
Port check lid 2 port 31:  OK
Port check lid 2 port 30:  OK
Port check lid 2 port 29:  OK
Port check lid 2 port 28:  OK
Port check lid 2 port 26:  OK
Port check lid 2 port 22:  OK
Port check lid 2 port 20:  OK
Port check lid 2 port 18:  OK
Port check lid 2 port 17:  OK
Port check lid 2 port 16:  OK
Port check lid 2 port 15:  OK
Port check lid 2 port 14:  OK
Port check lid 2 port 13:  OK
Port check lid 2 port 9:  OK
Port check lid 2 port 8:  OK
Port check lid 2 port 7:  OK
Port check lid 2 port 4:  OK
Port check lid 2 port 3:  OK
Port check lid 2 port 2:  OK
Port check lid 2 port 1:  OK
# Checking Switch: nodeguid 0x0010e0802502a0a0
Node check lid 3:  OK
Port check lid 3 port 19:  OK
Port check lid 3 port 21:  OK
# Checking Switch: nodeguid 0x0010e080228da0a0
Node check lid 4:  OK
Port check lid 4 port 36:  OK
Port check lid 4 port 30:  OK
Port check lid 4 port 29:  OK
Port check lid 4 port 28:  OK
Port check lid 4 port 26:  OK
Port check lid 4 port 22:  OK
Port check lid 4 port 20:  OK
Port check lid 4 port 9:  OK
Port check lid 4 port 8:  OK
Port check lid 4 port 7:  OK
Port check lid 4 port 4:  OK
Port check lid 4 port 3:  OK
Port check lid 4 port 2:  OK
Port check lid 4 port 1:  OK
Port check lid 4 port 32:  OK
Port check lid 4 port 31:  OK
Port check lid 4 port 17:  OK
Port check lid 4 port 18:  OK
Port check lid 4 port 15:  OK
Port check lid 4 port 16:  OK
Port check lid 4 port 13:  OK
Port check lid 4 port 14:  OK
# Checking Ca: nodeguid 0x0010e0000188fa04
Node check lid 27:  OK
Port check lid 27 port 2:  OK
Port check lid 27 port 1:  OK
# Checking Ca: nodeguid 0x0010e0000188b588
Node check lid 15:  OK
Port check lid 15 port 1:  OK
Port check lid 15 port 2:  OK
# Checking Ca: nodeguid 0x0010e00001880fc8
Node check lid 31:  OK
Port check lid 31 port 2:  OK
Port check lid 31 port 1:  OK
# Checking Ca: nodeguid 0x0010e0000188b568
Node check lid 19:  OK
Port check lid 19 port 1:  OK
Port check lid 19 port 2:  OK
# Checking Ca: nodeguid 0x0010e0000185f9e0
Node check lid 7:  OK
Port check lid 7 port 2:  OK
Port check lid 7 port 1:  OK
# Checking Ca: nodeguid 0x0010e0000188ab68
Node check lid 24:  OK
Port check lid 24 port 1:  OK
Port check lid 24 port 2:  OK
# Checking Ca: nodeguid 0x0010e0000188b5a8
Node check lid 28:  OK
Port check lid 28 port 1:  OK
Port check lid 28 port 2:  OK
# Checking Ca: nodeguid 0x0010e000018823c8
Node check lid 23:  OK
Port check lid 23 port 2:  OK
Port check lid 23 port 1:  OK
# Checking Ca: nodeguid 0x0010e0000185fee0
Node check lid 8:  OK
Port check lid 8 port 2:  OK
Port check lid 8 port 1:  OK
# Checking Ca: nodeguid 0x0010e0000188f674
Node check lid 18:  OK
Port check lid 18 port 2:  OK
Port check lid 18 port 1:  OK
# Checking Ca: nodeguid 0x0010e000018ec958
Node check lid 11:  OK
Port check lid 11 port 2:  OK
Port check lid 11 port 1:  OK
# Checking Ca: nodeguid 0x0010e000018ecd08
Node check lid 13:  OK
Port check lid 13 port 1:  OK
Port check lid 13 port 2:  OK
# Checking Ca: nodeguid 0x0010e00001092d30
Node check lid 6:  OK
Port check lid 6 port 2:  OK
Port check lid 6 port 1:  OK
# Checking Ca: nodeguid 0x0010e000018ecbd8
Node check lid 10:  OK
Port check lid 10 port 2:  OK
Port check lid 10 port 1:  OK
## Summary: 17 nodes checked, 0 bad nodes found
##          74 ports checked, 0 ports with bad state found
[root@srvrdsw-iba0 ~]#
