DevOps is a culture that emphasizes the use of various tools, automation, close collaboration, and synchronization, benefiting the entire process from software development through deployment.
Hearing about so many tools at once can be overwhelming when you barely know any of them, so let me help you understand them with clear, simple guidance.
There are multiple ways to accomplish the same thing; in other words, a variety of DevOps tools exist. Choosing the ones that fit your project, and learning to implement them effectively, is what makes you a smart and successful DevOps Engineer in the IT world.
You don't have to learn every competing tool, but having a brief idea of each will help you choose the right one for your purpose and overcome that overwhelmed feeling.
Open source tools and their competitors:
Version Control: GitHub | Bitbucket | Perforce
Configuration Management: Ansible | Terraform | Puppet | Chef | CloudFormation
Containerization: Docker | Vagrant
Orchestration: Kubernetes | Docker Swarm | Salt
CI/CD Pipeline: Jenkins | GitHub Actions | GitLab | TeamCity
Docker Containerization
Understanding Docker Containers
Docker is a containerization platform, and Kubernetes is an orchestrator for container platforms like Docker.
Think of containers as super fast and lightweight micro computers/VMs, created from images that Docker pulls and runs.
Let us compare Docker containers with VMs for a clearer understanding: a VM ships a full guest operating system on top of a hypervisor, while a container shares the host kernel and packages only the application and its dependencies, which is why containers start faster and use far fewer resources.
Step 1. Prerequisites
Hardware / Cloud / VM

Step 2. Install Ubuntu as a host OS.
Step 3. Install Docker on Ubuntu (the host OS)
a. Set up the repository prerequisites
# apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
b. Add Docker's official GPG key and the Docker repository
# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
c. Install Docker Engine
# apt-get update
# apt-get install docker-ce docker-ce-cli containerd.io
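Before creating containers, it helps to confirm the Docker daemon is installed and running (these checks assume a standard systemd-based Ubuntu install):
# systemctl status docker    --> the Docker service should show as active (running)
# docker --version           --> confirms the Docker client is installed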
d. Create a Container
# docker pull centos --> pull a CentOS image from Docker Hub
# docker run -d -t --name <container1> centos
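Once the container is up, a few basic commands cover day-to-day management; the name container1 below assumes that is what you passed to --name in the run command above:
# docker ps                          --> list running containers
# docker exec -it container1 bash    --> open a shell inside the running container
# docker stop container1             --> stop the container
# docker rm container1               --> remove the stopped container
# docker rmi centos                  --> remove the image if it is no longer needed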
Ansible
Ansible is an open-source tool for infrastructure configuration management, deployment, and orchestration.
Easy Ansible lab set-up (YouTube)
Step 1. Create the user 'gpokhrel' with root (sudo) access on all the hosts
ansible-master (Ubuntu) --> control machine
centos-node1 (CentOS)
ubuntu-node3 (Ubuntu)
$ sudo visudo
gpokhrel ALL=NOPASSWD: ALL
Step 2. Install Ansible (on the control machine, ansible-master)
$ sudo apt install ansible
$ ansible --version
Step 3. Create an Inventory
$ vi /etc/ansible/hosts
centos-node1
ubuntu-node3
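Inventories can also group hosts so that playbooks and ad-hoc commands can target a subset of machines; the group names below (webservers, dbservers) are only illustrative:
[webservers]
centos-node1

[dbservers]
ubuntu-node3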
Step 4. Write a Playbook
$ vi test.yml
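As a minimal sketch, test.yml could contain a single play like the one below; it mirrors the ntp example used in the ad-hoc command further down and uses the generic package module so the same task works on both the CentOS and Ubuntu nodes:
---
- hosts: all
  become: yes
  tasks:
    - name: Install ntp on every node
      package:
        name: ntp
        state: present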
$ ansible-playbook test.yml
Ad-hoc command example:
$ ansible all -b -m apt -a "name=ntp state=latest"    --> (use the yum module for CentOS nodes)
Step 5. Make sure all nodes are pinging (this assumes the passwordless SSH from Step 6 is already in place)
$ ansible all -m ping
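If the nodes are reachable, each one should return output similar to this (the exact fields vary slightly between Ansible versions):
centos-node1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
ubuntu-node3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}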
Step 6. Establish passwordless SSH
$ ssh-keygen
$ ssh-copy-id centos-node1
$ ssh-copy-id ubuntu-node3
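You can quickly verify the key-based login before re-running the ping test:
$ ssh centos-node1 "hostname"    --> should log in and print the hostname without asking for a password
$ ssh ubuntu-node3 "hostname"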
Ansible ad-hoc commands - Cheat sheet
$ ansible <Host-Group> -m <Module> -a <Command Argument>    --> ad-hoc command syntax
$ ansible all -m shell -a "uptime;uname -a"    --> run the uptime & uname commands on all nodes
$ ansible all -b -m user -a "name=gpokhrel group=group1 createhome=yes"    --> create a user
$ ansible all -b -m file -a "path=/opt/oracle group=group1 owner=user1"    --> change file ownership
$ ansible all -b -m service -a "name=httpd state=started enabled=yes"    --> start a service and make it persistent (use state=stopped to stop it)
$ ansible all -b -m cron -a "name='daily-job' minute=*/15 job='/path/to/daily-script.sh'"    --> schedule and manage a cron job
Ansible Tower: a web-based user interface (GUI) for managing Ansible.
Jenkins CI/CD Process
The Jenkins tool explained in simple terms that any beginner can understand.
The "CI/CD" part of Jenkins stands for "Continuous Integration/Continuous Delivery." This means that Jenkins helps people automatically test their program every time they make changes to it, and then automatically release those changes to production if everything looks good.
Here is an overview of the typical steps involved in going from a code commit, through Git and Jenkins, to production:
1. Code commit: Developers write new code or make changes to existing code and commit their changes to a Git repository. This repository is usually hosted on a code hosting platform such as GitHub, GitLab or Bitbucket.
2. Git triggers: When a commit is made, Git can trigger an event that notifies Jenkins that new code is available.
3. Jenkins pipeline: Jenkins can be configured to run a pipeline that automatically pulls the new code from the Git repository and executes a series of automated tests and builds on it (see the example Jenkinsfile at the end of this section).
4. Automated testing: Jenkins can run automated tests on the new code to ensure that it functions as expected and does not break any existing functionality.
5. Build and deploy: If the automated tests pass, Jenkins can build the code into an executable format and deploy it to a testing or staging environment for further testing by a QA team.
6. QA testing: The QA team can perform additional testing on the code in the staging environment to ensure that it meets all requirements and that there are no unexpected issues.
7. Production deployment: If the QA team approves the new code, Jenkins can deploy it to the production environment, where it will be available to end-users.
Overall, this process ensures that new code changes are thoroughly tested before being released to production, minimizing the risk of bugs or errors affecting users.
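To make the pipeline stage concrete, here is a minimal declarative Jenkinsfile sketch; the stage names, the Maven build/test commands, and the deploy.sh script are assumptions for illustration only, not part of any specific project:
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Compile and package the application (assumes a Maven project)
                sh 'mvn -B clean package'
            }
        }
        stage('Test') {
            steps {
                // Run the automated tests; the pipeline stops here if any fail
                sh 'mvn test'
            }
        }
        stage('Deploy to Staging') {
            steps {
                // Hypothetical script that pushes the build to a staging environment
                sh './deploy.sh staging'
            }
        }
        stage('Deploy to Production') {
            steps {
                // Manual approval gate, e.g. after QA sign-off
                input message: 'Deploy to production?'
                sh './deploy.sh production'
            }
        }
    }
}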