DevOps

DevOps is a set of practices that combines software development (Dev) and IT operations (Ops). It is a culture that allows development and operations teams to work together. It aims to shorten the software development lifecycle and provide continuous delivery with high software quality. Several DevOps concepts came from the agile methodology.

It is the union of people, processes, and products to enable continuous delivery of value to end users. The intent is to enable communication between teams so that they can build, test, and release software more quickly and with greater efficiency.
Different SDLC models and their working approach -

Waterfall - 1970
Lean - 1990
Agile - 2001
DevOps - 2010


Working Approach -
Waterfall: Design -> Code -> Test -> Deploy
Agile: Design -> Code -> Test -> Code -> Test -> ... -> Deploy
DevOps: Design -> Code -> Test -> Deploy -> Code -> Test -> Deploy -> ...

Cloud Computing


Cloud computing is the on-demand availability of computer system resources, especially data storage and computing power, without direct active management by the user. Large clouds often have functions distributed over multiple locations, each of which is a data center. Simply put, cloud computing is the delivery of computing services, including servers, storage, databases, networking, software, analytics, and intelligence, over the internet ("the cloud") to offer faster innovation, flexible resources, and economies of scale.

What does the cloud actually do?
Cloud computing is a range of services delivered over the internet, or "the cloud." It means using remote servers to store and access data instead of relying on local hard drives and private data centers.
There are four main types of cloud computing:
private clouds
public clouds
hybrid clouds
multiclouds

Public cloud — services, data, and other info delivered over the internet that can be shared with various people and organizations.
Private cloud — data and other info that is only accessible to users within your organization.
Hybrid cloud — a combination of the two. This environment uses public and private clouds.
Multicloud - It is used when an organization uses cloud computing services from at least two cloud providers to run their applications. Instead of using a single-cloud stack, multicloud environments typically include a combination of two or more public clouds, two or more private clouds, or some combination of both.

Virtualisation


In computing, virtualization or virtualisation is the act of creating a virtual version of something at the same abstraction level, including virtual computer hardware platforms, storage devices, and computer network resources. It allows you to use a physical machine's full capacity by distributing its capabilities among many users or environments.
Benefit -
Virtualization relies on software to simulate hardware functionality and create a virtual computer system. This enables IT organizations to run more than one virtual system – and multiple operating systems and applications – on a single server. The resulting benefits include economies of scale and greater efficiency.
Virtual machines and hypervisors are two important concepts in virtualization.
Virtual machine
A virtual machine is a software-defined computer that runs on a physical computer with a separate operating system and computing resources. The physical computer is called the host machine and virtual machines are guest machines. Multiple virtual machines can run on a single physical machine. Virtual machines are abstracted from the computer hardware by a hypervisor.
Hypervisor
The hypervisor is a software component that manages multiple virtual machines on a computer. It ensures that each virtual machine gets its allocated resources and does not interfere with the operation of other virtual machines. There are two types of hypervisors: Type 1 (bare-metal) hypervisors run directly on the host's hardware, while Type 2 (hosted) hypervisors run as an application on top of a host operating system.
Software use for achieving virtualisation :
VMware Fusion, Parallels Desktop, Oracle VM VirtualBox, and VMware Workstation are four of the top tools for virtualization. Oracle VM VirtualBox gives you really nice features free of cost. It can also be used on Mac, Windows, Linux, and Solaris.
Pros of Virtualization.
Uses Hardware Efficiently. Available at all Times. Recovery is Easy. Quick and Easy Setup.
Cons of Virtualization.
High Initial Investment. Data Can be at Risk. Quick Scalability is a Challenge.

Virtualisation types

You can use virtualization technology to get the functions of many different types of physical infrastructure and all the benefits of a virtualized environment. You can go beyond virtual machines to create a collection of virtual resources in your virtual environment.
Application Virtualisation :
Application virtualization pulls out the functions of applications to run on operating systems other than the operating systems for which they were designed. For example, users can run a Microsoft Windows application on a Linux machine without changing the machine configuration.

Network Virtualisation


Any computer network has hardware elements such as switches, routers, and firewalls. An organization with offices in multiple geographic locations can have several different network technologies working together to create its enterprise network. Network virtualization is a process that combines all of these network resources to centralize administrative tasks. Administrators can adjust and control these elements virtually without touching the physical components, which greatly simplifies network management.

Desktop Virtualisation


Desktop virtualization is a software technology that separates the desktop environment and associated application software from the physical client device that is used to access it.
Benefit -
Using desktop virtualization allows enterprises to provision just a few types of desktops to their users, reducing the need to configure a desktop for each employee. Additionally, because virtual desktops can be provisioned so quickly, it's easier for the company to onboard new hires with just a few mouse clicks.
VDI (virtual desktop infrastructure) or a remote desktop connection is an example of desktop virtualisation that is widely used in the IT industry.

Storage Virtualisation


Storage virtualization combines the functions of physical storage devices such as network attached storage (NAS) and storage area network (SAN). You can pool the storage hardware in your data center, even if it is from different vendors or of different types. Storage virtualization uses all your physical data storage and creates a large unit of virtual storage that you can assign and control by using management software.
Server Virtualisation


Server virtualization is a process that partitions a physical server into multiple virtual servers. It is an efficient and cost-effective way to use server resources and deploy IT services in an organization. Without server virtualization, physical servers use only a small fraction of their processing capacity, leaving resources idle.
There are three main types of server virtualization:
full virtualization
para-virtualization
OS-level virtualization

Data Virtualisation


Modern organizations collect data from several sources and store it in different formats. They might also store data in different places, such as in a cloud infrastructure and an on-premises data center. Data virtualization creates a software layer between this data and the applications that need it. Data virtualization tools process an application's data request and return results in a suitable format. Thus, organizations use data virtualization solutions to increase flexibility for data integration and support cross-functional data analysis.

Virtual Machine


In computing, a "virtual machine" is the virtualization or emulation of a computer system. Virtual machines are based on computer architectures and provide the functionality of a physical computer. A virtual machine is a computer file, typically called an image, that behaves like an actual computer. It can run in a window as a separate computing environment, often to run a different operating system than the host.
A hypervisor (also known as a virtual machine monitor, VMM, or virtualizer) is a type of computer software, firmware or hardware that creates and runs virtual machines.
Virtual machines support legacy applications, reducing the cost of migrating to a new operating system. For example, a Linux virtual machine running a distribution of Linux as the guest operating system can exist on a host server that is running a non-Linux operating system, such as Windows.

Container


Containers are packages of software that contain all of the necessary elements to run in any environment. In this way, containers virtualize the operating system and run anywhere, from a private data center to the public cloud or even on a developer's personal laptop.
Containers on a single machine share the OS of the host. A container can start up quickly compared to a VM, since the OS has already booted on the host. Containers are not resource-intensive because they don't need a dedicated slice of the host's hardware, so we don't have to reserve a specific number of CPU cores, amount of memory, or disk space for each one. On a single host, we can run tens or even hundreds of containers.
Container real time Usecase :
All Google applications, like Gmail and Google Calendar, are containerized and run on their cloud servers.
Containers allow applications to be more rapidly deployed, patched, or scaled.
Benefits of containers
Less overhead
Containers require less system resources than traditional or hardware virtual machine environments because they don’t include operating system images.
Increased portability
Applications running in containers can be deployed easily to multiple different operating systems and hardware platforms.

Container Runtime


A container runtime, also known as container engine, is a software component that can run containers on a host operating system.
The container runtime is the low-level component that creates and runs containers.
Containerized applications can get complicated, however. In production, a deployment might require hundreds to thousands of separate containers. This is where container runtimes such as Docker benefit from the use of other tools to orchestrate or manage all the containers in operation.
Docker Container Vs Runtime
Docker is a popular container platform that uses containerd as its internal runtime. Docker adds tooling on top of containerd that makes it easier for developers to create, run, test, and deploy applications.
Container runtimes are tools or software used to create and run containers, e.g., Docker and rkt. Docker must be installed to create Docker containers.

Virtual Machine Vs Container


Virtual machines run in a hypervisor environment where each virtual machine must include its own guest operating system inside it, along with its related binaries, libraries, and application files. This consumes a large amount of system resources and overhead, especially when multiple VMs are running on the same physical server, each with its own guest OS.
Containers host the individual microservices that form a microservices application. However, microservices can be hosted and deployed in a variety of other ways. VMs: it's uncommon to host microservices inside VMs, but it's technically feasible for developers to deploy a set of microservices inside individual VMs and then connect them together to form a microservices app. Directly on the OS: there is also no technical reason why you can't deploy a set of microservices directly on the same OS without isolating them inside a container or VM.
In contrast, each container shares the same host OS or system kernel and is much lighter in size, often only megabytes. This often means a container might take just seconds to start, versus the minutes required for a typical VM.
containers provide a way to virtualize an OS so that multiple workloads can run on a single OS instance. With VMs, the hardware is being virtualized to run multiple OS instances. Containers’ speed, agility, and portability make them yet another tool to help streamline software development.
Containers sit on top of a physical server and its host OS—for example, Linux or Windows. Each container shares the host OS kernel and, usually, the binaries and libraries, too. Shared components are read-only. Containers are thus exceptionally “light”—they are only megabytes in size and take just seconds to start, versus gigabytes and minutes for a VM.

Kubernetes


Kubernetes is an open-source container orchestration system for automating software deployment, scaling, and management. Originally designed by Google, the project is now maintained by the Cloud Native Computing Foundation.
Kubernetes, being a container orchestration tool, is used when our app is distributed across multiple containers. Its job is to monitor, scale, and restart containers automatically, even when they are spread across multiple nodes.
Kubernetes Use :
Kubernetes automates operational tasks of container management and includes built-in commands for deploying applications, rolling out changes to your applications, scaling your applications up and down to fit changing needs, monitoring your applications, and more—making it easier to manage applications.
Kubernetes, a container orchestrator that recognizes multiple container runtime environments, including Docker.
Kubernetes orchestrates the operation of multiple containers in harmony together. It manages areas like the use of underlying infrastructure resources for containerized applications such as the amount of compute, network, and storage resources required. Orchestration tools like Kubernetes make it easier to automate and scale container-based workloads for live production environments.
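To make the scaling idea above concrete, here is a minimal sketch of a Kubernetes Deployment manifest; the names, labels, and image are illustrative placeholders, not from the source:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deploy          # hypothetical name
spec:
  replicas: 3               # Kubernetes keeps three Pods of this app running
  selector:
    matchLabels:
      app: web
  template:                 # Pod template the Deployment stamps out
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25 # any container image works here
          ports:
            - containerPort: 80
```

Applying this with kubectl apply -f deployment.yaml tells Kubernetes the desired state; it then creates, restarts, or scales Pods to match, e.g. kubectl scale deployment web-deploy --replicas=5.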
Kubernetes source code is in the Go language.
Kubelet -
The kubelet is the primary "node agent" that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.
The Kubelet is responsible for managing the deployment of pods to Kubernetes nodes. It receives commands from the API server and instructs the container runtime to start or stop containers as needed.
Every Kubernetes Node runs at least: Kubelet, a process responsible for communication between the Kubernetes control plane and the Node; it manages the Pods and the containers running on a machine.
Kubelet - it runs the containers inside pods. If a pod goes down, it is the kubelet's job to report that to the control plane. It is the technology that applies, creates, updates, and destroys containers on a Kubernetes node, and the primary node agent that runs on each node.
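As an illustration, here is a minimal Pod manifest sketch; once the scheduler assigns this Pod to a node, that node's kubelet is what instructs the container runtime to start it. The name and image are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25     # illustrative image
      ports:
        - containerPort: 80
```

Submitting this with kubectl apply -f pod.yaml records the Pod in the API server; the kubelet on the chosen node then pulls the image and runs the container, reporting its status back.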

Kube proxy


kube-proxy is a network proxy that runs on each node in your cluster, implementing part of the Kubernetes Service concept. kube-proxy maintains network rules on nodes. These network rules allow network communication to your Pods from network sessions inside or outside of your cluster.
Kube proxy - it redirects incoming traffic to the desired pod. If a pod is running a web page, kube-proxy allows the flow of traffic in and out between clients and that pod. The containers inside a pod share the same IP address, memory, and volumes; one container communicates with another through localhost.
Kubelet Vs kube-proxy
kubelet – watches the API server for pods on that node and makes sure they are running.
kube-proxy – watches the API server for pods/services changes in order to maintain the network up to date.
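A minimal Service manifest sketch shows what kube-proxy implements on each node; the name, label, and ports are assumptions for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service         # hypothetical name
spec:
  selector:
    app: web                # routes traffic to Pods labeled app=web
  ports:
    - port: 80              # port clients connect to on the Service IP
      targetPort: 8080      # port the Pod's container listens on
```

kube-proxy watches the API server for Services like this and programs iptables/IPVS rules on every node, so traffic sent to the Service's virtual IP is redirected to one of the matching Pods.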

Kubectl


kubectl is the Kubernetes command-line tool. It allows you to run commands against Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.
Kubelet Vs Kubectl
kubelet is Kubernetes's mechanism for creating containers in a worker node, while kubectl is the CLI tool that developers use for interacting with a Kubernetes cluster.
Kubectl - command line tool to manage kubernetes cluster

Minikube


Minikube is a tool that runs a single-node Kubernetes cluster in a virtual machine. So, Minikube runs as a VM.
Minikube is a lightweight Kubernetes implementation that creates a VM on your local machine and deploys a simple cluster containing only one node. Minikube is available for Linux, macOS, and Windows systems.
Kubernetes Vs Minikube
Minikube contains an actual Kubernetes distribution.
Kubernetes is a comprehensive orchestration tool that is good for large-scale projects and for managing deployed applications with a whole range of libraries. Minikube, as a local Kubernetes engine, is great for learning and local deployment, and comes with limited nodes and external testing capabilities.
Does Minikube require Kubernetes?
Minikube is a utility you can use to run Kubernetes (k8s) on your local machine. It creates a single node cluster contained in a virtual machine (VM). This cluster lets you demo Kubernetes operations without requiring the time and resource-consuming installation of full-blown K8s.
In a typical production cluster, the master (control plane) and worker nodes run on separate machines, though single-node setups like Minikube run both on the same machine.

Docker


Docker is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. The service has both free and premium tiers. The software that hosts the containers is called Docker Engine. It was first released in 2013 and is developed by Docker, Inc.
Docker is a software platform that allows you to build, test, and deploy applications quickly. Docker packages software into standardized units called containers that have everything the software needs to run including libraries, system tools, code, and runtime.

Docker is a popular runtime environment used to create and build software inside containers. It uses Docker images (copy-on-write snapshots) to deploy containerized applications or software in multiple environments, from development to test and production. Docker was built on open standards and functions inside most common operating environments, including Linux, Microsoft Windows, and other on-premises or cloud-based infrastructures.
=====================================
Have you ever observed that an application runs successfully in the development environment, but the same application with the same code creates multiple issues while running in the production environment?
To avoid issues during production deployment, we should consider Docker. Docker and containers are a new way of running software in production without such issues. Docker is a technology that facilitates development teams to build, manage, and secure applications anywhere.
Docker works on the concept of a container. A container is a unit of software that packs up code and all its dependencies so the application runs quickly and reliably from one environment to another. The isolated environment that Docker provides to wrap up and run an application is known as a Docker container.
With Docker, you can run multiple containers simultaneously on a given host. Instances of containerized apps use far less memory than virtual machines, and they start and stop more quickly.
Docker Image ->
A Docker image is a lightweight, standalone, executable bundle of software that contains everything (code, runtime, system tools, system libraries, and settings) required to run an application.
Docker Hub ->
Docker Hub is a repository service provided by Docker for storing, finding, and sharing container images with your team. It is the world's largest repository of container images.
______________________________________________
Docker Usecase :
Docker's container-based platform allows for highly portable workloads. Docker containers can run on a developer's local laptop, on physical or virtual machines in a data center, on cloud providers, or in a mixture of environments.
Docker image
A Docker image is a read-only template that contains a set of instructions for creating a container that can run on the Docker platform. It provides a convenient way to package up applications and preconfigured server environments, which you can use for your own private use or share publicly with other Docker users.
Docker Image Vs Docker Container
The key difference between a Docker image and a container is that a Docker image is a read-only, immutable template that defines how a container will be realized, while a Docker container is a runtime instance of a Docker image that gets created when the docker run command is executed.
A Docker image can have another Docker image as its parent image. Each time Docker launches a container from an image, it adds a thin writable layer, known as the container layer, which stores all changes to the container throughout its runtime.
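As an illustration of the layering described above, here is a minimal Dockerfile sketch for a hypothetical Python app; the file names and app are assumptions. Each instruction creates a read-only image layer, and running the image adds the thin writable container layer on top:

```dockerfile
FROM python:3.12-slim        # parent image (the base layers)
WORKDIR /app
COPY requirements.txt .      # each COPY/RUN adds another read-only layer
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]     # default command for containers from this image
```

docker build -t myapp:1.0 . assembles the image from this file; docker run myapp:1.0 then creates a container (a runtime instance) from it.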

Dangling images


Dangling images are untagged Docker images that aren't used by a container or depended on by a descendant. They usually serve no purpose but still consume disk space.
It's a good practice to clean up dangling and unused Docker images once in a while since a lot of unused images can lead to wasted disk space.
How to clean dangling images in Docker?
Use the prune command: docker image prune removes dangling images, and docker image prune -a also removes images not used by any container.

Docker Hub


Docker Hub is a hosted repository service provided by Docker for finding and sharing container images with your team.
Docker registries are used to host and distribute Docker Images. Docker Hub is Docker's official cloud-based registry. To get started with Docker Hub you can pull (download) an image or push (upload) one of your local images.

Docker Swarm


A Docker Swarm is a group of Docker hosts that have been configured to join together in a cluster, orchestrated by Docker's built-in tooling.
Docker Swarm is a clustering and scheduling tool for Docker containers. With Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system. Swarm mode also exists natively for Docker Engine, the layer between the OS and container image
Docker Swarm Benefits
It is lightweight and easy to use. Also, Docker Swarm takes less time to understand than more complex orchestration tools. It provides automated load balancing within the Docker containers, whereas other container orchestration tools require manual efforts.

Docker Compose


Docker Compose is a tool that was developed to help define and share multi-container applications. With Compose, we can create a YAML file to define the services and with a single command, can spin everything up or tear it all down.
Docker File Vs Docker Compose
A Dockerfile is a simple text file that contains the commands a user could call to assemble an image, whereas Docker Compose is a tool for defining and running multi-container Docker applications. With Docker Compose, you define the services that make up your app in a docker-compose.yml file.
Docker Compose Vs Kubernetes
Docker Compose deploys multi-container Docker apps to a single server, while Kubernetes is a production-grade container orchestrator that can run multiple container runtimes, including Docker's, across multiple virtual or physical machines.
Docker Compose Advantage
Docker Compose files are very easy to write in YAML, a human-readable data serialization language whose name stands for "YAML Ain't Markup Language." Another great thing about Docker Compose is that users can activate all the services (containers) using a single command.
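A minimal docker-compose.yml sketch shows two services defined in YAML and started with one command; the service names, images, and ports are assumptions for illustration:

```yaml
services:
  web:
    build: .                 # built from the Dockerfile in this directory
    ports:
      - "8000:8000"
    depends_on:
      - db                   # start the database before the web service
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example   # illustrative only, not a real secret
```

docker compose up -d spins up both containers in one step; docker compose down stops and removes them.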

Docker Registry


Docker registries enable developers to store and distribute Docker images. Most developers use Docker registries instead of other packages because they simplify development processes significantly.
The Registry is a stateless, highly scalable server side application that stores and lets you distribute Docker images. The Registry is open-source.
A Docker registry is a system for versioning, storing and distributing Docker images. DockerHub is a hosted registry used by default when installing the Docker engine, but there are other hosted registries available for public use such as AWS and Google's own registries.

Is Docker hub a registry or repository?


Docker Hub offers a hosted registry with additional features such as teams, organizations, webhooks, automated builds, and more.
Docker Hub Registry Vs Repository
A Docker registry is a storage and distribution system for named Docker images. The same image might have multiple different versions, identified by their tags. A Docker registry is organized into Docker repositories, where a repository holds all the versions of a specific image.
Docker Hub is a good example of a public registry: you can browse a list of public Docker images, and also store and view private Docker images. A private registry is a Docker registry where access to Docker images is restricted to authenticated users.

Kubernetes Vs Docker compose


Docker Compose -> Compose is a tool for defining and running multi-container Docker applications. One of the additional features of Docker Compose is that it can create containers using container images that are hosted on a container repository such as DockerHub.
Also, Docker Compose can build containers based on a Dockerfile stored on the hosting machine.
______________
Kubernetes Vs docker compose
Kubernetes and Docker Compose are both container orchestration frameworks. Kubernetes runs containers over a number of computers, virtual or real. Docker Compose runs containers on a single host machine.
________________

Kubernetes Vs Docker swarm


Docker Swarm is a lightweight, easy-to-use orchestration tool with limited offerings compared to Kubernetes. In contrast, Kubernetes is complex but powerful and provides self-healing, auto-scaling capabilities out of the box.
Which is better to use and when .
If you or your company does not need to manage complex workloads, then Docker Swarm is the right choice. If your applications are critical and you are looking to include monitoring, security features, high availability, and flexibility, then Kubernetes is the right choice.
The main difference is that Kubernetes is a container orchestration system that manages multiple containers. Docker Swarm does not manage any containers but instead is a cluster manager for Docker containers. Kubernetes also has built-in support for stateful applications, whereas Docker Swarm does not. K8s architecture is more complicated than Swarm as the platform has master/worker nodes and pods that can contain one or more containers. Kubernetes is ideal for complex apps that can benefit from automatic scaling.

Kubernetes , Docker swarm comparison


Point | Kubernetes | Docker Swarm
Main selling point | A complete container orchestration solution with advanced automation features and high customization | An emphasis on ease of use and a seamless fit with other Docker products
Installation | Somewhat complex, as you need to install (and learn to use) kubectl | Quick and easy setup (if you already run Docker)
Monitoring capabilities | Has built-in monitoring and logging | Basic server log and event tools, but needs a third-party tool for advanced monitoring
Load balancing | No built-in mechanism for automatic load balancing | Internal load balancing
Optimal use case | High-demand apps with a complex configuration | Simple apps that are quick to deploy and easy to manage

Docker-Kubernetes twinning


Docker is a container runtime; Kubernetes is a platform for running and managing containers from many container runtimes. Kubernetes supports numerous container runtimes, including Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface). You can also use Docker without Kubernetes, for example with Docker Compose.
Docker without Kubernetes
Can You Use Docker Without Kubernetes?
The short and simple answer is yes, Docker can function without Kubernetes. You see, Docker is a standalone software designed to run containerized applications. Since container creation is part of Docker, you don’t need any separate software for Docker to execute.

Kubernetes without Docker using other container runtimes


Kubernetes takes care of the containers created by a platform like Docker. It ensures the health and failure management of a system, thus automating the whole process. Kubernetes is meant to run across a cluster, whereas Docker runs on a single node.

The answer is both yes and no. Kubernetes, in itself, is not a complete solution. It depends on a container runtime to orchestrate; you can’t manage containers without having containers in the first place. Docker is one of the platforms used for containerization but it is not the only platform out there. This means, as long as you have a container runtime, Kubernetes will do its job. You can choose that container runtime to be Docker, but it’s not a requirement.
And also -
Docker is comparatively heavy. We get better performance with a lightweight container runtime like containerd or CRI-O: containerd consumes less memory and CPU, and pods start faster than on Docker. Docker, however, was never designed to run inside Kubernetes. Realizing this problem, the Kubernetes developers eventually implemented an API called the Container Runtime Interface (CRI). This interface allows us to choose among different container runtimes, making the platform more flexible and less dependent on Docker.

Container without Docker


Yes, it is possible to create and run containers without using Docker, by using another containerization platform. Several tools can be used to create containers without Docker. One example is LXC (Linux Containers), an open-source containerization system that allows you to run multiple isolated Linux systems on a single host.

Docker/Kubernetes/Nagios


Docker is a software platform that enables packaging an application into containers. These containers represent isolated environments that provide everything necessary to run the application. Dockerizing an application refers to packaging it in a Docker image to run in one or more containers. A Docker image is a reproducible environment for the application that guarantees portability across machines. Docker Hub is the largest repository of container images.
You can also share applications and collaborate with other developers using Docker Hub.
The user only has their code on their machine.
Docker wraps that code together with its services, libraries, and database (code + services + libraries + DB) into one single package called a Docker image.
Docker - container platform
Kubernetes - container Management platform
Nagios - Nagios is an open-source monitoring system for computer systems. Nagios can monitor memory usage, disk usage, microprocessor load, the number of currently running processes, and log files. Nagios can also monitor services, such as Simple Mail Transfer Protocol (SMTP), Post Office Protocol 3 (POP3), Hypertext Transfer Protocol (HTTP), and other common network protocols.
____________________

Why deploying microservice on container is better


( VMs take up to a few minutes to start, but containers can typically start in just a few seconds. It's easier to maximize the agility of microservices when they are hosted inside containers.)
( Containers provide isolation for each containerized application or microservice, which reduces the risk that a security vulnerability can spread. Microservices deployed directly on a host OS are less secure in this respect.)
Containerizing a service can help to make it highly maintainable and testable.

Container Vs Images


A container needs an image to exist. Containers are therefore dependent on images, which they use to construct a runtime environment and run an application, while images can exist without containers.
A container is just a running image. Once we create a container, it adds a writable layer on top of the immutable image, meaning we can now modify it. Images, on the other hand, are just templates: we can't start or run them directly; we use the template as a base to build a container.

Hypervisor


It is software used to create and manage VMs, e.g., VMware and VirtualBox. These two hypervisors are cross-platform, so they can run on Windows, macOS, and Linux. With a hypervisor (also known as a Virtual Machine Monitor, VMM), we can manage VMs. It provides an isolated environment between applications.