⚡ Complete DevOps Course • 2025 Edition

CI/CD Deep Dive: From Beginner to Advanced with Real-World Projects

This course is not just theory: here you will build real pipelines, write production-level code, and learn how the work is actually done inside companies. From Jenkins to Kubernetes, everything hands-on!

15 Chapters
75+ Projects
200+ Code Snippets
∞ Real-World Value
CH 01

Introduction to CI/CD

The “Why” before the “How”: understand first, then build

Imagine a team of 20 developers, each writing their own code. One builds a feature, another builds something else, and when deployment time comes, everything crashes. This is the world before CI/CD. CI/CD (Continuous Integration / Continuous Delivery) is the solution to exactly this chaos.

πŸ” What is CI/CD?

Continuous Integration (CI) means every code change is automatically built, tested, and merged into the main branch. Continuous Delivery (CD) means that code is automatically prepared for deployment at any time. Continuous Deployment goes one step further: every passing build goes directly to production without manual approval.

🔁

Continuous Integration

Automate build + test on every code push. Catch bugs early, merge confidently.

📦

Continuous Delivery

Code is always in a deployable state. One-click deploys to any environment.

🚀

Continuous Deployment

Fully automated path to production. Zero human intervention after code push.

🛡️

Quality Gates

Automated checks (lint, unit tests, security scans) block bad code automatically.
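In practice, a quality gate is just an ordered list of commands where the first non-zero exit code blocks the merge. A minimal sketch in shell; the `bash -c 'exit 0'` stubs are placeholders for real commands such as `npm run lint` or a security scanner:

```shell
#!/usr/bin/env bash
# Minimal quality-gate sketch: run each check in order, block on first failure.
set -u

run_gate() {
  # $1 = gate name, rest = the check command to run
  local name="$1"; shift
  echo "Running gate: $name"
  if "$@"; then
    echo "PASS: $name"
  else
    echo "FAIL: $name -- blocking merge"
    return 1
  fi
}

main() {
  # Stub commands below stand in for the project's real lint/test/scan steps.
  run_gate "lint"          bash -c 'exit 0' || return 1
  run_gate "unit tests"    bash -c 'exit 0' || return 1
  run_gate "security scan" bash -c 'exit 0' || return 1
  echo "All gates passed -- merge allowed"
}

main
```

Swapping any stub for a failing command makes `main` return non-zero, which is exactly what a CI server interprets as a blocked merge.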

📊 The CI/CD Pipeline Flow

Developer pushes code
        │
        ▼
┌──────────────────────────────────────────────────┐
│ SOURCE CONTROL (GitHub / GitLab / Bitbucket)     │
│ Branch: feature/login → Pull Request             │
└────────────────────────┬─────────────────────────┘
                         │ webhook trigger
                         ▼
┌──────────────────────────────────────────────────┐
│ CI SERVER (Jenkins / GitHub Actions / GitLab CI) │
│   Build → Test → Security Scan                   │
└────────────────────────┬─────────────────────────┘
                         │ artifact (Docker image / JAR)
                         ▼
┌──────────────────────────────────────────────────┐
│ CD PIPELINE (ArgoCD / Spinnaker / Helm)          │
│   DEV → STAGING → UAT → PRODUCTION               │
└──────────────────────────────────────────────────┘
💡

Trainer Tip: CI/CD is not a tool; it is a practice. Tools like Jenkins and GitHub Actions are only used to implement CI/CD. Get the concept clear first!

πŸ› οΈ Chapter Projects

PROJECT 1.1

Your First Hello-World Pipeline

Create a basic CI pipeline that echoes “Build Successful” using shell scripting and a minimal Jenkinsfile.

Jenkins · Bash · Beginner
PROJECT 1.2

GitHub Webhook Trigger Demo

Configure a GitHub repository to trigger a Jenkins build automatically on every push event.

GitHub · Webhooks · Jenkins
PROJECT 1.3

Pipeline Stages Visualization

Build a 5-stage pipeline (Checkout → Build → Test → Package → Deploy) and visualize it in Jenkins Blue Ocean.

Blue Ocean · Stages · Pipeline
PROJECT 1.4

Email Notification on Build Failure

Configure post-build email notifications so the team gets alerted when a pipeline stage fails.

SMTP · Notifications · Jenkins
PROJECT 1.5

Multi-Branch Pipeline Setup

Configure Jenkins to automatically discover and build all branches of a repository using Multibranch Pipeline.

Multibranch · GitFlow · Jenkins
📂 Project 1.1 – Full Implementation: Hello-World Pipeline

Objective

Create a Jenkins declarative pipeline that checks out code, runs a build step, and outputs a success message. This is your “Hello World” for CI/CD.

Prerequisites

  • Jenkins installed (see install commands below)
  • GitHub account + a sample repository
  • Linux VM / EC2 instance (Ubuntu 22.04 recommended)

Step 1 β€” Install Jenkins on Ubuntu

bash
# Update system packages
sudo apt update && sudo apt upgrade -y

# Install Java (Jenkins requires Java 17+)
sudo apt install openjdk-17-jdk -y
java -version

# Add Jenkins GPG key and repository
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee \
  /usr/share/keyrings/jenkins-keyring.asc > /dev/null

echo deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] \
  https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list > /dev/null

# Install Jenkins
sudo apt update
sudo apt install jenkins -y

# Start and enable Jenkins service
sudo systemctl start jenkins
sudo systemctl enable jenkins
sudo systemctl status jenkins

# Get initial admin password
sudo cat /var/lib/jenkins/secrets/initialAdminPassword

Step 2 β€” Create Jenkinsfile

jenkinsfile
// Jenkinsfile (Declarative Pipeline)
pipeline {
    agent any

    environment {
        APP_NAME = 'hello-cicd-app'
        BUILD_VERSION = "1.0.${BUILD_NUMBER}"
    }

    stages {
        stage('🔍 Checkout') {
            steps {
                git branch: 'main',
                    url: 'https://github.com/your-username/hello-cicd.git'
                echo "✅ Code checked out successfully"
                sh 'ls -la'
            }
        }

        stage('🔨 Build') {
            steps {
                echo "🏗️ Building ${env.APP_NAME} version ${env.BUILD_VERSION}"
                sh '''
                    echo "Starting build process..."
                    mkdir -p build/output
                    echo "Build artifacts generated at: build/output/"
                    echo "Version: ${BUILD_VERSION}" > build/output/version.txt
                    cat build/output/version.txt
                '''
            }
        }

        stage('🧪 Test') {
            steps {
                echo 'Running unit tests...'
                sh '''
                    echo "=== Running Tests ==="
                    echo "Test 1: Health Check... PASSED ✅"
                    echo "Test 2: Config Validation... PASSED ✅"
                    echo "Test 3: API Endpoint... PASSED ✅"
                    echo "All tests passed!"
                '''
            }
        }

        stage('📦 Package') {
            steps {
                echo 'Packaging the application...'
                sh '''
                    tar -czf build/app-${BUILD_VERSION}.tar.gz build/output/
                    echo "Package created: app-${BUILD_VERSION}.tar.gz"
                    ls -lh build/*.tar.gz
                '''
            }
        }

        stage('🚀 Deploy (Dev)') {
            steps {
                echo 'Deploying to Development environment...'
                sh '''
                    echo "================================================"
                    echo " 🎉 BUILD SUCCESSFUL!"
                    echo " App: ${APP_NAME}"
                    echo " Version: ${BUILD_VERSION}"
                    echo " Environment: Development"
                    echo " Status: DEPLOYED ✅"
                    echo "================================================"
                '''
            }
        }
    }

    post {
        success {
            echo '✅ Pipeline completed successfully!'
        }
        failure {
            echo '❌ Pipeline failed! Check logs above.'
        }
        always {
            echo "Pipeline finished. Build #${BUILD_NUMBER}"
        }
    }
}

Expected Output

output
Started by user admin
[Pipeline] Start of Pipeline
[Pipeline] agent
[Pipeline] { (🔍 Checkout)
[Pipeline] echo
✅ Code checked out successfully
[Pipeline] { (🔨 Build)
🏗️ Building hello-cicd-app version 1.0.42
Starting build process...
Build artifacts generated at: build/output/
[Pipeline] { (🧪 Test)
=== Running Tests ===
Test 1: Health Check... PASSED ✅
Test 2: Config Validation... PASSED ✅
Test 3: API Endpoint... PASSED ✅
[Pipeline] { (📦 Package)
Package created: app-1.0.42.tar.gz
[Pipeline] { (🚀 Deploy (Dev))
================================================
 🎉 BUILD SUCCESSFUL!
 App: hello-cicd-app
 Version: 1.0.42
 Environment: Development
 Status: DEPLOYED ✅
================================================
Finished: SUCCESS

CH 02

DevOps Lifecycle

Plan, code, build, test, deploy, monitor: this is DevOps

DevOps is not just a tool or a technology; it is a culture and a practice that encourages Development (Dev) and Operations (Ops) teams to work together. Netflix, Amazon, Google: these companies all follow DevOps and deploy hundreds of times a day!

🔄 The 8-Phase DevOps Lifecycle

              ∞ DEVOPS INFINITY LOOP

┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐
│   PLAN   │ → │   CODE   │ → │  BUILD   │ → │   TEST   │
│ Jira/Conf│   │ Git/VS   │   │ Maven/npm│   │ JUnit/Sel│
└──────────┘   └──────────┘   └──────────┘   └──────────┘
     ↑                                             ↓
┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐
│ MONITOR  │ ← │ OPERATE  │ ← │  DEPLOY  │ ← │ RELEASE  │
│ Prom/Graf│   │ K8s/AWS  │   │ Helm/Argo│   │ Nexus/ECR│
└──────────┘   └──────────┘   └──────────┘   └──────────┘
📋

Plan

Jira, Confluence, Trello: backlogs, sprints, roadmaps.

💻

Code

Git, GitHub, GitLab: version control, branching strategies.

🔨

Build

Maven, Gradle, npm: compile code into deployable artifacts.

🧪

Test

JUnit, Selenium, SonarQube: automated quality assurance.

📦

Release

Nexus, JFrog, ECR: artifact storage and versioning.

🚀

Deploy

Helm, ArgoCD, Spinnaker: push to environments safely.

⚙️

Operate

Kubernetes, ECS, Ansible: manage infrastructure and apps.

📊

Monitor

Prometheus, Grafana, ELK: observe, alert, and improve.

πŸ› οΈ Chapter Projects

PROJECT 2.1

DevOps Toolchain Setup

Install and configure a complete DevOps toolchain: Jenkins + SonarQube + Nexus + Docker on a single Ubuntu VM.

Ubuntu · Docker Compose · Setup
PROJECT 2.2

Sprint-to-Deploy Simulation

Simulate a full sprint cycle: create Jira ticket → code → PR → CI build → staging deploy.

Jira · GitHub · Jenkins
PROJECT 2.3

Automated Dev Environment

Use Vagrant + shell scripts to provision an identical dev environment for all team members.

Vagrant · VirtualBox · IaC
PROJECT 2.4

GitOps Workflow Demo

Implement GitOps: Git as single source of truth for both code and infrastructure definitions.

GitOps · ArgoCD · Helm
PROJECT 2.5

DORA Metrics Dashboard

Build a Grafana dashboard showing the 4 DORA metrics: deployment frequency, lead time, MTTR, change failure rate.

Grafana · Prometheus · DORA
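Deployment frequency, the first DORA metric, needs nothing more than a log of deploy timestamps. A rough shell sketch of the arithmetic behind such a dashboard panel (the `/tmp/deploys.txt` sample data is invented for illustration; a real pipeline would append one line per successful deploy):

```shell
#!/usr/bin/env bash
# Compute deployment frequency from a log of deploy dates, one YYYY-MM-DD per line.
set -u

deploy_frequency() {
  # $1 = file of deploy dates; prints total deploys and distinct active days
  local deploys days
  deploys=$(wc -l < "$1")
  days=$(sort -u "$1" | wc -l)
  echo "$deploys deploys across $days active days"
}

# Hypothetical sample data: two deploys on Jan 6, one each on Jan 7 and Jan 9.
cat > /tmp/deploys.txt <<'EOF'
2025-01-06
2025-01-06
2025-01-07
2025-01-09
EOF

deploy_frequency /tmp/deploys.txt
```

The same idea generalizes: lead time is the delta between commit and deploy timestamps, and change failure rate is failed deploys divided by total deploys.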
bash – DevOps Toolchain via Docker Compose
# docker-compose.yml β€” Full DevOps Stack
version: '3.8'

services:
  jenkins:
    image: jenkins/jenkins:lts-jdk17
    container_name: jenkins
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - jenkins_data:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - JAVA_OPTS=-Djenkins.install.runSetupWizard=false
    restart: unless-stopped

  sonarqube:
    image: sonarqube:lts-community
    container_name: sonarqube
    ports:
      - "9000:9000"
    environment:
      - SONAR_JDBC_URL=jdbc:postgresql://sonar-db:5432/sonar
      - SONAR_JDBC_USERNAME=sonar
      - SONAR_JDBC_PASSWORD=sonar
    volumes:
      - sonar_data:/opt/sonarqube/data
      - sonar_logs:/opt/sonarqube/logs
    depends_on:
      - sonar-db

  sonar-db:
    image: postgres:15
    container_name: sonar-db
    environment:
      - POSTGRES_DB=sonar
      - POSTGRES_USER=sonar
      - POSTGRES_PASSWORD=sonar
    volumes:
      - sonar_db_data:/var/lib/postgresql/data

  nexus:
    image: sonatype/nexus3:latest
    container_name: nexus
    ports:
      - "8081:8081"
      - "8082:8082"
    volumes:
      - nexus_data:/nexus-data
    restart: unless-stopped

volumes:
  jenkins_data:
  sonar_data:
  sonar_logs:
  sonar_db_data:
  nexus_data:
bash – Start the Stack
# Save the above as docker-compose.yml, then:
docker compose up -d

# Check all services are running
docker compose ps

# Access:
# Jenkins   → http://localhost:8080
# SonarQube → http://localhost:9000  (admin/admin)
# Nexus     → http://localhost:8081  (admin / see logs)

# Get Nexus initial password
docker exec nexus cat /nexus-data/admin.password

CH 03

Git & Version Control

Git is the foundation of CI/CD: without mastering it, no pipeline runs

Git is not just git push. Professional DevOps engineers use advanced Git strategies like GitFlow, Trunk-Based Development, cherry-pick, and rebase. Here you will learn everything that real teams actually follow.

🌿 Branching Strategies

GitFlow Strategy:
─────────────────────────────────────────────────────
main        ●───────────────────────●──────────●   (production)
             \                     /          /
develop       ●────●────●────●────●          /     (integration)
               \       /                    /
feature/A       ●──●──●                    /       (feature branches)
                                          /
release/1.0              ●────●──────────●         (release branch: hotfix only)

hotfix      ●──●  → merged to main + develop
bash – Essential Git Commands for DevOps
# ── GITFLOW WORKFLOW ──────────────────────────────────

# Initialize git in project
git init
git remote add origin https://github.com/org/project.git

# Create feature branch from develop
git checkout develop
git pull origin develop
git checkout -b feature/user-authentication

# Work, commit with conventional commits
git add .
git commit -m "feat(auth): add JWT token validation"
git commit -m "fix(auth): handle expired token edge case"
git commit -m "test(auth): add unit tests for login flow"

# Push feature branch
git push origin feature/user-authentication

# ── REBASING (Clean History) ──────────────────────────
git fetch origin
git rebase origin/develop

# Interactive rebase β€” squash commits before PR
git rebase -i HEAD~3

# ── TAGGING FOR RELEASES ──────────────────────────────
git tag -a v1.2.0 -m "Release version 1.2.0 - auth module"
git push origin v1.2.0
git push origin --tags

# ── GIT HOOKS (Pre-commit linting) ────────────────────
cat > .git/hooks/pre-commit << 'EOF'
#!/bin/bash
echo "🔍 Running pre-commit checks..."
npm run lint
if [ $? -ne 0 ]; then
  echo "❌ Linting failed! Fix errors before committing."
  exit 1
fi
echo "✅ Pre-commit checks passed!"
EOF
chmod +x .git/hooks/pre-commit

# ── CHERRY-PICK (Port a specific commit) ──────────────
git log --oneline develop | head -5
git cherry-pick abc1234  # pick specific commit hash

# ── STASH (Save work temporarily) ─────────────────────
git stash push -m "WIP: payment integration"
git stash list
git stash pop              # restore latest stash

πŸ› οΈ Chapter Projects

PROJECT 3.1

GitFlow Branching Automation

Write a shell script that automates the GitFlow workflow β€” feature start, finish, and release process.

GitFlow · Bash · Automation
PROJECT 3.2

Git Hooks CI Enforcer

Implement pre-commit and commit-msg hooks to enforce code style, conventional commits, and secrets detection.

Git Hooks · Husky · Security
PROJECT 3.3

Monorepo Pipeline Strategy

Set up a monorepo with selective CI β€” only build/test the service that changed using path filters.

Monorepo · GitHub Actions · nx
PROJECT 3.4

Semantic Versioning Bot

Auto-bump version (patch/minor/major) based on conventional commit messages using semantic-release.

semantic-release · Versioning · GitHub Actions
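The core of such a bot is the bump decision: scan the conventional commit subjects since the last tag and pick the highest-impact change. A simplified shell sketch of that rule (the real semantic-release tool does far more, such as parsing commit footers and generating changelogs):

```shell
#!/usr/bin/env bash
# Decide the next version bump from conventional commit subjects on stdin.
# Simplified rules: breaking change -> major, feat -> minor, fix -> patch.
set -u

bump_type() {
  local bump="none" subject
  while IFS= read -r subject; do
    case "$subject" in
      *"BREAKING CHANGE"*|*"!:"*) echo "major"; return ;;  # breaking wins immediately
      feat*) bump="minor" ;;                               # a feature is at least minor
      fix*)  [ "$bump" = "none" ] && bump="patch" ;;       # fix only raises none -> patch
    esac
  done
  echo "$bump"
}

printf 'feat(auth): add login\nfix(db): retry on timeout\n' | bump_type
# prints: minor
```

In CI this would be fed from `git log --format=%s <last-tag>..HEAD`, with the result driving `git tag` for the new version.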
PROJECT 3.5

Protected Branch Policies

Configure GitHub branch protection: require PR reviews, status checks, and signed commits for main branch.

GitHub · Branch Protection · Security

CH 04

Continuous Integration (CI) Concepts

Build + test on every commit: the days of manual testing are over

CI means: whenever a developer pushes code, a pipeline is triggered automatically that builds the code, tests it, and checks its quality. If anything fails, the developer finds out immediately, not two weeks later when production crashes.

🧱 CI Pipeline Building Blocks

  • Source Trigger β€” Git push/PR webhook se pipeline start hoti hai
  • Dependency Installation β€” npm install / mvn dependency:resolve / pip install
  • Static Code Analysis β€” ESLint, Checkstyle, SonarQube
  • Unit Tests β€” JUnit, pytest, Jest β€” code ki individual units test karo
  • Integration Tests β€” Multiple components together test karo
  • Code Coverage β€” 80%+ coverage enforce karo
  • Artifact Generation β€” .jar, .war, Docker image, npm package
  • Artifact Storage β€” Nexus, JFrog Artifactory, ECR, GitHub Packages
⚠️

Common Mistake: many teams run only unit tests in CI and skip integration tests. In production, 80% of bugs come from integration issues!

jenkinsfile – Production-Grade CI Pipeline
pipeline {
    agent {
        docker {
            image 'maven:3.9-eclipse-temurin-17'
            args '-v $HOME/.m2:/root/.m2'  // Cache maven deps
        }
    }

    options {
        timeout(time: 30, unit: 'MINUTES')
        buildDiscarder(logRotator(numToKeepStr: '10'))
        disableConcurrentBuilds()
    }

    environment {
        SONAR_HOST = 'http://sonarqube:9000'
        NEXUS_URL   = 'http://nexus:8081'
        NEXUS_CREDS = credentials('nexus-credentials')
        SONAR_TOKEN = credentials('sonar-token')
        APP_VERSION = sh(script: "mvn help:evaluate -Dexpression=project.version -q -DforceStdout", returnStdout: true).trim()
    }

    stages {
        stage('🔍 Checkout & Validate') {
            steps {
                checkout scm
                sh 'git log --oneline -5'
                sh 'mvn validate'
            }
        }

        stage('📦 Dependency Resolution') {
            steps {
                sh 'mvn dependency:resolve -B'
                sh 'mvn dependency:tree | head -50'
            }
        }

        stage('🔨 Compile') {
            steps {
                sh 'mvn compile -B'
            }
        }

        stage('🔎 Static Analysis') {
            parallel {
                stage('Checkstyle') {
                    steps { sh 'mvn checkstyle:check -B' }
                }
                stage('SpotBugs') {
                    steps { sh 'mvn spotbugs:check -B' }
                }
            }
        }

        stage('🧪 Unit Tests') {
            steps {
                sh 'mvn test -B'
            }
            post {
                always {
                    junit '**/target/surefire-reports/*.xml'
                    jacoco(
                        execPattern: '**/target/jacoco.exec',
                        classPattern: '**/target/classes',
                        sourcePattern: '**/src/main/java',
                        minimumLineCoverage: '80'
                    )
                }
            }
        }

        stage('🔬 Integration Tests') {
            steps {
                sh 'mvn verify -Pintegration-tests -B'
            }
        }

        stage('📊 SonarQube Analysis') {
            steps {
                withSonarQubeEnv('SonarQube') {
                    sh """
                        mvn sonar:sonar \
                          -Dsonar.projectKey=my-app \
                          -Dsonar.host.url=${SONAR_HOST} \
                          -Dsonar.login=${SONAR_TOKEN}
                    """
                }
                timeout(time: 5, unit: 'MINUTES') {
                    waitForQualityGate abortPipeline: true
                }
            }
        }

        stage('📦 Build & Package') {
            steps {
                sh 'mvn package -DskipTests -B'
                archiveArtifacts artifacts: 'target/*.jar', fingerprint: true
            }
        }

        stage('📤 Publish to Nexus') {
            steps {
                nexusArtifactUploader(
                    nexusVersion: 'nexus3',
                    protocol: 'http',
                    nexusUrl: "${NEXUS_URL}",
                    groupId: 'com.myapp',
                    version: "${APP_VERSION}",
                    repository: 'maven-releases',
                    credentialsId: 'nexus-credentials',
                    artifacts: [[
                        artifactId: 'my-app',
                        classifier: '',
                        file: "target/my-app-${APP_VERSION}.jar",
                        type: 'jar'
                    ]]
                )
            }
        }
    }
}

πŸ› οΈ Chapter Projects

PROJECT 4.1

Java Spring Boot CI Pipeline

Complete CI pipeline for a Spring Boot app: Maven build, JUnit tests, JaCoCo coverage, SonarQube analysis, Nexus publish.

Java · Maven · SonarQube · Jenkins
PROJECT 4.2

Node.js CI with Jest & Coverage

Build a CI pipeline for a Node.js Express API β€” npm install, ESLint, Jest tests, 85% coverage gate.

Node.js · Jest · ESLint · GitHub Actions
PROJECT 4.3

Python FastAPI CI Pipeline

CI for Python FastAPI: pip install, flake8 lint, pytest with coverage, Bandit security scan.

Python · FastAPI · pytest · Bandit
PROJECT 4.4

Parallel Testing Strategy

Split test suite into 4 parallel jobs to reduce CI time from 20 minutes to 5 minutes.

Parallel · Matrix · Performance
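The splitting itself can be as simple as round-robin assignment of test files to job indices; each of the 4 parallel CI jobs then runs only its own bucket. A sketch (the test file names are illustrative):

```shell
#!/usr/bin/env bash
# Round-robin split: job N out of TOTAL runs every line where NR % TOTAL == N.
set -u

bucket_for_job() {
  # $1 = job index (0-based), $2 = total parallel jobs; test file list on stdin
  awk -v job="$1" -v total="$2" 'NR % total == job'
}

# Which files would job 1 of 4 run?
printf 'auth_test.py\napi_test.py\ndb_test.py\nui_test.py\ncache_test.py\n' \
  | bucket_for_job 1 4
# prints: auth_test.py and cache_test.py (lines 1 and 5)
```

Round-robin is the simplest scheme; timing-based splitting (assigning files so each bucket has similar historical runtime) gives better balance on skewed suites.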
PROJECT 4.5

Quality Gate Enforcement

Implement strict quality gates: fail build if code coverage below 80%, critical SonarQube issues found, or dependency vulnerabilities detected.

Quality Gates · SonarQube · OWASP

CH 05

Continuous Delivery & Deployment

From build to production: safely, quickly, reliably

CI builds the artifact. CD delivers it. The difference between Continuous Delivery and Continuous Deployment is just one button: a manual approval gate. In CD (Delivery), a human approves production pushes. In Continuous Deployment, even that approval is automated.
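That "one button" can be expressed in a few lines of shell: Continuous Delivery inserts a manual confirmation before the production step, Continuous Deployment skips it. A toy sketch where `deploy_to_production` is a stub:

```shell
#!/usr/bin/env bash
# Toy model of Delivery vs Deployment: the only difference is the approval gate.
set -u

deploy_to_production() { echo "Deploying to production..."; }

release() {
  local mode="$1" answer   # mode: "delivery" (manual gate) or "deployment" (no gate)
  if [ "$mode" = "delivery" ]; then
    printf 'Approve production deploy? (yes/no) ' >&2
    read -r answer
    [ "$answer" = "yes" ] || { echo "Deploy held for approval"; return 1; }
  fi
  deploy_to_production
}

echo yes | release delivery    # human typed "yes" -> deploys
release deployment             # no gate at all -> deploys immediately
```

Jenkins' `input` step, shown in the pipeline below, is exactly this `read` with a nicer UI around it.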

🌍 Multi-Environment Deployment Strategy

Git Push → CI Pipeline → Artifact Registry
                              │
  ┌─────────┐    auto    ┌─────────┐  approve  ┌─────────┐
  │   DEV   │ ─────────▶ │ STAGING │ ────────▶ │  PROD   │
  │ always  │            │ smoke   │           │ canary  │
  │ deploy  │            │ tests   │           │ rollout │
  └─────────┘            └─────────┘           └─────────┘
       │                      │                     │
 localhost:3001         qa.myapp.com            myapp.com
jenkinsfile – Multi-Env CD Pipeline
pipeline {
    agent any

    parameters {
        choice(name: 'DEPLOY_ENV', choices: ['dev', 'staging', 'production'])
        booleanParam(name: 'RUN_SMOKE_TESTS', defaultValue: true)
        string(name: 'IMAGE_TAG', defaultValue: 'latest')
    }

    environment {
        REGISTRY    = 'your-registry.io/myapp'
        DEPLOY_USER = credentials('deploy-ssh-key')
        SLACK_HOOK   = credentials('slack-webhook')
    }

    stages {
        stage('🔍 Pre-Deploy Checks') {
            steps {
                sh """
                    echo "Deploying: ${REGISTRY}:${params.IMAGE_TAG}"
                    echo "Target env: ${params.DEPLOY_ENV}"
                    docker pull ${REGISTRY}:${params.IMAGE_TAG}
                """
            }
        }

        stage('🚀 Deploy to Dev') {
            when { expression { params.DEPLOY_ENV == 'dev' } }
            steps {
                sh """
                    ssh deploy@dev.myapp.com \\
                      "docker pull ${REGISTRY}:${params.IMAGE_TAG} && \\
                       docker stop myapp || true && \\
                       docker rm myapp || true && \\
                       docker run -d --name myapp \\
                         -p 3001:3000 \\
                         -e NODE_ENV=development \\
                         ${REGISTRY}:${params.IMAGE_TAG}"
                """
            }
        }

        stage('🧪 Staging Smoke Tests') {
            when {
                allOf {
                    expression { params.DEPLOY_ENV == 'staging' }
                    expression { params.RUN_SMOKE_TESTS }
                }
            }
            steps {
                sh '''
                    # Deploy to staging first
                    docker service update \
                      --image ${REGISTRY}:${IMAGE_TAG} \
                      myapp_staging

                    # Wait for service to stabilize
                    sleep 15

                    # Run smoke tests
                    curl -f https://staging.myapp.com/health || exit 1
                    curl -f https://staging.myapp.com/api/status || exit 1
                    echo "✅ Smoke tests passed!"
                '''
            }
        }

        stage('⏸️ Production Approval') {
            when { expression { params.DEPLOY_ENV == 'production' } }
            steps {
                script {
                    def approved = input(
                        message: "Deploy ${params.IMAGE_TAG} to PRODUCTION?",
                        ok: 'Yes, Deploy!',
                        submitterParameter: 'APPROVED_BY',
                        parameters: [
                            choice(name: 'STRATEGY',
                                   choices: ['canary', 'blue-green', 'rolling'])
                        ]
                    )
                    env.APPROVED_BY = approved.APPROVED_BY
                    env.DEPLOY_STRATEGY = approved.STRATEGY
                }
            }
        }

        stage('🟢 Blue-Green Production Deploy') {
            when { expression { params.DEPLOY_ENV == 'production' } }
            steps {
                sh """
                    # Get current active slot (blue or green)
                    CURRENT=\$(aws elbv2 describe-target-groups \
                      --names myapp-prod --query \
                      'TargetGroups[0].Tags[?Key==`slot`].Value' \
                      --output text)

                    if [ "\$CURRENT" = "blue" ]; then
                        DEPLOY_SLOT="green"
                        CURRENT_SLOT="blue"
                    else
                        DEPLOY_SLOT="blue"
                        CURRENT_SLOT="green"
                    fi

                    echo "Deploying to \$DEPLOY_SLOT slot..."
                    # Deploy to inactive slot
                    # Run tests on inactive slot
                    # Switch load balancer
                    echo "✅ Production deploy complete via \$DEPLOY_SLOT slot"
                    echo "Approved by: ${env.APPROVED_BY}"
                """
            }
        }
    }

    post {
        success {
            sh """
                curl -X POST ${SLACK_HOOK} \
                  -d '{"text":"✅ Deploy SUCCESS: ${params.IMAGE_TAG} → ${params.DEPLOY_ENV}"}'
            """
        }
        failure {
            sh """
                curl -X POST ${SLACK_HOOK} \
                  -d '{"text":"❌ Deploy FAILED: ${params.IMAGE_TAG} → ${params.DEPLOY_ENV}"}'
            """
        }
    }
}

πŸ› οΈ Chapter Projects

PROJECT 5.1

Blue-Green Deployment

Implement zero-downtime blue-green deployment using two identical environments and an nginx load balancer.

Blue-Green · nginx · Zero Downtime
PROJECT 5.2

Canary Release Pipeline

Route 5% of traffic to new version, monitor error rates, then gradually increase to 100% or auto-rollback.

Canary · Argo Rollouts · Traffic Split
PROJECT 5.3

Automated Rollback on Failure

If post-deploy health checks fail, automatically roll back to the last stable version within 2 minutes.

Rollback · Health Checks · Automation
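The heart of this project is a polling loop: hit the health endpoint a few times after the deploy, and trigger the rollback command if it never comes up. A sketch with the health and rollback commands passed in as stubs (in a real pipeline these might be a curl health check and a docker service rollback):

```shell
#!/usr/bin/env bash
# Poll a health command after deploy; roll back if it never succeeds.
set -u

check_and_rollback() {
  # $1 = health-check command, $2 = rollback command, $3 = max attempts
  local health="$1" rollback="$2" attempts="$3" i
  for i in $(seq 1 "$attempts"); do
    if $health; then                # unquoted on purpose: allows command + args
      echo "Healthy after attempt $i"
      return 0
    fi
    sleep 1                         # real pipelines would wait longer between polls
  done
  echo "Still unhealthy after $attempts attempts -- rolling back"
  $rollback
  return 1
}

check_and_rollback true "echo rollback-triggered" 3
# prints: Healthy after attempt 1
```

With a sensible attempt count and interval, the whole detect-and-rollback cycle fits comfortably inside the project's 2-minute budget.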
PROJECT 5.4

Feature Flags with LaunchDarkly

Deploy code with features hidden behind flags β€” enable per user/region without redeployment.

Feature Flags · LaunchDarkly · A/B Testing
PROJECT 5.5

Multi-Region CD Pipeline

Deploy simultaneously to us-east-1 and ap-south-1 regions with region-specific config injection.

Multi-Region · AWS · Parallel
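Parallel region deploys need nothing exotic in shell: background each region's deploy, collect the PIDs, and fail if any region failed. A sketch where `deploy_one` stands in for the real per-region deploy command (which would read region-specific config):

```shell
#!/usr/bin/env bash
# Fan out one deploy per region, then wait for all of them.
set -u

deploy_one() { echo "deployed to $1"; }   # stub for the real per-region deploy

deploy_all() {
  local pids=() region failed=0 pid
  for region in "$@"; do
    deploy_one "$region" &                # run each region's deploy in background
    pids+=("$!")
  done
  for pid in "${pids[@]}"; do
    wait "$pid" || failed=1               # any region failing fails the whole stage
  done
  return "$failed"
}

deploy_all us-east-1 ap-south-1
```

The same fan-out/fan-in shape is what Jenkins `parallel` blocks and GitHub Actions matrix jobs implement for you.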

CH 06

Jenkins β€” Full Deep Dive

The OG CI/CD server, still the most powerful tool in enterprise DevOps

Jenkins is an open-source automation server that was released in 2011 and is still the #1 CI/CD tool in the enterprise today. With 1800+ plugins, Jenkins works with any tech stack. Configuring Jenkins properly is an art, and here we will learn all of it.

πŸ—οΈ Jenkins Architecture

┌──────────────────────────────────────────────────────────┐
│                    JENKINS CONTROLLER                    │
│  ┌──────────────┐  ┌──────────────┐  ┌────────────────┐  │
│  │  Web UI /    │  │     Job      │  │ Plugin Manager │  │
│  │     API      │  │  Scheduler   │  │ 1800+ plugins  │  │
│  └──────────────┘  └──────────────┘  └────────────────┘  │
└────────────────────────────┬─────────────────────────────┘
                             │ JNLP / SSH
           ┌─────────────────┼──────────────────┐
           ▼                 ▼                  ▼
     ┌──────────┐      ┌──────────┐      ┌──────────────┐
     │ AGENT 1  │      │ AGENT 2  │      │ DOCKER AGENT │
     │  Linux   │      │ Windows  │      │  maven:3.9   │
     │  (Java)  │      │  (.NET)  │      │   node:20    │
     └──────────┘      └──────────┘      └──────────────┘
bash – Jenkins Agent Setup (via SSH)
# On the AGENT machine (Ubuntu 22.04)
sudo apt update && sudo apt install openjdk-17-jdk -y

# Create jenkins user for running jobs
sudo useradd -m -s /bin/bash jenkins-agent
sudo mkdir -p /opt/jenkins-agent
sudo chown jenkins-agent:jenkins-agent /opt/jenkins-agent

# Setup SSH key authentication
sudo su - jenkins-agent
ssh-keygen -t ed25519 -C "jenkins-agent-key" -f ~/.ssh/jenkins_agent_key

# Add public key to authorized_keys
cat ~/.ssh/jenkins_agent_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Copy PRIVATE key to Jenkins → Manage Jenkins → Credentials
cat ~/.ssh/jenkins_agent_key  # copy this to Jenkins as SSH private key credential
jenkinsfile – Advanced Shared Library Usage
// vars/buildAndPush.groovy  (Shared Library)
def call(Map config) {
    def registry  = config.registry  ?: 'docker.io'
    def imageName = config.imageName ?: error('imageName required')
    def tag       = config.tag       ?: 'latest'

    stage('🐳 Docker Build') {
        sh """
            docker build \
              --build-arg BUILD_DATE=\$(date -u +%Y-%m-%dT%H:%M:%SZ) \
              --build-arg VCS_REF=\$(git rev-parse --short HEAD) \
              --label org.opencontainers.image.revision=\$(git rev-parse HEAD) \
              -t ${registry}/${imageName}:${tag} \
              -t ${registry}/${imageName}:latest \
              .
        """
    }

    stage('πŸ”’ Image Security Scan') {
        sh """
            trivy image \
              --exit-code 1 \
              --severity HIGH,CRITICAL \
              ${registry}/${imageName}:${tag}
        """
    }

    stage('πŸ“€ Push to Registry') {
        withCredentials([usernamePassword(
            credentialsId: 'docker-registry-creds',
            usernameVariable: 'DOCKER_USER',
            passwordVariable: 'DOCKER_PASS'
        )]) {
            sh """
                echo "\$DOCKER_PASS" | docker login ${registry} -u "\$DOCKER_USER" --password-stdin
                docker push ${registry}/${imageName}:${tag}
                docker push ${registry}/${imageName}:latest
                docker logout ${registry}
            """
        }
    }
}

// ─────────────────────────────────────────────────────
// Using shared library in Jenkinsfile:

@Library('my-shared-library@main') _

pipeline {
    agent any
    stages {
        stage('CI') {
            steps {
                script {
                    buildAndPush(
                        registry:  'your-ecr-account.dkr.ecr.ap-south-1.amazonaws.com',
                        imageName: 'my-webapp',
                        tag:       "${env.BUILD_NUMBER}"
                    )
                }
            }
        }
    }
}
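For reference, the repository behind `@Library('my-shared-library')` must follow a fixed layout — global steps like `buildAndPush` live in `vars/`, classes in `src/`, static files in `resources/`. A sketch of scaffolding it (directory names are the Jenkins convention; file contents here are placeholders):

```shell
# scaffold the shared-library repo layout Jenkins expects
mkdir -p my-shared-library/vars my-shared-library/src/org/example my-shared-library/resources
cat > my-shared-library/vars/buildAndPush.groovy <<'EOF'
// global var: def call(Map config) { ... }  (the step defined earlier)
EOF
find my-shared-library | sort
```

The file name in `vars/` becomes the step name, which is why `vars/buildAndPush.groovy` is callable as `buildAndPush(...)` in any Jenkinsfile.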

πŸ› οΈ Chapter Projects

PROJECT 6.1

Jenkins as Code (JCasC)

Configure entire Jenkins setup (plugins, credentials, agents, jobs) via YAML using Configuration as Code plugin.

JCasC · YAML · GitOps
PROJECT 6.2

Jenkins Shared Library

Build a Groovy shared library with reusable functions for build, test, scan, and deploy β€” used across 10+ pipelines.

Groovy · Shared Library · DRY
PROJECT 6.3

Dynamic Agent Provisioning

Configure Jenkins to spin up ephemeral Docker agents on demand and destroy them after the build completes.

Docker Agents · Ephemeral · Scale
PROJECT 6.4

Kubernetes Jenkins Agents

Run Jenkins agents as Kubernetes pods using the Kubernetes plugin β€” auto-scale based on build queue.

Kubernetes · Pods · Auto-Scale
PROJECT 6.5

Pipeline as Code Migration

Migrate 20 freestyle Jenkins jobs to declarative Jenkinsfile-based pipelines with version-controlled config.

Migration · Declarative · GitOps

CH 07

GitHub Actions

GitHub's built-in CI/CD — minimal setup, maximum power

GitHub Actions is a native CI/CD system that integrates directly with GitHub repositories. There is no separate server to maintain and no plugins to install — just write one YAML file and the pipeline is ready! Public open-source projects even get free minutes with no cap on GitHub-hosted runners.

βš™οΈ GitHub Actions Core Concepts

  • Workflow β€” YAML file in .github/workflows/ directory
  • Event β€” trigger: push, pull_request, schedule, workflow_dispatch
  • Job β€” a set of steps that run on the same runner
  • Step β€” individual task (shell command or action)
  • Action β€” reusable unit (like actions/checkout@v4)
  • Runner β€” Ubuntu, Windows, macOS VMs provided by GitHub
  • Matrix β€” run same job across multiple configurations
  • Secrets β€” encrypted environment variables for credentials
yaml β€” .github/workflows/ci-cd.yml
name: πŸš€ CI/CD Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]
  workflow_dispatch:
    inputs:
      environment:
        description: 'Target environment'
        required: true
        default: 'staging'
        type: choice
        options: [staging, production]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  lint-and-test:
    name: πŸ” Lint & Test
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [18, 20, 22]
    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v4
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run ESLint
        run: npm run lint

      - name: Run tests with coverage
        run: npm test -- --coverage --coverageReporters=lcov

      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v4
        with:
          token: ${{ secrets.CODECOV_TOKEN }}

  security-scan:
    name: πŸ”’ Security Scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'

      - name: SAST with CodeQL (init)
        uses: github/codeql-action/init@v3
        with:
          languages: javascript
      # init alone produces no results — the analyze step is required
      - name: SAST with CodeQL (analyze)
        uses: github/codeql-action/analyze@v3

  build-and-push:
    name: 🐳 Build & Push Image
    runs-on: ubuntu-latest
    needs: [lint-and-test, security-scan]
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=sha,prefix=sha-
            type=semver,pattern={{version}}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy-staging:
    name: 🌐 Deploy to Staging
    runs-on: ubuntu-latest
    needs: build-and-push
    environment:
      name: staging
      url: https://staging.myapp.com
    steps:
      - uses: actions/checkout@v4
      - name: Deploy to staging via SSH
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.STAGING_HOST }}
          username: deploy
          key: ${{ secrets.STAGING_SSH_KEY }}
          script: |
            docker pull ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:sha-${{ github.sha }}
            docker compose -f /opt/myapp/docker-compose.yml up -d
            docker image prune -f

πŸ› οΈ Chapter Projects

PROJECT 7.1

Full-Stack App CI/CD

React frontend + Node.js backend CI/CD: parallel jobs, matrix testing, Docker build, deploy to AWS EC2.

React · Node.js · AWS EC2
PROJECT 7.2

Automated Release Pipeline

On git tag push, auto-generate changelog, create GitHub Release, build binaries for Linux/Mac/Windows.

Releases · GoReleaser · Changelog
PROJECT 7.3

Terraform Plan in PR Comments

On every PR, run terraform plan and post the output as a PR comment so reviewers see infrastructure changes.

Terraform · PR Comments · IaC
PROJECT 7.4

Custom GitHub Action

Build a reusable composite action that runs your company’s standard security + quality checks.

Custom Action · Composite · Reusable
PROJECT 7.5

Scheduled Database Backup

Use cron-triggered GitHub Actions workflow to backup PostgreSQL to S3 every night at 2 AM UTC.

Cron · PostgreSQL · S3

CH 08

Docker Integration in CI/CD

Kill the “works on my machine” problem for good

Docker revolutionized CI/CD. The environment is now packaged along with the application. Dockerfile → Image → Container → Registry → Deploy — this chain is the backbone of modern CI/CD.

🐳 Production-Grade Dockerfile

dockerfile β€” Multi-Stage Build (Node.js)
# Stage 1: Dependencies
FROM node:20-alpine AS deps
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force

# Stage 2: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 3: Test
FROM builder AS tester
RUN npm test -- --passWithNoTests

# Stage 4: Production Runtime (minimal)
FROM node:20-alpine AS production

# Security: run as non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser  -S nextjs -u 1001

WORKDIR /app

# Copy only what's needed
COPY --from=deps     --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --from=builder  --chown=nextjs:nodejs /app/dist        ./dist
COPY --from=builder  --chown=nextjs:nodejs /app/package.json ./

USER nextjs

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD wget -qO- http://localhost:3000/health || exit 1

EXPOSE 3000
ENV NODE_ENV=production

CMD ["node", "dist/server.js"]
bash β€” Docker in CI/CD Commands
# ── BUILD ─────────────────────────────────────────────────
# Multi-platform build (amd64 + arm64)
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --build-arg BUILD_DATE=$(date -u +%Y-%m-%dT%H:%M:%SZ) \
  --build-arg GIT_COMMIT=$(git rev-parse HEAD) \
  --target production \
  -t myapp:$(git rev-parse --short HEAD) \
  -t myapp:latest \
  --push \
  .

# ── SCAN ──────────────────────────────────────────────────
# Trivy vulnerability scan
trivy image --exit-code 1 \
  --severity HIGH,CRITICAL \
  --format table \
  myapp:latest

# Docker Scout (new Docker vulnerability tool)
docker scout cves myapp:latest

# ── TEST ──────────────────────────────────────────────────
# Run tests inside container
docker run --rm \
  -e NODE_ENV=test \
  -v $(pwd)/reports:/app/reports \
  myapp:latest \
  npm test

# ── PUSH TO ECR ───────────────────────────────────────────
AWS_REGION="ap-south-1"
AWS_ACCOUNT="123456789012"
ECR_REPO="${AWS_ACCOUNT}.dkr.ecr.${AWS_REGION}.amazonaws.com/myapp"

# Login to ECR
aws ecr get-login-password --region ${AWS_REGION} | \
  docker login --username AWS --password-stdin ${ECR_REPO}

# Tag and push
docker tag myapp:latest ${ECR_REPO}:latest
docker tag myapp:latest ${ECR_REPO}:$(git rev-parse --short HEAD)
docker push ${ECR_REPO}:latest
docker push ${ECR_REPO}:$(git rev-parse --short HEAD)

# ── CLEAN UP ──────────────────────────────────────────────
docker image prune -f
docker system prune --volumes -f  # CAUTION: removes all unused

πŸ› οΈ Chapter Projects

PROJECT 8.1

Dockerized Microservices CI

3 microservices (user, product, order) each with own Dockerfile, built and pushed in parallel CI pipeline.

Microservices · Parallel · ECR
PROJECT 8.2

Docker Image Size Optimization

Take a 1.2GB Docker image and optimize to under 80MB using multi-stage builds and Alpine base images.

Optimization · Alpine · Multi-Stage
PROJECT 8.3

Docker Compose E2E Testing

Spin up full stack (app + DB + Redis) using docker-compose in CI, run integration tests, tear down.

Docker Compose · E2E · Services
PROJECT 8.4

Private Registry with Harbor

Set up Harbor private container registry with RBAC, image signing, and vulnerability scanning integration.

Harbor · RBAC · Image Signing
PROJECT 8.5

Container Security Hardening

Apply CIS Docker Benchmark: non-root user, read-only filesystem, dropped capabilities, seccomp profiles.

Security · CIS · Hardening

CH 09

Kubernetes CI/CD

Container orchestration + GitOps = the future of production

Kubernetes (K8s) is not just a tool for running containers — it is a complete production platform. CI/CD with Kubernetes means: build the Docker image, update the Kubernetes manifests, and ArgoCD deploys automatically. That is GitOps, and it is where the industry is headed.
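The "update the Kubernetes manifests" step in that loop is often just a CI job rewriting the image tag in the GitOps repo and committing. A minimal sketch with `sed` (file and image names are illustrative; `yq` or `kustomize edit set image` are the more robust tools):

```shell
# bump the image tag in a manifest snippet, the way a CI job would
NEW_TAG="1.0.43"
cat > deployment-snippet.yaml <<'EOF'
          image: "ghcr.io/myorg/myapp:1.0.42"  # Updated by CI
EOF
sed -i.bak -E "s|(ghcr\.io/myorg/myapp:)[0-9.]+|\1${NEW_TAG}|" deployment-snippet.yaml
grep 'image:' deployment-snippet.yaml
# then: git commit + git push — ArgoCD notices the new desired state and syncs
```

Note that CI never touches the cluster here; its only write target is Git.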

☸️ Kubernetes Deployment Manifests

yaml β€” Kubernetes Deployment + Service + HPA
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  namespace: production
  labels:
    app: myapp
    version: "1.0.42"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: myapp
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "3000"
    spec:
      serviceAccountName: myapp-sa
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        fsGroup: 2000
      containers:
        - name: myapp
          image: "ghcr.io/myorg/myapp:1.0.42"  # Updated by CI
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: production
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: myapp-secrets
                  key: db-password
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
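How the HPA above picks a replica count is plain arithmetic — desiredReplicas = ceil(currentReplicas × currentMetric ÷ targetMetric), per the Kubernetes autoscaler algorithm. A quick sketch against the 70% CPU target (the helper name is ours):

```shell
# HPA scaling math: desired = ceil(current * observed / target)
hpa_desired() {
  awk -v cur="$1" -v obs="$2" -v tgt="$3" \
    'BEGIN { d = cur * obs / tgt; printf "%d\n", (d > int(d)) ? int(d) + 1 : int(d) }'
}
hpa_desired 3 140 70   # 3 pods averaging 140% of target CPU -> 6 replicas
hpa_desired 6 35 70    # load drops to 35% -> scale down to 3
```

With both CPU and memory metrics configured, the HPA computes a desired count per metric and takes the highest.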
yaml β€” ArgoCD Application (GitOps)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-production
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/myapp-k8s-config.git
    targetRevision: main
    path: overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true      # Delete resources removed from Git
      selfHeal: true   # Auto-fix manual changes in cluster
    syncOptions:
      - CreateNamespace=true
      - PrunePropagationPolicy=foreground
    retry:
      limit: 5
      backoff:
        duration: 5s
        maxDuration: 3m
        factor: 2
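The `retry` block above translates to a concrete wait schedule — start at 5s, double each attempt, cap at 3m. A small sketch to print it (the helper name is ours):

```shell
# print the exponential-backoff schedule implied by syncPolicy.retry
backoff_schedule() {
  awk -v init="$1" -v factor="$2" -v cap="$3" -v limit="$4" 'BEGIN {
    w = init
    for (a = 1; a <= limit; a++) {
      if (w > cap) w = cap
      printf "retry %d: wait %ds\n", a, w
      w *= factor
    }
  }'
}
backoff_schedule 5 2 180 5   # duration=5s, factor=2, maxDuration=3m, limit=5
```

With `limit: 5` the waits are 5s, 10s, 20s, 40s, 80s; only a longer limit ever hits the 3m cap.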

πŸ› οΈ Chapter Projects

PROJECT 9.1

Full GitOps with ArgoCD

Set up a complete GitOps workflow: CI builds image β†’ updates Helm values β†’ ArgoCD auto-syncs to cluster.

ArgoCD · GitOps · Helm
PROJECT 9.2

Canary with Argo Rollouts

Implement canary deployment: 10% β†’ analysis β†’ 50% β†’ analysis β†’ 100% with automatic rollback on errors.

Argo Rollouts · Canary · Analysis
PROJECT 9.3

Multi-Cluster CD Pipeline

Deploy same application to dev cluster (on-prem), staging (EKS), and production (GKE) using ArgoCD ApplicationSets.

Multi-Cluster · ApplicationSet · ArgoCD
PROJECT 9.4

Helm Chart for Microservices

Create a reusable Helm chart for all microservices, with environment-specific values files for each deployment target.

Helm · Templating · Reusable
PROJECT 9.5

K8s RBAC for CI/CD

Create service accounts, ClusterRoles, and RoleBindings with minimal permissions for CI agents to deploy safely.

RBAC · Security · Least Privilege

CH 10

Terraform + CI/CD

Infrastructure is code too — in version control, in the pipeline

CI/CD with Terraform means infrastructure changes are also tested and applied automatically. Nobody can go into the cloud console and change something directly — every change must come through Git and be validated by the pipeline. This is Infrastructure as Code + GitOps.

yaml β€” GitHub Actions: Terraform CI/CD
name: Terraform CI/CD

on:
  pull_request:
    paths: ['terraform/**']
  push:
    branches: [main]
    paths: ['terraform/**']

env:
  TF_VERSION: '1.7.0'
  TF_WORKING_DIR: terraform/environments/production

jobs:
  terraform-plan:
    name: πŸ“‹ Terraform Plan
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
      pull-requests: write
    steps:
      - uses: actions/checkout@v4

      - name: Configure AWS Credentials (OIDC)
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_TERRAFORM_ROLE }}
          aws-region: ap-south-1

      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}

      - name: Terraform Format Check
        run: terraform fmt -check -recursive
        working-directory: ${{ env.TF_WORKING_DIR }}

      - name: Terraform Init
        run: terraform init -backend-config=backends/prod.hcl
        working-directory: ${{ env.TF_WORKING_DIR }}

      - name: Terraform Validate
        run: terraform validate
        working-directory: ${{ env.TF_WORKING_DIR }}

      - name: tfsec Security Scan
        uses: aquasecurity/tfsec-action@v1.0.0
        with:
          working_directory: ${{ env.TF_WORKING_DIR }}

      - name: Terraform Plan
        id: plan
        run: |
          set -o pipefail  # without this, `tee` would mask a failed plan
          terraform plan \
            -var-file=vars/production.tfvars \
            -out=tfplan \
            -no-color 2>&1 | tee plan_output.txt
        working-directory: ${{ env.TF_WORKING_DIR }}

      - name: Comment Plan on PR
        uses: actions/github-script@v7
        if: github.event_name == 'pull_request'
        with:
          script: |
            const fs = require('fs');
            const plan = fs.readFileSync('${{ env.TF_WORKING_DIR }}/plan_output.txt', 'utf8');
            const maxLen = 65000;
            const truncated = plan.length > maxLen ? plan.substring(0, maxLen) + '\n...(truncated)' : plan;
            github.rest.issues.createComment({
              issue_number: context.issue.number,
              owner: context.repo.owner,
              repo: context.repo.repo,
              body: '## πŸ“‹ Terraform Plan\n```hcl\n' + truncated + '\n```'
            })

  terraform-apply:
    name: πŸš€ Terraform Apply
    runs-on: ubuntu-latest
    needs: terraform-plan
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    environment:
      name: production-infra
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_TERRAFORM_ROLE }}
          aws-region: ap-south-1
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v3
        with:
          terraform_version: ${{ env.TF_VERSION }}
      - name: Terraform Init & Apply
        run: |
          terraform init -backend-config=backends/prod.hcl
          terraform apply -var-file=vars/production.tfvars -auto-approve
        working-directory: ${{ env.TF_WORKING_DIR }}

πŸ› οΈ Chapter Projects

PROJECT 10.1

AWS EKS via Terraform + CI

Provision EKS cluster, VPC, node groups, and IAM roles using Terraform with full CI/CD pipeline.

EKS · Terraform · AWS
PROJECT 10.2

Atlantis for Terraform

Deploy Atlantis to auto-plan on PR and auto-apply on merge β€” full self-service Terraform workflow.

Atlantis · Self-Service · GitOps
PROJECT 10.3

Terraform Workspace Strategy

Use Terraform workspaces to manage dev/staging/prod environments from a single codebase.

Workspaces · DRY · Multi-Env
PROJECT 10.4

Drift Detection Pipeline

Scheduled pipeline that runs terraform plan and alerts if infrastructure has drifted from the desired state.

Drift Detection · Scheduled · Alerts
PROJECT 10.5

Terratest β€” Infrastructure Tests

Write Go-based Terratest tests that validate Terraform modules actually create the expected AWS resources.

Terratest · Go · Testing

CH 11

Monitoring & Logging in CI/CD

Deploying isn't enough — you also need to know everything is running fine

The real work starts after the deploy: monitoring. Prometheus + Grafana + the ELK Stack + Jaeger — this combo shows you what is actually happening in production. A good DevOps engineer never “deploys and forgets” — they observe, set alerts, and proactively improve.

yaml β€” Prometheus Alert Rules for CI/CD
groups:
  - name: cicd-alerts
    rules:
      - alert: HighDeploymentFailureRate
        expr: |
          rate(jenkins_builds_failed_total[5m]) /
          rate(jenkins_builds_total[5m]) > 0.2
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: "High build failure rate detected"
          description: "{{ $value | humanizePercentage }} of builds failing"

      - alert: AppErrorRateHigh
        expr: |
          rate(http_requests_total{status=~"5.."}[5m]) /
          rate(http_requests_total[5m]) > 0.05
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Application error rate > 5%"
          runbook_url: https://wiki.company.com/runbooks/high-error-rate

      - alert: PodCrashLooping
        expr: |
          increase(kube_pod_container_status_restarts_total[30m]) > 5
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Pod {{ $labels.pod }} is crash looping"
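To make the AppErrorRateHigh ratio concrete, plug in sample request rates — the PromQL reduces to one division against the 0.05 threshold (numbers below are invented for illustration):

```shell
# error ratio = rate(5xx requests) / rate(all requests); alert fires above 0.05
error_ratio() {
  awk -v e="$1" -v t="$2" 'BEGIN { printf "%.3f\n", e / t }'
}
error_ratio 3 40   # 3 rps of 5xx out of 40 rps total -> 0.075, alert fires
error_ratio 1 40   # 0.025 -> within budget
```

The `for: 1m` clause means the ratio must stay above the threshold for a full minute before the alert actually fires.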

πŸ› οΈ Chapter Projects

PROJECT 11.1

Prometheus + Grafana Stack

Deploy full Prometheus stack on Kubernetes using kube-prometheus-stack Helm chart with pre-built dashboards.

Prometheus · Grafana · Helm
PROJECT 11.2

ELK Stack Centralized Logging

Ship logs from all containers to Elasticsearch via Fluentd. Build Kibana dashboards for error analysis.

ELK · Fluentd · Kibana
PROJECT 11.3

Distributed Tracing with Jaeger

Add OpenTelemetry instrumentation to microservices. Visualize request traces across 5 services in Jaeger UI.

Jaeger · OpenTelemetry · Tracing
PROJECT 11.4

PagerDuty Incident Automation

Auto-create PagerDuty incidents from Prometheus alerts and auto-resolve when metrics return to normal.

PagerDuty · AlertManager · Incident
PROJECT 11.5

SLO Dashboard

Define SLOs (99.9% availability, P95 latency <200ms) and build Grafana dashboard tracking error budget burn rate.

SLO · Error Budget · SRE

CH 12

Security in CI/CD (DevSecOps)

Shift security left into the pipeline — not later, now

DevSecOps = Development + Security + Operations. Integrate security into every pipeline stage, not at the end. This is “shift-left security” — the earlier you find a bug, the cheaper it is to fix. A security issue in production means an incident, legal exposure, a possible data breach. In the pipeline, it means one failed build.

🚨

Critical: Never hardcode secrets (passwords, API keys, tokens) in code! A secret pushed to Git history even once is practically leaked — even if you delete it later.

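This is easy to verify yourself — in a throwaway repo, a secret committed once remains recoverable after the file is deleted, which is exactly the history that scanners like TruffleHog walk:

```shell
# demo: deleting a committed secret does NOT scrub it from git history
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email demo@example.com && git config user.name demo
echo "API_KEY=sk_live_supersecret123" > .env
git add .env && git commit -qm "oops: committed a secret"
git rm -q .env && git commit -qm "remove secret"
test ! -f .env && echo "file gone from working tree"
git log -p --all | grep supersecret123   # the key is still right there
```

Truly removing it requires rewriting history (e.g. `git filter-repo`) *and* rotating the credential — assume anything pushed is compromised.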
yaml β€” GitHub Actions: Complete Security Pipeline
name: πŸ”’ Security Pipeline

on: [push, pull_request]

jobs:
  secrets-scan:
    name: πŸ”‘ Secrets Detection
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history for git-secrets
      - name: TruffleHog Secrets Scan
        uses: trufflesecurity/trufflehog@main
        with:
          path: ./
          base: ${{ github.event.repository.default_branch }}
          extra_args: --only-verified

  sast:
    name: πŸ”¬ SAST Analysis
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: CodeQL Analysis
        uses: github/codeql-action/init@v3
        with:
          languages: javascript, python
      - name: Autobuild
        uses: github/codeql-action/autobuild@v3
      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v3

  dependency-scan:
    name: πŸ“¦ Dependency Audit
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run npm audit
        run: npm audit --audit-level=high
      - name: OWASP Dependency-Check
        uses: dependency-check/Dependency-Check_Action@main
        with:
          project: myapp
          path: .
          format: HTML
          args: >
            --enableRetired
            --failOnCVSS 7

  container-scan:
    name: 🐳 Container Scan
    runs-on: ubuntu-latest
    needs: [secrets-scan, sast]
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t scan-target:latest .
      - name: Run Trivy container scan
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: scan-target:latest
          severity: 'CRITICAL,HIGH'
          exit-code: '1'
          format: sarif
          output: trivy-results.sarif
      - name: Upload scan results
        uses: github/codeql-action/upload-sarif@v3
        with:
          sarif_file: trivy-results.sarif

  iac-scan:
    name: πŸ—οΈ IaC Security
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Checkov IaC Scan (Terraform + K8s)
        uses: bridgecrewio/checkov-action@master
        with:
          directory: .
          framework: terraform,kubernetes,dockerfile
          soft_fail: false

πŸ› οΈ Chapter Projects

PROJECT 12.1

HashiCorp Vault Secrets

Integrate Vault with Jenkins/GitHub Actions β€” dynamic secrets injection, no hardcoded credentials anywhere.

Vault · Secrets · Dynamic
PROJECT 12.2

Supply Chain Security (SLSA)

Implement SLSA Level 3: generate SBOM, sign artifacts with Sigstore/Cosign, verify signatures on deploy.

SLSA · SBOM · Sigstore
PROJECT 12.3

OPA Policy as Code

Use Open Policy Agent to enforce security policies: no root containers, required labels, allowed registries only.

OPA · Gatekeeper · Policy
PROJECT 12.4

DAST with OWASP ZAP

Automated DAST: deploy to staging, run OWASP ZAP scan against it, parse results, block deploy on critical findings.

DAST · OWASP ZAP · Dynamic
PROJECT 12.5

Compliance as Code

InSpec tests that validate deployed infrastructure meets CIS benchmarks β€” runs as a pipeline stage.

InSpec · CIS · Compliance

CH 13

Real Production Pipelines

Theory done — now let's look at real-world pipelines

In this chapter we look at how actual companies structure their CI/CD pipelines — the architecture, challenges, and solutions at Netflix, Spotify, Uber. Plus, we build a complete end-to-end production pipeline from scratch.

🏭 Enterprise Pipeline Architecture

ENTERPRISE CI/CD PIPELINE (E-Commerce Platform)
═══════════════════════════════════════════════

Developer pushes to feature branch
        │
        ▼
┌───────────────────────────────────────────────┐
│  PULL REQUEST CHECKS                (5-8 min) │
│  ├── Secrets scan (TruffleHog)                │
│  ├── Lint + Format check                      │
│  ├── Unit tests (parallel, 3 shards)          │
│  ├── Type check (TypeScript)                  │
│  └── Code review (2 approvals required)       │
└───────────────────┬───────────────────────────┘
                    │ PR merged to main
                    ▼
┌───────────────────────────────────────────────┐
│  CI BUILD PIPELINE                (12-15 min) │
│  ├── Full test suite (unit + integration)     │
│  ├── SonarQube quality gate                   │
│  ├── Docker multi-arch build (amd64+arm64)    │
│  ├── Trivy container scan                     │
│  ├── SBOM generation + signing                │
│  └── Push to ECR + Nexus                      │
└───────────────────┬───────────────────────────┘
                    │ auto-trigger
                    ▼
┌───────────────────────────────────────────────┐
│  STAGING DEPLOY (ArgoCD, auto)        (3 min) │
│  ├── Helm upgrade to staging namespace        │
│  ├── Smoke tests (curl health endpoints)      │
│  ├── E2E tests (Playwright headless)          │
│  ├── Performance baseline (k6 load test)      │
│  └── Slack: "staging.myapp.com updated ✅"    │
└───────────────────┬───────────────────────────┘
                    │ manual approval gate
                    ▼
┌───────────────────────────────────────────────┐
│  PRODUCTION DEPLOY (Argo Rollouts)   (15 min) │
│  ├── Canary: 5% traffic                       │
│  ├── Prometheus metric analysis (5 min)       │
│  ├── Canary: 25% → 50% → 100%                 │
│  ├── Auto-rollback if error rate > 1%         │
│  └── PagerDuty: deployment event logged       │
└───────────────────────────────────────────────┘
yaml β€” Production Argo Rollout with Analysis
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: myapp-rollout
  namespace: production
spec:
  replicas: 10
  strategy:
    canary:
      canaryService: myapp-canary
      stableService: myapp-stable
      trafficRouting:
        istio:
          virtualService:
            name: myapp-vsvc
      analysis:
        templates:
          - templateName: success-rate
        startingStep: 2
        args:
          - name: service-name
            value: myapp-canary
      steps:
        - setWeight: 5
        - pause: {duration: 2m}
        - setWeight: 20
        - pause: {duration: 5m}
        - setWeight: 50
        - pause: {duration: 5m}
        - setWeight: 100
---
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  args:
    - name: service-name
  metrics:
    - name: success-rate
      interval: 1m
      successCondition: result[0] >= 0.99
      failureLimit: 3
      provider:
        prometheus:
          address: http://prometheus:9090
          query: |
            sum(rate(http_requests_total{
              service="{{args.service-name}}",
              status!~"5.."
            }[2m])) /
            sum(rate(http_requests_total{
              service="{{args.service-name}}"
            }[2m]))

πŸ› οΈ Chapter Projects

PROJECT 13.1

E-Commerce Full Pipeline

Build complete CI/CD for a 5-service e-commerce app: auth, product, cart, order, notification. Full pipeline end-to-end.

Microservices · Full Pipeline · Production
PROJECT 13.2

SaaS Multi-Tenant Deploy

Deploy a multi-tenant SaaS app where each customer gets an isolated namespace with separate configs and secrets.

Multi-Tenant · Namespaces · Isolation
PROJECT 13.3

ML Model CI/CD with DVC

CI/CD for machine learning: data versioning with DVC, model training, evaluation gates, and model serving deploy.

MLOps · DVC · Model Serving
PROJECT 13.4

Disaster Recovery Pipeline

Build a DR pipeline: nightly backup, cross-region replication, and 15-minute RTO restore with automated testing.

DR · Backup · RTO
PROJECT 13.5

Cost-Optimized Spot Pipeline

Run CI jobs on AWS Spot instances β€” up to 70% cost reduction with intelligent fallback to on-demand.

Spot Instances · Cost · AWS

CH 14

Interview Preparation

Want the job? These questions will come up — guaranteed

DevOps interviews test both theory and practical skills. Top companies — Amazon, Google, Microsoft, Flipkart, Infosys, TCS, Wipro — all draw from a common question bank. Let's look at what gets asked and how to give a strong answer.

🎯 Top Interview Questions with Answers

πŸ’¬ Q1: What’s the difference between CI and CD? β–Ό

Answer: CI (Continuous Integration) is the practice of automatically building and testing code every time a developer pushes a change. The goal is to detect integration errors quickly.

CD has two meanings: Continuous Delivery means the code is always in a deployable state and can be released to production at any time with a manual trigger. Continuous Deployment goes one step further β€” every passing change is automatically deployed to production without manual intervention.

Analogy: Think of a restaurant. CI = kitchen quality check on every dish before it leaves the kitchen. CD (Delivery) = dish is ready and can be sent to table anytime. CD (Deployment) = dish is automatically delivered the moment it’s ready.

πŸ’¬ Q2: Explain a Jenkins pipeline you’ve built in production. β–Ό

STAR Format Answer:

Situation: Our team had a manual deployment process that took 4 hours and had a 30% failure rate due to human errors.

Task: I was tasked with building a fully automated CI/CD pipeline for our Java Spring Boot microservices.

Action: I built a Jenkins declarative pipeline with stages: Checkout β†’ Maven Build β†’ JUnit Tests (with JaCoCo coverage gate at 80%) β†’ SonarQube analysis β†’ Docker multi-stage build β†’ Trivy scan β†’ push to Nexus β†’ deploy to K8s via Helm. Used Jenkins Shared Library for reusability across 12 microservices.

Result: Deployment time reduced from 4 hours to 18 minutes. Failure rate dropped to under 3%. Team could deploy 10+ times per day vs once per week.

πŸ’¬ Q3: What is GitOps and how does ArgoCD implement it? β–Ό

Answer: GitOps is an operational framework where Git is the single source of truth for both application code AND infrastructure configuration. Any desired-state change must go through a Git commit; you never run kubectl apply by hand or apply Terraform changes outside the pipeline.

ArgoCD implements GitOps by continuously watching a Git repository (containing Helm charts / K8s manifests). When it detects a difference between the desired state (Git) and the actual state (cluster), it automatically reconciles β€” either alerting or auto-syncing. selfHeal: true means if someone manually changes something in the cluster, ArgoCD reverts it back to what Git says.
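To make the selfHeal behavior concrete, here is a minimal sketch of an ArgoCD Application manifest with automated sync enabled. The app name, repo URL, and chart path are placeholders, not taken from this course's repos:

```yaml
# Hypothetical ArgoCD Application; names, repo URL, and paths are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myapp-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/myapp-k8s-config.git
    targetRevision: main
    path: charts/myapp
    helm:
      valueFiles:
        - values/staging.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: staging
  syncPolicy:
    automated:
      prune: true      # delete cluster resources that were removed from Git
      selfHeal: true   # revert any manual cluster drift back to the Git state
    syncOptions:
      - CreateNamespace=true
```

With `automated` plus `selfHeal: true`, a manual `kubectl edit` in the cluster is detected as drift and reconciled back within the sync interval.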

πŸ’¬ Q4: How do you handle secrets in CI/CD pipelines? β–Ό

Answer: Secrets must NEVER be hardcoded in code or Dockerfiles. Here’s the layered approach I use:

  • CI-scoped: GitHub Actions Secrets / Jenkins Credentials for pipeline variables
  • Kubernetes: Sealed Secrets or External Secrets Operator syncing from AWS Secrets Manager / HashiCorp Vault
  • Dynamic secrets: HashiCorp Vault generates short-lived database credentials per-service
  • Detection: TruffleHog / gitleaks in pre-commit hooks AND as a CI pipeline stage
  • Rotation: Secrets are rotated every 90 days, automated via Vault + AWS Secrets Manager
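As an illustration of the Kubernetes layer above, a minimal External Secrets Operator manifest syncing a database password from AWS Secrets Manager might look like this. The store name, secret path, and keys are hypothetical:

```yaml
# Hypothetical ExternalSecret; store name, secret path, and keys are placeholders.
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: myapp-db-credentials
  namespace: production
spec:
  refreshInterval: 1h            # re-sync from AWS Secrets Manager every hour
  secretStoreRef:
    name: aws-secrets-manager    # a ClusterSecretStore configured with IAM access
    kind: ClusterSecretStore
  target:
    name: myapp-db-credentials   # the K8s Secret the operator creates/updates
  data:
    - secretKey: DB_PASSWORD
      remoteRef:
        key: prod/myapp/db
        property: password
```

The plaintext secret never touches Git; only this reference manifest is committed, and rotation in Secrets Manager propagates automatically on the next refresh.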
πŸ’¬ Q5: Blue-Green vs Canary vs Rolling β€” when to use what? β–Ό

Rolling Update: Gradually replace old pods with new ones. Good for stateless apps with backward-compatible changes. Minimal resource overhead. Risk: if the new version has bugs, some users see the old version and some see the new one during the rollout.

Blue-Green: Run two identical environments. Switch traffic instantly. Perfect for major version changes needing instant rollback. Downside: 2x resource cost. Use for: DB schema migrations, major UI changes, compliance-critical deployments.

Canary: Route small % of traffic to new version, monitor metrics, gradually increase. Best for high-traffic production systems where you want real-world validation before full rollout. Use with Argo Rollouts + Prometheus analysis for automated decisions.

My recommendation: For stateless microservices β†’ Canary. For batch/data processing β†’ Blue-Green. For low-risk config changes β†’ Rolling.
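The canary flow described above can be sketched as an Argo Rollouts manifest fragment. The step weights and the `success-rate` analysis template name are illustrative; the template is assumed to exist separately, backed by a Prometheus error-rate query:

```yaml
# Hypothetical Rollout sketch; app name and analysis template are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: myapp
spec:
  replicas: 10
  selector:
    matchLabels: {app: myapp}
  template:
    metadata:
      labels: {app: myapp}
    spec:
      containers:
        - name: myapp
          image: myapp:stable
  strategy:
    canary:
      steps:
        - setWeight: 5             # send 5% of traffic to the new version
        - pause: {duration: 5m}    # let metrics accumulate
        - analysis:
            templates:
              - templateName: success-rate   # Prometheus-backed AnalysisTemplate
        - setWeight: 25
        - pause: {duration: 5m}
        - setWeight: 50
        - pause: {duration: 5m}    # full 100% promotion after the final step
```

If the analysis run fails its thresholds, the Rollout aborts and traffic shifts back to the stable version automatically, which is what makes canary the safe default for stateless microservices.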

πŸ“ Common DevOps Interview Topics

πŸ™

Git

Rebase vs merge, cherry-pick, conflict resolution, branching strategies.

πŸ—οΈ

Jenkins

Declarative vs scripted, shared libraries, agents, plugins, JCasC.

🐳

Docker

Multi-stage builds, networking, volumes, security, image optimization.

☸️

Kubernetes

Deployments, services, ingress, RBAC, HPA, PVC, network policies.

πŸ”οΈ

Terraform

State management, modules, workspaces, import, taint, drift.

☁️

AWS

EKS, ECR, ECS, IAM roles, VPC, ALB, Route53, CloudWatch.

πŸ“Š

Monitoring

Prometheus, Grafana, ELK, alerting, SLO/SLA, DORA metrics.

πŸ”’

Security

DevSecOps, SAST/DAST, secrets management, RBAC, compliance.


CH 15

Final Capstone Project

Everything in one pipeline: build this, then put it in your portfolio

This is the final capstone project: a production-grade, end-to-end CI/CD system that uses concepts from every chapter. Build it, push it to GitHub, and present it confidently in interviews. Companies hire on the strength of projects like this!

πŸ† Capstone: Full-Stack E-Commerce Platform CI/CD

CAPSTONE ARCHITECTURE
══════════════════════════════════════════════════════════
REPOSITORIES (GitHub):
  myapp-frontend/       → React + TypeScript
  myapp-backend/        → Node.js + Express + PostgreSQL
  myapp-k8s-config/     → GitOps repo (Helm values)
  myapp-infrastructure/ → Terraform (AWS EKS, RDS, ElastiCache)

CI PIPELINE (GitHub Actions) per service:
  Trigger: push to main / PR
  Jobs:
  ├── secrets-scan (TruffleHog)
  ├── lint-test (ESLint + Jest + Coverage ≥ 85%)
  ├── sonarqube (Quality gate: no critical issues)
  ├── docker-build (Multi-stage, multi-arch)
  ├── trivy-scan (CRITICAL/HIGH = fail)
  ├── sbom-sign (Syft SBOM + Cosign signing)
  └── push-ecr (tag: git sha + semver)

CD PIPELINE (ArgoCD + Argo Rollouts):
  myapp-k8s-config repo updated → ArgoCD detects
  ├── Staging: auto-sync → E2E tests (Playwright)
  └── Production: canary rollout 5% → analysis → 25% → 50% → 100%
      Analysis: error rate < 0.5%, p95 < 300ms

INFRASTRUCTURE (Terraform):
  ├── AWS EKS (3 node groups: system, app, spot)
  ├── RDS PostgreSQL (Multi-AZ, encrypted)
  ├── ElastiCache Redis (cluster mode)
  ├── ALB + WAF + ACM certificates
  ├── Route53 + CloudFront (CDN)
  └── S3 (Terraform state + backups)

MONITORING:
  ├── Prometheus + Grafana (kube-prometheus-stack)
  ├── ELK Stack (application logs)
  ├── Jaeger (distributed tracing)
  └── PagerDuty (on-call alerts)
bash β€” Capstone Bootstrap Script
#!/bin/bash
# bootstrap.sh β€” Capstone project full setup
set -euo pipefail

echo "πŸš€ Starting Capstone CI/CD Platform Setup..."

# ── 1. INFRASTRUCTURE ──────────────────────────────────
echo "πŸ“¦ Provisioning AWS Infrastructure with Terraform..."
cd infrastructure/terraform

# Initialize with remote state (S3 backend)
terraform init \
  -backend-config="bucket=myapp-terraform-state" \
  -backend-config="key=prod/terraform.tfstate" \
  -backend-config="region=ap-south-1"

terraform workspace select production || terraform workspace new production
terraform plan -var-file=vars/production.tfvars -out=tfplan
terraform apply tfplan
cd ../..   # back to the repo root; the kubectl/helm paths below are relative to it

# Get EKS kubeconfig
aws eks update-kubeconfig \
  --name myapp-production \
  --region ap-south-1

echo "βœ… EKS cluster ready!"

# ── 2. KUBERNETES BOOTSTRAP ────────────────────────────
echo "☸️  Installing core Kubernetes components..."

# Create namespaces
kubectl create namespace production  --dry-run=client -o yaml | kubectl apply -f -
kubectl create namespace staging     --dry-run=client -o yaml | kubectl apply -f -
kubectl create namespace monitoring  --dry-run=client -o yaml | kubectl apply -f -
kubectl create namespace argocd      --dry-run=client -o yaml | kubectl apply -f -

# Install ArgoCD
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Wait for ArgoCD to be ready
kubectl wait --for=condition=available --timeout=300s \
  deployment/argocd-server -n argocd

# Install Argo Rollouts
kubectl create namespace argo-rollouts --dry-run=client -o yaml | kubectl apply -f -
kubectl apply -n argo-rollouts \
  -f https://github.com/argoproj/argo-rollouts/releases/latest/download/install.yaml

# Install monitoring stack
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm upgrade --install kube-prometheus prometheus-community/kube-prometheus-stack \
  --namespace monitoring \
  --values monitoring/prometheus-values.yaml \
  --wait

# Install External Secrets Operator (for AWS Secrets Manager)
helm repo add external-secrets https://charts.external-secrets.io
helm upgrade --install external-secrets external-secrets/external-secrets \
  -n external-secrets-system \
  --create-namespace \
  --wait

echo "βœ… Kubernetes platform ready!"

# ── 3. ARGOCD APPLICATIONS ─────────────────────────────
echo "πŸ”„ Configuring ArgoCD Applications..."

# Apply ArgoCD app-of-apps pattern
kubectl apply -f argocd/app-of-apps.yaml

# Get ArgoCD admin password
ARGOCD_PASS=$(kubectl -n argocd get secret argocd-initial-admin-secret \
  -o jsonpath="{.data.password}" | base64 -d)

echo ""
echo "══════════════════════════════════════════════"
echo " πŸŽ‰ CAPSTONE PLATFORM DEPLOYED SUCCESSFULLY!"
echo "══════════════════════════════════════════════"
echo " ArgoCD UI:    https://$(kubectl get svc argocd-server -n argocd -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')"
echo " ArgoCD Pass:  ${ARGOCD_PASS}"
echo " Grafana URL:  https://grafana.myapp.com"
echo " App URL:      https://myapp.com"
echo "══════════════════════════════════════════════"
yaml β€” Capstone: Complete GitHub Actions Pipeline
name: πŸš€ Capstone CI/CD Pipeline

on:
  push:
    branches: [main]
    tags: ['v*.*.*']
  pull_request:
    branches: [main]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  # ─── Quality Gate ──────────────────────────────
  quality:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with: {fetch-depth: 0}
      - uses: actions/setup-node@v4
        with: {node-version: '20', cache: 'npm'}
      - run: npm ci
      - run: npm run lint && npm run type-check
      - run: npm test -- --coverage
      - name: SonarQube Scan
        uses: SonarSource/sonarcloud-github-action@master
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SONAR_TOKEN: ${{ secrets.SONAR_TOKEN }}

  # ─── Security ──────────────────────────────────
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with: {fetch-depth: 0}
      - uses: trufflesecurity/trufflehog@main
        with: {path: ./, base: main}
      - uses: github/codeql-action/init@v3
        with: {languages: javascript}
      - uses: github/codeql-action/autobuild@v3
      - uses: github/codeql-action/analyze@v3

  # ─── Build & Push ──────────────────────────────
  build:
    needs: [quality, security]
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    outputs:
      image-tag: ${{ steps.meta.outputs.version }}
      image-digest: ${{ steps.push.outputs.digest }}
    steps:
      - uses: actions/checkout@v4
      - uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ${{ secrets.AWS_CI_ROLE }}
          aws-region: ap-south-1
      - uses: docker/setup-buildx-action@v3
      - id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ steps.login-ecr.outputs.registry }}/myapp
          tags: |
            type=sha,prefix=,format=short
            type=semver,pattern={{version}}
            type=raw,value=latest,enable={{is_default_branch}}
      - id: push
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
      - name: Trivy scan on pushed image
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: ${{ steps.login-ecr.outputs.registry }}/myapp:${{ steps.meta.outputs.version }}
          severity: CRITICAL,HIGH
          exit-code: '1'

  # ─── Update K8s Config ─────────────────────────
  update-gitops:
    needs: build
    runs-on: ubuntu-latest
    if: github.event_name != 'pull_request'
    steps:
      - name: Checkout k8s config repo
        uses: actions/checkout@v4
        with:
          repository: myorg/myapp-k8s-config
          token: ${{ secrets.GITOPS_PAT }}
      - name: Update image tag in Helm values
        run: |
          sed -i "s|tag:.*|tag: ${{ needs.build.outputs.image-tag }}|" \
            charts/myapp/values/staging.yaml
          git config user.name  "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          # Commit only if the tag actually changed, so re-runs don't fail
          if ! git diff --quiet; then
            git commit -am "ci: update image to ${{ needs.build.outputs.image-tag }}"
            git push
          fi
          echo "✅ GitOps repo updated — ArgoCD will auto-sync to staging"

πŸ› οΈ Capstone Sub-Projects

CAPSTONE A

Infrastructure Provisioning

Terraform: provision EKS, RDS, ElastiCache, ALB, Route53, CloudFront, S3, IAM with OIDC federation.

Terraform · AWS · EKS
CAPSTONE B

Platform Engineering

Install and configure: ArgoCD, Argo Rollouts, Prometheus, Grafana, ELK, Jaeger, External Secrets, Ingress-NGINX.

Platform · Helm · Operators
CAPSTONE C

Application CI Pipelines

GitHub Actions pipelines for frontend (React) and backend (Node.js) with full security and quality gates.

GitHub Actions · Docker · ECR
CAPSTONE D

GitOps CD with ArgoCD

ArgoCD App-of-Apps pattern managing all services across staging and production with automated canary rollouts.

ArgoCD · GitOps · Canary
CAPSTONE E

Observability Stack

Full-stack observability: metrics β†’ Prometheus/Grafana, logs β†’ ELK, traces β†’ Jaeger, alerts β†’ PagerDuty.

Observability · SLO · On-Call
πŸŽ“

Course Complete! Build this capstone project, push it to GitHub, and link it on your portfolio site. Companies look at GitHub directly: one good CI/CD project is more powerful than 100 resumes. You're a DevOps engineer now, seriously! 🚀

πŸ“¬

Next Steps: Go for the CKA (Certified Kubernetes Administrator), AWS Certified DevOps Engineer Professional, and HashiCorp Terraform Associate certifications. After this course, preparing for them will feel much easier. All the best! 💪