🐳 What is Docker?
📖 Definition
Docker is a platform that runs applications in isolated environments called containers. Containers package an application with all its dependencies, ensuring it runs identically anywhere. It solves the "works on my machine..." problem and makes development, testing, and deployment simple and consistent.
🎯 Understanding Through Analogies
Shipping Containers
Think of Docker like shipping containers in logistics:
Traditional Shipping (VM)
├─ Different packaging for each item
├─ Varying sizes/weights
├─ Different transportation methods
└─ Inefficient, complex
Container Shipping (Docker)
├─ Standardized containers
├─ Can contain any goods
├─ Ship, truck, train all compatible
└─ Efficient, simple
Docker Container = Standardized software package
Apartments vs Houses
Virtual Machine (VM) = Detached House
├─ Complete infrastructure for each
├─ Land, building, utilities all separate
├─ Expensive
└─ Slow startup
Docker Container = Apartment
├─ Shared infrastructure (land, building)
├─ Each unit independent
├─ Efficient
└─ Quick move-in
⚙️ How It Works
1. Docker Architecture
┌─────────────────────────────────────┐
│         Docker Client (CLI)         │
│    docker run, docker build, etc.   │
└────────────────┬────────────────────┘
                 │ API calls
┌────────────────▼────────────────────┐
│            Docker Daemon            │
│     Container/Image Management      │
└────────────────┬────────────────────┘
                 │
┌────────────────▼────────────────────┐
│      Containers (Running Apps)      │
│   ┌─────┐   ┌─────┐   ┌─────┐       │
│   │App 1│   │App 2│   │App 3│       │
│   └─────┘   └─────┘   └─────┘       │
└────────────────┬────────────────────┘
                 │
┌────────────────▼────────────────────┐
│        Host OS (Linux Kernel)       │
└─────────────────────────────────────┘
2. Image vs Container
Image = Blueprint, Class
├─ Read-only
├─ Layered structure
├─ Reusable
└─ Example: ubuntu, node, nginx
Container = Running Instance, Object
├─ Created from image
├─ Executable
├─ Isolated environment
└─ Multiple instances possible
Relationship:
Image → Container 1
      → Container 2
      → Container 3
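The one-image, many-containers relationship can be tried directly on the command line. This is a minimal sketch, assuming Docker is installed and the public nginx image is available; the container names are illustrative:

```shell
# Start three independent containers from the same nginx image,
# each mapped to a different host port
docker run -d --name web1 -p 8081:80 nginx
docker run -d --name web2 -p 8082:80 nginx
docker run -d --name web3 -p 8083:80 nginx

# List only containers created from the nginx image
docker ps --filter "ancestor=nginx"

# Stop and remove all three in one step
docker rm -f web1 web2 web3
```

Each container gets its own filesystem and process space, while the read-only image layers underneath are shared.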
3. Dockerfile → Image → Container
1. Write Dockerfile (Recipe)
├─ FROM node:18
├─ COPY package.json .
├─ RUN npm install
└─ CMD ["npm", "start"]
2. Build Image (Cooking)
docker build -t my-app .
3. Run Container (Serving)
docker run -p 3000:3000 my-app
💡 Practical Examples
Writing a Dockerfile
# Node.js Application Dockerfile
# Base image
FROM node:18-alpine
# Set working directory
WORKDIR /app
# Copy package.json
COPY package*.json ./
# Install dependencies
RUN npm ci --only=production
# Copy source code
COPY . .
# Expose port
EXPOSE 3000
# Environment variables
ENV NODE_ENV=production
# Command to run on container start
CMD ["node", "server.js"]
Building and Running Images
# 1. Build image
docker build -t my-node-app:1.0 .
# -t: tag (name:version)
# .: use Dockerfile in current directory
# 2. Check images
docker images
# REPOSITORY TAG IMAGE ID CREATED
# my-node-app 1.0 abc123 2 minutes ago
# 3. Run container
docker run -d \
  --name my-app \
  -p 3000:3000 \
  -e NODE_ENV=production \
  my-node-app:1.0
# -d: run in background
# --name: container name
# -p: port mapping (host:container)
# -e: environment variable
# 4. Check running containers
docker ps
# 5. View logs
docker logs my-app
# 6. Access container shell
docker exec -it my-app sh
# 7. Stop container
docker stop my-app
# 8. Remove container
docker rm my-app
# 9. Remove image
docker rmi my-node-app:1.0
Docker Compose (Managing Multiple Containers)
# docker-compose.yml
version: '3.8'
services:
  # Web application
  web:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=production
      - DB_HOST=database
    depends_on:
      - database
      - redis
    volumes:
      - ./logs:/app/logs
  # Database
  database:
    image: mysql:8.0
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_DATABASE=myapp
    volumes:
      - db-data:/var/lib/mysql
    ports:
      - "3306:3306"
  # Cache
  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
  # Nginx (Reverse Proxy)
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - web
volumes:
  db-data:
# Docker Compose commands
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f web
# Restart specific service
docker-compose restart web
# Scale service
docker-compose up -d --scale web=3
# Stop and remove all services
docker-compose down
# Remove with volumes
docker-compose down -v
Real-World Example: React + Node.js + MongoDB
# frontend/Dockerfile
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Serve static files with Nginx
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
# backend/Dockerfile
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production
COPY . .
EXPOSE 5000
CMD ["node", "server.js"]
# docker-compose.yml
version: '3.8'
services:
  frontend:
    build:
      context: ./frontend
    ports:
      - "80:80"
    depends_on:
      - backend
  backend:
    build:
      context: ./backend
    ports:
      - "5000:5000"
    environment:
      - MONGO_URL=mongodb://mongodb:27017/myapp
    depends_on:
      - mongodb
  mongodb:
    image: mongo:6
    volumes:
      - mongo-data:/data/db
    ports:
      - "27017:27017"
volumes:
  mongo-data:
Managing Data Volumes
# Create volume
docker volume create my-data
# List volumes
docker volume ls
# Mount volume to container
docker run -v my-data:/app/data my-app
# Mount host directory (useful for development)
docker run -v $(pwd):/app my-app
# Read-only mount
docker run -v $(pwd):/app:ro my-app
# Remove volume
docker volume rm my-data
# Remove all unused volumes
docker volume prune
.dockerignore File
# .dockerignore
# Git
.git
.gitignore
# Node
node_modules
npm-debug.log
# Environment
.env
.env.local
# Build output
dist
build
# IDE
.vscode
.idea
# Tests
coverage
*.test.js
# Documentation
README.md
docs/
🤔 Frequently Asked Questions
Q1. Docker vs Virtual Machine?
A:
Virtual Machine (VM)
┌─────────────────┐ ┌─────────────────┐
│      App A      │ │      App B      │
│      Libs       │ │      Libs       │
│    Guest OS     │ │    Guest OS     │
│     (Linux)     │ │    (Ubuntu)     │
└─────────────────┘ └─────────────────┘
┌─────────────────────────────────────┐
│      Hypervisor (VMware, etc.)      │
└─────────────────────────────────────┘
┌─────────────────────────────────────┐
│               Host OS               │
└─────────────────────────────────────┘
Docker Container
┌───────┐ ┌───────┐ ┌───────┐
│ App A │ │ App B │ │ App C │
│ Libs  │ │ Libs  │ │ Libs  │
└───────┘ └───────┘ └───────┘
┌─────────────────────────────┐
│        Docker Engine        │
└─────────────────────────────┘
┌─────────────────────────────┐
│   Host OS (Linux Kernel)    │
└─────────────────────────────┘
Comparison:
              VM              Docker
Size          GB              MB
Startup       Minutes         Seconds
Isolation     Complete        Process-level
Performance   High overhead   Near-native
Portability   Low             High
Q2. What are Docker Image Layers?
A:
FROM node:18           # Layer 1: Base image
WORKDIR /app           # Layer 2: Working directory
COPY package.json .    # Layer 3: package.json
RUN npm install        # Layer 4: Install dependencies
COPY . .               # Layer 5: Source code
CMD ["node", "app.js"] # Layer 6: Start command
# Layer caching
# - Unchanged layers are reused
# - Faster builds
# ✅ Good: Frequently changed items last
COPY package.json .
RUN npm install        # Cached (if package.json unchanged)
COPY . .               # Only code changes
# ❌ Bad: Frequently changed items first
COPY . .               # Code changes every time
RUN npm install        # npm install runs again!
Q3. What is Container Orchestration?
A:
Single Server
┌──────────────────────┐
│ ┌────┐ ┌────┐ ┌────┐ │
│ │ C1 │ │ C2 │ │ C3 │ │
│ └────┘ └────┘ └────┘ │
└──────────────────────┘
Orchestration (Kubernetes)
┌─────────┐ ┌─────────┐ ┌─────────┐
│ Server1 │ │ Server2 │ │ Server3 │
│ ┌────┐  │ │ ┌────┐  │ │ ┌────┐  │
│ │ C1 │  │ │ │ C2 │  │ │ │ C3 │  │
│ └────┘  │ │ └────┘  │ │ └────┘  │
└─────────┘ └─────────┘ └─────────┘
Features:
- Automated deployment
- Scaling (auto scale up/down)
- Load balancing
- Self-healing (restart dead containers)
- Rolling updates
Tools:
- Kubernetes (most popular)
- Docker Swarm
- Amazon ECS
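To make the features above concrete, here is a minimal sketch of a Kubernetes Deployment manifest. It assumes a running Kubernetes cluster; the names (my-app, my-node-app:1.0) are illustrative and reuse the image built earlier in this article:

```yaml
# deployment.yaml -- ask Kubernetes to keep 3 replicas of the
# same container image running; if one dies, it is restarted
# (self-healing), and replicas are spread across nodes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-node-app:1.0
          ports:
            - containerPort: 3000
```

Applied with kubectl apply -f deployment.yaml, this replaces hand-running docker run on each server with a declarative desired state that the cluster maintains for you.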
Q4. Using Docker in Development?
A:
# Scenario: Different environments per team member
# Team Member A: macOS, Node 16
# Team Member B: Windows, Node 18
# Team Member C: Linux, Node 14
# Problem: "Works on my machine..."
# Solution: Unify environment with Docker
# docker-compose.yml
version: '3.8'
services:
  app:
    image: node:18
    volumes:
      - .:/app
    working_dir: /app
    command: npm run dev
    ports:
      - "3000:3000"
# All team members use same environment
docker-compose up
# Benefits:
# 1. No Node.js installation needed
# 2. Version unified
# 3. Dependencies isolated
# 4. Environment configuration shared
Q5. Docker Security?
A:
# ✅ Security Best Practices
# 1. Minimal privilege base image
FROM node:18-alpine    # ✅ Small and secure
FROM node:18           # ❌ Includes unnecessary tools
# 2. Run as non-root user
RUN addgroup -g 1001 -S nodejs
RUN adduser -S nodejs -u 1001
USER nodejs            # ✅ Minimize privileges
# 3. Exclude sensitive information
# Add to .dockerignore
.env
*.key
secrets/
# 4. Multi-stage build (remove unnecessary tools)
FROM node:18 AS builder
WORKDIR /app
COPY . .
RUN npm run build
FROM node:18-alpine # Smaller image
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/main.js"]
# 5. Scan images for vulnerabilities
docker scout cves my-image   # (the older `docker scan` command is deprecated)
# 6. Read-only root filesystem
docker run --read-only my-app
# 7. Resource limits
docker run --memory="512m" --cpus="1" my-app
📚 Next Steps
After understanding Docker, explore:
- What is Git? (Coming soon) - Version control and deployment
- What is Node.js? (Coming soon) - Deploy Node apps with Docker
- What is TDD? - Run tests in Docker
Try It Yourself
# 1. Check Docker installation
docker --version
# 2. Hello World
docker run hello-world
# 3. Run Nginx
docker run -d -p 8080:80 nginx
# Visit http://localhost:8080
# 4. Check containers
docker ps
# 5. Stop and remove
docker stop <container-id>
docker rm <container-id>
# 6. Containerize your app
# - Write Dockerfile
# - docker build
# - docker run
Useful Commands
# Image management
docker pull nginx # Download image
docker images # List images
docker rmi nginx # Remove image
# Container management
docker run # Run container
docker ps # Running containers
docker ps -a # All containers
docker stop <id> # Stop container
docker start <id> # Start container
docker restart <id> # Restart container
docker rm <id> # Remove container
# Logs and debugging
docker logs <id> # View logs
docker logs -f <id> # Follow logs
docker exec -it <id> sh # Access container shell
docker inspect <id> # Detailed info
# Cleanup
docker system prune # Remove unused resources
docker system prune -a # Remove all unused resources
docker volume prune # Remove unused volumes
💬 Conclusion
Docker is an essential tool for modern development:
- Containers: Isolated execution environments
- Images: Application packages
- Portability: Runs identically anywhere
- Efficiency: Lightweight and fast
Solve "works on my machine" problems with Docker! 🐳✨