🧩 Microservices Architecture

📖 Definition

Microservices Architecture (MSA) is an architectural pattern that splits a large application into multiple small, independent services. Each service is responsible for a specific business capability and can be developed, deployed, and scaled independently. Unlike a monolithic architecture, it gains flexibility and scalability through loose coupling between services.

🎯 Understanding Through Analogies

Large Corporation vs Startup Alliance

Monolithic = Large Corporation
├─ All departments in one building
├─ Centralized management
├─ One department's problem → affects entire company
├─ Difficult to change
└─ Slow decision-making

Microservices = Startup Alliance
├─ Each team has independent office
├─ Autonomous decision-making
├─ One team's problem → other teams work normally
├─ Quick changes
└─ Flexible scaling

LEGO vs Clay

Monolithic = Clay Lump
┌──────────────────────────────┐
│  User   │ Product │  Order   │
│  Mgmt   │  Mgmt   │  Mgmt    │
│      Everything as One       │
└──────────────────────────────┘
- Must remake entire thing
- Modifying one part → affects whole
- Difficult to scale

Microservices = LEGO Blocks
┌───────┐ ┌─────────┐ ┌───────┐
│ User  │ │ Product │ │ Order │
│Service│ │ Service │ │Service│
└───────┘ └─────────┘ └───────┘
- Easy to replace blocks
- Modify independently
- Scale only needed parts

⚙️ How It Works

1. Monolithic vs Microservices

========== Monolithic ==========
┌──────────────────────────────────────┐
│          Single Application          │
│                                      │
│  ┌────────────────────────────────┐  │
│  │   User Management Module       │  │
│  └────────────────────────────────┘  │
│  ┌────────────────────────────────┐  │
│  │   Product Management Module    │  │
│  └────────────────────────────────┘  │
│  ┌────────────────────────────────┐  │
│  │   Order Management Module      │  │
│  └────────────────────────────────┘  │
│  ┌────────────────────────────────┐  │
│  │   Payment Module               │  │
│  └────────────────────────────────┘  │
│                                      │
│  One Database                        │
│  One Codebase                        │
│  One Deployment Unit                 │
└──────────────────────────────────────┘

Pros:
✅ Fast initial development
✅ Simple testing
✅ Simple deployment (single unit)
✅ Easy debugging

Cons:
❌ Complex when scaled
❌ Entire service down during deployment
❌ Cannot scale partially
❌ Difficult to change tech stack

========== Microservices ==========
┌──────────┐ ┌────────────┐ ┌──────────┐
│User      │ │Product     │ │Order     │
│Service   │ │Service     │ │Service   │
│          │ │            │ │          │
│Node.js   │ │Java        │ │Go        │
│MongoDB   │ │MySQL       │ │PostgreSQL│
└──────────┘ └────────────┘ └──────────┘
     ↓             ↓             ↓
┌──────────┐ ┌────────────┐ ┌──────────┐
│Payment   │ │Notification│ │Review    │
│Service   │ │Service     │ │Service   │
│          │ │            │ │          │
│Python    │ │Node.js     │ │Ruby      │
│Redis     │ │Kafka       │ │Cassandra │
└──────────┘ └────────────┘ └──────────┘

Pros:
✅ Independent deployment
✅ Technology stack freedom
✅ Partial scaling possible
✅ Team independence
✅ Failure isolation

Cons:
❌ High initial complexity
❌ Network communication overhead
❌ Distributed transactions difficult
❌ Complex testing
❌ Increased operational costs

2. Inter-Service Communication

========== Synchronous Communication (HTTP/REST) ==========
Order Service → Product Service
"Is product 123 in stock?"
↓
"Yes, 5 available"
↓
Order Service → Payment Service
"Please process $10,000 payment"
↓
"Payment completed"
↓
Order Complete

Pros: Simple, intuitive
Cons: Entire process fails if one service fails
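
A condensed sketch of this synchronous chain (service URLs and request bodies are simplified; a full version appears in the Microservices Example below):

// Synchronous flow: each step blocks until the called service replies
const axios = require('axios');

async function placeOrderSynchronously(userId, productId, quantity, amount) {
  // 1. Ask the product service about stock and wait for the answer
  const { data } = await axios.get(`http://product-service/products/${productId}/stock`);
  if (data.stock < quantity) throw new Error('Insufficient inventory');

  // 2. Ask the payment service to charge and wait again
  await axios.post('http://payment-service/payments', { userId, amount });

  // 3. Only now is the order complete; if either call fails, the whole chain fails
  return { userId, productId, quantity, status: 'completed' };
}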

========== Asynchronous Communication (Message Queue) ==========
Order Service → Message Queue
  Publish "Order Created" message
           ↓
 ┌──────────────────┐
 │   Message Queue  │
 │   (RabbitMQ,     │
 │    Kafka, etc)   │
 └──────────────────┘
    ┌──────┴──────┬──────────┐
    ↓             ↓          ↓
 Payment      Notification  Inventory
 Service      Service       Service
 Each processes independently

Pros: Loose coupling, failure isolation
Cons: Increased complexity, difficult debugging

========== API Gateway ==========
Client (Mobile/Web)
          ↓
 ┌──────────────────┐
 │   API Gateway    │
 │  - Routing       │
 │  - Authentication│
 │  - Load Balancing│
 │  - Logging       │
 └──────────────────┘
    ┌──────┴──────┬──────────┐
    ↓             ↓          ↓
 ServiceA      ServiceB   ServiceC

Role:
- Single entry point
- Simplify client
- Handle common functionality

3. Data Management

========== Monolithic: Shared Database ==========
┌───────────────────────────────────┐
│           Application             │
│  ┌───────┐  ┌───────┐  ┌───────┐  │
│  │ModuleA│  │ModuleB│  │ModuleC│  │
│  └───┬───┘  └───┬───┘  └───┬───┘  │
└──────┼──────────┼──────────┼──────┘
       └──────────┼──────────┘
                  ↓
        ┌──────────────────┐
        │ Single Database  │
        │ ┌────┬────┬────┐ │
        │ │ TA │ TB │ TC │ │
        │ └────┴────┴────┘ │
        └──────────────────┘

Pros: Easy JOIN, consistency guaranteed
Cons: High coupling, difficult to scale

========== Microservices: DB per Service ==========
┌──────────┐  ┌──────────┐  ┌────────────┐
│Service A │  │Service B │  │Service C   │
└────┬─────┘  └────┬─────┘  └─────┬──────┘
     ↓             ↓              ↓
┌──────────┐  ┌──────────┐  ┌────────────┐
│ DB A     │  │ DB B     │  │ DB C       │
│ (MySQL)  │  │(MongoDB) │  │(PostgreSQL)│
└──────────┘  └──────────┘  └────────────┘

Pros: Independence, technology choice freedom
Cons: No JOIN, difficult consistency
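
Because cross-database JOINs are gone, a service usually stores only the other service's ID and fetches the rest over its API when needed. A minimal sketch (URL and field names are illustrative):

// The order row stores only userId; user details live in the user service
const axios = require('axios');

async function getOrderWithUser(order) {
  const { data: user } = await axios.get(`http://user-service/users/${order.userId}`);
  // The "join" happens in application code, not in the database
  return { ...order, user: { name: user.username, email: user.email } };
}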

========== Data Consistency ==========
// Saga Pattern
1. Order Service: Create order
2. Payment Service: Process payment
3. Inventory Service: Decrease inventory
4. Shipping Service: Start shipping

If step 3 fails?
→ Compensating Transactions (undo the completed steps in reverse order)
   3. Inventory decrease failed
   2. Cancel payment ← Compensation
   1. Cancel order   ← Compensation
(see the createOrderSaga example in Q2 below)

💡 Real Examples

Monolithic Example (Express.js)

// ========== Monolithic Application ==========
// server.js - All features in one file

const express = require('express');
const app = express();

app.use(express.json());

// Single database
const db = require('./database');

// ========== User Management ==========
app.post('/api/users', async (req, res) => {
const { username, email, password } = req.body;
const user = await db.users.create({ username, email, password });
res.json(user);
});

app.get('/api/users/:id', async (req, res) => {
const user = await db.users.findById(req.params.id);
res.json(user);
});

// ========== Product Management ==========
app.post('/api/products', async (req, res) => {
const { name, price, stock } = req.body;
const product = await db.products.create({ name, price, stock });
res.json(product);
});

app.get('/api/products', async (req, res) => {
const products = await db.products.findAll();
res.json(products);
});

// ========== Order Management ==========
app.post('/api/orders', async (req, res) => {
const { userId, productId, quantity } = req.body;

// Transaction ensures consistency
const transaction = await db.sequelize.transaction();

try {
// 1. Check inventory
const product = await db.products.findById(productId, { transaction });
if (product.stock < quantity) {
throw new Error('Insufficient inventory');
}

// 2. Decrease inventory
await product.update(
{ stock: product.stock - quantity },
{ transaction }
);

// 3. Create order
const order = await db.orders.create(
{ userId, productId, quantity, total: product.price * quantity },
{ transaction }
);

// 4. Process payment (processPayment stands in for a real payment integration)
await processPayment(order.total);

await transaction.commit();
res.json(order);
} catch (error) {
await transaction.rollback();
res.status(400).json({ error: error.message });
}
});

// ========== Payment Processing ==========
app.post('/api/payments', async (req, res) => {
const { orderId, amount } = req.body;
const payment = await db.payments.create({ orderId, amount });
res.json(payment);
});

// Run as single server
app.listen(3000, () => {
console.log('Monolithic server running: http://localhost:3000');
});

/*
Pros:
- Code in one place
- Fast development
- Easy debugging
- Simple transactions

Cons:
- Complex when scaled
- Entire restart during deployment
- Cannot scale partially
- One feature failure → affects entire system
*/

Microservices Example

// ========== 1. User Service (user-service.js) ==========
// Port: 3001
const express = require('express');
const app = express();
const mongoose = require('mongoose');
const { publishEvent } = require('./event-bus'); // shared event bus helpers (see section 6 below)

app.use(express.json());

// Independent database
mongoose.connect('mongodb://localhost/users-db');

const User = mongoose.model('User', {
username: String,
email: String,
password: String
});

// Create user
app.post('/users', async (req, res) => {
const { username, email, password } = req.body;

try {
const user = new User({ username, email, password });
await user.save();

// Publish event (notify other services)
await publishEvent('user.created', { userId: user._id, email });

res.json(user);
} catch (error) {
res.status(400).json({ error: error.message });
}
});

// Get user
app.get('/users/:id', async (req, res) => {
const user = await User.findById(req.params.id);
res.json(user);
});

app.listen(3001, () => {
console.log('User service: http://localhost:3001');
});

// ========== 2. Product Service (product-service.js) ==========
// Port: 3002
const express = require('express');
const app = express();
const { Pool } = require('pg');

app.use(express.json());

// Using PostgreSQL (different DB!)
const pool = new Pool({
host: 'localhost',
database: 'products-db',
port: 5432
});

// Product list
app.get('/products', async (req, res) => {
const result = await pool.query('SELECT * FROM products');
res.json(result.rows);
});

// Product detail
app.get('/products/:id', async (req, res) => {
const result = await pool.query(
'SELECT * FROM products WHERE id = $1',
[req.params.id]
);
res.json(result.rows[0]);
});

// Check stock
app.get('/products/:id/stock', async (req, res) => {
const result = await pool.query(
'SELECT stock FROM products WHERE id = $1',
[req.params.id]
);
res.json({ stock: result.rows[0].stock });
});

// Decrease stock
app.post('/products/:id/decrease-stock', async (req, res) => {
const { quantity } = req.body;

const client = await pool.connect();
try {
await client.query('BEGIN');

// Check current stock
const result = await client.query(
'SELECT stock FROM products WHERE id = $1 FOR UPDATE',
[req.params.id]
);

const currentStock = result.rows[0].stock;
if (currentStock < quantity) {
throw new Error('Insufficient inventory');
}

// Decrease stock
await client.query(
'UPDATE products SET stock = stock - $1 WHERE id = $2',
[quantity, req.params.id]
);

await client.query('COMMIT');
res.json({ success: true });
} catch (error) {
await client.query('ROLLBACK');
res.status(400).json({ error: error.message });
} finally {
client.release();
}
});

app.listen(3002, () => {
console.log('Product service: http://localhost:3002');
});

// ========== 3. Order Service (order-service.js) ==========
// Port: 3003
const express = require('express');
const axios = require('axios');
const { publishEvent } = require('./event-bus'); // shared event bus helpers (see section 6 below)
const app = express();

app.use(express.json());

const orders = []; // In practice, use a database

// Create order
app.post('/orders', async (req, res) => {
const { userId, productId, quantity } = req.body;

try {
// 1. Check user (call user service)
const userResponse = await axios.get(
`http://localhost:3001/users/${userId}`
);
const user = userResponse.data;

if (!user) {
return res.status(404).json({ error: 'User not found' });
}

// 2. Get product info (call product service)
const productResponse = await axios.get(
`http://localhost:3002/products/${productId}`
);
const product = productResponse.data;

// 3. Check stock
const stockResponse = await axios.get(
`http://localhost:3002/products/${productId}/stock`
);
const { stock } = stockResponse.data;

if (stock < quantity) {
return res.status(400).json({ error: 'Insufficient inventory' });
}

// 4. Decrease stock
await axios.post(
`http://localhost:3002/products/${productId}/decrease-stock`,
{ quantity }
);

// 5. Process payment (call payment service)
const total = product.price * quantity;
const paymentResponse = await axios.post(
'http://localhost:3004/payments',
{ userId, amount: total }
);

// 6. Create order
const order = {
id: orders.length + 1,
userId,
productId,
quantity,
total,
status: 'completed',
createdAt: new Date()
};
orders.push(order);

// 7. Publish event
await publishEvent('order.created', order);

res.json(order);
} catch (error) {
// Saga pattern: compensating transaction
// (simplified: a real saga would only undo the steps that actually succeeded)
console.error('Order failed:', error.message);

// Restore inventory (assumes product-service also exposes an increase-stock endpoint)
try {
await axios.post(
`http://localhost:3002/products/${productId}/increase-stock`,
{ quantity }
);
} catch (rollbackError) {
console.error('Failed to restore inventory:', rollbackError.message);
}

res.status(500).json({ error: 'Order processing failed' });
}
});

// Get order
app.get('/orders/:id', (req, res) => {
const order = orders.find(o => o.id === parseInt(req.params.id));
res.json(order);
});

app.listen(3003, () => {
console.log('Order service: http://localhost:3003');
});

// ========== 4. Payment Service (payment-service.js) ==========
// Port: 3004
const express = require('express');
const { publishEvent } = require('./event-bus'); // shared event bus helpers (see section 6 below)
const app = express();

app.use(express.json());

const payments = [];

app.post('/payments', async (req, res) => {
const { userId, amount } = req.body;

// Call external payment API (e.g., Stripe, Toss Payments)
try {
// Process actual payment
const payment = {
id: payments.length + 1,
userId,
amount,
status: 'success',
createdAt: new Date()
};
payments.push(payment);

// Publish event
await publishEvent('payment.completed', payment);

res.json(payment);
} catch (error) {
res.status(400).json({ error: 'Payment failed' });
}
});

app.listen(3004, () => {
console.log('Payment service: http://localhost:3004');
});

// ========== 5. API Gateway (gateway.js) ==========
// Port: 3000
const express = require('express');
const { createProxyMiddleware } = require('http-proxy-middleware');
const app = express();

// Authentication middleware
function authenticate(req, res, next) {
const token = req.headers.authorization;
if (!token) {
return res.status(401).json({ error: 'Authentication required' });
}
// JWT verification, etc.
next();
}

// Logging middleware
app.use((req, res, next) => {
console.log(`${req.method} ${req.path}`);
next();
});

// User service proxy
app.use('/api/users', authenticate, createProxyMiddleware({
target: 'http://localhost:3001',
pathRewrite: { '^/api/users': '/users' },
changeOrigin: true
}));

// Product service proxy
app.use('/api/products', createProxyMiddleware({
target: 'http://localhost:3002',
pathRewrite: { '^/api/products': '/products' },
changeOrigin: true
}));

// Order service proxy
app.use('/api/orders', authenticate, createProxyMiddleware({
target: 'http://localhost:3003',
pathRewrite: { '^/api/orders': '/orders' },
changeOrigin: true
}));

// Payment service proxy
app.use('/api/payments', authenticate, createProxyMiddleware({
target: 'http://localhost:3004',
pathRewrite: { '^/api/payments': '/payments' },
changeOrigin: true
}));

app.listen(3000, () => {
console.log('API Gateway: http://localhost:3000');
});

// ========== 6. Event Bus (event-bus.js) ==========
const amqp = require('amqplib');

let connection, channel;

// Connect to RabbitMQ
async function connect() {
connection = await amqp.connect('amqp://localhost');
channel = await connection.createChannel();
}

// Publish event
async function publishEvent(eventType, data) {
await channel.assertQueue(eventType);
channel.sendToQueue(
eventType,
Buffer.from(JSON.stringify(data))
);
console.log(`Event published: ${eventType}`, data);
}

// Subscribe to event
async function subscribeEvent(eventType, callback) {
await channel.assertQueue(eventType);
channel.consume(eventType, (msg) => {
const data = JSON.parse(msg.content.toString());
console.log(`Event received: ${eventType}`, data);
callback(data);
channel.ack(msg);
});
}

connect(); // NOTE: in production, wait for this connection before publishing events

module.exports = { publishEvent, subscribeEvent };
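
The subscriber side is not used anywhere above, so here is a sketch of how a notification service might consume these events with the subscribeEvent helper (sendEmail is a hypothetical placeholder):

// notification-service.js (sketch)
const { subscribeEvent } = require('./event-bus');

// React to events without the publishing services knowing this service exists
subscribeEvent('order.created', async (order) => {
  // sendEmail stands in for a real mail/SMS integration
  await sendEmail(order.userId, `Order ${order.id} has been created`);
});

subscribeEvent('payment.completed', (payment) => {
  console.log(`Payment ${payment.id} completed for user ${payment.userId}`);
});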

Running Microservices with Docker Compose

# docker-compose.yml
version: '3.8'

services:
# API Gateway
gateway:
build: ./gateway
ports:
- "3000:3000"
depends_on:
- user-service
- product-service
- order-service
- payment-service

# User service
user-service:
build: ./user-service
ports:
- "3001:3001"
environment:
- MONGO_URL=mongodb://mongo:27017/users
depends_on:
- mongo
- rabbitmq

# Product service
product-service:
build: ./product-service
ports:
- "3002:3002"
environment:
- POSTGRES_URL=postgres://postgres:password@postgres:5432/products
depends_on:
- postgres
- rabbitmq

# Order service
order-service:
build: ./order-service
ports:
- "3003:3003"
environment:
- MYSQL_URL=mysql://root:password@mysql:3306/orders
depends_on:
- mysql
- rabbitmq

# Payment service
payment-service:
build: ./payment-service
ports:
- "3004:3004"
depends_on:
- rabbitmq

# Databases
mongo:
image: mongo:6
volumes:
- mongo-data:/data/db

postgres:
image: postgres:15
environment:
- POSTGRES_PASSWORD=password
- POSTGRES_DB=products
volumes:
- postgres-data:/var/lib/postgresql/data

mysql:
image: mysql:8
environment:
- MYSQL_ROOT_PASSWORD=password
- MYSQL_DATABASE=orders
volumes:
- mysql-data:/var/lib/mysql

# Message Queue
rabbitmq:
image: rabbitmq:3-management
ports:
- "5672:5672"
- "15672:15672" # Management UI

volumes:
mongo-data:
postgres-data:
mysql-data:

Service Mesh (Istio Example)

# istio-config.yaml
# Service mesh - managing inter-service communication

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: order-service
spec:
hosts:
- order-service
http:
# Traffic distribution (canary deployment)
- match:
- headers:
user-type:
exact: beta
route:
- destination:
host: order-service
subset: v2 # New version
weight: 20
- destination:
host: order-service
subset: v1 # Existing version
weight: 80

# Retry policy
- route:
- destination:
host: order-service
retries:
attempts: 3
perTryTimeout: 2s

# Timeout
timeout: 10s

---
# Circuit Breaker
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
name: payment-service
spec:
host: payment-service
trafficPolicy:
connectionPool:
tcp:
maxConnections: 100
http:
http1MaxPendingRequests: 50
maxRequestsPerConnection: 2
outlierDetection:
consecutiveErrors: 5
interval: 30s
baseEjectionTime: 30s
maxEjectionPercent: 50

🤔 Frequently Asked Questions

Q1. When should I use microservices?

A:

✅ When microservices are suitable:

1. Large-scale applications
- Team: 10+ members
- Code: 100k+ lines
- Users: Hundreds of thousands+

2. Need for rapid deployment
- Deploy multiple times a day
- Independent feature releases
- Frequent A/B testing

3. Need for diverse tech stack
- Choose optimal tech per service
- Integrate with legacy systems

4. Need for independent scaling
- Specific features have high traffic
- Different resource requirements per service

5. Team independence is important
- Multiple teams developing simultaneously
- Minimize inter-team dependencies

Examples:
- Netflix: Hundreds of microservices
- Amazon: 2-pizza teams (team per service)
- Uber: Services separated by region and feature

❌ When monolithic is suitable:

1. Small applications
- Team: 5 or fewer members
- Features: Clear and simple
- Traffic: Low

2. Early-stage startup
- Need fast MVP development
- Requirements change frequently
- Limited resources

3. Simple CRUD
- No complex business logic
- Unclear service boundaries

4. Lack of operational experience
- No DevOps team
- No distributed systems experience

Examples:
- Blog, portfolio
- Small e-commerce
- Internal tools

📊 Decision Checklist:

□ Team size 10+ members?
□ Codebase 100k+ lines?
□ Need frequent independent deployment?
□ Need partial scaling?
□ Have DevOps team?
□ Have distributed systems experience?

3+ checks → Consider microservices
2 or fewer → Keep monolithic

Q2. What is the biggest challenge of microservices?

A:

// ========== 1. Distributed Transactions ==========

// Monolithic: Simple transaction
await db.transaction(async (t) => {
await createOrder(data, t);
await decreaseStock(productId, t);
await processPayment(amount, t);
// Rollback entire transaction if any fails
});

// Microservices: Complex Saga pattern
async function createOrderSaga(data) {
  let order; // declared outside try so the catch block can still reference it
  try {
    // Step 1
    order = await orderService.create(data);

    // Step 2
    await productService.decreaseStock(data.productId);

    // Step 3
    await paymentService.process(order.total);

    return order;
  } catch (error) {
    // Compensating transactions (in reverse order; a real saga only undoes steps that succeeded)
    await paymentService.refund(order.total);
    await productService.increaseStock(data.productId);
    await orderService.cancel(order.id);

    throw error;
  }
}

// ========== 2. Data Consistency ==========

// Problem: Data scattered across services
// User service: userId, name
// Order service: userId, orders
// Payment service: userId, payments

// Solution 1: Event Sourcing
eventBus.on('user.updated', async (event) => {
// When user info changes, update other services too
await orderService.updateUserInfo(event.userId, event.name);
await paymentService.updateUserInfo(event.userId, event.name);
});

// Solution 2: CQRS (Command Query Responsibility Segregation)
// Separate write and read
// Write: Each service independent
// Read: Unified view (Read Model)
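
// A minimal sketch of the read side (event names and fields are illustrative):
// each service keeps writing to its own DB, while a denormalized read model
// is updated from events and answers the combined queries.
const orderSummaries = new Map(); // "order + payment" view

eventBus.on('order.created', (order) => {
  orderSummaries.set(order.id, { ...order, paymentStatus: 'pending' });
});

eventBus.on('payment.completed', (payment) => {
  const summary = orderSummaries.get(payment.orderId);
  if (summary) summary.paymentStatus = 'paid';
});

// Read side: return the pre-joined view instead of calling several services
function getOrderSummary(orderId) {
  return orderSummaries.get(orderId);
}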

// ========== 3. Network Latency ==========

// Monolithic: Function call (fast)
const user = getUser(userId); // 1ms

// Microservices: HTTP request (slow)
const user = await axios.get(`http://user-service/users/${userId}`); // 50ms

// Solution: Caching
const redis = require('redis');
const cache = redis.createClient();

async function getUser(userId) {
// 1. Check cache
const cached = await cache.get(`user:${userId}`);
if (cached) return JSON.parse(cached);

// 2. Call service
const response = await axios.get(`http://user-service/users/${userId}`);
const user = response.data;

// 3. Save to cache
await cache.setex(`user:${userId}`, 3600, JSON.stringify(user));

return user;
}

// ========== 4. Service Failure Handling ==========

// Circuit Breaker pattern
const CircuitBreaker = require('opossum');

const options = {
timeout: 3000, // 3 second timeout
errorThresholdPercentage: 50, // When 50% fail
resetTimeout: 30000 // Retry after 30 seconds
};

const breaker = new CircuitBreaker(async (userId) => {
return await axios.get(`http://user-service/users/${userId}`);
}, options);

breaker.fallback((userId) => ({
  id: userId, // the fallback receives the same arguments as the original call
  name: 'Unknown', // Fallback data
  cached: true
}));

// Usage
breaker.fire(userId)
.then(console.log)
.catch(console.error);

// ========== 5. Monitoring and Debugging ==========

// Distributed Tracing
// Using Jaeger, Zipkin

const tracer = require('jaeger-client').initTracer(config); // config: service name, sampler and reporter settings

app.use((req, res, next) => {
  const span = tracer.startSpan('http_request');
  span.setTag('http.method', req.method);
  span.setTag('http.url', req.url);

  req.span = span;
  res.on('finish', () => span.finish()); // close the span when the response is sent
  next();
});

// Pass trace ID when calling between services
await axios.get('http://order-service/orders', {
headers: {
'x-trace-id': req.span.context().toTraceId()
}
});

Q3. What is the role of an API Gateway?

A:

// ========== API Gateway Main Functions ==========

const express = require('express');
const rateLimit = require('express-rate-limit');
const jwt = require('jsonwebtoken');
const { createProxyMiddleware } = require('http-proxy-middleware');
const axios = require('axios'); // used in the caching example below

const app = express();

// ========== 1. Routing ==========
// Client only needs to know one endpoint
app.use('/api/users', createProxyMiddleware({
target: 'http://user-service:3001',
changeOrigin: true
}));

app.use('/api/products', createProxyMiddleware({
target: 'http://product-service:3002',
changeOrigin: true
}));

// ========== 2. Authentication and Authorization ==========
function authenticate(req, res, next) {
const token = req.headers.authorization?.split(' ')[1];

if (!token) {
return res.status(401).json({ error: 'Token required' });
}

try {
const decoded = jwt.verify(token, process.env.JWT_SECRET);
req.user = decoded;
next();
} catch (error) {
res.status(401).json({ error: 'Invalid token' });
}
}

app.use('/api/orders', authenticate, createProxyMiddleware({
target: 'http://order-service:3003'
}));

// ========== 3. Rate Limiting ==========
const limiter = rateLimit({
windowMs: 15 * 60 * 1000, // 15 minutes
max: 100 // Max 100 requests
});

app.use('/api/', limiter);

// ========== 4. Load Balancing ==========
const productServiceInstances = [
'http://product-service-1:3002',
'http://product-service-2:3002',
'http://product-service-3:3002'
];

let currentIndex = 0;

app.use('/api/products', createProxyMiddleware({
target: productServiceInstances[currentIndex],
router: () => {
// Round robin
const target = productServiceInstances[currentIndex];
currentIndex = (currentIndex + 1) % productServiceInstances.length;
return target;
}
}));

// ========== 5. Request/Response Transformation ==========
app.use('/api/legacy', createProxyMiddleware({
target: 'http://legacy-service:8080',
onProxyReq: (proxyReq, req) => {
// Transform request
proxyReq.setHeader('X-API-Version', '2.0');
},
onProxyRes: (proxyRes, req, res) => {
// Transform response
proxyRes.headers['X-Custom-Header'] = 'Gateway';
}
}));

// ========== 6. Caching ==========
const redis = require('redis');
const cache = redis.createClient();

app.get('/api/products/:id', async (req, res) => {
const cacheKey = `product:${req.params.id}`;

// Check cache
const cached = await cache.get(cacheKey);
if (cached) {
return res.json(JSON.parse(cached));
}

// Call service
const response = await axios.get(
`http://product-service:3002/products/${req.params.id}`
);

// Save to cache
await cache.setex(cacheKey, 3600, JSON.stringify(response.data));

res.json(response.data);
});

// ========== 7. Logging and Monitoring ==========
app.use((req, res, next) => {
const start = Date.now();

res.on('finish', () => {
const duration = Date.now() - start;
console.log({
method: req.method,
path: req.path,
status: res.statusCode,
duration: `${duration}ms`,
user: req.user?.id
});
});

next();
});

// ========== 8. Error Handling ==========
app.use((err, req, res, next) => {
console.error('Gateway Error:', err);

if (err.code === 'ECONNREFUSED') {
return res.status(503).json({
error: 'Service unavailable'
});
}

res.status(500).json({
error: 'Server error occurred'
});
});

// ========== 9. Service Discovery ==========
const consul = require('consul')();

async function getServiceUrl(serviceName) {
const result = await consul.health.service({
service: serviceName,
passing: true // Only instances that passed health check
});

if (result.length === 0) {
throw new Error(`${serviceName} service not found`);
}

// Select randomly
const instance = result[Math.floor(Math.random() * result.length)];
return `http://${instance.Service.Address}:${instance.Service.Port}`;
}

app.listen(3000);

Q4. What are microservices deployment strategies?

A:

# ========== 1. Blue-Green Deployment ==========
# Deploy new version (Green) and switch traffic at once

# Blue (current version)
apiVersion: v1
kind: Service
metadata:
name: order-service
spec:
selector:
app: order-service
version: blue # Current traffic
ports:
- port: 80

---
# Deploy Green (new version)
kubectl apply -f order-service-green.yaml

# Switch traffic after testing
kubectl patch service order-service -p '{"spec":{"selector":{"version":"green"}}}'

# Rollback immediately if problems
kubectl patch service order-service -p '{"spec":{"selector":{"version":"blue"}}}'

# ========== 2. Canary Deployment ==========
# Send only some traffic to new version and gradually increase

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: order-service
spec:
hosts:
- order-service
http:
- route:
- destination:
host: order-service
subset: v1 # Existing version
weight: 90 # 90% traffic
- destination:
host: order-service
subset: v2 # New version
weight: 10 # 10% traffic

# Gradually increase
# 10% → 25% → 50% → 75% → 100%

# ========== 3. Rolling Update ==========
# Kubernetes default strategy

apiVersion: apps/v1
kind: Deployment
metadata:
name: order-service
spec:
replicas: 5
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 1 # Max 1 additional creation
maxUnavailable: 1 # Max 1 unavailable allowed
template:
spec:
containers:
- name: order-service
image: order-service:v2

# Order:
# 1. Start 1 new Pod
# 2. When health check passes, terminate 1 old Pod
# 3. Repeat (until all 5 are replaced)

# ========== 4. Docker Compose Deployment ==========
# docker-compose.yml

version: '3.8'

services:
order-service:
image: order-service:latest
deploy:
replicas: 3
update_config:
parallelism: 1 # 1 at a time
delay: 10s # 10 second interval
failure_action: rollback # Rollback on failure
restart_policy:
condition: on-failure

# Deploy
docker stack deploy -c docker-compose.yml myapp

# ========== 5. CI/CD Pipeline ==========
# .github/workflows/deploy.yml

name: Deploy Microservices

on:
push:
branches: [main]

jobs:
deploy:
runs-on: ubuntu-latest
steps:
# Detect changed services
- name: Detect changed services
uses: dorny/paths-filter@v2
id: changes
with:
filters: |
user-service:
- 'services/user/**'
product-service:
- 'services/product/**'

- name: Deploy user service
if: steps.changes.outputs.user-service == 'true'
run: |
docker build -t user-service:${{ github.sha }} services/user
docker push user-service:${{ github.sha }}
kubectl set image deployment/user-service user-service=user-service:${{ github.sha }}

- name: Deploy product service
if: steps.changes.outputs.product-service == 'true'
run: |
docker build -t product-service:${{ github.sha }} services/product
docker push product-service:${{ github.sha }}
kubectl set image deployment/product-service product-service=product-service:${{ github.sha }}

Q5. How to migrate from monolithic to microservices?

A:

// ========== Step-by-Step Migration Strategy ==========

// ========== Step 1: Strangler Fig Pattern ==========
// New features as microservices, maintain existing features

┌───────────────────────────────────┐
│      Monolithic Application       │
│ ┌──────────┬──────────┬────────┐  │
│ │User Mgmt │Product   │Order   │  │
│ │          │Mgmt      │Mgmt    │  │
│ └──────────┴──────────┴────────┘  │
└───────────────────────────────────┘

// Step 1: New feature (notification) as microservice
┌─────────────────────┐   ┌──────────────┐
│ Monolithic          │   │Notification  │
│ User│Product│Order  │   │Service (new) │
└─────────────────────┘   └──────────────┘

// Step 2: Extract order feature
┌─────────────────────┐   ┌──────────────┐
│ Monolithic          │   │Order Service │
│ User│Product        │   │(extracted)   │
└─────────────────────┘   └──────────────┘
                          ┌──────────────┐
                          │Notification  │
                          │Service       │
                          └──────────────┘

// Step 3: Extract all features
┌──────────┐ ┌──────────┐ ┌──────────┐
│User      │ │Product   │ │Order     │
│Service   │ │Service   │ │Service   │
└──────────┘ └──────────┘ └──────────┘
┌────────────┐
│Notification│
│Service     │
└────────────┘

// ========== Step 2: Introduce API Gateway ==========

// Before: Client calls monolith directly
const response = await fetch('http://monolith/api/orders');

// After: Call through API Gateway
const response = await fetch('http://api-gateway/api/orders');

// API Gateway routes
if (route === '/api/orders') {
// Route to new service
proxy('http://order-service/orders');
} else {
// Still to monolith
proxy('http://monolith/api');
}

// ========== Step 3: Database Separation ==========

// Problem: Shared database
┌──────────────────┐
│  Monolithic DB   │
│  ┌────────────┐  │
│  │ Users      │  │
│  │ Products   │  │
│  │ Orders     │  │
│  └────────────┘  │
└──────────────────┘

// Solution: Database per Service

// 1) Dual Write
async function createOrder(data) {
// Write to monolithic DB
await monolithDB.orders.create(data);

// Write to new service DB
await orderServiceDB.orders.create(data);
}

// 2) Change Data Capture (CDC)
// Automatically sync changes from monolithic DB
// (pseudo-code: Debezium actually runs as a Kafka Connect service;
//  it is sketched here as if it were a Node.js library)
const debezium = require('debezium');

debezium.on('orders.insert', async (change) => {
// Reflect in new service DB
await orderServiceDB.orders.create(change.data);
});

// 3) Complete separation
┌──────────┐ ┌──────────┐ ┌──────────┐
│Users DB  │ │Products  │ │Orders DB │
│(MongoDB) │ │DB (MySQL)│ │(Postgres)│
└──────────┘ └──────────┘ └──────────┘

// ========== Step 4: Gradual Traffic Switching ==========

// Adjust ratio in API Gateway
let MIGRATION_PERCENTAGE = 10; // Only 10% to new service ("let" so it can be adjusted at runtime below)

app.use('/api/orders', (req, res, next) => {
if (Math.random() * 100 < MIGRATION_PERCENTAGE) {
// To new service
proxy('http://order-service/orders')(req, res, next);
} else {
// To monolith
proxy('http://monolith/api/orders')(req, res, next);
}
});

// Gradually increase
// 10% → 25% → 50% → 75% → 100%

// ========== Step 5: Monitoring and Rollback Preparation ==========

const NEW_SERVICE_ERROR_THRESHOLD = 0.05; // 5% error rate

async function monitorNewService() {
const errorRate = await getErrorRate('order-service');

if (errorRate > NEW_SERVICE_ERROR_THRESHOLD) {
// Rollback if error rate high
console.error('High error rate! Rolling back to monolith');
MIGRATION_PERCENTAGE = 0;

// Alert
await sendAlert('Migration rollback occurred');
}
}

// ========== Practical Checklist ==========

Migration Checklist:

□ 1. Identify boundaries
- Domain-Driven Design (DDD)
- Divide by business functions

□ 2. Start with most independent features
- Low dependencies
- Low business impact
- Ex: notification, logging, search

□ 3. Introduce API Gateway
- Gradual traffic switching

□ 4. Database separation strategy
- Dual write → CDC → Complete separation

□ 5. Enhance monitoring
- Error rate, response time, traffic
- Prepare for rollback

□ 6. Team training
- Microservices architecture
- DevOps tools (Docker, Kubernetes)

□ 7. Documentation
- Service catalog
- API documentation
- Deployment guide

🎓 Next Steps

Once you understand microservices architecture, learn:

  1. What is Docker? (Document to be created) - Containerization
  2. What is CI/CD? - Automated deployment
  3. REST API vs GraphQL - API design

Practice

# ========== 1. Simple Microservices Practice ==========

# Project structure
mkdir microservices-demo
cd microservices-demo

# Create services
mkdir -p services/{user,product,order}
mkdir gateway

# Write docker-compose.yml (paste the example above, then press Ctrl-D)
cat > docker-compose.yml

# Run
docker-compose up -d

# Check logs
docker-compose logs -f

# ========== 2. Kubernetes Deployment ==========

# Install minikube (local K8s)
brew install minikube
minikube start

# Deploy
kubectl apply -f kubernetes/

# Check services
kubectl get pods
kubectl get services

# Check logs
kubectl logs <pod-name>

# ========== 3. Install Istio (Service Mesh) ==========

# Install Istio
istioctl install --set profile=demo -y

# Inject sidecar into services
kubectl label namespace default istio-injection=enabled

# Istio dashboard
istioctl dashboard kiali

🎬 Conclusion

Microservices architecture provides scalability and flexibility:

  • Independence: Independent development, deployment, scaling per service
  • Technology diversity: Choose optimal tech per service
  • Failure isolation: One service failure doesn't affect entire system
  • Team autonomy: Each team responsible for services

However, complexity increases, so choose based on project scale and team capability! 🧩