Docker Complete Guide for Developers: Modern Practices for 2025

⚠️ Note: This article is AI-generated, and the steps have not yet been verified in a real-world environment. Use at your own discretion and always test thoroughly in a non-production environment first.

Introduction

Docker has evolved significantly in 2024-2025. This guide focuses on modern Docker practices, avoiding deprecated features and showcasing the latest tools and techniques for containerizing applications in production.

What's New in Modern Docker:

  • BuildKit for faster, cached builds
  • Docker Init for instant project setup
  • Docker Scout for vulnerability scanning
  • Compose Watch for hot-reloading
  • Rootless mode for enhanced security
  • Multi-platform builds out of the box

Table of Contents

Part 1: Modern Docker Setup

  1. Installation & Modern Setup
  2. Docker Init - Quick Start
  3. BuildKit & Modern Builds

Part 2: Modern Dockerfile Practices

  1. Writing Modern Dockerfiles
  2. Multi-Platform Builds
  3. Security with Docker Scout

Part 3: Latest Data Services

  1. PostgreSQL 17
  2. Redis 7.4 with Valkey
  3. Apache Kafka 3.8
  4. ClickHouse Latest

Part 4: Modern Development Workflows

  1. Compose Watch for Hot Reload
  2. Modern Full-Stack Setup
  3. Real-Time Data Pipeline

Part 5: Production with Kubernetes

  1. Kubernetes on Docker Desktop
  2. Deploying to Kubernetes
  3. Modern CI/CD with GitHub Actions

Part 6: Advanced Modern Features

  1. Rootless Docker
  2. Advanced Networking
  3. Monitoring with OTLP
  4. Troubleshooting Modern Issues

Part 1: Modern Docker Setup

Topic: Installation & Modern Setup

Linux (Ubuntu/Debian) - Latest Method

# Install using Docker's official convenience script (2024 method)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

# Enable rootless mode (recommended for security)
dockerd-rootless-setuptool.sh install

# Add user to docker group (traditional method)
sudo usermod -aG docker $USER
newgrp docker

# Verify installation
docker version
docker compose version  # Note: No hyphen, built into docker CLI

# Enable BuildKit explicitly (only needed on Docker < 23.0; it is the default since 23.0)
# Warning: this overwrites any existing /etc/docker/daemon.json -- merge by hand if one exists
echo '{ "features": { "buildkit": true } }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker

macOS/Windows - Docker Desktop Latest

# macOS with Homebrew
brew install --cask docker

# Windows with winget
winget install Docker.DockerDesktop

# Enable Kubernetes in Docker Desktop
# Settings → Kubernetes → Enable Kubernetes

# Verify
docker version
docker compose version
kubectl version --client

Post-Installation Checks

# Verify BuildKit is enabled
docker buildx version
docker buildx ls

# Check Docker Scout
docker scout version

# Test multi-platform support
docker buildx create --name mybuilder --use
docker buildx inspect --bootstrap

# Check system info
docker info
docker system df
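While you are editing daemon.json, it is also worth configuring log rotation, since container logs grow unbounded with the default json-file driver; a commonly used setup (the size and file counts below are suggestions, not requirements):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

Restart the daemon (`sudo systemctl restart docker`) for the change to take effect; it applies only to containers created afterwards.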

Topic: Docker Init - Quick Start

Docker Init (introduced in 2023, generally available since early 2024) scaffolds Dockerfiles, compose files, and a .dockerignore automatically.

# Initialize Docker in your project
cd my-node-app
docker init

# Interactive prompts:
# ? What application platform does your project use? Node
# ? What version of Node do you want to use? 22
# ? Which package manager do you want to use? npm
# ? What command do you want to use to start the app? npm start
# ? What port does your server listen on? 3000

# Generated files:
# - Dockerfile (multi-stage, optimized)
# - compose.yaml (no version needed in modern compose)
# - .dockerignore
# - README.Docker.md

# Start immediately
docker compose up --build
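docker init also generates a .dockerignore. Whether or not you use the generated one, a sensible Node.js baseline looks something like this (entries are typical examples; adjust to your project):

```text
node_modules
dist
.git
.env
*.log
Dockerfile
compose.yaml
README.md
```

Keeping node_modules and .git out of the build context makes builds faster and keeps secrets in .env files out of image layers.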

Generated Modern Dockerfile (by docker init)

# syntax=docker/dockerfile:1

FROM node:22-alpine AS base
WORKDIR /app
EXPOSE 3000

FROM base AS deps
COPY package*.json ./
# --omit=dev replaces the deprecated --only=production flag
RUN npm ci --omit=dev

FROM base AS build
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM base AS final
RUN apk add --no-cache dumb-init
USER node
COPY --from=deps --chown=node:node /app/node_modules ./node_modules
COPY --from=build --chown=node:node /app/dist ./dist
ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/server.js"]

Topic: BuildKit & Modern Builds

BuildKit is the modern build engine (default in Docker 23.0+).

Enable BuildKit Features

# Set environment variable (for older Docker versions)
export DOCKER_BUILDKIT=1

# Or enable in daemon.json (already shown above)

Modern Build Commands

# Build with BuildKit cache
docker buildx build \
  --cache-from type=registry,ref=myapp:cache \
  --cache-to type=registry,ref=myapp:cache,mode=max \
  -t myapp:latest \
  --push \
  .

# Multi-platform build (ARM + x86)
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myuser/myapp:latest \
  --push \
  .

# Build with secrets (no secrets in layers!)
docker buildx build \
  --secret id=npmtoken,src=$HOME/.npmrc \
  -t myapp:latest \
  .

# Use in Dockerfile:
# RUN --mount=type=secret,id=npmtoken \
#     npm config set //registry.npmjs.org/:_authToken=$(cat /run/secrets/npmtoken)
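When build invocations grow this long, BuildKit's Bake feature lets you declare them in a file and run them with `docker buildx bake`; a minimal sketch (the target name, tags, and cache refs below are placeholders):

```hcl
# docker-bake.hcl
target "app" {
  context    = "."
  dockerfile = "Dockerfile"
  tags       = ["myuser/myapp:latest"]
  platforms  = ["linux/amd64", "linux/arm64"]
  cache-from = ["type=registry,ref=myuser/myapp:cache"]
  cache-to   = ["type=registry,ref=myuser/myapp:cache,mode=max"]
}
```

Then `docker buildx bake app --push` reproduces the long command above, and the file can be committed alongside the Dockerfile.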

BuildKit Cache Mounts (Faster Builds)

# syntax=docker/dockerfile:1

FROM node:22-alpine AS build

WORKDIR /app

# Cache npm packages between builds
RUN --mount=type=cache,target=/root/.npm \
    npm install -g pnpm

COPY package.json pnpm-lock.yaml ./

# Cache pnpm store
RUN --mount=type=cache,target=/root/.local/share/pnpm/store \
    pnpm install --frozen-lockfile

COPY . .
RUN pnpm build

FROM node:22-alpine
WORKDIR /app
COPY --from=build /app/dist ./dist
COPY --from=build /app/node_modules ./node_modules
CMD ["node", "dist/server.js"]

Part 2: Modern Dockerfile Practices

Topic: Writing Modern Dockerfiles

Modern Node.js Dockerfile (2025 Best Practices)

# syntax=docker/dockerfile:1

# Pin specific versions (the digest below is a placeholder; replace it with a real one)
FROM node:22.11-alpine3.20@sha256:example AS base

# Enable Corepack for pnpm/yarn
RUN corepack enable && corepack prepare pnpm@latest --activate

WORKDIR /app

# Install security updates
RUN apk upgrade --no-cache

# Development stage
FROM base AS development

COPY package.json pnpm-lock.yaml ./
RUN --mount=type=cache,target=/root/.local/share/pnpm/store \
    pnpm install --frozen-lockfile

COPY . .

USER node
EXPOSE 3000
CMD ["pnpm", "dev"]

# Build stage
FROM base AS build

COPY package.json pnpm-lock.yaml ./
RUN --mount=type=cache,target=/root/.local/share/pnpm/store \
    pnpm install --frozen-lockfile

COPY . .
RUN pnpm build && pnpm prune --prod

# Production stage
FROM base AS production

# Add dumb-init for proper signal handling
RUN apk add --no-cache dumb-init

# Create non-root user
RUN addgroup -g 1001 nodejs && \
    adduser -S nodejs -u 1001 -G nodejs

COPY --from=build --chown=nodejs:nodejs /app/dist ./dist
COPY --from=build --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --chown=nodejs:nodejs package.json ./

USER nodejs
EXPOSE 3000

# Healthcheck with no extra dependencies (HEALTHCHECK does not support heredocs,
# so use an inline node -e script)
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD node -e "require('http').get({ host: 'localhost', port: 3000, path: '/health', timeout: 2000 }, (res) => process.exit(res.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"

ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "dist/server.js"]

Modern Python Dockerfile

# syntax=docker/dockerfile:1

# The digest below is a placeholder; pin a real one in practice
FROM python:3.12-slim@sha256:example AS base

# Install system dependencies and security updates
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y --no-install-recommends \
      build-essential \
      curl && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Install uv for faster pip installs (modern Python package manager)
RUN pip install --no-cache-dir uv

# Development stage
FROM base AS development

COPY requirements.txt ./
RUN --mount=type=cache,target=/root/.cache/uv \
    uv pip install --system -r requirements.txt

COPY . .

CMD ["python", "-m", "uvicorn", "app.main:app", "--reload", "--host", "0.0.0.0"]

# Production stage
FROM base AS production

COPY requirements.txt ./
# (uv pip install has no --no-dev flag; keep dev tools in a separate requirements-dev.txt)
RUN --mount=type=cache,target=/root/.cache/uv \
    uv pip install --system -r requirements.txt

COPY . .

# Create non-root user
RUN useradd -m -u 1001 appuser && \
    chown -R appuser:appuser /app

USER appuser

EXPOSE 8000

# HEALTHCHECK does not support heredocs; a failed urlopen raises and exits nonzero
HEALTHCHECK --interval=30s --timeout=3s \
  CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health', timeout=2)"

CMD ["python", "-m", "uvicorn", "app.main:app", "--host", "0.0.0.0", "--workers", "4"]

Topic: Multi-Platform Builds

Build once, run anywhere (x86, ARM, RISC-V).

# Create builder
docker buildx create --name multiplatform --driver docker-container --use

# Build for multiple platforms
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  --tag myuser/myapp:latest \
  --tag myuser/myapp:1.0.0 \
  --push \
  .

# Build platform-specific variants
docker buildx build \
  --platform linux/arm64 \
  --tag myuser/myapp:latest-arm64 \
  --load \
  .

# Inspect what platforms an image supports
docker buildx imagetools inspect myuser/myapp:latest

Multi-Platform Dockerfile with Conditional Dependencies

# syntax=docker/dockerfile:1

FROM --platform=$BUILDPLATFORM node:22-alpine AS base

ARG TARGETPLATFORM
ARG BUILDPLATFORM

RUN echo "Building on $BUILDPLATFORM for $TARGETPLATFORM"

# Install platform-specific binaries
RUN case "$TARGETPLATFORM" in \
      "linux/amd64")  apk add --no-cache some-amd64-lib ;; \
      "linux/arm64")  apk add --no-cache some-arm64-lib ;; \
      "linux/arm/v7") apk add --no-cache some-armv7-lib ;; \
    esac

WORKDIR /app
COPY . .
RUN npm install && npm run build

FROM node:22-alpine
COPY --from=base /app/dist ./dist
CMD ["node", "dist/server.js"]

Topic: Security with Docker Scout

Docker Scout (generally available since 2023) provides vulnerability scanning and remediation advice.

# Scan image for vulnerabilities
docker scout cves myapp:latest

# Get quick overview
docker scout quickview myapp:latest

# Compare with base image
docker scout compare --to node:22-alpine myapp:latest

# Get recommendations
docker scout recommendations myapp:latest

# Generate an SBOM (Software Bill of Materials)
docker scout sbom myapp:latest

# Scan during build
docker buildx build \
  --sbom=true \
  --provenance=true \
  -t myapp:latest \
  .

# View policy violations
docker scout policy myapp:latest

Dockerfile with Scout Attestations

# syntax=docker/dockerfile:1

FROM node:22-alpine AS base

LABEL org.opencontainers.image.source="https://github.com/user/repo"
LABEL org.opencontainers.image.description="My App"
LABEL org.opencontainers.image.licenses="MIT"

# ... rest of Dockerfile

Build with attestations:

docker buildx build \
  --sbom=true \
  --provenance=mode=max \
  --tag myapp:latest \
  --push \
  .
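In CI, the same attestation flags map onto the docker/build-push-action inputs; a sketch for GitHub Actions (the tag is a placeholder, and registry login is assumed to happen in an earlier step):

```yaml
# .github/workflows/build.yml (fragment)
- uses: docker/setup-buildx-action@v3
- uses: docker/build-push-action@v6
  with:
    push: true
    tags: myuser/myapp:latest
    sbom: true
    provenance: mode=max
```

The resulting image carries the same SBOM and provenance attestations that `docker scout` inspects locally.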

Part 3: Latest Data Services

Topic: PostgreSQL 17

PostgreSQL 17 (released Sept 2024) with modern features.

Quick Start

docker run -d \
  --name postgres17 \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_DB=mydb \
  -p 5432:5432 \
  -v postgres-data:/var/lib/postgresql/data \
  postgres:17-alpine

# Connect
docker exec -it postgres17 psql -U postgres -d mydb

Modern compose.yaml (No Version Field!)

name: postgres-stack

services:
  postgres:
    image: postgres:17-alpine
    container_name: postgres17
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
      POSTGRES_USER: postgres
      POSTGRES_DB: mydb
      # --data-checksums enables page checksums; note that wal_compression is a
      # postgresql.conf setting, not an initdb flag
      POSTGRES_INITDB_ARGS: '--data-checksums'
    secrets:
      - db_password
    ports:
      - '5432:5432'
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/01-init.sql:ro
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U postgres -d mydb']
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 10s
    networks:
      - db-network

  # pgAdmin 4 (latest)
  pgadmin:
    image: dpage/pgadmin4:latest
    container_name: pgadmin
    restart: unless-stopped
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@example.com
      PGADMIN_DEFAULT_PASSWORD_FILE: /run/secrets/pgadmin_password
      PGADMIN_CONFIG_SERVER_MODE: 'False'
      PGADMIN_CONFIG_MASTER_PASSWORD_REQUIRED: 'False'
    secrets:
      - pgadmin_password
    ports:
      - '5050:80'
    volumes:
      - pgadmin-data:/var/lib/pgadmin
    networks:
      - db-network
    depends_on:
      postgres:
        condition: service_healthy

networks:
  db-network:
    driver: bridge

volumes:
  postgres-data:
  pgadmin-data:

secrets:
  db_password:
    file: ./secrets/db_password.txt
  pgadmin_password:
    file: ./secrets/pgadmin_password.txt
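The secret files referenced above must exist before the stack starts. One way to create them (filenames match the compose file; any random generator works, /dev/urandom is used here):

```shell
mkdir -p secrets
# generate random passwords and restrict permissions
head -c 24 /dev/urandom | base64 > secrets/db_password.txt
head -c 24 /dev/urandom | base64 > secrets/pgadmin_password.txt
chmod 600 secrets/db_password.txt secrets/pgadmin_password.txt
```

Add the secrets/ directory to .gitignore so the files never land in version control.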

init.sql with PostgreSQL 17 Features

-- Use new PG17 JSON features
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    username VARCHAR(50) UNIQUE NOT NULL,
    email VARCHAR(100) UNIQUE NOT NULL,
    profile JSONB,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- JSONB subscripting (available since PostgreSQL 14) in an index expression
CREATE INDEX idx_users_profile ON users ((profile['country']));

-- Range-partitioned events table (PG17 also adds incremental backup via pg_basebackup)
CREATE TABLE events (
    id BIGSERIAL PRIMARY KEY,
    user_id INTEGER REFERENCES users(id),
    event_type VARCHAR(50),
    event_data JSONB,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
) PARTITION BY RANGE (created_at);

-- Create partitions
CREATE TABLE events_2025_q1 PARTITION OF events
    FOR VALUES FROM ('2025-01-01') TO ('2025-04-01');

-- Insert sample data
INSERT INTO users (username, email, profile) VALUES
    ('john_doe', 'john@example.com', '{"country": "US", "age": 30}'::jsonb),
    ('jane_smith', 'jane@example.com', '{"country": "UK", "age": 28}'::jsonb);

# Start stack
docker compose up -d

# Access pgAdmin at http://localhost:5050
# Email: admin@example.com, Password: (from secrets file)

# Backup database
docker exec postgres17 pg_dump -U postgres mydb > backup_$(date +%Y%m%d).sql

# Restore database
docker exec -i postgres17 psql -U postgres mydb < backup_20251009.sql

Topic: Redis 7.4 with Valkey

Redis 7.4, or Valkey (the Linux Foundation open-source fork created after Redis changed its license in 2024).

Using Valkey (Redis Fork)

name: redis-stack

services:
  # Valkey - Modern Redis alternative
  valkey:
    image: valkey/valkey:latest
    container_name: valkey
    restart: unless-stopped
    command: >
      valkey-server
      --requirepass ${REDIS_PASSWORD}
      --maxmemory 256mb
      --maxmemory-policy allkeys-lru
      --save 60 1000
      --appendonly yes
    ports:
      - '6379:6379'
    volumes:
      - valkey-data:/data
    healthcheck:
      test: ['CMD', 'valkey-cli', '-a', '${REDIS_PASSWORD}', 'ping'] # auth needed because of --requirepass
      interval: 10s
      timeout: 3s
      retries: 5
    networks:
      - cache-network

  # RedisInsight for GUI (works with Valkey too)
  redisinsight:
    image: redis/redisinsight:latest
    container_name: redisinsight
    restart: unless-stopped
    ports:
      - '5540:5540'
    volumes:
      - redisinsight-data:/data
    networks:
      - cache-network
    depends_on:
      valkey:
        condition: service_healthy

networks:
  cache-network:

volumes:
  valkey-data:
  redisinsight-data:

Redis Stack (All Redis Modules)

name: redis-stack

services:
  redis-stack:
    image: redis/redis-stack:latest
    container_name: redis-stack
    restart: unless-stopped
    environment:
      REDIS_ARGS: >
        --requirepass ${REDIS_PASSWORD}
        --maxmemory 512mb
    ports:
      - '6379:6379' # Redis
      - '8001:8001' # RedisInsight
    volumes:
      - redis-stack-data:/data
    healthcheck:
      test: ['CMD', 'redis-cli', '-a', '${REDIS_PASSWORD}', 'ping'] # auth needed because of --requirepass
      interval: 10s
      timeout: 3s
      retries: 5

volumes:
  redis-stack-data:
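The ${REDIS_PASSWORD} references above are interpolated by Compose from the shell environment or from a .env file next to compose.yaml. A minimal .env might look like this (the value is a placeholder):

```text
# .env (same directory as compose.yaml; do not commit real secrets)
REDIS_PASSWORD=change-me
```

Compose reads .env automatically; no extra flags are needed.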

# Start the stack
docker compose up -d

# Test Redis/Valkey
docker exec -it valkey valkey-cli -a $REDIS_PASSWORD ping

# Use Redis modules (if using redis-stack)
docker exec -it redis-stack redis-cli -a $REDIS_PASSWORD

# JSON operations
> JSON.SET user:1 $ '{"name":"John","age":30}'
> JSON.GET user:1

# Search operations
> FT.CREATE idx:users ON JSON PREFIX 1 user: SCHEMA $.name AS name TEXT $.age AS age NUMERIC
> FT.SEARCH idx:users '@name:John'

Topic: Apache Kafka 3.8

Kafka 3.8 with KRaft (no ZooKeeper needed; ZooKeeper mode has been deprecated since Kafka 3.5 and is removed in Kafka 4.0).

Modern Kafka with KRaft

name: kafka-stack

services:
  kafka:
    image: apache/kafka:latest
    container_name: kafka
    restart: unless-stopped
    ports:
      - '9092:9092'
      - '9093:9093'
    environment:
      # KRaft mode configuration
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      # Advertise the service name so other containers (like kafka-ui) can connect;
      # the docker exec commands below still work from inside the container
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@localhost:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
      KAFKA_NUM_PARTITIONS: 3
      KAFKA_LOG_DIRS: /var/lib/kafka/data
      CLUSTER_ID: 'MkU3OEVBNTcwNTJENDM2Qk'
    volumes:
      - kafka-data:/var/lib/kafka/data
    healthcheck:
      test: ['CMD-SHELL', '/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092']
      interval: 10s
      timeout: 10s
      retries: 5
      start_period: 20s
    networks:
      - kafka-network

  # Kafka UI (replaces deprecated tools)
  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    container_name: kafka-ui
    restart: unless-stopped
    ports:
      - '8080:8080'
    environment:
      DYNAMIC_CONFIG_ENABLED: 'true'
      KAFKA_CLUSTERS_0_NAME: local
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
    networks:
      - kafka-network
    depends_on:
      kafka:
        condition: service_healthy

networks:
  kafka-network:

volumes:
  kafka-data:

# Start Kafka
docker compose up -d

# Access Kafka UI at http://localhost:8080

# Create topic (scripts live in /opt/kafka/bin in the apache/kafka image)
docker exec -it kafka /opt/kafka/bin/kafka-topics.sh --create \
  --topic orders \
  --bootstrap-server localhost:9092 \
  --partitions 3 \
  --replication-factor 1 \
  --config retention.ms=86400000 \
  --config compression.type=snappy

# List topics
docker exec -it kafka /opt/kafka/bin/kafka-topics.sh --list \
  --bootstrap-server localhost:9092

# Produce messages
docker exec -it kafka /opt/kafka/bin/kafka-console-producer.sh \
  --topic orders \
  --bootstrap-server localhost:9092 \
  --property "parse.key=true" \
  --property "key.separator=:"

# Consume messages
docker exec -it kafka /opt/kafka/bin/kafka-console-consumer.sh \
  --topic orders \
  --from-beginning \
  --bootstrap-server localhost:9092 \
  --property print.key=true

# Check consumer lag
docker exec -it kafka /opt/kafka/bin/kafka-consumer-groups.sh \
  --bootstrap-server localhost:9092 \
  --describe --group my-consumer-group

Topic: ClickHouse Latest

ClickHouse with modern features and Kafka integration.

name: clickhouse-stack

services:
  clickhouse:
    image: clickhouse/clickhouse-server:latest
    container_name: clickhouse
    restart: unless-stopped
    ports:
      - '8123:8123' # HTTP
      - '9000:9000' # Native
    environment:
      CLICKHOUSE_DB: analytics
      CLICKHOUSE_USER: default
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
      CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT: 1
    volumes:
      - clickhouse-data:/var/lib/clickhouse
      - ./clickhouse-config.xml:/etc/clickhouse-server/config.d/custom.xml:ro
      - ./init-clickhouse.sql:/docker-entrypoint-initdb.d/init.sql:ro
    ulimits:
      nofile:
        soft: 262144
        hard: 262144
    healthcheck:
      test:
        ['CMD', 'clickhouse-client', '--password', '${CLICKHOUSE_PASSWORD}', '--query', 'SELECT 1']
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 20s
    networks:
      - analytics-network

  # Modern ClickHouse UI
  tabix:
    image: spoonest/clickhouse-tabix-web-client:latest
    container_name: clickhouse-ui
    restart: unless-stopped
    ports:
      - '8124:80'
    networks:
      - analytics-network
    depends_on:
      clickhouse:
        condition: service_healthy

networks:
  analytics-network:

volumes:
  clickhouse-data:

init-clickhouse.sql

CREATE DATABASE IF NOT EXISTS analytics;

-- Modern ClickHouse table using the MergeTree engine
CREATE TABLE IF NOT EXISTS analytics.events (
    event_id UUID DEFAULT generateUUIDv4(),
    event_time DateTime64(3) DEFAULT now64(),
    event_date Date DEFAULT toDate(event_time),
    user_id UInt64,
    event_type LowCardinality(String),
    properties Map(String, String),
    user_agent String,
    ip String,
    country LowCardinality(String),
    city String,
    INDEX idx_event_type event_type TYPE bloom_filter GRANULARITY 1,
    INDEX idx_country country TYPE set(100) GRANULARITY 1
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_date, event_time, user_id)
SETTINGS index_granularity = 8192;

-- Materialized view for real-time aggregations
-- Caveat: SummingMergeTree also sums unique_users across merges, which over-counts
-- distinct users; use AggregatingMergeTree with uniqState for exact unique counts
CREATE MATERIALIZED VIEW IF NOT EXISTS analytics.events_hourly
ENGINE = SummingMergeTree()
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_date, event_hour, event_type, country)
AS SELECT
    event_date,
    toHour(event_time) AS event_hour,
    event_type,
    country,
    count() AS event_count,
    uniqExact(user_id) AS unique_users
FROM analytics.events
GROUP BY event_date, event_hour, event_type, country;

-- Insert sample data
INSERT INTO analytics.events (user_id, event_type, properties, country, city) VALUES
    (1001, 'page_view', map('page', '/home'), 'US', 'New York'),
    (1002, 'click', map('element', 'button'), 'UK', 'London'),
    (1003, 'purchase', map('amount', '99.99'), 'IN', 'Mumbai');

# Start ClickHouse
docker compose up -d

# Query via HTTP
curl 'http://localhost:8123/?user=default&password=yourpassword&query=SELECT+version()'

# Connect via CLI
docker exec -it clickhouse clickhouse-client --password yourpassword

# Example query
SELECT
    country,
    event_type,
    count() AS events,
    uniqExact(user_id) AS users
FROM analytics.events
WHERE event_date = today()
GROUP BY country, event_type
ORDER BY events DESC;

Part 4: Modern Development Workflows

Topic: Compose Watch for Hot Reload

Compose Watch (added in Docker Compose v2.22) syncs source changes into running containers and triggers rebuilds automatically.

compose.yaml with Watch

name: dev-app

services:
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
      target: development
    container_name: backend-dev
    ports:
      - '3000:3000'
    volumes:
      - ./backend:/app
      - /app/node_modules
    environment:
      NODE_ENV: development
    develop:
      watch:
        # Sync source code changes
        - path: ./backend/src
          target: /app/src
          action: sync

        # Rebuild on package.json changes
        - path: ./backend/package.json
          action: rebuild

        # Restart on config changes
        - path: ./backend/config
          target: /app/config
          action: sync+restart

  frontend:
    build:
      context: ./frontend
      target: development
    container_name: frontend-dev
    ports:
      - '5173:5173'
    volumes:
      - ./frontend:/app
      - /app/node_modules
    develop:
      watch:
        - path: ./frontend/src
          target: /app/src
          action: sync
        - path: ./frontend/package.json
          action: rebuild

# Start with watch mode (hot reload)
docker compose watch

# Or combine with up
docker compose up --watch

# Watch will:
# - sync: Copy files instantly
# - rebuild: Rebuild image on dependency changes
# - sync+restart: Sync and restart container

Topic: Modern Full-Stack Setup

Complete modern stack with hot-reload, latest versions.

compose.yaml

name: modern-fullstack

services:
  postgres:
    image: postgres:17-alpine
    container_name: postgres
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
      POSTGRES_USER: appuser
      POSTGRES_DB: appdb
    secrets:
      - db_password
    ports:
      - '5432:5432'
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U appuser']
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend

  redis:
    image: valkey/valkey:latest
    container_name: redis
    restart: unless-stopped
    command: valkey-server --requirepass ${REDIS_PASSWORD}
    ports:
      - '6379:6379'
    volumes:
      - redis-data:/data
    healthcheck:
      test: ['CMD', 'valkey-cli', '-a', '${REDIS_PASSWORD}', 'ping'] # auth needed because of --requirepass
      interval: 10s
      timeout: 3s
      retries: 5
    networks:
      - backend

  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
      target: development
    container_name: backend
    restart: unless-stopped
    ports:
      - '3000:3000'
    environment:
      NODE_ENV: development
      # the app must also supply the password from the db_password secret
      # (e.g. read /run/secrets/db_password at startup)
      DATABASE_URL: postgresql://appuser@postgres:5432/appdb
      REDIS_URL: redis://:${REDIS_PASSWORD}@redis:6379
    volumes:
      - ./backend:/app
      - /app/node_modules
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
    healthcheck:
      test: ['CMD', 'curl', '-f', 'http://localhost:3000/health']
      interval: 30s
      timeout: 3s
      retries: 3
    networks:
      - backend
      - frontend
    develop:
      watch:
        - path: ./backend/src
          target: /app/src
          action: sync
        - path: ./backend/package.json
          action: rebuild

  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
      target: development
    container_name: frontend
    restart: unless-stopped
    ports:
      - '5173:5173'
    environment:
      VITE_API_URL: http://localhost:3000
    volumes:
      - ./frontend:/app
      - /app/node_modules
    depends_on:
      - backend
    networks:
      - frontend
    develop:
      watch:
        - path: ./frontend/src
          target: /app/src
          action: sync
        - path: ./frontend/package.json
          action: rebuild

networks:
  backend:
  frontend:

volumes:
  postgres-data:
  redis-data:

secrets:
  db_password:
    file: ./secrets/db_password.txt

# Start with hot reload
docker compose up --watch

# Or without watch
docker compose up -d

# View logs
docker compose logs -f backend

# Execute commands
docker compose exec backend pnpm prisma migrate dev
docker compose exec postgres psql -U appuser -d appdb

Topic: Real-Time Data Pipeline

Modern pipeline with Kafka 3.8 + ClickHouse.

compose.yaml

name: data-pipeline

services:
  kafka:
    image: apache/kafka:latest
    container_name: kafka
    restart: unless-stopped
    ports:
      - '9092:9092'
    environment:
      KAFKA_NODE_ID: 1
      KAFKA_PROCESS_ROLES: broker,controller
      KAFKA_LISTENERS: PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9093
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_CONTROLLER_LISTENER_NAMES: CONTROLLER
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      KAFKA_CONTROLLER_QUORUM_VOTERS: 1@kafka:9093
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_LOG_DIRS: /var/lib/kafka/data
      CLUSTER_ID: 'MkU3OEVBNTcwNTJENDM2Qk'
    volumes:
      - kafka-data:/var/lib/kafka/data
    healthcheck:
      test: ['CMD-SHELL', '/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092']
      interval: 10s
      timeout: 10s
      retries: 5
    networks:
      - pipeline

  clickhouse:
    image: clickhouse/clickhouse-server:latest
    container_name: clickhouse
    restart: unless-stopped
    ports:
      - '8123:8123'
      - '9000:9000'
    environment:
      CLICKHOUSE_DB: analytics
      CLICKHOUSE_PASSWORD: ${CLICKHOUSE_PASSWORD}
    volumes:
      - clickhouse-data:/var/lib/clickhouse
      - ./kafka-clickhouse-init.sql:/docker-entrypoint-initdb.d/init.sql:ro
    healthcheck:
      test: ['CMD', 'clickhouse-client', '--password', '${CLICKHOUSE_PASSWORD}', '--query', 'SELECT 1']
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - pipeline
    depends_on:
      kafka:
        condition: service_healthy

  producer:
    build: ./producer
    container_name: producer
    restart: unless-stopped
    environment:
      KAFKA_BROKERS: kafka:9092
      KAFKA_TOPIC: events
    networks:
      - pipeline
    depends_on:
      kafka:
        condition: service_healthy

  kafka-ui:
    image: provectuslabs/kafka-ui:latest
    container_name: kafka-ui
    restart: unless-stopped
    ports:
      - '8080:8080'
    environment:
      KAFKA_CLUSTERS_0_NAME: pipeline
      KAFKA_CLUSTERS_0_BOOTSTRAPSERVERS: kafka:9092
    networks:
      - pipeline

networks:
  pipeline:

volumes:
  kafka-data:
  clickhouse-data:

kafka-clickhouse-init.sql

CREATE DATABASE IF NOT EXISTS analytics;

-- Kafka engine table (reads from Kafka)
CREATE TABLE IF NOT EXISTS analytics.events_queue (
    event_id UUID,
    event_time DateTime64(3),
    user_id UInt64,
    event_type String,
    properties String
) ENGINE = Kafka()
SETTINGS
    kafka_broker_list = 'kafka:9092',
    kafka_topic_list = 'events',
    kafka_group_name = 'clickhouse',
    kafka_format = 'JSONEachRow',
    kafka_num_consumers = 3;

-- Target table
CREATE TABLE IF NOT EXISTS analytics.events (
    event_id UUID,
    event_time DateTime64(3),
    event_date Date DEFAULT toDate(event_time),
    user_id UInt64,
    event_type LowCardinality(String),
    properties String
) ENGINE = MergeTree()
PARTITION BY toYYYYMM(event_date)
ORDER BY (event_date, event_time, user_id);

-- Materialized view to move data from Kafka to MergeTree
CREATE MATERIALIZED VIEW IF NOT EXISTS analytics.events_mv TO analytics.events
AS SELECT * FROM analytics.events_queue;
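For the pipeline to flow, the producer must emit messages matching the events_queue schema, one JSON object per line (kafka_format = 'JSONEachRow'). An illustrative message (all values are made up):

```json
{"event_id": "9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d", "event_time": "2025-01-15 12:00:00.000", "user_id": 1001, "event_type": "page_view", "properties": "{\"page\":\"/home\"}"}
```

Note that properties is declared as String in the queue table, so nested JSON must be sent as an escaped string.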

Part 5: Production with Kubernetes

Topic: Kubernetes on Docker Desktop

Docker Desktop includes a built-in single-node Kubernetes cluster, a convenient way to develop locally against the same orchestrator you will use in production.

# Enable Kubernetes in Docker Desktop
# Settings → Kubernetes → Enable Kubernetes

# Verify
kubectl version --client
kubectl cluster-info
kubectl get nodes

# Install Helm (package manager for K8s)
brew install helm  # macOS
winget install Helm.Helm  # Windows

# Verify Helm
helm version

Topic: Deploying to Kubernetes

Modern K8s deployment (replaces Docker Swarm).

kubernetes/namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: myapp

kubernetes/configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: myapp
data:
  DATABASE_HOST: postgres
  REDIS_HOST: redis
  NODE_ENV: production

kubernetes/secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
  namespace: myapp
type: Opaque
stringData:
  database-password: yourdbpassword
  redis-password: yourredispassword
  jwt-secret: yourjwtsecret
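To apply all of these manifests as a unit, a kustomization.yaml can list them (file names match the files in this guide), after which `kubectl apply -k kubernetes/` applies everything in one command:

```yaml
# kubernetes/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - configmap.yaml
  - secret.yaml
  - postgres.yaml
  - deployment.yaml
```

This also makes it easy to layer environment-specific overlays later without duplicating the base manifests.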

kubernetes/postgres.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
  namespace: myapp
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres
  namespace: myapp
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:17-alpine
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-password
            - name: POSTGRES_USER
              value: appuser
            - name: POSTGRES_DB
              value: appdb
          ports:
            - containerPort: 5432
              name: postgres
          volumeMounts:
            - name: postgres-storage
              mountPath: /var/lib/postgresql/data
          livenessProbe:
            exec:
              command:
                - pg_isready
                - -U
                - appuser
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            exec:
              command:
                - pg_isready
                - -U
                - appuser
            initialDelaySeconds: 5
            periodSeconds: 5
      volumes:
        - name: postgres-storage
          persistentVolumeClaim:
            claimName: postgres-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: myapp
spec:
  selector:
    app: postgres
  ports:
    - port: 5432
      targetPort: 5432
  clusterIP: None

kubernetes/deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: myapp/backend:latest
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              valueFrom:
                configMapKeyRef:
                  name: app-config
                  key: NODE_ENV
            - name: DATABASE_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: app-secrets
                  key: database-password
            # $(DATABASE_PASSWORD) only expands because the variable is
            # declared earlier in this list (Kubernetes dependent env vars)
            - name: DATABASE_URL
              value: postgresql://appuser:$(DATABASE_PASSWORD)@postgres:5432/appdb
          resources:
            requests:
              memory: '256Mi'
              cpu: '250m'
            limits:
              memory: '512Mi'
              cpu: '500m'
          livenessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /ready
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: myapp
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 3000
  type: LoadBalancer

# Apply configuration
kubectl apply -f kubernetes/

# Check status
kubectl get all -n myapp

# View logs
kubectl logs -f -n myapp deployment/backend

# Scale deployment
kubectl scale deployment/backend --replicas=5 -n myapp

# Port forward for testing
kubectl port-forward -n myapp service/backend 3000:80

# Execute command in pod
kubectl exec -it -n myapp deployment/backend -- sh

# Delete everything
kubectl delete namespace myapp
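The manual `kubectl scale` shown above can also be automated with a HorizontalPodAutoscaler. A sketch (the thresholds are illustrative assumptions, not recommendations; CPU-based autoscaling needs the resource requests already set on the Deployment plus a metrics server in the cluster):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: backend
  namespace: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: backend
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Save it alongside the other manifests (for example `kubernetes/hpa.yaml`) so `kubectl apply -f kubernetes/` picks it up.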

Topic: Modern CI/CD with GitHub Actions

.github/workflows/docker.yml

name: Docker Build and Deploy

on:
  push:
    branches: [main]
    tags: ['v*']
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      security-events: write

    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=sha,prefix=,format=long

      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          context: .
          platforms: linux/amd64,linux/arm64
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
          sbom: true
          provenance: mode=max

      - name: Run Docker Scout
        if: ${{ github.event_name != 'pull_request' }}
        uses: docker/scout-action@v1
        with:
          command: cves
          # metadata-action's `tags` output is multi-line; scan a single tag
          image: ${{ fromJSON(steps.meta.outputs.json).tags[0] }}
          only-severities: critical,high
          exit-code: true

  deploy:
    needs: build
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up kubectl
        uses: azure/setup-kubectl@v4

      - name: Deploy to Kubernetes
        env:
          KUBE_CONFIG: ${{ secrets.KUBE_CONFIG }}
        run: |
          echo "$KUBE_CONFIG" > kubeconfig
          export KUBECONFIG=kubeconfig
          kubectl apply -f kubernetes/
          kubectl rollout status deployment/backend -n myapp

Part 6: Advanced Modern Features

Topic: Rootless Docker

Run Docker without root privileges (security best practice).

# Install rootless Docker
curl -fsSL https://get.docker.com/rootless | sh

# Add to PATH
export PATH=/home/$USER/bin:$PATH
export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock

# Add to ~/.bashrc
echo 'export PATH=/home/$USER/bin:$PATH' >> ~/.bashrc
echo 'export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock' >> ~/.bashrc

# Start dockerd
systemctl --user start docker

# Enable on boot
systemctl --user enable docker
sudo loginctl enable-linger $USER

# Verify
docker run hello-world

Topic: Advanced Networking

Modern Docker networking with IPv6 and static addressing.

name: advanced-networking

services:
  app:
    image: myapp:latest
    networks:
      app-network:
        ipv4_address: 172.28.0.2
        ipv6_address: 2001:db8::2

networks:
  app-network:
    enable_ipv6: true
    ipam:
      config:
        - subnet: 172.28.0.0/16
          gateway: 172.28.0.1
        - subnet: 2001:db8::/64
          gateway: 2001:db8::1
    driver_opts:
      com.docker.network.bridge.name: br-app
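Depending on your Docker Engine version, IPv6 NAT for user-defined networks may also need to be enabled daemon-wide. A sketch of `/etc/docker/daemon.json` (verify these flags against your engine's release notes; `ip6tables` was experimental in older releases and became standard in newer ones):

```json
{
  "experimental": true,
  "ip6tables": true
}
```

Restart the daemon afterwards (`sudo systemctl restart docker`) for the change to take effect.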

Topic: Monitoring with OTLP

Modern observability with OpenTelemetry.

name: monitoring

services:
  app:
    image: myapp:latest
    environment:
      OTEL_EXPORTER_OTLP_ENDPOINT: http://otel-collector:4318
      OTEL_SERVICE_NAME: myapp
    networks:
      - observability

  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ['--config=/etc/otel-collector-config.yaml']
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - '4318:4318'
    networks:
      - observability

  jaeger:
    image: jaegertracing/all-in-one:latest
    ports:
      - '16686:16686'
    networks:
      - observability

networks:
  observability:
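The compose file above mounts `./otel-collector-config.yaml` but the file itself is not shown. A minimal sketch that receives OTLP over HTTP and forwards traces to Jaeger (recent Jaeger all-in-one images accept OTLP gRPC on 4317; adjust the exporter endpoint if your setup differs):

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:

exporters:
  otlp/jaeger:
    endpoint: jaeger:4317
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/jaeger]
```

With this in place, traces sent by the app to the collector on 4318 show up in the Jaeger UI at http://localhost:16686.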

Topic: Troubleshooting Modern Issues

# Modern debugging tools
docker debug <container>  # Interactive debugging (Docker Desktop)

# Check BuildKit cache
docker buildx du

# Inspect build history with BuildKit
docker buildx build --progress=plain .

# Check image provenance
docker buildx imagetools inspect myapp:latest --format '{{json .Provenance}}'

# Analyze container with dive
docker run --rm -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  wagoodman/dive:latest myapp:latest

Conclusion

This guide covered modern Docker practices for 2025:

Modern Tools: BuildKit, Docker Init, Compose Watch, Docker Scout

Latest Services: PostgreSQL 17, Valkey/Redis 7.4, Kafka 3.8 (KRaft), ClickHouse

Production Ready: Kubernetes over Swarm, multi-platform builds, SBOM/provenance

Security First: Rootless Docker, vulnerability scanning, secrets management

Developer Experience: Hot reload, instant setup, optimized builds

Quick Start Checklist

# 1. Install Docker with BuildKit
curl -fsSL https://get.docker.com | sh

# 2. Init your project
docker init

# 3. Develop with hot reload
docker compose watch

# 4. Scan local source for vulnerabilities
docker scout cves fs://.

# 5. Build multi-platform
docker buildx build --platform linux/amd64,linux/arm64 -t myapp .

# 6. Deploy to Kubernetes
kubectl apply -f kubernetes/

Happy containerizing with modern Docker! 🚀
