YouTube Summaries | Docker Tutorial for Beginners
January 26th, 2024
Table of Contents
- Introduction
- Docker Basics
- Setting Up Docker
- Docker Commands
- Dockerizing a Node.js App
- Working with Docker Compose
- Private Docker Registry
- Deploying Containerized Apps
- Docker Volumes - Persist Data in Docker
- Volumes Demo - Configure Persistence for Our Demo Project
Introduction
As with all of these summaries, writing this up helps cement what I learned from the tutorial, and hopefully it provides some value to you as well.
Docker Basics
In this foundational section, the tutorial delves into core Docker concepts, offering a comprehensive understanding of key components and their functionalities.
Docker Image
A Docker image serves as a lightweight, standalone, and executable package encapsulating an application and its dependencies. To illustrate, consider a Node.js application. The Dockerfile for such an application might look like this:
# Use an official Node.js runtime as a parent image
FROM node:14
# Set the working directory to /app
WORKDIR /app
# Copy package.json and package-lock.json to the working directory
COPY package*.json ./
# Install the application dependencies
RUN npm install
# Copy the content of the local src directory to the working directory
COPY . .
# Inform Docker that the application will run on port 3000
EXPOSE 3000
# Define the command to run the application
CMD ["node", "app.js"]
This Dockerfile instructs Docker to:
- Use the official Node.js image from the Docker Hub.
- Set the working directory within the container.
- Copy the application’s package files to the working directory and install dependencies.
- Copy the application code to the container.
- Expose port 3000, which the Node.js application listens on (note that EXPOSE is documentation; it does not publish the port by itself).
- Specify the command to run the application.
Docker Container
A Docker container is a running instance of a Docker image: the image plus a writable layer and an isolated process environment. To build and run a container from the previously defined Dockerfile, use the following commands:
docker build -t my-node-app .
docker run -p 4000:3000 my-node-app
Here, `docker build` creates an image named `my-node-app` from the Dockerfile in the current directory. `docker run` then starts a container from that image, mapping port 4000 on the host to port 3000 within the container.
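To quickly check the mapping, list the running containers and request the app through the mapped host port (a minimal sanity check, assuming the app serves HTTP at its root path):
docker ps
curl http://localhost:4000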
Docker Working Directory
In Docker, the working directory, set using the WORKDIR
instruction in a Dockerfile, defines the location within the container where subsequent instructions will be executed. It serves as the context for any relative paths specified in the Dockerfile.
Consider the following snippet from a typical Dockerfile:
# Set the working directory to /app
WORKDIR /app
In this example, `WORKDIR /app` instructs Docker to set the working directory to `/app`. Subsequent instructions, such as `COPY`, `RUN`, and `CMD`, will operate within this directory unless specified otherwise.
Benefits of using the `WORKDIR` instruction include:
- Clarity and Organization: Explicitly stating the working directory enhances the Dockerfile’s readability and organization.
- Relative Paths: Using relative paths in subsequent instructions becomes more straightforward. If files are being copied, the paths are relative to the working directory.
- Consistency Across Environments: Defining a consistent working directory ensures that the container behaves predictably across different environments.
Here’s an example to illustrate its usage:
# Set the working directory to /app
WORKDIR /app
# Copy the local package files to the working directory
COPY package*.json ./
# Install application dependencies
RUN npm install
# Copy the application code to the working directory
COPY . .
# Set environment variables
ENV NODE_ENV production
# Specify the command to run the application
CMD ["node", "app.js"]
In this scenario, all file operations and subsequent commands occur within the `/app` directory, providing a clear and organized structure for building and running the application within the Docker container.
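A quick way to see `WORKDIR` in effect is to override the image's command and print the directory a container starts in (using the `my-node-app` image built earlier):
# Prints /app, since WORKDIR set the container's default directory
docker run --rm my-node-app pwd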
Setting Up Docker
Setting up Docker involves installing Docker Engine, configuring it, and understanding basic Docker commands. Below is a comprehensive guide with code examples for each step.
- Install Docker: Follow the instructions for your specific operating system.
Linux:
# Update the apt package index
sudo apt-get update
# Install packages to allow apt to use a repository over HTTPS
sudo apt-get install \
apt-transport-https \
ca-certificates \
curl \
software-properties-common
# Add Docker’s official GPG key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
# Set up the stable Docker repository
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
# Update the apt package index again
sudo apt-get update
# Install Docker Engine
sudo apt-get install docker-ce docker-ce-cli containerd.io
Windows and macOS: Download and install Docker Desktop from the official Docker website.
- Verify Docker Installation: Ensure Docker is installed correctly by running the following command:
docker --version
- Run a Simple Docker Container: Run a basic container to verify that Docker is working.
docker run hello-world
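On Linux, a common post-install step is allowing your user to run Docker without sudo (log out and back in for the group change to take effect):
# Add the current user to the docker group
sudo usermod -aG docker $USER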
Docker Commands
Understanding key Docker commands is essential for efficiently managing containers and images. Here’s an in-depth exploration of important Docker commands with detailed code examples.
1. Basic Commands
1.1. docker version
Check the Docker version installed on your system.
docker version
1.2. docker info
Display system-wide information about Docker.
docker info
2. Image Commands
2.1. docker images
List all available Docker images.
docker images
2.2. docker pull <image>
Download a Docker image from Docker Hub.
docker pull ubuntu:latest
2.3. docker rmi <image>
Remove a Docker image.
docker rmi ubuntu:latest
3. Container Lifecycle
3.1. docker ps
List running containers.
docker ps
3.2. docker ps -a
List all containers, including stopped ones.
docker ps -a
3.3. docker create <image>
Create a new container but do not start it.
docker create --name my-container ubuntu:latest
3.4. docker start <container>
Start a stopped container.
docker start my-container
3.5. docker stop <container>
Stop a running container.
docker stop my-container
3.6. docker rm <container>
Remove a container.
docker rm my-container
4. Container Interaction
4.1. docker exec -it <container> <command>
Run a command in a running container.
docker exec -it my-container /bin/bash
4.2. docker logs <container>
View logs of a container.
docker logs my-container
5. Networking
5.1. docker network ls
List Docker networks.
docker network ls
5.2. docker network create <network>
Create a new Docker network.
docker network create my-network
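Containers attached to the same user-defined network can reach each other by container name via Docker's built-in DNS. A minimal sketch of this (container names are illustrative):
# Start a MongoDB container on the network, then ping it by name from another container
docker run -d --name db --network my-network mongo:latest
docker run --rm --network my-network busybox ping -c 1 db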
6. Volume Commands
6.1. docker volume ls
List Docker volumes.
docker volume ls
6.2. docker volume create <volume>
Create a Docker volume.
docker volume create my-volume
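To use the volume, mount it into a container with -v; anything written under the mount point outlives the container:
# Files created in /data are stored in my-volume, not in the container layer
docker run -it --rm -v my-volume:/data ubuntu:latest bash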
7. Dockerfile Build
7.1. docker build -t <tag> .
Build a Docker image from the Dockerfile in the current directory.
docker build -t my-node-app .
8. Docker Compose
8.1. docker-compose up
Start services defined in a `docker-compose.yml` file.
docker-compose up
8.2. docker-compose down
Stop and remove containers defined in a `docker-compose.yml` file.
docker-compose down
Dockerizing a Node.js App
Dockerizing a Node.js application involves creating a Docker image to package the app and its dependencies. Here’s a detailed guide with code examples to help you efficiently Dockerize your Node.js application.
1. Project Structure
Ensure your Node.js project has a well-defined structure. A typical structure might include:
my-node-app/
|-- Dockerfile
|-- package.json
|-- src/
| |-- app.js
|-- .dockerignore
2. Dockerfile
Create a Dockerfile in the project root to specify the image build process. Below is a sample Dockerfile for a Node.js app:
# Use an official Node.js runtime as a parent image
FROM node:14
# Set the working directory in the container
WORKDIR /usr/src/app
# Copy package.json and package-lock.json to the working directory
COPY package*.json ./
# Install app dependencies
RUN npm install
# Bundle app source
COPY . .
# Expose the port the app runs on
EXPOSE 3000
# Define the command to run your app
CMD ["node", "src/app.js"]
3. .dockerignore
Create a `.dockerignore` file to exclude unnecessary files and directories from the build context. This helps reduce the image size.
node_modules
npm-debug.log
4. Building the Docker Image
Open a terminal in the project directory and run the following command to build the Docker image. Replace `my-node-app` with your desired image tag.
docker build -t my-node-app .
5. Running the Dockerized Node.js App
Once the image is built, you can run a container using the following command. This maps port 3000 on the host to port 3000 in the container.
docker run -p 3000:3000 -d my-node-app
6. Verifying the Setup
Visit `http://localhost:3000` in your browser to ensure the Node.js app is running correctly within the Docker container.
7. Optimizing for Development
During development, you may want to mount your local code into the container to enable live-reloading. Modify the `docker run` command as follows:
docker run -p 3000:3000 -v $(pwd):/usr/src/app -d my-node-app
This mounts the current directory into the container, allowing changes in your local code to reflect immediately.
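One caveat with this setup: the bind mount also covers /usr/src/app/node_modules, hiding the dependencies installed during the image build. A common workaround, sketched here, is adding an anonymous volume for that path:
docker run -p 3000:3000 \
  -v $(pwd):/usr/src/app \
  -v /usr/src/app/node_modules \
  -d my-node-app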
8. Docker Compose (Optional)
For a more complex setup or to define multi-container applications, consider using Docker Compose. Create a `docker-compose.yml` file:
version: "3"
services:
my-node-app:
build:
context: .
ports:
- "3000:3000"
Run your app using:
docker-compose up
Context in Docker Compose
In Docker Compose, the `context` is a parameter used to specify the build context for building Docker images. The build context is the set of files located at the specified path (either a local directory or a URL) that are sent to the Docker daemon for building the image. Here's a breakdown of how the `context` is used in Docker Compose:
- Local Directory: If the `context` is set to a local directory, Docker Compose will send the contents of that directory (and its subdirectories) to the Docker daemon. Example using a local directory:
version: "3"
services:
  my-service:
    build:
      context: ./my-app
- URL: The `context` can also be set to a URL, such as a Git repository URL. Docker Compose will clone the repository and use its contents as the build context. Example using a Git repository URL:
version: "3"
services:
  my-service:
    build:
      context: https://github.com/username/my-repo.git
Usage in Dockerfile
Within the specified build context, Docker Compose looks for a `Dockerfile` (or a file specified using the `dockerfile` option). The `Dockerfile` contains instructions for building the Docker image.
Practical Considerations
- Local Build Context:
  - If using a local directory as the build context, ensure that only necessary files are included to minimize image size.
  - Use a `.dockerignore` file to exclude unnecessary files and directories.
- Remote Build Context (Git repository):
  - Specify a branch, tag, or commit using the `#` fragment syntax in the URL (e.g., `https://github.com/username/my-repo.git#branch`).
  - This approach allows you to build Docker images directly from a source code repository.
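The same Git-based build context works directly with the Docker CLI, which is a quick way to test a remote build outside Compose (repository URL and branch are placeholders):
docker build -t my-service https://github.com/username/my-repo.git#branch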
Working with Docker Compose
Docker Compose is a powerful tool for defining and managing multi-container Docker applications. Let’s delve into various aspects of working with Docker Compose, backed by illustrative code examples.
1. Defining Services
In a Docker Compose file (`docker-compose.yml`), services are defined as components of your application. Each service represents a container. Below is a sample Docker Compose file with two services: one for a Node.js app and another for a MongoDB database:
version: "3"
services:
web:
image: node:14
ports:
- "3000:3000"
volumes:
- ./app:/app
depends_on:
- database
database:
image: mongo:latest
In this example:
- `web` is the name of the Node.js service.
- `database` is the name of the MongoDB service.
- Services are defined with their respective base images (`node:14` and `mongo:latest`).
- Port mappings are specified for the Node.js app.
- A volume is mounted to persist the Node.js app code.
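Because Compose puts both services on a shared default network, the Node.js app can reach MongoDB using the service name as a hostname (e.g., a connection string like mongodb://database:27017). One way to verify the name resolves, as a sketch (getent ships with the Debian-based node image):
# Resolve the database service by name from a one-off web container
docker-compose run --rm web getent hosts database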
2. Environment Variables
Environment variables can be set for services in Docker Compose, facilitating configuration. Here’s an example:
version: "3"
services:
web:
image: node:14
environment:
NODE_ENV: production
ports:
- "3000:3000"
In this case, the `NODE_ENV` environment variable is set to `production` for the Node.js service.
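To confirm the value inside the container, or override it for a one-off run, the -e flag on docker-compose run is handy (a quick check, not part of the tutorial):
# Prints "production" as set in the compose file
docker-compose run --rm web printenv NODE_ENV
# Override the value for a single run
docker-compose run --rm -e NODE_ENV=development web printenv NODE_ENV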
3. Networks
Docker Compose automatically creates a default network for the services defined in the same compose file. You can also define custom networks. Example:
version: "3"
services:
web:
image: node:14
networks:
- my-network
networks:
my-network:
driver: bridge
Here, a custom network named `my-network` is defined, and the `web` service is connected to it.
4. Volumes
Volumes in Docker Compose allow for persistent data storage. Example:
version: "3"
services:
web:
image: node:14
volumes:
- my-data:/app/data
volumes:
my-data:
In this example, a volume named `my-data` is created, and the `web` service mounts it at the `/app/data` path.
5. Docker Compose Commands
Docker Compose provides various commands to manage your application. Common commands include:
- Build and Start:
docker-compose up --build
This command builds the images and starts the services defined in the compose file.
- Stop:
docker-compose down
Stops and removes the services.
- Scale:
docker-compose up --scale web=3
Scales the number of containers for the `web` service to 3. Note that a fixed host port mapping such as "3000:3000" only works for one replica; omit the host port (e.g., "3000") so Docker assigns ephemeral host ports when scaling.
6. Override Compose Files
Compose allows using multiple files and overriding configurations. Example:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml up
Here, `docker-compose.prod.yml` overrides settings from the base `docker-compose.yml` file for a production environment.
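To inspect exactly what the merged configuration looks like before starting anything, Compose can print the effective result of combining the files:
docker-compose -f docker-compose.yml -f docker-compose.prod.yml config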
Private Docker Registry
A private Docker registry serves as a centralized repository for managing and distributing Docker images within a secure, isolated environment. Below, we’ll extensively explore the setup and usage of a private Docker registry, supported by illustrative code examples.
1. Setting Up a Private Docker Registry
To establish a private registry, you can utilize the official Docker Registry image. Below is a simplified `docker-compose.yml` file:
version: "3"
services:
registry:
image: registry:latest
ports:
- "5000:5000"
environment:
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /var/lib/registry
In this example:
- The `registry` service utilizes the official Docker Registry image.
- Port `5000` is exposed to allow external access.
- An environment variable configures the storage directory for the registry.
2. Running the Private Registry
To start the private registry, execute the following Docker Compose command:
docker-compose up -d
This command initializes the registry service in detached mode.
3. Pushing Images to the Private Registry
Assuming you have a Docker image to push, here’s an example of pushing an image to the private registry:
# Tag your local image with the private registry address
docker tag my_image localhost:5000/my_image
# Push the tagged image to the private registry
docker push localhost:5000/my_image
In this snippet, replace `my_image` with the name of your Docker image. The tag ensures association with the private registry.
4. Pulling Images from the Private Registry
Pulling an image from the private registry is straightforward:
# Pull the image from the private registry
docker pull localhost:5000/my_image
This command fetches the image stored in the private registry.
5. Securing the Private Registry
For enhanced security, you can configure SSL/TLS and implement basic authentication. Below is an example `docker-compose.yml` snippet:
version: "3"
services:
registry:
image: registry:latest
ports:
- "5000:5000"
environment:
REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY: /var/lib/registry
REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
REGISTRY_HTTP_TLS_KEY: /certs/domain.key
REGISTRY_AUTH: htpasswd
REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
volumes:
- /path/to/certs:/certs
- /path/to/auth:/auth
In this extended configuration:
- SSL/TLS certificates are mounted for encrypted communication.
- Basic authentication using an htpasswd file is enabled.
6. Authentication with Docker CLI
Before pushing or pulling images, authenticate with the private registry:
docker login localhost:5000
Enter the appropriate credentials when prompted.
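To create the htpasswd file referenced in the compose snippet, one documented approach uses the htpasswd tool from the httpd image with bcrypt, which the registry requires (username, password, and output path are placeholders):
docker run --rm --entrypoint htpasswd httpd:2 -Bbn myuser mypassword > /path/to/auth/htpasswd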
Deploying Containerized Apps
Deploying containerized applications using Docker Compose is not limited to local environments; it seamlessly extends to external servers. Here’s a detailed overview:
1. Docker Compose File for External Deployment:
In your `docker-compose.yml` file, specify the necessary configurations for external deployment. Assume the external server has Docker installed, and port 3000 is open for your Node.js application:
version: "3"
services:
web:
image: my-node-app:1.0
ports:
- "3000:3000"
database:
image: mongo:latest
frontend:
image: nginx:latest
ports:
- "80:80"
2. Deploying to an External Server:
- Copy Docker Compose File: Transfer your `docker-compose.yml` file to the external server (see the scp sketch below).
- Remote Login: Access the external server via SSH:
ssh user@your-external-server
- Navigate to Project Directory: Move to the directory containing your Docker Compose file.
- Deploy with Docker Compose: Run the following commands:
# Build images (if not done previously)
docker-compose build
# Deploy the stack
docker-compose up -d
This orchestrates the deployment on the external server.
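For the copy step above, scp is one straightforward option (host and destination path are placeholders):
scp docker-compose.yml user@your-external-server:~/my-app/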
3. Viewing Deployed Containers on External Server:
Check running containers on the external server:
docker-compose ps
This provides insights into deployed containers, their status, and exposed ports.
4. Accessing Deployed Services:
Access your services through the external server’s IP or domain, using the specified ports.
5. Updating Services on External Server:
If updates are made to your application, remotely update the deployed services:
# Pull the latest images and recreate containers
docker-compose pull
docker-compose up -d
6. Stopping and Removing Services on External Server:
When done, stop and remove the deployed services:
docker-compose down
GitHub Actions for Building a Docker Image, Pushing to ECR, and Deploying to ECS
To automate the process of building a Docker image, pushing it to Amazon ECR (Elastic Container Registry), and triggering a deployment to Amazon ECS (Elastic Container Service) upon pushing to the main branch, use the following example GitHub Actions workflow (`.github/workflows/deploy.yml`):
name: Build, Push, and Deploy to ECS
on:
push:
branches:
- main
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout Repository
uses: actions/checkout@v2
- name: Configure AWS Credentials
uses: aws-actions/configure-aws-credentials@v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: <your-aws-region>
- name: Login to ECR
id: ecr-login
run: |
aws ecr get-login-password --region <your-aws-region> | docker login --username AWS --password-stdin <your-ecr-repository-url>
- name: Build Docker Image
run: |
docker build -t <your-image-name> .
docker tag <your-image-name>:latest <your-ecr-repository-url>/<your-image-name>:latest
- name: Push to ECR
run: |
docker push <your-ecr-repository-url>/<your-image-name>:latest
- name: Deploy to ECS
run: |
# Use AWS CLI or ECS CLI commands to update ECS service/task definition and trigger a deployment
Explanation:
- Trigger: The workflow is triggered on pushes to the main branch.
- Steps:
  - Checkout Repository: The workflow checks out your GitHub repository.
  - Configure AWS Credentials: Sets up AWS credentials for subsequent AWS-related actions. Add `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` as secrets in your GitHub repository.
  - Login to ECR: Logs in to Amazon ECR to authenticate Docker for pushing images.
  - Build Docker Image: Builds the Docker image from the Dockerfile in the repository, tags it, and prepares it for pushing to ECR. Replace `<your-image-name>` with your desired image name.
  - Push to ECR: Pushes the Docker image to Amazon ECR. Ensure you've set up the ECR repository URL and replace `<your-ecr-repository-url>` and `<your-image-name>` accordingly.
  - Deploy to ECS: Use AWS CLI or ECS CLI commands in this step to update the ECS service or task definition and trigger a deployment. Customize this part based on your ECS deployment strategy; see the sketch at the end of this section.
Notes:
- Replace `<your-aws-region>` and `<your-ecr-repository-url>` with your AWS region and ECR repository URL, respectively.
- Adapt the deployment step to your specific ECS deployment needs and strategy.
- Ensure AWS CLI and Docker are available in the GitHub Actions runner environment.
This example assumes you have a Dockerfile, AWS ECS set up, and relevant deployment scripts for your ECS service/task. Customize it based on your project structure and deployment requirements.
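As a sketch of what the deploy step might run, one common approach is forcing a new deployment of an existing ECS service so it pulls the freshly pushed :latest image (cluster and service names are placeholders):
aws ecs update-service \
  --cluster <your-cluster> \
  --service <your-service> \
  --force-new-deployment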
Docker Volumes - Persist Data in Docker
Docker volumes are essential for persisting data across container restarts and ensuring seamless data management. Here’s an in-depth exploration of Docker volumes with practical code examples.
Creating a Docker Volume
To create a Docker volume named “mongo_data,” use the following Docker Compose snippet:
version: "3"
services:
mongodb:
image: mongo
volumes:
- mongo_data:/data/db
volumes:
mongo_data:
In this example, the top-level `volumes` section defines a named volume called `mongo_data`, and the `mongodb` service mounts it at the `/data/db` path within the container. This ensures that MongoDB data persists even if the container is stopped or removed.
Inspecting Docker Volumes
To inspect the details of Docker volumes, you can use the following commands:
# List all volumes
docker volume ls
# Inspect a specific volume (replace <volume-name> with your volume name)
docker volume inspect <volume-name>
Understanding the volume details helps you manage and troubleshoot data persistence effectively.
Data Persistence with Bind Mounts
Bind mounts allow you to link a directory on the host machine to a directory in the container. This method is useful during development when you want immediate feedback on code changes.
version: "3"
services:
web_app:
image: my_web_app
volumes:
- ./app:/app
In this example, changes in the `./app` directory on the host reflect instantly in the `/app` directory within the container.
Anonymous vs. Named Volumes
Docker volumes can be anonymous or named. Anonymous volumes, without a specified name, are harder to manage and identify. Named volumes provide better clarity and control.
Docker Volume Paths on Host OS
The location of Docker volumes on the host machine varies by operating system:
- Windows: C:\ProgramData\Docker\volumes
- Linux: /var/lib/docker/volumes
- Mac: /var/lib/docker/volumes (inside the Linux VM that Docker Desktop runs, not directly on the macOS filesystem)
Viewing Volume Contents
To explore the data stored in a Docker volume, you can access the host machine’s volume path or dive into the Docker volume from within a container.
# Access the host machine's volume path
# (replace <host-path> with the appropriate path for your OS)
cd <host-path>
# Enter the container and navigate to the volume path
docker exec -it <container-id> sh
cd /path/in/container
Understanding where Docker stores volumes aids in managing and validating data persistence.
Volumes Demo - Configure Persistence for Our Demo Project
Let’s delve into a detailed demonstration of configuring persistence for our demo project using Docker volumes. This practical guide will walk you through the process with code examples.
Project Structure
Consider a basic project structure with a Dockerized application, and we want to ensure data persistence for a PostgreSQL database.
project-root/
|-- docker-compose.yml
|-- app/
| |-- Dockerfile
|-- data/
Docker Compose Setup
Start by defining a Docker Compose file (docker-compose.yml
) to orchestrate the PostgreSQL service with a named volume.
version: "3"
services:
postgresql:
image: postgres
environment:
POSTGRES_DB: mydatabase
POSTGRES_USER: myuser
POSTGRES_PASSWORD: mypassword
volumes:
- pg_data:/var/lib/postgresql/data
volumes:
pg_data:
In this example, a service named `postgresql` uses the official PostgreSQL image. The `volumes` section introduces a named volume, `pg_data`, mapped to the PostgreSQL data directory.
Dockerfile for the App
Assume our application has a Dockerfile (`app/Dockerfile`). This file might look like the following:
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD ["npm", "start"]
This Dockerfile sets up a Node.js environment for our application. Adjust the specifics based on your project requirements.
Starting the Containers
Execute the following command in the project root to initiate the containers defined in the Docker Compose file:
docker-compose up -d
This command orchestrates the PostgreSQL service, linking it to the specified named volume (`pg_data`). The `-d` flag runs the containers in the background.
Verifying Persistence
To validate persistence, make changes in the PostgreSQL database through your application. For example, insert data into a table.
# Connect to the PostgreSQL container (replace <container-id> with the actual container ID)
docker exec -it <container-id> psql -U myuser mydatabase
-- Inside the PostgreSQL shell, execute SQL commands
INSERT INTO mytable (column1, column2) VALUES ('value1', 'value2');
After making changes, stop and remove the containers:
docker-compose down
Restarting Containers
Reinitiate the containers with the same `docker-compose up -d` command. You'll observe that the changes made to the PostgreSQL database persist across container restarts due to the named volume (`pg_data`).
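To confirm the data survived, query the table again after the restart (replace <container-id> with the new container's ID; mytable is the illustrative table from above):
docker exec -it <container-id> psql -U myuser mydatabase -c "SELECT * FROM mytable;"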