Docker for DevOps Engineers - Day 19: Docker Volumes & Docker Networks
In this post, I'll walk you through Docker Volumes and Docker Networks, which are essential concepts for managing persistent data and networking between containers in a multi-container setup. Let's dive in! 😃
What We Learned: Docker Volumes & Docker Networks
Docker is an amazing tool that allows us to create isolated environments called containers. However, containers by default are ephemeral, meaning when they are removed, all the data inside them is lost. This is where Docker Volumes come into play. Volumes allow us to store data outside containers, ensuring data persistence even after the container is deleted. Docker Networks, on the other hand, allow us to connect multiple containers so that they can communicate with each other.
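As a minimal illustration (the network and container names here are just examples, not part of the bank app below), two containers attached to the same user-defined bridge network can reach each other by name:
# Create a user-defined bridge network
docker network create demo-net
# Attach two containers to it
docker run -dit --name app1 --network demo-net alpine
docker run -dit --name app2 --network demo-net alpine
# Docker's embedded DNS lets app1 resolve app2 by name
docker exec app1 ping -c 2 app2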
Task 1: Multi-Container Application with Docker Compose
To demonstrate these concepts, I worked with a Spring Boot application and set up a multi-container environment with Docker Compose. The architecture includes a MySQL database, a Spring Boot application, and an Nginx reverse proxy.
1. Set Up Docker & Docker Compose
First, I logged into my Ubuntu EC2 instance and installed Docker and Docker Compose:
# Update the package list
sudo apt-get update
# Install Docker
sudo apt-get install docker.io
# Start Docker service
sudo systemctl start docker
# Enable Docker to start on boot
sudo systemctl enable docker
# Install Docker Compose (v2)
sudo apt-get install docker-compose-plugin
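To confirm everything was installed correctly (an optional sanity check), you can print the versions and, if you like, add your user to the docker group so sudo isn't required:
# Verify Docker and the Compose plugin
docker --version
docker compose version
# Optional: run docker without sudo (takes effect after re-login)
sudo usermod -aG docker $USER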
2. Clone the Repository
I cloned the repository for the Spring Boot application:
git clone https://github.com/Amitabh-DevOps/Springboot-BankApp.git
cd Springboot-BankApp
3. Set Up docker-compose.yml
I created a docker-compose.yml file that defines the services for MySQL, the Spring Boot application, and Nginx. Here's the structure:
version: "3.8"

services:
  mysql:
    image: mysql:latest
    environment:
      - MYSQL_ROOT_PASSWORD=Test@123
      - MYSQL_DATABASE=BankDB
    volumes:
      - mysql-data:/var/lib/mysql
    networks:
      - bankapp
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 30s

  mainapp:
    build: .
    environment:
      - SPRING_DATASOURCE_USERNAME=root
      - SPRING_DATASOURCE_URL=jdbc:mysql://mysql:3306/BankDB?useSSL=false&allowPublicKeyRetrieval=true&serverTimezone=UTC
      - SPRING_DATASOURCE_PASSWORD=Test@123
    depends_on:
      mysql:
        condition: service_healthy
    networks:
      - bankapp
    restart: always
    healthcheck:
      # Note: this check requires curl to be available inside the app image
      test: ["CMD-SHELL", "curl -f http://localhost:8080/actuator/health || exit 1"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s

  nginx:
    image: nginx:latest
    ports:
      - "8080:80"
    depends_on:
      - mainapp
    networks:
      - bankapp
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf

networks:
  bankapp:

volumes:
  mysql-data:
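Before bringing anything up, it can be useful to let Compose validate the file and print the fully resolved configuration:
# Validate docker-compose.yml and show the resolved configuration
docker compose config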
4. Build the Application Using Dockerfile
In my project directory, I also created a Dockerfile to build the Spring Boot app:
#----------------------------------
# Stage 1: build the jar with Maven
#----------------------------------
FROM maven:3.8.3-openjdk-17 AS builder
WORKDIR /src
COPY . /src
RUN mvn clean install -DskipTests=true
#--------------------------------------
# Stage 2: run the jar on a lean JDK image
#--------------------------------------
FROM openjdk:17-alpine AS deployer
COPY --from=builder /src/target/*.jar /src/target/bankapp.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "/src/target/bankapp.jar"]
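Compose builds this image automatically because of the build: . directive, but if you want to build and inspect it by hand, something along these lines works (the bankapp tag is just an example):
# Build the image from the Dockerfile in the current directory
docker build -t bankapp .
# Confirm the image exists locally
docker images bankapp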
5. Nginx Configuration
The Nginx configuration (nginx.conf) sets up a reverse proxy that distributes traffic across the instances of the Spring Boot app:
upstream mainapp {
    # When the mainapp service is scaled, Docker's embedded DNS resolves the
    # service name to every replica; each replica listens on port 8080 inside
    # its own container.
    server mainapp:8080;
}

server {
    listen 80;

    location / {
        proxy_pass http://mainapp;
    }
}
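Once the containers are running (step 6 below), the configuration can also be sanity-checked from inside the Nginx container:
# Test the Nginx configuration syntax inside the running container
docker compose exec nginx nginx -t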
6. Run the Containers
I ran the containers in detached mode:
docker compose up -d
Next, I scaled the mainapp service to 3 instances:
docker compose up --scale mainapp=3 -d
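To confirm that all three replicas are up and the proxy responds, a couple of quick checks help (the exact output will vary):
# List the services and their replicas
docker compose ps
# Send a request through the Nginx proxy published on host port 8080
curl -I http://localhost:8080/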
7. Shut Down the Environment
Once I finished testing the app, I shut down the environment:
docker compose down
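If you also want to remove the named volumes declared in the compose file (careful: this deletes the MySQL data), the -v flag takes care of that:
# Stop and remove containers, networks, and named volumes
docker compose down -v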
Task 2: Docker Volumes
Now, let's move on to Docker Volumes. Volumes are critical when you need data to outlive a container or to share it between containers. Here's how I worked with Docker volumes:
1. Create Containers Sharing a Named Volume
I created a shared volume named shared-data and mounted it into two containers:
docker volume create shared-data
docker run -dit --name container1 --mount source=shared-data,target=/data alpine
docker run -dit --name container2 --mount source=shared-data,target=/data alpine
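To see where Docker stores this volume on the host, docker volume inspect shows its mountpoint:
# Show details of the volume, including its mountpoint on the host
docker volume inspect shared-data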
2. Write and Read Data
I wrote data to the volume using container1:
docker exec container1 sh -c "echo 'Hello from Container1' > /data/hello.txt"
Then, I read the data from container2:
docker exec container2 cat /data/hello.txt
3. Verify the Shared Volume
To verify the volume, I listed all Docker volumes:
docker volume ls
Finally, after verifying it, I removed the containers and then the shared volume (a volume can't be removed while containers still reference it):
docker rm -f container1 container2
docker volume rm shared-data
Conclusion
Today, we explored the power of Docker Volumes and Networks. Docker Compose allows us to manage multi-container applications, while Docker Volumes help us persist data across containers. Docker Networks enable seamless communication between containers, making it easy to scale and manage complex applications.
I hope you found this guide helpful. Let me know in the comments if you have any questions or if you'd like to dive deeper into Docker concepts!
Check out my LinkedIn for more updates on my learning journey!