What is Docker
- Docker is a container technology: a tool for creating and managing containers.
- Container: a standardized unit of software, e.g. a package of code plus the dependencies needed to run it (Node.js code + the Node.js runtime).
- Pros:
- We want the exact same environment for development and production -> this ensures the app works exactly as tested
- It should be easy to share a common development environment/setup with (new) employees and colleagues
- We don't want to uninstall and re-install local dependencies and runtimes all the time
- Compared to VMs:
- low impact on OS, very fast, minimal disk space usage
- sharing, re-building and distribution is easy
- encapsulate apps/environments instead of a "whole machine"
- Images:
- Are templates/blueprints for containers
- Contain code + required tools/runtimes
- Containers:
- The running "unit of software"
- Multiple containers can be created based on one image
First example
- CMD is executed when a container is started from the image
FROM node:14
WORKDIR /app
# first dot: copy all files from the folder containing the Dockerfile
# second dot: the destination to paste them into inside the container
# because we've set `WORKDIR /app`, the destination resolves to /app
# COPY . .
COPY . /app
RUN npm install
EXPOSE 80
CMD ["node", "server.js"]
- Create image: `docker build .` (see the full build-and-run flow at the end of this section)
- Create new container:
docker run -p 3000:80 image_id
- If we don't want Docker to block the terminal, run it in detached mode:
docker run -p 3000:80 -d image_id
- Or attach again to detached container:
docker attach container_name
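Putting the commands above together, a minimal sketch of the full flow (the image name `node-demo` is chosen here just for illustration):
docker build -t node-demo .
docker run -p 3000:80 -d node-demo
docker ps                       # note the generated container name
docker stop container_name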
Basic commands
- Listing running containers:
docker ps
- Stop a container:
docker stop container_id|container_name
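A quick sketch (the `-a` flag additionally lists stopped containers):
docker ps        # running containers only
docker ps -a     # running and stopped containers
docker stop container_name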
Images and Containers
- Running interactive mode:
docker run -it node
- Images are layer-based: every instruction creates a layer, and these layers are cached.
- Since each instruction's result is cached, we can reorder the Dockerfile like this to avoid reinstalling packages every time only the code changes (see the rebuild sketch after the Dockerfile):
FROM node:14
WORKDIR /app
COPY package.json /app
RUN npm install
COPY . /app
EXPOSE 80
CMD ["node", "server.js"]
- Start a created container:
docker start -a container_id|container_name
- See the logs of a container:
docker logs -f container_name
- For a console app, we enable interactive mode with
docker run -it image_id
or
docker start -a -i container_name
- Remove container:
docker rm container_name container_name container_name
- Listing images:
docker images
- To remove an image, we need to remove containers using it first:
docker rmi image_id
- Remove all unused images:
docker image prune -a
- Automatically remove container when it exits:
docker run -p 3000:80 -d --rm image_id
- Inspect an image:
docker image inspect image_id
- Copy files into a container:
docker cp dummy/. container_name:/test
- Copy files from a container:
docker cp container_name:/test dummy
- Create container with name:
docker run -p 3000:80 -d --rm --name myapp image_id
- Create image with name and tag.
docker build -t myapp:latest .
- Rename (re-tag) an image; the old tag remains:
docker tag image_name:latest new_name:latest
Data & Volumes
- A volume is a folder/file inside a Docker container which is connected to a folder outside of the container (on the host machine).
- Volumes persist if a container shuts down.
- A container can write data into a volume and read data from it.
Named volumes
are not attached to a specific container, so they persist even if the container is removed.
docker run -d -p 3000:80 --rm --name my-app -v feedback:/app/feedback feedback-node:latest
Anonymous volumes
are removed automatically if you start/run a container with the `--rm` option. If you start a container without that option, the anonymous volume is NOT removed, even if you remove the container (with `docker rm ...`). You can clear them via `docker volume rm VOL_NAME` or `docker volume prune`.
- Bind mount: a path on your host machine, which you know and specify yourself, that is mapped to a container-internal path. It enables direct interaction with the container's files and should be used only in development.
docker run -d -p 3000:80 --rm --name my-app
-v feedback:/app/feedback
-v "D:\projects\data-volumes-01-starting-setup:/app
feedback-node:latest
- We can use a shortcut instead of the full path: `-v $(pwd):/app` on macOS/Linux or `-v "%cd%":/app` on Windows.
- Executing the command above causes a "module not found" error if we haven't run `npm install` in the local project yet, because we're mapping the container's /app folder to the local project folder and Docker uses everything in that local folder. To make Docker keep the node_modules installed inside the container, we add an anonymous volume:
docker run -d -p 3000:80 --rm --name my-app
-v feedback:/app/feedback
-v "D:\projects\data-volumes-01-starting-setup:/app"
-v /app/node_modules
feedback-node:latest
or add `VOLUME ["/app/node_modules"]` in the Dockerfile.
- `-v /app/node_modules` overrides the bind mount for that path, because the more specific (longer) container path wins.
- Add `:ro` to make the bind mount read-only, so the container and the running application cannot write to the bind-mounted folder: `-v $(pwd):/app:ro`
- Add another volume such as `-v /app/temp` to still allow writes inside /app/temp (see the combined command below).
- List volumes:
docker volume ls
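A combined sketch of the flags above (paths and the feedback-node image follow the earlier examples; run from the project folder on macOS/Linux):
docker run -d -p 3000:80 --rm --name my-app \
  -v feedback:/app/feedback \
  -v "$(pwd):/app:ro" \
  -v /app/temp \
  -v /app/node_modules \
  feedback-node:latest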
- We can use environment variables in the Dockerfile:
COPY . .
ENV PORT 80
EXPOSE $PORT
- Override them at run time with `--env` / `-e`:
docker run -d --rm -p 3000:8000 --env PORT=8000 image_name
or load them from a file:
docker run -d --rm -p 3000:8000 --env-file ./.env image_name
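The referenced ./.env file could simply contain (hypothetical content):
PORT=8000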
- Build arguments (ARG) are only available at build time:
FROM node:14
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
ARG DEFAULT_PORT=80
ENV PORT $DEFAULT_PORT
EXPOSE $PORT
CMD ["npm", "start"]
docker build -t app:tag --build-arg DEFAULT_PORT=8000 .
Networking: (Cross-)Container Communication
- Applications running in a container can reach the local host machine via the address host.docker.internal
- Containers can communicate with other containers by manually finding the IP address or by using a network.
- Get a container's IP address via
docker inspect container_name
- Create a new network:
docker network create network_name
- List networks:
docker network ls
- Run the mongo container and the app container with the option `--network network_name`; containers on the same network can reach each other by container name (remember to update the mongo URL to mongodb://mongo_container:27017/mydb). See the sketch below.
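A minimal sketch with hypothetical names (my-net, my-app-image) and an app configured to use mongodb://mongodb:27017/mydb:
docker network create my-net
docker run -d --name mongodb --network my-net mongo
docker run -d --name my-app --network my-net -p 3000:80 my-app-image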
Multi-Container Applications with Docker
- Mongo
docker run --name mongodb --rm -d -p 27017:27017 mongo
- Dockerize node app
- Add a Dockerfile:
FROM node
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 80
CMD ["node", "app.js"]
docker build -t goals-node .
- Update mongoUrl: replace localhost with host.docker.internal
docker run --name goals-backend --rm -d -p 80:80 goals-node
- Dockerize react app
- Add a Dockerfile:
FROM node
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
EXPOSE 3000
CMD ["npm", "start"]
docker build -t goals-react .
docker run --name goals-frontend --rm -d -p 3000:3000 -it goals-react
Now let’s try docker network
- Create a network
docker network create goals-net
- Run mongo container
docker run --name mongodb --rm -d --network goals-net mongo
- Run the node backend:
- Update mongoUrl to mongodb, which is the name of the mongo container above.
- Build the image:
docker build -t goals-node .
- Publish port 80 so the browser can send API requests to the node app:
docker run --name goals-backend --rm -d --network goals-net -p 80:80 goals-node
- Run react app:
docker run --name goals-frontend --rm -d -p 3000:3000 -it goals-react
*** Add volume to persist data ***
- Mongo:
- Mongo writes its data to /data/db by default. Use a named volume to map /data/db inside the container to a Docker-managed folder on the host machine:
docker run --name mongodb -v data:/data/db --rm -d --network goals-net -e MONGO_INITDB_ROOT_USERNAME=ma -e MONGO_INITDB_ROOT_PASSWORD=secret mongo
- Update mongoUrl:
mongodb://ma:secret@mongodb:27017/course-goals?authSource=admin
- Backend:
- Update Dockerfile above CMD:
ENV MONGODB_USERNAME=root
ENV MONGODB_PASSWORD=secret
CMD ["npm", "start"]
- Update mongoUrl:
mongodb://${process.env.MONGODB_USERNAME}:${process.env.MONGODB_PASSWORD}@mongodb:27017/course-goals?authSource=admin
- We bind mount the source code and use a named volume to persist the log files:
docker run --name goals-backend -v /path/to/project/backend:/app -v logs:/app/logs -v /app/node_modules --rm -e MONGODB_USERNAME=ma -d --network goals-net -p 80:80 goals-node
- Bind mount the frontend source:
docker run -v /path/to/frontend/src:/app/src --name goals-frontend --rm -p 3000:3000 -it goals-react
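To check that everything is wired up (the /goals route is an assumption about this example app):
docker ps                          # mongodb, goals-backend and goals-frontend should be running
docker network inspect goals-net   # mongodb and goals-backend should appear under "Containers"
curl http://localhost/goals        # reaches the backend through the published port 80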
Docker-Compose
- docker-compose up: Start all containers / services mentioned in the Docker Compose file (usage example after the compose file below)
- -d: Start in detached mode
- --build: Force Docker Compose to re-evaluate / rebuild all images (otherwise, it only does that if an image is missing)
- docker-compose down: Stop and remove all containers / services
- -v: Remove all volumes used for the containers - otherwise they stay around, even if the containers are removed
https://docs.docker.com/compose/reference/
version: "3.8"
services:
mongodb:
image: 'mongo'
volumes:
- data:/data/db
# environment:
# MONGO_INITDB_ROOT_USERNAME: max
# MONGO_INITDB_ROOT_PASSWORD: secret
# - MONGO_INITDB_ROOT_USERNAME=max
env_file:
- ./env/mongo.env
backend:
build: ./backend
# build:
# context: ./backend
# dockerfile: Dockerfile
# args:
# some-arg: 1
ports:
- '80:80'
volumes:
- logs:/app/logs
- ./backend:/app
- /app/node_modules
env_file:
- ./env/backend.env
depends_on:
- mongodb
frontend:
build: ./frontend
ports:
- '3000:3000'
volumes:
- ./frontend/src:/app/src
stdin_open: true
tty: true
depends_on:
- backend
volumes:
data:
logs:
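Typical usage with the file above (run from the folder containing docker-compose.yaml):
docker-compose up -d --build   # build (if needed) and start mongodb, backend and frontend
docker-compose down            # stop and remove the containers
docker-compose down -v         # also remove the named volumes (data, logs)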
Docker utility
- Execute a command in a running container:
docker exec -it container_name npm init
- ENTRYPOINT ["npm"] in the Dockerfile adds a fixed prefix to the command; arguments given to docker run are appended to it.
- Run a single service from the compose file:
docker-compose run --rm demo init
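A sketch of the ENTRYPOINT idea with plain docker (the npm-util image name is hypothetical; assume its Dockerfile sets WORKDIR /app and ends with ENTRYPOINT ["npm"]):
docker build -t npm-util .
docker run -it --rm -v "$(pwd)":/app npm-util init              # runs `npm init` inside the container
docker run -it --rm -v "$(pwd)":/app npm-util install express   # runs `npm install express`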
Deploy AWS
- Update
sudo yum update -y
- Install docker
sudo amazon-linux-extras install docker
- Installing docker on other OS
- Start docker service
sudo service docker start
- Build the image (on the local machine)
docker build -t my-app .
- Tag the image with the Docker Hub repository name
docker tag my-app docker/hub/name
- Login to docker
docker login
- Push to docker hub
docker push docker/hub/name
- Now in remote machine run
docker run -d --rm -p 80:80 IMAGE_NAME
- Set up an inbound rule for the EC2 instance: Type HTTP, Source Anywhere, then save.
- Note: we have to manually run
docker pull docker/hub/path
if there's a new version (see the sketch below).
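A sketch of the manual update flow on the remote machine (names are placeholders):
docker pull docker/hub/name       # fetch the newly pushed image version
docker stop running_container     # the old container is removed automatically because of --rm
docker run -d --rm -p 80:80 docker/hub/name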
Multi-Stage Builds
- Add `--target stage_name` to build a specific stage only (example after the Dockerfile below)
FROM node:14-alpine as build
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
RUN npm run build
FROM nginx:stable-alpine
# https://hub.docker.com/_/nginx
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
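For example, to build only the first stage named build (the tag is chosen here for illustration):
docker build --target build -t my-app-build .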
Kubernetes
- Kubernetes is like Docker-Compose for deployment
- Check minikube
minikube status
- Start minikube
minikube start --driver=driver_name
- Create a deployment:
kubectl create deployment first-app --image=kub-first-app
- List deployments:
kubectl get deployments
- List pods:
kubectl get pods
- Delete a deployment:
kubectl delete deployment first-app
- The cluster pulls images from a registry (e.g. Docker Hub); it cannot use images that only exist locally on your machine.
- Open the Kubernetes dashboard:
minikube dashboard
- Expose a port of the deployment by creating a service:
kubectl expose deployment first-app --type=LoadBalancer --port=8080
kubectl get services
- Because minikube is just a local virtual machine, EXTERNAL-IP stays pending. To get a URL for accessing the service:
minikube service first-app
- We can scale the app up:
kubectl scale deployment/first-app --replicas=3
- Update to a new image (the new one must have a different tag):
kubectl set image deployment/first-app kub-first-app=academind/kub-first-app
- Checking update status
kubectl rollout status deployment/first-app
- Undo update
kubectl rollout undo deployment/first-app
- History:
kubectl rollout history deployment/first-app
Add --revision=rev_number for the details of a specific revision.
- Add --to-revision=rev_number to the undo command to specify the revision to roll back to:
kubectl rollout undo deployment/first-app --to-revision=1
- delete service
kubectl delete service first-app
- delete deployment
kubectl delete deployment first-app
Declarative approach
# deployment.yaml
# https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.20/#deployment-v1-apps
apiVersion: apps/v1
kind: Deployment
metadata:
name: second-app-deployment
labels:
group: example
spec:
replicas: 1
selector:
matchLabels:
app: second-app
tier: backend
#matchExpressions:
#- {key: app, operator: In, values: [second-app, first-app]}
template:
metadata:
labels:
app: second-app
tier: backend
spec:
containers:
- name: second-node
image: academind/kub-first-app
# (optional) this will pull even if we push new image with the same tag
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /
port: 8080
periodSeconds: 10
initialDelaySeconds: 5
- Apply config file
kubectl apply -f=deployment.yaml
- Create service.yaml:
apiVersion: v1
kind: Service
metadata:
name: backend
labels:
group: example
spec:
selector:
app: second-app
ports:
- protocol: "TCP"
port: 80
targetPort: 8080
type: LoadBalancer
- Create service:
kubectl apply -f service.yaml
- Run
minikube service backend
- For updating, just change yaml file then run
kubectl apply -f=file_name.yaml
- Still, we can delete deployment by its name
kubectl delete deployment second-app-deployment
- Or delete by file(s)
kubectl delete -f=deployment.yaml -f=service.yaml
- Or delete by label
kubectl delete deployments,services -l group=example
- Use `---` to separate objects in a YAML file if we combine multiple config files into one.
Kubernetes Volume
apiVersion: apps/v1
kind: Deployment
metadata:
name: story-development
spec:
replicas: 1
selector:
matchLabels:
app: story
template:
metadata:
labels:
app: story
spec:
containers:
- name: story
image: academind/image
volumeMounts:
- mountPath: /app/story
name: story-volume
volumes:
- name: story-volume
# emptyDir: {} # great for 1 replica
# share data in the same node
hostPath:
path: /data
type: DirectoryOrCreate
apiVersion: v1
kind: Service
metadata:
name: story-service
spec:
selector:
app: story
type: LoadBalancer
ports:
- protocol: "TCP"
port: 80
targetPort: 3000
kubectl apply -f=service.yaml -f=deployment.yaml
minikube service story-service