Good Article for Debugging Connection Refused https://pythonspeed.com/articles/docker-connection-refused/
Docker for VSCode makes it easy to work with Docker
https://code.visualstudio.com/docs/containers/overview
Gitpod comes preinstalled with this extension.
```sh
cd backend-flask
export FRONTEND_URL="*"
export BACKEND_URL="*"
python3 -m flask run --host=0.0.0.0 --port=4567
cd ..
```
- Make sure to unlock the port on the Ports tab
- Open the link for port 4567 in your browser
- Append `/api/activities/home` to the URL
- You should get back JSON (you can also check from the terminal, as in the sketch below)
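A quick way to confirm the same thing from the Gitpod terminal is to curl the forwarded URL once the port is unlocked. A minimal sketch, assuming the Gitpod workspace variables that docker-compose.yml uses further down (when running purely locally, http://localhost:4567 works instead):
```sh
# Request the home activities feed; the response should be JSON
curl "https://4567-${GITPOD_WORKSPACE_ID}.${GITPOD_WORKSPACE_CLUSTER_HOST}/api/activities/home" \
  -H "Accept: application/json"
```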
Create a file here: backend-flask/Dockerfile
```dockerfile
FROM python:3.10-slim-buster

# Work inside /backend-flask in the container
WORKDIR /backend-flask

# Install dependencies first so this layer is cached between builds
COPY requirements.txt requirements.txt
RUN pip3 install -r requirements.txt

# Copy in the rest of the source code
COPY . .

ENV FLASK_ENV=development
EXPOSE ${PORT}
CMD [ "python3", "-m", "flask", "run", "--host=0.0.0.0", "--port=4567" ]
```
```sh
docker build -t backend-flask ./backend-flask
```
Run
```sh
# Basic run (FRONTEND_URL/BACKEND_URL are not set inside the container)
docker run --rm -p 4567:4567 -it backend-flask

# Prefixing the variables like this only sets them for the docker CLI process
# on the host, not inside the container
FRONTEND_URL="*" BACKEND_URL="*" docker run --rm -p 4567:4567 -it backend-flask

export FRONTEND_URL="*"
export BACKEND_URL="*"

# Pass explicit values into the container with -e
docker run --rm -p 4567:4567 -it -e FRONTEND_URL='*' -e BACKEND_URL='*' backend-flask

# Or pass through the exported values by naming the variables without a value
docker run --rm -p 4567:4567 -it -e FRONTEND_URL -e BACKEND_URL backend-flask

unset FRONTEND_URL
unset BACKEND_URL
```
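To double-check that the variables actually made it into the container, one option is to override the default command with `env`; a minimal sketch:
```sh
# Print the container's environment instead of starting Flask;
# FRONTEND_URL and BACKEND_URL should show up in the output
docker run --rm -e FRONTEND_URL='*' -e BACKEND_URL='*' backend-flask env
```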
Run in background
```sh
docker container run --rm -p 4567:4567 -d backend-flask
```
Return the container id into an env var
```sh
CONTAINER_ID=$(docker run --rm -p 4567:4567 -d backend-flask)
```
`docker container run` is the idiomatic syntax; `docker run` is the legacy form but is still commonly used.
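With the container id captured in CONTAINER_ID, the usual follow-up commands can reuse it, for example:
```sh
docker logs -f $CONTAINER_ID   # follow the Flask logs of the background container
docker stop $CONTAINER_ID      # stop it (--rm removes the container automatically)
```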
```sh
docker ps      # list running containers
docker images  # list local images

curl -X GET http://localhost:4567/api/activities/home -H "Accept: application/json" -H "Content-Type: application/json"

# Get a shell inside the running container (flags go before the container id)
docker exec -it $CONTAINER_ID /bin/bash
```
You can also just right-click a container and view its logs in VSCode with the Docker extension.
```sh
docker image rm backend-flask --force
```
`docker rmi backend-flask` is the legacy syntax; you might see it in old Docker tutorials and articles.
There are some cases where you need to use the --force flag, for example when a stopped container still references the image.
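A sketch of that situation and the two ways out of it (the container id here is a placeholder):
```sh
# Find containers (running or exited) created from the backend-flask image
docker ps -a --filter ancestor=backend-flask

# Option 1: remove the leftover container(s), then remove the image normally
docker rm <container-id>
docker image rm backend-flask

# Option 2: force-remove the image even though stopped containers reference it
docker image rm backend-flask --force
```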
```sh
FLASK_ENV=production PORT=8080 docker run -p 4567:4567 -it backend-flask
```
Look at the Dockerfile to see how ${PORT} is interpolated.
We have to run npm install before building the container, since the Dockerfile copies the contents of node_modules into the image.
```sh
cd frontend-react-js
npm i
```
Create a file here: frontend-react-js/Dockerfile
```dockerfile
FROM node:16.18

ENV PORT=3000

# Copy the source (including node_modules from the npm install above) into the image
COPY . /frontend-react-js
WORKDIR /frontend-react-js
RUN npm install
EXPOSE ${PORT}
CMD ["npm", "start"]
```
```sh
docker build -t frontend-react-js ./frontend-react-js
docker run -p 3000:3000 -d frontend-react-js
```
Create docker-compose.yml at the root of your project:
```yaml
version: "3.8"
services:
  backend-flask:
    environment:
      FRONTEND_URL: "https://3000-${GITPOD_WORKSPACE_ID}.${GITPOD_WORKSPACE_CLUSTER_HOST}"
      BACKEND_URL: "https://4567-${GITPOD_WORKSPACE_ID}.${GITPOD_WORKSPACE_CLUSTER_HOST}"
    build: ./backend-flask
    ports:
      - "4567:4567"
    volumes:
      - ./backend-flask:/backend-flask
  frontend-react-js:
    environment:
      REACT_APP_BACKEND_URL: "https://4567-${GITPOD_WORKSPACE_ID}.${GITPOD_WORKSPACE_CLUSTER_HOST}"
    build: ./frontend-react-js
    ports:
      - "3000:3000"
    volumes:
      - ./frontend-react-js:/frontend-react-js

# the name flag is a hack to change the default prepend folder
# name when outputting the image names
networks:
  internal-network:
    driver: bridge
    name: cruddur
```
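To bring everything up from the compose file (assuming a Docker version with the compose plugin; with the older standalone binary the command is docker-compose instead of docker compose):
```sh
# Build images if needed and start all services in the background
docker compose up -d --build

# Tear the stack down again
docker compose down
```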
We are going to use Postgres and DynamoDB Local in future labs. We can bring them in as containers and reference them externally.
Let's integrate the following into our existing docker-compose.yml file:
```yaml
services:
  db:
    image: postgres:13-alpine
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    ports:
      - '5432:5432'
    volumes:
      - db:/var/lib/postgresql/data

volumes:
  db:
    driver: local
```
To install the Postgres client into Gitpod, add this task under tasks: in gitpod.yml:
```yaml
  - name: postgres
    init: |
      curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo gpg --dearmor -o /etc/apt/trusted.gpg.d/postgresql.gpg
      echo "deb http://apt.postgresql.org/pub/repos/apt/ `lsb_release -cs`-pgdg main" | sudo tee /etc/apt/sources.list.d/pgdg.list
      sudo apt update
      sudo apt install -y postgresql-client-13 libpq-dev
```
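Once the client is installed and the db service is up, a quick connection check (the password is the one set in the compose environment above):
```sh
# Connect to the containerized Postgres; enter "password" when prompted
psql -U postgres --host localhost -p 5432
```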
```yaml
services:
  dynamodb-local:
    # https://stackoverflow.com/questions/67533058/persist-local-dynamodb-data-in-volumes-lack-permission-unable-to-open-databa
    # We needed to add user:root to get this working.
    user: root
    command: "-jar DynamoDBLocal.jar -sharedDb -dbPath ./data"
    image: "amazon/dynamodb-local:latest"
    container_name: dynamodb-local
    ports:
      - "8000:8000"
    volumes:
      - "./docker/dynamodb:/home/dynamodblocal/data"
    working_dir: /home/dynamodblocal
```
Example of using DynamoDB local https://github.com/100DaysOfCloud/challenge-dynamodb-local
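A minimal smoke test against the local endpoint, assuming the AWS CLI is installed and some (dummy) credentials/region are configured — DynamoDB Local accepts any credentials. The Music table here is just an example name:
```sh
# Point the CLI at the local container instead of real AWS
aws dynamodb list-tables --endpoint-url http://localhost:8000

# Create a throwaway table to confirm writes persist into ./docker/dynamodb
aws dynamodb create-table \
  --endpoint-url http://localhost:8000 \
  --table-name Music \
  --attribute-definitions AttributeName=Artist,AttributeType=S AttributeName=SongTitle,AttributeType=S \
  --key-schema AttributeName=Artist,KeyType=HASH AttributeName=SongTitle,KeyType=RANGE \
  --provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
```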
NB: we need to add the postgres init task shown above to gitpod.yml so that the postgresql-client gets installed in the workspace.
Directory volume mapping:
```yaml
volumes:
  - "./docker/dynamodb:/home/dynamodblocal/data"
```
Named volume mapping:
```yaml
volumes:
  - db:/var/lib/postgresql/data

volumes:
  db:
    driver: local
```
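A directory (bind) mount maps a host path straight into the container (the DynamoDB data lands in ./docker/dynamodb in the repo), whereas a named volume like db is created and managed by Docker itself, which is why it also has to be declared under the top-level volumes: key in the Postgres example.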