My Journey to Learning Docker

I had been hearing about Docker for quite a while, and it sat on my “learn list” just as long.
I had been using Vagrant for quite some time, together with Ansible to configure my development boxes and my production servers.
Vagrant was good to me, but there were times when I had several projects going at once and needed an instance for MySQL, MongoDB, PHP 7, PHP 5 and Node. That's a nice bunch of full-fledged servers.

My superhero Mac Pro handled it, but it lagged from time to time. I heard that Docker is efficient and super productive for development, for production, and for sharing with others.

So after this long intro, I’ll start adding things I learn below, and by the end of my journey this should be a great Docker cheat sheet.

Useful Notes:

stop all containers:

docker stop $(docker ps -a -q)

remove all containers:

docker rm $(docker ps -a -q)

remove all images:

docker rmi $(docker images -q)

serve the current folder from an nginx instance

docker run -v $(pwd):/usr/share/nginx/html -p 8080:80 nginx

-v: volume; mounts a host folder into the container. In this example, $(pwd) (the current folder) is mounted into nginx’s html folder.

-p: maps ports between host and container; host port 8080 points to container port 80

nginx is the image name in this example
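The `$(pwd)` part of that command is plain shell command substitution, not anything Docker-specific; the shell expands it before Docker ever sees the arguments. A quick sketch:

```shell
# $(...) is command substitution: the shell runs the inner command
# and pastes its output into the outer command line.
HOST_DIR=$(pwd)   # the same value the -v flag ends up receiving

# So `-v $(pwd):/usr/share/nginx/html` really means
# `-v /your/current/dir:/usr/share/nginx/html`.
echo "would mount $HOST_DIR into /usr/share/nginx/html"
```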

build a docker image

docker build -t YOUR_IMAGE_NAME .

//build from specific dockerfile
docker build -t my-php --file Dockerfile.php-fpm .
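For reference, a minimal `Dockerfile.php-fpm` like the one the command above points at might look like this sketch (the base image tag and the extension choice are my assumptions, not from the original post):

```dockerfile
# Start from the official php-fpm image on Docker Hub (tag is an assumption)
FROM php:7.0-fpm

# Install an extra PHP extension as an example customization
RUN docker-php-ext-install pdo_mysql
```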

open a shell inside a container (like “ssh-ing” in)

docker run -it my-wordpress /bin/sh

Note that docker run starts a new container from the image; to get a shell in an already running container, use docker exec -it CONTAINER_NAME /bin/sh instead.

docker compose stuff

//build the images from the docker-compose.yml
docker-compose build

//run the services defined in the docker-compose.yml
docker-compose up
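Since the post mentions docker-compose.yml without showing one, here is a minimal sketch that mirrors the nginx example from earlier (the service name and the port/volume choices are my assumptions):

```yaml
version: "2"
services:
  web:
    image: nginx
    ports:
      - "8080:80"                   # host 8080 -> container 80, same as -p above
    volumes:
      - .:/usr/share/nginx/html     # mount current folder, same as -v above
```

Running docker-compose up in the folder containing this file then starts everything it defines.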

Let’s recap what we learned in this tutorial:

  • Dockerfile is a blueprint for creating custom Docker images (or rather extending existing ones)
  • Docker images are disposable, distributable and immutable filesystems
  • Docker Containers provide a basic Linux environment in which images can be run
  • Each project can define its own Docker images based on its requirements
  • With Docker, we don’t need to install anything locally (apart from Docker itself)
  • For many purposes, we can just use existing images from Docker Hub
  • Docker works natively on Linux with Windows and Mac support in the pipeline
  • We can build images for anything, but each should have a single responsibility, usually in the form of a process running in the foreground
  • docker-compose provides a convenient way to save us some typing
  • Mapping provides a convenient way to connect host’s resources to the container
  • Mounting provides a convenient way to plug in host’s directories in the container
