Set up a local Docker environment

We are going to set up a complete Docker Community Edition environment in this series of posts. The series is meant for developers who have heard about containers but have yet to explore them. In this post you’ll get your first hands-on experience with containers as an “alternative” to virtual machines. I’m sure you will be astonished by how much you can achieve with modest resources for small and medium deployments, and by how painless it is.

Containers vs Processes

First things first. Assuming you are, like me, a newcomer to this trending topic, you will probably benefit from a quick comparison between the old-school definition of a Process and that of a Container.

A Process is an execution instance managed by the OS that has its own memory address space (i.e. executable code, process I/O data, call stack and heap), OS-specific resource descriptors (e.g. file descriptors), CPU state and an image of the associated machine code.

Containers are processes plus their filesystem environments. They share the host machine’s kernel and run isolated from the host environment (just like native processes) but are able to access the host’s files and ports if configured as such. They are the runtime instance of a Docker image, which is an executable including all the requirements to run an application (i.e. libraries, runtime, code, environment variables, and config files).
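Because containers share the host kernel, you can verify this yourself once Docker is installed (see the next section). A quick sketch, assuming you can pull the small official alpine image:

```
$ docker run --rm alpine uname -r
$ uname -r
```

Both commands print the same kernel release: the first from inside a throwaway Alpine container (removed on exit thanks to --rm), the second from the host itself.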

[Image: processes vs. containers]

A parallel is often drawn between virtual machines and containers. A virtual machine runs a complete OS with its own memory management, drivers, background system processes, etc. It is a distinct concept from a container and serves a different purpose (e.g. distributing hardware resources per tenant), but the two can be combined to powerful effect, as seen in the following picture:

[Image: virtual machines vs. containers]

Preparing your environment

Your hardware will become relevant once you start clustering containers. For future reference, I run my local Docker environment on a Lenovo Ideapad 700 with an Intel(R) Core(TM) i7-6700HQ (Skylake) CPU @ 3.30GHz, 16GB DDR4 RAM, 256GB SSD + 1TB HDD, and an NVIDIA GeForce GTX950M (4GB), running 64-bit Windows 10 Home. For this specific setup I used Docker Toolbox (since Windows 10 Home doesn’t support Hyper-V), but you may use the latest Docker installer if your OS has type-1 hypervisor capabilities.

After you’re done with the installation process, create a Docker ID. Then test your installation by opening a shell on the machine where you installed the Docker daemon (e.g. on the Linux virtual machine managed by VirtualBox, if you used Docker Toolbox) and running:

$ docker run hello-world

You should get the following message, meaning you are set to start experimenting:

Hello from Docker!
This message shows that your installation appears to be working correctly.
...
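It can also help to look at what just happened. docker ps -a lists all containers, including stopped ones, and docker images lists the images Docker has pulled so far:

```
$ docker ps -a
$ docker images
```

The hello-world container printed its message and exited, so it appears in the ps -a listing with an Exited status, while the hello-world image remains available locally.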

Creating your first container

In Docker, you need to create a Dockerfile, a requirements text file and the application itself in order to build an image and then deploy a container from that image. Create a directory on your host and place all of these files in it.

The Dockerfile dictates which external resources you want to include in the container environment and which of the container’s virtualized resources (such as networking interfaces and disk drives) you want to expose to the outside. The definitions in this file guarantee that the app will behave the same way on any OS.

Dockerfile

# Use an official Python runtime as a base image
FROM python:2.7-slim

# Add current directory files to the container's working directory
WORKDIR /app
ADD . /app

# Install any required packages
RUN pip install -r requirements.txt

# Document that the app listens on port 80 (publish it with -p at run time)
EXPOSE 80

# Define an environment variable
ENV TYPE One

# Run the app after the container launches
CMD ["python", "application.py"]

requirements.txt

Flask

application.py

from flask import Flask
import os
import socket

# __name__ gives the current module name, which Flask uses to locate resources
app = Flask(__name__)

@app.route("/")
def main():

    html = "Type: {type}" \
           "Hostname: {hostname}"
    return html.format(type=os.getenv("TYPE", "one"),
                       hostname=socket.gethostname())

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
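One detail worth noting: the Dockerfile sets ENV TYPE One, while the app falls back to the default "one" when the variable is absent. A minimal standalone sketch of that os.getenv behavior, using the same TYPE variable as the files above:

```python
import os

# Simulate the variable being absent: getenv falls back to the default.
os.environ.pop("TYPE", None)
print(os.getenv("TYPE", "one"))  # -> one

# Simulate the Dockerfile's "ENV TYPE One" being in effect.
os.environ["TYPE"] = "One"
print(os.getenv("TYPE", "one"))  # -> One
```

So a container built from the Dockerfile above reports “Type: One”, while running application.py directly on your host (without TYPE set) reports “Type: one”.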

To build your image, go to the directory where these files are located and execute the following command on your shell (notice we are tagging this image as testapp):

$ docker build -t testapp .
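If the build succeeds, the image lands in your local registry. Listing it is a quick sanity check; you should see testapp next to the python base image it was built from:

```
$ docker images
```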

We should always tag an image and associate it with a repository on a registry by running docker tag image username/repository:tag . Note that username is your Docker ID, i.e. your namespace on the registry:

$ docker tag testapp john/startingrepo:test1

The built image will be located at your local host’s Docker image registry. You also have a remote image registry at Docker Hub (where you created your Docker ID). Let’s push our locally created image to our remote location:

$ docker login
...
$ docker push john/startingrepo:test1

Now, to finally create our container, run the following command. It will pull the image from the repository if it is not available and tagged locally (in our case it is); the -p flag publishes the container’s port 80 (the one we marked with EXPOSE) on the host:

$ docker run -p 80:80 john/startingrepo:test1

Python is now serving the application at http://localhost (or at http://192.168.99.100 if you used Docker Toolbox):

[Image: the deployed testapp container]
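You can also check from the shell. A sketch assuming the Docker Toolbox VM address mentioned above (use localhost otherwise):

```
$ curl http://192.168.99.100
```

The response should read Type: One followed by Hostname: and a short hexadecimal ID, since a container’s hostname defaults to its container ID.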
