Docker - from development to production

We covered the technical introduction in our previous post. Now let's see how Docker helps to build, run and maintain an application.

Application development phase

Development is usually the first phase where Docker brings some extra value. As mentioned in the technical introduction, Docker comes with tools that allow us to orchestrate a multi-container setup in a very easy way. Let's take a look at the benefits Docker brings during development.

Easy setup - low cost of introducing new developers

You only need to create the Docker configuration once; then each new developer on the team can start the project by executing a single command. No need to configure the environment: just download the project and run docker-compose up. That's all!
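
To give an idea of what such a configuration might look like, here is a minimal, hypothetical docker-compose.yml for a PHP project; the image names, ports and paths are assumptions and will differ from project to project:

web:
  image: nginx:1.11
  ports:
    - "8080:80"
  volumes:
    - .:/var/www/html
    - ./docker/nginx.conf:/etc/nginx/conf.d/default.conf
  links:
    - php
php:
  image: php:5.6-fpm
  volumes:
    - .:/var/www/html
db:
  image: mysql:5.7
  environment:
    - MYSQL_ROOT_PASSWORD=secret

With a file like this committed to the repository, cloning the project and running docker-compose up really is the whole onboarding procedure.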

This might seem too good to be true, but I have a good, real-life example of such a situation. I was responsible for a project where a new front-end developer was hired. The project was written in a very old PHP version (5.3) and had to run on CentOS. The developer was using Windows and had previously worked exclusively on Java projects. I had a quick call with him and we went through a couple of simple steps: downloading and installing Docker, cloning the git repository and running docker-compose. After no more than 30 minutes he had a perfectly running environment and was ready to write his first lines of code!

We even have an article on a simple Docker setup for Symfony projects; check it out if you are interested!

No dependencies version mismatch issue

This issue often arises if a developer is involved in multiple projects, but it escalates in microservice-oriented applications. Each service can be written by a different team using different technologies. Quite often there is also a version mismatch within the same technology used in different services. A simple example: one service is using an older Elasticsearch version, and another a newer one. This can be dealt with by configuring two separate versions locally - but it is much easier to run them side-by-side in dedicated containers. A very simple example of such a configuration for docker-compose would look like this:

service_x_elastic:
  image: elasticsearch:5.2.2
service_y_elastic:
  image: elasticsearch:2.4.4
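
Inside the compose network each container is reachable under its service name, so both Elasticsearch instances can keep their default port 9200 without clashing. A hypothetical application service could then be pointed at "its" cluster like this (the image names and the environment variable are assumptions, not part of the example above):

service_x:
  image: example/service-x
  environment:
    - ELASTICSEARCH_HOST=service_x_elastic:9200
  links:
    - service_x_elastic
service_y:
  image: example/service-y
  environment:
    - ELASTICSEARCH_HOST=service_y_elastic:9200
  links:
    - service_y_elastic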

Possibility to test if the application scales

Testing if the application scales is pretty easy with Docker. Of course, you won't be able to do any serious load testing on your local machine, but you can test if the application works correctly when a service is scaled horizontally. Horizontal scaling usually fails if the application is not stateless and the state is not shared between instances. Scaling can be very easily achieved using docker-compose:

docker-compose scale service_x=4

After running this command there will be four containers running the same service_x. You can (and you should) also add a separate container with a load balancer like HAProxy in front of them. That's it. You are ready to test!
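
As a rough sketch of that load balancer idea, the official haproxy image can be put in front of the scaled service. The haproxy.cfg referenced here is assumed to exist and to define a backend pointing at the service_x containers on port 80:

lb:
  image: haproxy:1.7
  ports:
    - "80:80"
  volumes:
    - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
  links:
    - service_x

Requests then go to the single published port 80 of the lb container, which spreads them across the service_x instances.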

No more “works on my configuration” issues

Docker is a solution that allows one configuration to be run everywhere. You can have the same - or almost the same - version running on all developer machines, CI, staging, and production. This radically reduces the amount of “works on my configuration” situations. At least it reduces the ones caused by different setups.

Continuous Integration

Now that you have a working development setup, configuring a CI is really easy. You just need to set up your CI to run the same docker-compose up command and then run your tests, etc. No need to write any special configuration; just bring the containers up and run your tests. I've worked with different CI servers like GitLab CI, CircleCI and Jenkins, and the setup was always quick and easy. If some tests fail, it is easy to debug too. Just run the tests locally (using the exact same container setup!).
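
As an illustration only, a CI job along these lines is usually enough. This sketch assumes a GitLab CI runner with Docker and docker-compose available, and an app service whose tests are run with PHPUnit - both of which are just example names:

# .gitlab-ci.yml
test:
  script:
    - docker-compose up -d
    - docker-compose run --rm app vendor/bin/phpunit
    - docker-compose down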

Pre-production phase

When you have your development setup up and running, it is also quite easy to push your application to a staging server. In most projects I know, this process was pretty straightforward and required only a few changes. The main difference is in the so-called volumes - files/directories that are shared between your local disk and the disk inside a container. When developing an application, you usually set up containers to share all project files with Docker so you do not need to rebuild the image after each change. On pre-production and production servers, project files should live inside the container/image and should not be mounted from your local disk.
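
In practice the difference often boils down to a single volume entry. A sketch, assuming the code lives under /var/www/html inside the container:

# development (docker-compose.yml) - project files mounted from the host,
# so code changes are visible without rebuilding the image
app:
  build: .
  volumes:
    - .:/var/www/html

# pre-production / production (e.g. docker-compose.prod.yml) - no volume entry,
# the files are baked into the image by the Dockerfile (COPY . /var/www/html)
app:
  build: .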

The other common change applies to ports. When using Docker for development, you usually bind your local ports to ports inside the container, e.g. your local port 8080 to port 80 inside the container. This makes it impossible to test the scalability of such containers and makes the URI look bad (no one likes ports inside the URI). So when running on any production or pre-production servers you usually put a load balancer in front of the containers.
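
The same split applies to the ports section. A sketch, again with assumed port numbers:

# development - publish a host port so the app can be opened directly
web:
  ports:
    - "8080:80"

# production - only the load balancer publishes ports 80/443;
# the web containers merely expose their internal port to the network
web:
  expose:
    - "80"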

There are many tools that make running pre-production servers much easier. You should definitely check out projects like Docker Swarm, Kubernetes and Rancher. I really like Rancher as it is easy to set up and really easy to use. We use Rancher as our main staging management tool and all Accesto members really enjoy working with it. Just to give you a small insight into how powerful such tools are: all our team members are able to update or create a new staging environment without any issues - and within a few minutes!

Production phase

The production configuration should be exactly the same as pre-production. The only small difference might be the tool you use to manage the containers. There is a multitude of popular tools used to run production containers, but my favorite is Kubernetes. It allows you to scale easily onto new hosts. One important thing to keep in mind when going to production with Docker: monitoring and logging should be centralized and easy to access.
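
To illustrate the scaling part, a minimal Kubernetes Deployment could look like the sketch below. The name and image are assumptions, and scaling it out later (possibly onto new hosts) is a single command: kubectl scale deployment my-app --replicas=10.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 4
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # assumed image name
          ports:
            - containerPort: 80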

Cons

Docker has some downsides too. The first one you might notice is that it takes some time to learn how to use Docker. The basics are pretty easy to learn, but it takes time to master some of the more complicated settings and concepts. The main disadvantage for me is that it runs very slowly on macOS (this has changed with the introduction of the M1 Pro chip) and Windows. Docker is built around many concepts from the Linux kernel, so it cannot run natively on macOS or Windows; instead, it uses a virtual machine that runs Linux with Docker inside.

Summary

Over the past 4 years, we have been able to observe how Docker has become better and more mature with each release. At the same time, the whole ecosystem has grown and new tools have been published, giving us possibilities we could not have thought of before. By using Docker, we are able to easily run our applications on our developer machines and then run the same setup on pre-production and production servers. Right now we can configure a setup within minutes and then release our application to a server also within minutes. I'm really curious about what new possibilities we will get in the coming months.

PS. Subscribe to our newsletter (in the right sidebar) to get notified about new posts. I plan to write a follow-up with detailed information about our flow and setup - what tools we use, how we have configured them etc.
