Creating a local development environment with Docker. An exploration.

Docker is an amazing software platform for deploying applications in a consistent and reliable manner. Anecdotally, there was a clear separation between my "pre-Docker" and "post-Docker" development days: it has enabled me to be confident that the code I produce locally will work in production. However, I've noticed some engineers (and, to put it kindly, "semi-technical" managers) take this usage to an extreme. They note: "Hey, the only dependency we ever need is Docker. Thus Docker, Visual Studio Code, and a terminal will be the only tools required for development; everything else is wrong." Quite ambitious! Is it possible to consolidate all of one's development into such a small set of tools? Let's take a look at how we can set this up, and take note of the positives and potential drawbacks.

Caveats

I'm obviously generalizing the points brought up by multiple engineers into one pithy statement -- there's an argument that I'm straw-manning their point -- but this is to show the extent to which I've seen this discourse evolve. It typically devolves into binary stances like "you should never use NodeJS locally" or "Docker should only be used for deployment", which I don't feel is a healthy discussion. I'm interested in exploring this topic to see if there are valuable techniques to be learned by using only Docker for development.

When I refer to "local development tools", I mean applications available in $PATH, or other supporting libraries installed at a system-wide level (think OpenSSL). It should be noted that VSCode, being an Electron-based application, ships with an embedded version of NodeJS to run scripts and the debugger. However, many other editors, depending on the language, will require you to install binaries and libraries onto your host system to get goodies like auto-completion and debugging; these are the kinds of development tools I'll be focusing on.

I'll also be showing code examples for NodeJS and Go, since I feel these languages should be easy enough to follow. However, I'll discuss tools for other languages throughout as well.

I'm also assuming that you know the basics of Docker and docker-compose.

Note that when I mention VSCode in the rest of this post, you can substitute your favorite text editor or IDE (NeoVim being mine).

Dependencies

  1. Docker (use the latest stable for your OS)
  2. A text-editor
  3. A terminal emulator, preferably with some POSIX-like shell
  4. (Optional) lazydocker (https://github.com/jesseduffield/lazydocker), which I like for debugging/monitoring containers
If you're curious what versions I have installed on my machine, here's what I'm running:

  • Docker (the stable channel should work as well)
  • VSCode Insiders
  • NeoVim

For my terminal I use https://github.com/jwilm/alacritty.

Lastly, if you want to follow along with the code samples, here's the GitHub repository I'll be working from.

Creating a NodeJS API

We've got all of our dependencies installed, and we're ready to go! How do we initialize our project? Normally I'd just run `npm/yarn init`, but since we're working inside of a container from scratch, for now we'll just create the important files by hand: namely 'package.json', 'index.js', and the 'Dockerfile'.
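Here's roughly what those three files might look like. Treat this as a minimal sketch rather than the exact contents of the repo; the package name, port, and Node base image are my assumptions.

```json
{
  "name": "invoice-api",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}
```

```js
// index.js: a bare-bones Express server.
const express = require('express');

const app = express();

app.get('/', (req, res) => {
  res.json({ message: 'hello from the container' });
});

// Keep a reference to the http.Server that listen() returns;
// close() lives on it, not on the Express app (more on this below).
const server = app.listen(3000, () => {
  console.log('API listening on port 3000');
});

process.on('SIGTERM', () => {
  // Stop accepting new connections, then exit once in-flight requests finish.
  server.close(() => process.exit(0));
});
```

```dockerfile
# Dockerfile: install dependencies, copy the source, run the server.
FROM node:12
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
```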

Nothing too spicy here. Just a simple Express-based server with a basic Dockerfile to boot. It runs, but there's a bug I made at first: I called 'app.close' on shutdown, which isn't a function; 'close' lives on the 'http.Server' returned by 'app.listen', which is why the sketch above holds on to the server. However, there's another problem with the setup: every time I make a change, I have to do another 'docker build'/'docker-compose build'. For a small project like this, it's just tedium, but as the project grows in size, this can become prohibitively costly in terms of developer time. So how can we fix this?

The quick-and-dirty solution is to add a bind-mount and a volume. The bind-mount is required to get the source code into the container; however, a bind-mount overrides what's in the container at that path, including node_modules. Mounting a named volume at node_modules is a trick that keeps the directory populated from the contents of the image rather than the contents of the host filesystem. You could also just create a bind-mount and start a shell, but I like being able to run 'docker-compose up -d' and have my services up without having to run 'npm install'.

I'm going to be using docker-compose to manage this, but you could do the same with the docker CLI as well. Here's roughly what the file looks like after it's all said and done.
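This is a sketch assuming the Dockerfile above; the service name and container paths are my choices, not necessarily the repo's:

```yaml
version: "3.7"

services:
  api:
    build: .
    # npx runs nodemon without a global install (see note 3 below)
    command: npx nodemon index.js
    ports:
      - "3000:3000"
    volumes:
      # Bind-mount the source so edits on the host show up in the container...
      - .:/usr/src/app
      # ...but shadow node_modules with a named volume, so the container keeps
      # the copy installed in the image instead of the (empty) host directory.
      - node_modules:/usr/src/app/node_modules

volumes:
  node_modules:
```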

There are a couple of things to note with this approach:

  1. If you update the dependencies, you need to remove the volume to get a fresh copy of the node_modules directory (see the snippet after this list).
  2. You're not able to see the contents of the node_modules directory, as it lives in the filesystem managed by the Docker daemon.
  3. I'm using 'npx' to run 'nodemon' for file-watching without having to do an 'npm install -g' in the Dockerfile. This keeps things a little cleaner, but more importantly it can help keep the final image size smaller when we use multi-stage builds.
  4. This assumes the docker-compose file is used for local development. If you're using Compose in a CI environment (or even production), you'll need to create a different service in the same file, or a separate 'docker-compose.yml' for that situation.
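For item 1, refreshing the volume looks something like this ('api' being the service name from the sketch above):

```sh
docker-compose build api       # rebuild so the image has the new node_modules
docker-compose down --volumes  # remove the containers and the named volume
docker-compose up -d           # the volume is repopulated from the new image
```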
All that being said, with this approach we get:
  • Application dependencies isolated within the container
  • The ability to develop with any text editor
  • A useful developer experience with file-watching

Working on a Go worker

For the next part, we're going to do the same thing, but with a Go worker. It will listen for messages on a NATS subject (why NATS? It's a CNCF project in "incubation", so why the hell not). When an 'invoices.request' message comes in, we'll do some work and then report the result on either the 'invoices.approved' or 'invoices.rejected' subject.

The first thing we'll address is figuring out how to scaffold a project with only Docker. Again, you can approach this in different ways; in this case we'll run 'go mod init' in a container so we don't have to write the manifests by hand.

First, we'll run a generic container based on an image that has the tooling installed; in this case, the 'golang:1.12.9' image. From there, we need another bind-mount so that the host directory is mounted into the container and changes made inside the container are visible on the host. (After almost an hour of struggling to get Realize working with Go modules, I ended up using Godev for file-watching.) Since Go produces a compiled binary, we'll use multi-stage builds to keep the final image size small while still maintaining a nice developer experience.
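Scaffolding the module with a throwaway container might look like this (the module path is a placeholder; use your own):

```sh
docker run --rm \
  -v "$PWD":/app -w /app \
  -e GO111MODULE=on \
  golang:1.12.9 \
  go mod init github.com/example/invoice-worker
```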


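To give a feel for it, here's a pared-down sketch of the worker, plus a multi-stage Dockerfile for the final image. The 'NATS_URL' variable, module layout, and image tags are my assumptions, and in the dev docker-compose service you'd run Godev instead of the compiled binary.

```go
package main

import (
	"log"
	"os"

	nats "github.com/nats-io/nats.go"
)

func main() {
	url := os.Getenv("NATS_URL")
	if url == "" {
		url = nats.DefaultURL
	}
	nc, err := nats.Connect(url)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// For every invoice request, "do some work" and publish the verdict.
	if _, err := nc.Subscribe("invoices.request", func(m *nats.Msg) {
		if approve(m.Data) {
			nc.Publish("invoices.approved", m.Data)
		} else {
			nc.Publish("invoices.rejected", m.Data)
		}
	}); err != nil {
		log.Fatal(err)
	}

	log.Println("worker listening on invoices.request")
	select {} // block forever
}

// approve is a stand-in for the real invoice-checking logic.
func approve(data []byte) bool {
	return len(data) > 0
}
```

```dockerfile
# Build stage: compile a static binary with the full Go toolchain.
FROM golang:1.12.9 AS build
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -o /worker .

# Final stage: ship only the binary.
FROM alpine:3.10
COPY --from=build /worker /worker
CMD ["/worker"]
```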
Without getting too deep into what the code does, here are again some things to note about this approach:

  1. This docker-compose.yml file was set up with local development in mind. You'd have to use another one, or some other deployment mechanism, for CI/production.
  2. Godev, at least on my MacBook, was running at approximately 300% CPU usage for the worker container (my Docker daemon is configured with 4 cores and 2GB of RAM). I'm not sure why, other than something with the file watcher being used.
  3. In the final repo, I added a subscriber to the API server as well as a Redis container for "data storage", mostly to keep things quick and simple.
  4. I did cheat when it came to debugging the file-watcher issues.

Wrapping it up

I've probably gone on far too long on this topic, but I've learned a lot about setting up a "Docker-only-ish" local environment. Ultimately, I find that the biggest struggle with setting up a whole environment locally is the tooling. Many tools aren't made with containers in mind and aren't designed for the constraints containers impose. They certainly work, but you need to be prepared and understand how to adjust your tooling to run inside of a container. This also assumes your tooling can be easily run from a shell; IDEs for some languages like Java or C# require the SDKs/tools to be installed on your host machine just to function.

Twelve Factor can be a useful set of principles to follow, but I don't believe it's intended to be interpreted as religious dogma. Parity between environments is essential, but it's just as important to be able to "context-switch" and know when something is running in a container, when something isn't, and how we can configure that container to behave as if it were in a production environment. Containers are amazing, but I don't believe they should get in the way of producing great software; we as engineers should educate and enable others to use the tools that make them the most efficient, while still being able to follow industry best practices around deploying software.

All in all, I don't believe this discussion should come down to a binary; it needs to be a balancing act for engineering teams between productivity and maintainability.


Call to action!

Hopefully this will spark a fruitful discourse. I'm getting used to blogging about software development, and would love feedback on how I could improve! I'm also interested in covering other topics like:

  • Running different kinds of tests with Docker/docker-compose
  • Is it possible to have the docker-compose file be parameterized between CI and local dev? Maybe something like the new 'docker app' tool could come into play here
  • Running Vim/NeoVim inside of a container
Leave a comment below on your thoughts on the matter!
