Making the dockers work

December 7, 2017

Over the past few weeks I’ve been focusing mostly on build and deployment tooling around docker and Kubernetes. One particular downside of the current system is that our applications have a fair number of service dependencies. Up until now, we’ve taken to running everything inside docker using docker-compose, but this feels to me more like a way to sweep complexities and technical problems under the rug, so to speak.

While there is some value in keeping all of your automation hacks in one place, it can make life difficult for anyone who doesn’t use precisely that workflow (e.g. developers who need to be able to use live-reloading, for one), or for people who need ready access to debugging tools such as strace(1) and the like.

Other issues include that different environments, such as a developer workstation and a build server, have different needs. For one, if your builds run inside a docker container, and your dev workflow requires that you mount volumes into a container, then because docker assumes that any volume paths are relative to the host, not the context that docker-compose runs in, it’s more than likely you’ll end up with build failures: docker seems to (by default) create an empty, root-owned directory on the host wherever the path doesn’t already exist. So, if you expect artifacts such as HTML templates or JavaScript to be present, you’ll be sadly disappointed.
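As a hypothetical illustration (the service name and paths here are invented), a compose file like the following resolves `./frontend/dist` against the docker *host*. If docker-compose itself runs inside a CI container, that path likely doesn’t exist on the host, and docker quietly creates an empty, root-owned directory there instead of mounting your built assets:

```yaml
services:
  app:
    image: app/app-core
    volumes:
      # Resolved on the host, not inside the CI container running docker-compose.
      - ./frontend/dist:/app/resources/public
```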

So, in the spirit of bringing our dev environments closer to what we run in production, I’ve started building out some infrastructure to run and test our applications in minikube. Given that Kubernetes etc. are somewhat fashionable, we’re going to use that wonderful tool from the 1970s called make(1) to do most of the heavy lifting.

The crux of our developer workflow is the build-test-reload cycle, and most of our developers (simplified for didactic purposes) will be using a Clojure REPL.

So, the makefile below ensures that when you run make repl it’ll:

  • rebuild any jars required,
  • build any images required for dependencies (elided here, but identical to that for the application image),
  • publish them,
  • reconfigure Kubernetes,
  • start a Leiningen REPL configured with the addresses of the exposed services.

We run the REPL on the host workstation directly because, whilst it’s possible to export a host directory into minikube and use it within a Kubernetes container, it’s not nearly as simple and requires some degree of indirection.

Alternatively, you can use just make deploy to push the whole lot into Kubernetes, and test as you like in a browser.

There are a few interesting patterns here worth mentioning, though.

Tag derivation:

We automatically compute the tag name from the current git version and developer identity. On the build server, we’ll override this with just the git hash, using something akin to make IMAGE_TAG=$(git log -n 1 --pretty=format:'%h').
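The derivation the makefile performs can be sketched in plain shell as follows. This assumes git(1) is available; outside a repository we fall back to a placeholder hash (an invention for this illustration) so the sketch still runs:

```shell
# Derive an image tag from the developer identity and the current git HEAD.
user=$(whoami)
hash=$(git log -n 1 --pretty=format:'%h' 2>/dev/null || echo nogit)
hash=${hash:-nogit}   # placeholder when not inside a git repository
IMAGE_TAG="dev-${user}-${hash}"
echo "${IMAGE_TAG}"
```

On the build server only the `hash` part is kept, which is what the `make IMAGE_TAG=...` override above achieves.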

Stamp files:

Because docker images aren’t concrete in the traditional Unix sense (i.e. they’re not just a simple file), we use a .stamp file as a proxy for the image having been built and published. Because the image tag may vary, we include $(IMAGE_TAG) in the stamp file name, to ensure that whenever the image tag changes we re-do work where required.
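A minimal sketch of the stamp-file idea in plain shell (the tag and paths here are hypothetical). The stamp stands in for the image: work is skipped while a stamp for the *current* tag exists, and a new tag forces a rebuild simply because it names a different stamp file:

```shell
IMAGE_TAG="dev-alice-abc123"   # hypothetical tag
stamp="work/image-app-core-build.${IMAGE_TAG}.stamp"
mkdir -p work
if [ ! -e "${stamp}" ]; then
    echo "building app/app-core:${IMAGE_TAG}"   # stand-in for 'docker build'
    touch "${stamp}"
else
    echo "stamp present; skipping build"
fi
```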

WORKDIR = work

K8S_DEPS_SRC := k8s/pg-core.yaml
K8S_APP_SRC  :=  k8s/app-core.yaml

K8S_NAMESPACE = localdev
K8S_CONTEXT = minikube
K8S_ZONE = k8s.local
REGISTRY = my-lovely-registry

KUBECTL = kubectl --context $(K8S_CONTEXT) -n $(K8S_NAMESPACE)

# By default, we want to make it obvious who has built an image and where it is from.
# We can override this when building releases on the build server.
IMAGE_TAG := $(shell echo dev-$$(whoami)-$$(git log -n 1 --pretty=format:'%h') )

.PHONY: start clean total-destroy deploy deploy-dependencies \
	deploy-application purge images push-images \
	app-jar repl

all: deploy

clean:
	rm -rf $(WORKDIR)

total-destroy: clean
	rm -rf target

start:
	minikube status

# We don't just depend on $(WORKDIR) directly, as its modification time will
# change every time a new file is created inside it, and so anything depending
# on it directly would be re-built needlessly. Instead, we depend on a marker
# file within it.
WORKDIR_EXISTS = $(WORKDIR)/.exists
$(WORKDIR_EXISTS):
	mkdir -vp $(WORKDIR)
	touch $@

jar: target/app.jar
target/app.jar: $(shell find src -type f -print)
	./lein uberjar

$(WORKDIR)/image-app-core-build.$(IMAGE_TAG).stamp: $(WORKDIR_EXISTS) target/app.jar
	docker build -t app/app-core:$(IMAGE_TAG) .
	touch $@

# Convenience alias for the images.
images: $(WORKDIR)/image-app-core-build.$(IMAGE_TAG).stamp

$(WORKDIR)/image-app-core-push.$(IMAGE_TAG).stamp: $(WORKDIR)/image-app-core-build.$(IMAGE_TAG).stamp
	docker tag app/app-core:$(IMAGE_TAG) $(REGISTRY)/app/app-core:$(IMAGE_TAG)
	docker push $(REGISTRY)/app/app-core:$(IMAGE_TAG)
	touch $@

# Convenience alias to publish images.
push-images: $(WORKDIR)/image-app-core-push.$(IMAGE_TAG).stamp $(WORKDIR)/image-hydra-setup-core-push.$(IMAGE_TAG).stamp

# Build the kubernetes description of dependencies, substituting our current
# version for `IMAGE_TAG`.
$(WORKDIR)/dependencies.yaml: $(VERSION_STAMP) $(K8S_DEPS_SRC)
	# Concatenate all files.
	tmpf=$$(mktemp $(WORKDIR)/deps-XXXXXX) && \
	for f in $(K8S_DEPS_SRC); do \
		echo "---"; \
		sed -e "s/IMAGE_TAG/$(IMAGE_TAG)/g" "$$f"; \
	done > $$tmpf && mv -v $$tmpf $@

# Build the kubernetes description of the application, substituting our
# current version for `IMAGE_TAG`.
$(WORKDIR)/application.yaml: $(VERSION_STAMP) $(K8S_APP_SRC)
	# Concatenate all files.
	tmpf=$$(mktemp $(WORKDIR)/deps-XXXXXX) && \
	for f in $(K8S_APP_SRC); do \
		echo "---"; \
		sed -e "s/IMAGE_TAG/$(IMAGE_TAG)/g" "$$f"; \
	done > $$tmpf && mv -v $$tmpf $@

# Create the namespace, if required.
$(WORKDIR)/k8s-ns-$(K8S_NAMESPACE).stamp:
	if ! $(KUBECTL) get namespace/$(K8S_NAMESPACE); then \
		$(KUBECTL) create namespace $(K8S_NAMESPACE); \
	fi
	touch $@

# Once we have all of our pre-requisites assembled, deploy our dependencies.
deploy-dependencies: $(WORKDIR)/dependencies.yaml \
		$(WORKDIR)/k8s-ns-$(K8S_NAMESPACE).stamp
	$(KUBECTL) apply -f $<

# Similarly for the application itself.
deploy-application: $(WORKDIR)/application.yaml deploy-dependencies
	$(KUBECTL) apply -f $<

deploy: deploy-dependencies deploy-application

# Here we look up the IP of the minikube vm and the port numbers of any
# NodePort services (for dependencies like Postgres), and pass them to the
# application as environment variables.
repl: deploy-dependencies
	ip=$$(minikube ip) && \
	pg_port=$$(kubectl --context=minikube -n localdev get svc/pg-core -o go-template='{{range .spec.ports}}{{if .nodePort}}{{.nodePort}}{{"\n"}}{{end}}{{end}}') && \
	POSTGRES=jdbc:postgresql://$${ip}:$${pg_port}/my-db ./lein repl
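For clarity, here is what the repl recipe ends up assembling, with placeholder values standing in for the output of minikube ip and the kubectl NodePort lookup (both of which require a running cluster), and jdbc:postgresql:// assumed as the scheme the application expects:

```shell
ip="192.168.99.100"   # placeholder for: minikube ip
pg_port="30432"       # placeholder for the NodePort lookup
POSTGRES="jdbc:postgresql://${ip}:${pg_port}/my-db"
echo "${POSTGRES}"
```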
