Archive for April, 2015

The trajectory of a software engineer… and where it all goes wrong

Monday, April 20th, 2015

Just came across a thought-provoking post by a developer called Michael Church on the subject of the trajectory of a software engineer:

https://michaelochurch.wordpress.com/2012/01/26/the-trajectory-of-a-software-engineer-and-where-it-all-goes-wrong/

Using a 0.0 to 3.0 scale, he categorizes software developers from novice to “Senior fellow”. A novice is a rookie programmer who may not yet provide enough value for the remuneration he or she is paid, whereas a Senior Fellow is one of the best-known programmers in the world, typically recognized for outstanding, groundbreaking contributions to the industry as a whole.

While most programmers start off as novices (0.0 to 0.4) or marginally better, very few move beyond 1.0. At a level of 1.0–1.3, a programmer becomes what Michael calls a full-fledged adder, a stage where they are able to make decent contributions to the projects they directly work on. Unfortunately, most programmers fail to advance much further than that, often not because of intellectual limitations but because of a lack of drive and curiosity.

In my opinion, thought leaders like Martin Fowler or Kent Beck probably fall under 2.4 to 2.6: Global multiplier (“Fellow”), whereas the likes of Linus Torvalds, Peter Norvig and Richard Stallman fall under 2.7 to 3.0: Senior fellow, the highest level of distinction possible.

Docker explained in layman’s terms

Tuesday, April 14th, 2015

There is quite a lot of good documentation available about Docker, including the official docs, but much of it starts explaining LXC, cgroups, the Linux kernel and UnionFS in the very first paragraph, scaring off many readers who are not Linux geeks.

In this article I will try to explain Docker in layman’s terms. No prior knowledge of virtual machines, cloud deployments or DevOps practices is assumed.

Docker?

Docker is an open source platform which can be used to package, distribute and run your applications. Docker provides an easy and efficient way to encapsulate an application (e.g. a Java web application) and any infrastructure required to run it (e.g. Red Hat Linux, the Apache web server, the Tomcat application server, a MySQL database etc.) as a single “Docker image”, which can then be shared through a central “Docker registry”. The image can then be used to launch a “Docker container”, which makes the contained application available from the host where the container is running.

Docker provides some convenient tools to build Docker images in a simple and efficient way. A Docker container, on the other hand, is a kind of lightweight virtual machine with a considerably smaller memory and disk space footprint than a full-blown virtual machine.

By enabling fast, convenient and automated deployments, Docker shortens the cycle between writing code, testing it and getting it live in production. At the same time, by providing a lightweight container to run the application, Docker enables very efficient utilization of hardware resources.

Docker is open source and can be installed on your notebook or on any server where you want to build and run Docker images and containers (provided the minimum system requirements are met).

Docker deployment workflow from 1000 feet

Before we look into the individual components of the Docker ecosystem, let us look at one way a Docker workflow can fit into the software development life cycle.

In order to support this workflow, we need a CI tool and a configuration management tool. I picked Bamboo and Ansible, though the workflow would remain much the same with other tools or modes of deployment. Below is a possible workflow:

  1. Code is committed to the Git repository.
  2. A Bamboo job builds the application from the source code and runs unit/integration tests.
  3. If the tests pass, Bamboo builds a Docker image and pushes it to a “Docker registry”. (The Dockerfile, which provides the template for building the image, is typically committed as part of the codebase.)
  4. Bamboo runs an Ansible playbook to deploy the image to the QA servers.
  5. If the QA tests pass as well, Ansible can be used to deploy and start the container in production.
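To make the build-and-push part of the workflow concrete, steps 2–4 might look roughly like the commands below. This is only a sketch: the registry host, image name, inventory and playbook names are all made up for illustration, and in practice the CI tool would run these for you.

```shell
# Build the application and run the unit/integration tests
mvn clean verify

# Build a Docker image from the Dockerfile in the codebase and tag it
docker build -t registry.example.com/myapp:1.0 .

# Push the image to the (hypothetical) private registry
docker push registry.example.com/myapp:1.0

# Deploy the image to QA via Ansible (inventory and playbook names are assumptions)
ansible-playbook -i qa-inventory deploy-myapp.yml
```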

Docker ecosystem

Docker client and server

Docker is a client-server application. The Docker client talks to a Docker server (also called the daemon), which in turn does all the work. Docker ships with a command line client binary called docker as well as a full RESTful API. The client and server can run on the same host or on different hosts.
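For example, a few common client commands, all of which simply talk to the daemon over its API:

```shell
docker version   # show client and server version information
docker info      # show system-wide information from the daemon
docker ps        # list running containers
docker images    # list images available on the host
```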

Docker images

Images are the building blocks of the Docker world. They are the build part of Docker’s lifecycle. They are built step by step from a series of instructions, typically described in a simple text configuration file called a “Dockerfile”. In other words, Docker provides a simple, text-based way of declaring the infrastructure and environment dependencies of an application.

Docker images are highly portable across hosts and environments. Just as compiled Java code can run on any operating system where a JVM is installed, a Docker image can be run in a Docker container on any host that runs Docker.
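As a minimal illustration, a Dockerfile for a Java web application might look like the sketch below; the base image tag and file paths are assumptions for the example, not a recommendation.

```dockerfile
# Start from an official Java base image
FROM java:8

# Copy the packaged application into the image
COPY myapp.war /opt/myapp/myapp.war

# Document the port the application listens on
EXPOSE 8080

# Command to run when a container is launched from this image
CMD ["java", "-jar", "/opt/myapp/myapp.war"]
```

Running `docker build` against this file executes each instruction in order and produces a reusable image.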

Docker registry

Docker registries are the distribution component of Docker. Docker stores the images you build in a registry. There are two types of registries: public and private. Docker Inc. operates the public registry for images, called Docker Hub. You can create an account on Docker Hub to store and share your images, but you also have the option of keeping your images on Docker Hub private.

It is also possible to create your own registry behind your corporate firewall.
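Sharing an image through a registry boils down to tagging, pushing and pulling; the account and image names below are placeholders:

```shell
# Tag a local image under your Docker Hub account
docker tag myapp someuser/myapp:1.0

# Push it to Docker Hub (requires an account and `docker login`)
docker push someuser/myapp:1.0

# On another host, pull the image down
docker pull someuser/myapp:1.0
```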

Docker container

If images are the building or packing aspect of Docker, containers are the runtime or execution aspect of Docker. Containers are launched from images and may contain one or more running processes.

Docker borrows the concept of the standard shipping container, used to transport goods globally, as a model for its containers. But instead of goods, Docker containers ship software. Each container holds a software image (its “cargo”) and, like its physical counterpart, allows a set of operations to be performed on it: it can be created, started, stopped, restarted and destroyed.

As with shipping containers, Docker doesn’t care about the contents of the container while performing these actions: whether a container holds a web server, a database or an application server, each container is handled the same way as any other.

Docker also doesn’t care where you ship your container: you can build on your laptop, upload to a registry, then download to a physical or virtual server, test, deploy to a cluster of a dozen Amazon EC2 hosts, and run.
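The lifecycle operations mentioned above map directly onto client commands. For a hypothetical web-server image (the image and container names here are made up):

```shell
# Create and start a container from an image, mapping host port 8080 to port 80
docker run -d -p 8080:80 --name web someuser/nginx-demo

# Stop, start and restart the same container by name
docker stop web
docker start web
docker restart web

# Finally, destroy it
docker rm -f web
```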

 Benefits of Docker

  1. Efficient hardware utilization: When compared to hypervisor-based virtual machines, Docker containers use less memory, CPU cycles and disk space. This enables more efficient utilization of hardware resources. You can run more containers than virtual machines on a given host, resulting in higher density.
  2. Security: An application running in a container without root privileges is isolated from many aspects of the underlying host. A container also has a smaller attack surface than a complete virtual machine.
  3. Consistency and portability: Docker provides a way to standardize application environments and enables portability of the application across environments.
  4. Fast, efficient deployment cycle: Docker reduces the cycle time between code being written, tested, deployed and used.
    The quote below is taken from the official documentation:

    Your developers write code locally and share their development stack via Docker with their colleagues. When they are ready, they push their code and the stack they are developing onto a test environment and execute any required tests. From the testing environment, you can then push the Docker images into production and deploy your code.

  5. Ease of use: Docker’s low overhead and quick startup times make running multiple containers less tedious.
  6. Encourages SOA: Docker is a natural fit for microservices and SOA architectures since each container typically runs a single process or application and you can run multiple containers with little overhead on the same system.
  7. Segregation of duties: In the Docker ecosystem, developers care about making the application work inside the container, and Ops care about managing those containers.

Containers Vs. Virtual machines

[Image: Docker containers vs. virtual machines]

  • Virtual machines have a full OS with its own memory management, device drivers, daemons, etc. Containers share the host’s OS and are therefore lighter weight.
  • Because containers are lightweight, starting a container takes roughly a second, whereas booting a VM can take several minutes.
  • Containers can generally run only the same or a similar operating system as the host. For example, you can run Red Hat in a container on an Ubuntu host, but you can’t run Windows on an Ubuntu host. (In practice, running a different OS type is rarely a real requirement.)
  • With VMs, in theory, vulnerabilities in a particular OS version can’t be leveraged to compromise other VMs running on the same physical host. Since containers share the same kernel, admins and software vendors need to take special care to avoid security issues arising from adjacent containers.
    Countering this argument is that lightweight containers lack the larger attack surface of the full OS needed by a VM, combined with the potential exposures of the hypervisor itself.

References

  1. Docker user guide
  2. Security risks and vulnerabilities of Docker
  3. Containers Vs. VMs
  4. Docker: Using Linux Containers to Support Portable Application Deployment
  5. Contain yourself: The layman’s guide to Docker
  6. Deploying Java applications with Docker

 

O’Reilly Software Architecture Conference

Sunday, April 12th, 2015

O’Reilly recently conducted a Software Architecture Conference in Boston. It included a two-day training on microservices by Sam Newman (author of the recently published book Building Microservices) and two further days of talks by various speakers.

Martin Fowler gave a talk on agile architecture, the video of which is available from the conference website.

microxchg.io => Microservice conference in Berlin

Friday, April 10th, 2015

Recently, I attended a microservices conference in Berlin, where some of the trend-setters in this field, such as James Lewis (ThoughtWorks), Adrian Cockburn (author and agile development expert) and Chris Richardson (founder of CloudFoundry), gave interesting presentations. Among the speakers was also Sam Newman (ThoughtWorks), who has written a new book on the same topic.

My current reading list

Friday, April 10th, 2015

The list below is also a self-reminder, so I can see how I feel when I look back at it towards the end of the year.

Currently reading

Technical

  • Building Microservices by Sam Newman (published 2015)
  • Release It by Michael Nygard (published 2007)
  • The Docker Book by James Turnbull (published 2015)
  • Learning Spring Boot by Greg L. Turnquist (published 2014)
  • Pro AngularJS by Adam Freeman
  • Spring in Action by Craig Walls

Non-technical

  • Hatching Twitter: A True Story of Money, Power, Friendship, and Betrayal by Nick Bilton (published 2014)
  • The Software Paradox

Reread

  • Domain driven Design by Eric Evans (published 2003)
  • Making Things Happen by Scott Berkun (published 2008)
  • Leadership and self-deception by Arbinger Institute (published 2000)

Planned for 2015

Technical

  • DevOps: A Software Architect’s Perspective by Len Bass and Ingo Weber (published 2015)
  • Continuous Integration by Jez Humble (published 2010)
  • Implementing Lean Software Development: From Concept to Cash by Poppendieck (published 2006)
  • NoSQL Distilled by Pramodkumar J. Sadalage and Martin Fowler (published 2012)

Non-technical

  • Poor Charlie’s Almanack: The Wit and Wisdom of Charles T. Munger (published 2005)
  • Becoming Steve Jobs: The Evolution of a Reckless Upstart into a Visionary Leader by Brent Schlender (published 2015)
  • The 5 Elements of Effective Thinking by Edward B. Burger (published 2012)
  • How to Read a Book by Mortimer J. Adler and Charles Van Doren (published 1972)

Failed (started long ago, still lying on the bookshelf unfinished)

  • Hadoop in Action by Chuck Lam
  • In The Plex: How Google Thinks, Works, and Shapes Our Lives by Steven Levy (published 2011)

Meta

Colour code: the entries above are colour-coded on a scale from “actively reading / successfully finished”, through “occasionally reading” and “inactive”, down to “given up”.