Archive for the ‘Technology’ Category

The trajectory of a software engineer… and where it all goes wrong

Monday, April 20th, 2015

Just came across a thought-provoking post by a developer called Michael Church on the subject of the trajectory of a software engineer:

https://michaelochurch.wordpress.com/2012/01/26/the-trajectory-of-a-software-engineer-and-where-it-all-goes-wrong/

Using a 0.0 to 3.0 scale, he categorizes software developers from novice to “Senior fellow”. A novice is a rookie programmer who may not yet provide enough value to justify their remuneration, whereas a Senior Fellow is one of the best-known programmers in the world, typically recognized for outstanding and groundbreaking contributions to the industry as a whole.

While most programmers start off as novices (0.0 to 0.4) or marginally better, very few move beyond 1.0. At level 1.0-1.3, a programmer becomes what Michael calls a full-fledged adder, a stage where they are able to make decent contributions to the projects they work on directly. Unfortunately, most programmers fail to advance much further than that, often not because of intellectual limitations but because of a lack of drive and curiosity.

In my opinion, thought leaders like Martin Fowler or Kent Beck probably fall under 2.4 to 2.6: Global multiplier (“Fellow”), whereas the likes of Linus Torvalds, Peter Norvig and Richard Stallman fall under 2.7 to 3.0: Senior fellow, the highest level of distinction possible.

Docker explained in layman’s terms

Tuesday, April 14th, 2015

There is quite a lot of good documentation available about Docker, including the official docs, but much of it starts explaining LXC, cgroups, the Linux kernel and UnionFS in the very first paragraph, scaring off many readers who are not Linux geeks.

In this article I will try to explain Docker in layman’s terms. No prior knowledge of virtual machines, cloud deployments or DevOps practices is assumed.

Docker?

Docker is an open source platform which can be used to package, distribute and run your applications. Docker provides an easy and efficient way to encapsulate an application (e.g. a Java web application) and all the infrastructure required to run it (e.g. Red Hat Linux, the Apache web server, the Tomcat application server, a MySQL database, etc.) as a single “Docker image”, which can then be shared through a central “Docker registry”. The image can then be used to launch a “Docker container”, which makes the contained application available from the host where the container is running.

Docker provides convenient tools to build Docker images in a simple and efficient way. A Docker container, on the other hand, is a kind of lightweight virtual machine with a considerably smaller memory and disk-space footprint than a full-blown virtual machine.

By enabling fast, convenient and automated deployments, Docker shortens the cycle between writing code, testing it and getting it live in production. At the same time, by providing a lightweight container to run the application, Docker enables very efficient utilization of hardware resources.

Docker is open source and can be installed on your notebook or on any server where you want to build and run Docker images and containers (provided the minimum system requirements are met).
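If Docker is installed, a quick sanity check from the command line looks like this (hello-world is a tiny test image published on Docker Hub):

    docker --version        # confirm the Docker client is installed
    docker run hello-world  # pulls the test image and runs it; prints a greeting if the setup works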

Docker deployment workflow from 1000 feet

Before we look into the individual components of the Docker ecosystem, let us look at one way a Docker workflow can fit into the software development life cycle.

To support this workflow, we need a CI tool and a configuration management tool. I picked Bamboo and Ansible, though the workflow would remain much the same with other tools or modes of deployment. Below is a possible workflow (a sketch of the underlying Docker commands follows the list):

  1. Code is committed to the Git repository.
  2. A job is triggered in Bamboo to build the application from the source code and run unit/integration tests.
  3. If the tests pass, Bamboo builds a Docker image and pushes it to a “Docker registry”. (The Dockerfile, which provides the template for building the image, is typically committed as part of the codebase.)
  4. Bamboo runs an Ansible playbook to deploy the image to the QA servers.
  5. If the QA tests pass as well, Ansible can be used to deploy and start the container in production.
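For illustration, here is a rough sketch of the Docker commands such a pipeline runs under the hood. The image name myapp and the registry address registry.example.com are made-up placeholders:

    # Build the image from the Dockerfile committed with the codebase
    docker build -t registry.example.com/myapp:1.0 .

    # Push the image to the Docker registry
    docker push registry.example.com/myapp:1.0

    # On the QA or production host: pull the image and start a container
    docker pull registry.example.com/myapp:1.0
    docker run -d -p 8080:8080 --name myapp registry.example.com/myapp:1.0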

Docker ecosystem

Docker client and server

Docker is a client-server application. The Docker client talks to a Docker server (also called the daemon), which in turn does all the work. Docker ships with a command-line client binary called docker and a full RESTful API. The Docker client and server can run on the same host or on different hosts.
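For example, the same client binary can talk to the local daemon or, via the -H option, to a daemon on another machine (the address 10.0.0.5:2375 is just a placeholder; the remote daemon must be configured to listen on a TCP port):

    docker info                          # talks to the local daemon by default
    docker -H tcp://10.0.0.5:2375 info   # the same client talking to a remote daemon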

Docker images

Images are the building blocks of the Docker world. They are the build part of Docker’s lifecycle. They are built step by step from a series of instructions, typically described in a simple text configuration file called a “Dockerfile”. In other words, Docker provides a simple, text-based way of declaring an application’s infrastructure and environment dependencies.
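As a minimal sketch, a Dockerfile for the Java web application example mentioned earlier could look like the following. The base image tomcat:8 is an official image on Docker Hub; myapp.war is a made-up file name:

    # Start from the official Tomcat image, which already contains Java and Tomcat
    FROM tomcat:8

    # Copy the packaged web application into Tomcat's webapps directory
    COPY target/myapp.war /usr/local/tomcat/webapps/myapp.war

    # The base image already exposes port 8080 and starts Tomcat when a container is launched

Running docker build -t myapp:1.0 . in the directory containing this Dockerfile produces an image named myapp:1.0.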

Docker images are highly portable across hosts and environments. Just as compiled Java code can run on any operating system where a JVM is installed, a Docker image can be run in a Docker container on any host that runs Docker.

Docker registry

Docker registries are the distribution component of Docker. Docker stores the images you build in a registry. There are two types of registries: public and private. Docker, Inc. operates the public image registry, called Docker Hub. You can create an account on Docker Hub to store and share your images, and you also have the option of keeping your images on Docker Hub private.

It is also possible to create your own registry behind your corporate firewall.
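For example, pushing an image to Docker Hub and pulling it on another host looks roughly like this (myuser and myapp are placeholder names):

    docker login                             # authenticate against Docker Hub
    docker tag myapp:1.0 myuser/myapp:1.0    # tag the local image under your account
    docker push myuser/myapp:1.0             # upload it to the registry
    docker pull myuser/myapp:1.0             # download it on any other Docker host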

Docker container

If images are the building or packaging aspect of Docker, containers are the runtime or execution aspect. Containers are launched from images and may contain one or more running processes.

Docker borrows the concept of the standard shipping container, used to transport goods globally, as a model for its containers. But instead of goods, Docker containers ship software. Each container holds a software image (its cargo) and, like its physical counterpart, allows a set of operations to be performed: it can be created, started, stopped, restarted and destroyed.

As with physical shipping containers, Docker doesn’t care about the contents of the container while performing these actions; it doesn’t matter whether a container holds a web server, a database or an application server. Each container is loaded the same way as any other container.

Docker also doesn’t care where you ship your container: you can build on your laptop, upload to a registry, then download to a physical or virtual server, test, deploy to a cluster of a dozen Amazon EC2 hosts, and run.
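These lifecycle operations map onto a handful of commands; myapp:1.0 is again a placeholder image name:

    # Launch a container in the background, mapping host port 8080 to container port 8080
    docker run -d -p 8080:8080 --name web myapp:1.0

    docker ps           # list running containers
    docker stop web     # stop the container
    docker start web    # start it again
    docker restart web  # restart it
    docker rm -f web    # destroy it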

Benefits of Docker

  1. Efficient hardware utilization: When compared to hypervisor-based virtual machines, Docker containers use less memory, CPU cycles and disk space. This enables more efficient utilization of hardware resources. You can run more containers than virtual machines on a given host, resulting in higher density.
  2. Security: Docker isolates many aspects of the underlying host from an application running in a container without root privileges. Also a container has a smaller attack surface than a complete virtual machine.
  3. Consistency and portability: Docker provides a way to standardize application environments and enables portability of the application across environments.
  4. Fast, efficient deployment cycle: Docker reduces the cycle time between code being written and code being tested, deployed and used.
    The quote below is taken from the official documentation:

    Your developers write code locally and share their development stack via Docker with their colleagues. When they are ready, they push their code and the stack they are developing onto a test environment and execute any required tests. From the testing environment, you can then push the Docker images into production and deploy your code.

  5. Ease of use: Docker’s low overhead and quick startup times make running multiple containers less tedious.
  6. Encourages SOA: Docker is a natural fit for microservices and service-oriented architectures, since each container typically runs a single process or application and you can run many containers with little overhead on the same system.
  7. Segregation of duties: In the Docker ecosystem, developers focus on making the application work inside the container, while Ops focuses on managing those containers.

Containers Vs. Virtual machines

[Figure: Docker containers vs. virtual machines]

  • Virtual machines have a full OS with its own memory management, device drivers, daemons, etc. Containers share the host’s OS and are therefore lighter weight.
  • Because containers are lightweight, starting a container takes roughly a second, whereas booting a VM can take several minutes.
  • Containers can generally run only the same or a similar operating system as the host operating system. For example, you can run Red Hat in a container on an Ubuntu host, but you can’t run Windows on an Ubuntu host. (In practice, this is rarely a real limitation.)
  • With VMs, in theory, vulnerabilities in a particular OS version can’t be leveraged to compromise other VMs running on the same physical host. Since containers share the same kernel, admins and software vendors need to take special care to avoid security issues arising from adjacent containers.
    Countering this argument is the fact that lightweight containers lack the larger attack surface of the full OS a VM needs, as well as the potential exposures of the hypervisor itself.

References

  1. Docker user guide
  2. Security risks and vulnerabilities of Docker
  3. Containers Vs. VMs
  4. Docker: Using Linux Containers to Support Portable Application Deployment
  5. Contain yourself: The layman’s guide to Docker
  6. Deploying Java applications with Docker

 

Facebook purchases Oculus VR… Will we soon have virtual reality?

Wednesday, March 26th, 2014

Facebook has just announced that it has purchased Oculus VR, a maker of virtual reality headsets, for an estimated $2 billion in cash and stock.

Oculus VR makes the “Oculus Rift”, a device that looks like a headset with a mask and enables a virtual reality experience for video games. Zuckerberg says that their efforts with Oculus will continue to focus on gaming initially, and that the company will continue to operate independently of Facebook. But after gaming, Zuckerberg says, they’re going to expand into a variety of other arenas.

“After games, we’re going to make Oculus a platform for many other experiences. Imagine enjoying a court side seat at a game, studying in a classroom of students and teachers all over the world or consulting with a doctor face-to-face — just by putting on goggles in your home,” he says. “This is really a new communication platform. By feeling truly present, you can share unbounded spaces and experiences with the people in your life. Imagine sharing not just moments with your friends online, but entire experiences and adventures.”

Gaming and online betting websites have played a big role in the development of many web technologies and of the internet in general. But it is not quite clear why Facebook would pay $2 billion for a VR company when most of its users access Facebook through mobile, and it has realized, albeit somewhat slowly, that mobile is where its future lies. People are unlikely to carry a bulky gadget with them all the time, so unlike Google Glass, the Oculus is more for your living room or game room. Whether many Facebook users will pay something like $300 for such a gadget is another question.

But the first mobile phones were bulky and expensive too. Over time, headsets could also become thinner, lighter and even cheap enough for mass-market penetration. This could enable companies to conduct videoconferencing with a richer experience than is currently possible and could help Facebook penetrate the corporate market… or let people attend a friend’s wedding on another part of the planet, remotely and virtually!