Angular2: Beyond the “Todo” list!

Angular2 is the latest buzzword in web programming and, frankly speaking, I am in love with it. It seems to be the "Docker" for the web! Well, I am also in love with Docker, and the good part is that both Angular2 and Docker do not mind me loving them at the same time ;).

Jokes apart, I particularly like Angular2 because of the component concept and its inherent support for TypeScript (TS). Components truly allow for the creation of re-usable web components, since they also encapsulate the "view" aspect. Hence no more grappling with, and switching between, a model/view/controller: everything is inside a component. You use the component and you get an automatic binding with the needed controller and view. This takes re-usability to the next level. For someone like me whose background is systems programming, components can be seen as class objects exposing certain methods/properties (i.e. basic encapsulation). Components also allow for extension (i.e. inheritance) and can be tested individually, so once confirmed working, they keep working without any external dependency.

That said, there are quite a few tutorials out there which create a basic ToDo app with Angular2, and yes, it is pretty cool. But when you try to make something useful out of what you learned from the ToDo app, the details bog you down. In any case, the #FreeCodeCamp (#FCC) project on the Pomodoro Clock was my trigger to use Angular2. I have the project hosted on Plunker (Pomodoro Clock), using Angular2, TS and Semantic-UI. I am also a big fan of Jade, so I have used it a little bit in the index.html, but using it inside a component template seems a bit awkward. Though we could do so, I have used normal HTML, since the components are broken down and are quite simple. A Pomodoro clock is basically a countdown timer which alternates between a session time and a break time and increases productivity (Google it if you want to know more).

Enough talk, let's start with the code explanation. The first file in the Plunker is config.js. This file basically enables the TypeScript transpiler and tells our web application to use angular2 and rxjs. It is generated automatically with the right parameters if you select a new AngularJS->2.0.x(TS) project in the Plnkr menu, so nothing interesting there. We skip it and go to the next file, index.jade. This is the Jade file I am using, i.e. it is the one which is compiled into HTML and rendered in the browser window, and which also creates and loads the Angular2 application. The structure is pretty easy to follow if you have done any Jade programming. There is a title and then a bunch of scripts which bring in jade, the jade runtime, zone, reflect, typescript, semantic-ui, jquery (needed by semantic-ui) and rxjs. We also see our config.js being referenced and our 'app' imported. There is also a custom font that I generated and am using across my #FCC projects, with some very basic beautification CSS. The rest of the file is self-explanatory, but look particularly at line 33.


That is exactly where our src/app.ts gets called. But, you might ask, if app is in the src directory, how does Angular find it? Look into config.js, which has a map telling the system loader where to look for 'app'. Before I forget, we see div.ui.grid.container.centered on line 31, which uses the UI class from semantic-ui. I like that one too, since it provides a host of components out of the box. But before we go to app.ts, let's look at main.ts. This file is the one which bootstraps our App class, i.e. it loads/initializes/starts our Angular app. There is generally only one class that is bootstrapped (though I have played with multiples and it all seems to work; it can be considered bad design though, so avoid it). Main.ts imports the bootstrap function from angular2 and the App class from app.ts, and bootstraps it. If there is an error, it is printed directly in the HTML.
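As a rough sketch, main.ts in this kind of setup boils down to something like the following (the exact import path varies across Angular2 beta releases, so treat it as illustrative):

```typescript
// Illustrative sketch of main.ts: import the bootstrap function and the
// root component, then start the app. On failure, the error is printed
// directly in the HTML, as described above.
import {bootstrap} from 'angular2/platform/browser';
import {App} from './app';

bootstrap(App).catch(err => {
  document.body.innerHTML = `<pre>${err}</pre>`;
});
```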

Now we go to src/app.ts, which imports the Component class from Angular2, Break and Session (which I have written) and Observable from rxjs. The part to note is @Component({…}). The selector in there defines the selector we should use inside our HTML to load the particular class. In our case, I have named it app, but you can name it whatever you like and call it in index.jade on line 33. We are not using any providers in this app. Then we have a template which shows our top-level UI, again using components from semantic-ui. Lines 25-26 are the important ones, which invoke our Session and Break components using the specified selectors.

<session-length [sessionTime]="sessionTime" (changedSessionTime)="sessionTimeChanged($event)" [sessionDisabled]="inputDisabled"></session-length>
<break-length [breakTime]="breakTime" (changedBreakTime)="breakTimeChanged($event)" [breakDisabled]="inputDisabled"></break-length>

We are passing some input parameters and also waiting for an event which we expect to be generated from the component. By the way, this code can be optimized to use only one component instead of Break & Session, but that we can do in the next post. For the template to work properly, we need to list Session and Break as directives of the App component. Then we have the App class exported, which has the logic to update the progress bar as well as manipulate the parameters which are passed to the Break & Session classes. Before going into details here, let's look at the break.ts file. Basically, it is a simple component wrapping a number input, defining the minimum and maximum values. The Break class expects breakTime and breakDisabled as input values. The field is disabled while the timer is running. Then we have a valueChanged function which emits the value of the component as an output, to whoever would like to know the value of the field. Since the initial value is given as an input, we do not need to emit the value in the constructor. Note that we are using property binding in the template with []="". More details on bindings are in the Angular2 documentation. This is a one-way binding; combining the property ([breakTime]) and the event ((change)) gives us a two-way binding. The session.ts is similar. So if we take the min and max as input values as well, and modify the emitted output to include some kind of indication on the output object, we can have just one component instead of two.
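To make this concrete, here is a minimal sketch of what break.ts looks like. The input/output names match the template excerpt above; the min/max values and the exact template markup are my illustrative assumptions, not the original code:

```typescript
// Sketch of break.ts: a component wrapping a number input.
// Input/output names follow the <break-length> usage shown earlier;
// the markup and min/max bounds here are assumptions.
import {Component, Input, Output, EventEmitter} from 'angular2/core';

@Component({
  selector: 'break-length',
  template: `
    <div class="ui labeled input">
      <div class="ui label">Break</div>
      <input type="number" min="1" max="60"
             [value]="breakTime"
             [disabled]="breakDisabled"
             (change)="valueChanged($event.target.value)">
    </div>`
})
export class Break {
  @Input() breakTime: number;       // initial value, passed in by App
  @Input() breakDisabled: boolean;  // true while the timer is running
  @Output() changedBreakTime = new EventEmitter<number>();

  // Emit the new value so App (or anyone else listening) can react.
  valueChanged(value: string) {
    this.changedBreakTime.emit(+value);
  }
}
```

session.ts would be the same shape with sessionTime/sessionDisabled/changedSessionTime, which is exactly why the two could collapse into one parameterized component.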


Now back to app.ts, we can see that we are using the semantic-ui progress bar. The weird part with this one is that you need to call its progress() function to get the bar to move; just passing a value will not work. Do note that I can access my UI components in the class by using $("#elementId") and call the resulting methods/properties. This took quite a lot of time for me to figure out. The rest of the code is pure logic which I will not explain. Leave your comments and see if you can progress from here. This sample code paves the way for more complex applications by breaking them down into components, as well as using classes, components, inputs, outputs and EventEmitter.
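The "pure logic" part, a countdown that alternates between session and break, can be sketched framework-free. All names here are mine for illustration, not from the original app:

```typescript
// Framework-free sketch of the alternating countdown at the heart of a
// Pomodoro clock. All identifiers are illustrative, not the original code.
class PomodoroTimer {
  remaining: number;   // seconds left in the current phase
  onBreak = false;     // false = session phase, true = break phase

  constructor(public sessionTime: number, public breakTime: number) {
    this.remaining = sessionTime * 60;
  }

  // Called once per second (e.g. from setInterval or an rxjs Observable).
  tick(): void {
    this.remaining--;
    if (this.remaining <= 0) {
      // Phase finished: flip between session and break, reset the clock.
      this.onBreak = !this.onBreak;
      this.remaining = (this.onBreak ? this.breakTime : this.sessionTime) * 60;
    }
  }

  // Percentage of the current phase elapsed, ready for a progress bar.
  percentDone(): number {
    const total = (this.onBreak ? this.breakTime : this.sessionTime) * 60;
    return Math.round(100 * (total - this.remaining) / total);
  }
}
```

From app.ts, percentDone() is what you would feed to the semantic-ui call, e.g. $("#bar").progress({percent: timer.percentDone()}).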


Get Started With Docker!

Docker is amazing! It is a gift to mankind, the next best thing after sliced bread! So let's start using it. OK, say you want to get started with Docker quickly and install a host of services on your local machine, enabling you to get your big-data analytics software into a real-time visualization. Don't go searching the net for how to install the different servers. Instead, use Docker (I bet you knew this was coming)!

Docker runs great on Linux. Use:

    sudo apt-get install docker docker-compose

if using a flavor of Ubuntu/Debian. Otherwise, browse the web for a package suitable for your distro. Make sure your hardware is up to the task. Once done, enable the docker service using:

    sudo systemctl enable docker

That's pretty much it. Now Docker is running. The first thing to do is get the Simple Docker UI, which takes away the burden of remembering all those commands. It is also available as a Google Chrome extension: click on it, and Chrome downloads the application for you and creates a nice entry in your "start" (!!!) menu. Start the Simple Docker UI using:

    sudo docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock uifd/ui-for-docker

If everything installed OK, you can now point your browser to localhost:9000 and see a nice Docker UI. Play around with it; it is really simple to understand. The next step is to run some servers, for example a RabbitMQ server. All the servers are available as either user-generated or official docker images on Docker Hub, and most have instructions on how to start them. But in this case, assume you need RabbitMQ. Installing and running the server with its default configuration is as simple as:

    sudo docker run -d -p 25672:25672 -p 4369:4369 -p 5671:5671 -p 5672:5672 --hostname mb-host rabbitmq:3.6.2

If you want to manage the RabbitMQ server so started with the nice RabbitMQ management plugin, start that up as well using:

    sudo docker run -d -P --hostname mb-host rabbitmq:3.6.2-management

If data needs to be persisted across docker runs, create a volume and associate it with a directory on your local machine. Containers started by docker will then persist the data on that volume. You can look into the Docker UI to see which container is storing data where (I told you it is very simple). The following commands create a new data volume container on /data/vidacdb for a given MongoDB version; /bin/true is just a no-op command, so the container exists only to hold the volume. We also start the docker-compose service, just in case. Then we run an actual instance of MongoDB, giving it the volume to use. Make sure --smallfiles is passed, which reduces the default size of the data files.

    sudo docker create --name mongo-data-volume -v /data/vidacdb mongo:3.3.6 /bin/true
    sudo docker-compose up -d
    sudo docker run -d -p 27017:27017 --volumes-from mongo-data-volume mongo:3.3.6 --smallfiles

That's pretty much it, folks. You are up and running with Docker. You can create your own docker images if need be. Run all the servers as needed, bunch them up as scripts, and you are good to go. For my part, I use the following bash script to run the needed servers.

    #!/bin/bash

    # Start RabbitMQ from the official image
    #sudo docker run -d -p 25672:25672 -p 4369:4369 -p 5671:5671 -p 5672:5672 --hostname mb-host rabbitmq:3.6.2
    # Also start the management plugin as needed
    #sudo docker run -d -P --hostname mb-host rabbitmq:3.6.2-management

    # Create a mongo-data-volume
    sudo docker create --name mongo-data-volume -v /data/vidacdb mongo:3.3.6 /bin/true
    sudo docker-compose up -d

    # Start MongoDB from the official image
    #sudo docker run -v /data/vidacDB -d -p 27017:27017 mongo:3.3.6
    sudo docker run -d -p 27017:27017 --volumes-from mongo-data-volume mongo:3.3.6 --smallfiles
    # --rest --auth

    # Start ElasticSearch (needed for live data) from the official image
    sudo docker run -d -p 9200:9200 -p 9300:9300 elasticsearch:2.3.3 \
        -Des.http.cors.enabled="true" -Des.http.cors.allow-origin="/http?:\/\/localhost(:[0-9]+)?/"

    sleep 5

    # Kibana from official #4.5.1
    sudo docker run --name local-kibana -e ELASTICSEARCH_URL= -p 5601:5601 -d kibana:4.5.1

    # Kibana from official #4.5.1
    #sudo docker run --name edm-kibana -e ELASTICSEARCH_URL= -p 5610:5601 -d kibana:4.5.1

    # Jenkins from official #1.651.3
    sudo docker create --name jenkins-data-volume -v /data/jenkins_home jenkins:1.651.3 /bin/true
    sudo docker-compose up -d
    sudo docker run -v /data/jenkins_home -d -p 8080:8080 -p 50000:50000 jenkins:1.651.3


Install Linux on a Fresh Machine using LVM!

Okay, I know, the title says it all, right? And there are hundreds and thousands of tutorials out there that show you how to do this, right? Show me one tutorial that tells you what partitions are needed when you are doing a fresh install of Linux, especially with installers that do not support LVM/LVM2. Let's take the example of Manjaro, the latest talk of the town. The graphical installers have no option to support LVM. Even if you do manual partitioning, they do not want to install anything onto LVM, and they want root and swap partitions. Besides, they forget that we also need a small boot partition to install GRUB/SYSLINUX, right? Well, at least that is what happened to me, and either I am too stupid to understand it or I didn't read the instructions properly. In any case, I headed over to the CLI installer, which thankfully had an option to use LVM. That option was misleading, though, since it expected the partitions to be created manually and did not provide any guidance as to what partitions should be created. Instead, it felt more like a menu reminding you of the steps: partitioning, creating physical volumes (PVs) and creating volume groups (VGs).

So without much ado, below is what you really need to do to set up LVM properly, and then use it whether or not the installer supports it. My assumption is that you want all the disks in your computer inside one VG for LVM. Assuming two disks, below is the schema.

/dev/sda should have the following partitions.

1. A boot partition of around 256MB (make it smaller if you want; it will only be used to install GRUB/SYSLINUX/any other boot loader). /dev/sda1
2. A swap partition double the amount of RAM that you have. /dev/sda2
3. An LVM partition covering the rest of the disk. /dev/sda3

/dev/sdb should have:

1. A swap partition double the amount of RAM that you have. /dev/sdb1
2. An LVM partition covering the rest of the disk. /dev/sdb2

Add as many disks as you need, but be sure to create the two varieties. Use fdisk, gdisk, a partition manager, etc. to create the needed partitions. Now create the PVs and VGs that you want. The following commands will help.

lvmdiskscan -> shows the partitions available for creating volumes

pvcreate, e.g. pvcreate /dev/sda3 /dev/sdb2 (for the above example) -> creates physical volumes from the needed partitions

pvdisplay -> lists the partitions added as physical volume(s)

vgcreate, e.g. vgcreate vgpool /dev/sda3 /dev/sdb2 -> creates a volume group named vgpool including the listed partitions

vgdisplay -> displays your volume group and the included partitions

lvcreate, e.g. lvcreate -l 100%FREE vgpool -n lvhome -> creates a logical volume lvhome on vgpool consuming 100% of the free space in the VG

vgscan and vgchange -ay -> scan for and activate the LVs
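Put together, and assuming the two-disk example schema above, the whole sequence looks like this (device names are from the example; adjust them to your machine, and be aware these commands destroy data on the named partitions):

```shell
#!/bin/bash
# LVM setup for the example two-disk schema above.
# WARNING: destructive to the named partitions.

# Create physical volumes on the LVM partitions of both disks
pvcreate /dev/sda3 /dev/sdb2
pvdisplay                      # verify the PVs were created

# Pool both PVs into one volume group
vgcreate vgpool /dev/sda3 /dev/sdb2
vgdisplay                      # verify the VG and its PVs

# Carve one logical volume out of all the free space
lvcreate -l 100%FREE vgpool -n lvhome

# Scan for and activate the logical volumes
vgscan
vgchange -ay
```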


At this point you should be able to use your LVM for installing. In the installer, go to manual partitioning and select /dev/sda1 as the boot partition, /dev/sda2 as swap, and the logical volume (/dev/vgpool/lvhome) as root. Hopefully this will spare you the number of re-formats and partition restructurings that I had to do. Enjoy your new Linux using LVM on this machine. PS: The latest kernels support LVM, but if yours does not, do a modprobe dm-mod and install lvm2 before doing all the above.

Metrics & task boards in Scrum/Agile!

My thoughts on why and how ongoing tasks should be measured.

I will start with the practice. The problems we are trying to tackle are:

1. How to make our team believe in their own estimations?
2. What is our cycle time?
3. How do we project the probability of fulfilling the sprint goals?
4. How do we track the state of the task?
5. How do we ensure continuous development of processes?

For the last 3 questions, I suggest referring to the self-explanatory Cumulative Flow Diagrams. Martin Alaimo writes on measuring sprint progress in the Scrum Alliance community blogs. Essential Scrum: A Practical Guide to the Most Popular Agile Process by Kenneth S. Rubin (pp. 357-359) defines how task metrics can be visualized (though in a table format). There are electronic task boards showing progress, and a detailed article by MSFT on task boards (of course tailored towards VS Team edition usage, but with lots of details). Another article is very good on why sprint progress should be monitored regularly and not just at the end of the sprint.

I know we all hate RallyDev, but they still have a nice article on this. There is also an MSDN article on the scrum process workflow, and a good read on effective visual management of Scrum/task boards with information radiators.

For this sprint, I will put information on my tasks that will help visualize progress, track the state of each task, and project whether we can meet the sprint goals. It will also help in the retrospective and in estimating during the next sprint planning.

Retrospective 2015 – Good, bad and ugly!

My last blog entry was on 1st July 2015. It has been almost 6 months, and I am not proud of the lapse in time without writing down my thoughts. As I see it, that is not only bad for my readers but also for my mental health ;). In any case, one of my new year resolutions is to jot down at least some of my thoughts every fortnight, if not every week. Hopefully, this resolution does not go down the drain like every other one to date. My detailed retrospective for 2015 is below, but the key takeaway is that 2015 was an average year. A lot of things happened, some good, some bad, but they haven't been able to satisfy me, and I feel the time could have been spent in a better manner, furthering my goals, both material and spiritual.

2015 started with a bang. January was the 3rd month of pregnancy for my wife, with the due date fixed for July. In February, I got the bad news that the envelope containing the certificates I had sent for my MBA course had been returned to me marked "Invalid Address". I got in touch with the relevant authorities and they accepted my late submission. In March, I was finally admitted to the MBA, which was to start in August. In the meantime, the office work was boiling down to boring chores and I had the experience of working in a very bad team from every perspective, be it work, relationships, etc. The first half of the year in the office was dreadful. The team I was put in was very immature and had some elements who didn't care about team performance and delivery but instead cared about their personal ego satisfaction, at the cost of redone work and wasted company resources! This led to a lot of friction between various members, culminating on me, since I was quite vocal about those aspects. I then moved on to a much better team which had the delivery mentality and was mature enough to engage in various aspects. This happened in the second half and helped take out a lot of the stress that had accumulated during the first half.

Since July was the due date for my 2nd child, my parents came in June, planning this time to stay with us for 8 months. But then the bad news started rolling in. My paternal aunt passed away (at 75) at the end of June, merely 20 days after my parents' arrival. My parents, especially my dad, had a very hard time coping with the loss. My 2nd child was born on 16th July by normal delivery, though my wife had to be rushed off in an ambulance as she had been bleeding since the afternoon. The baby, who arrived at 1.40am the next day, was to change the next 6 months for all of us. She is a bundle of joy, but with gastric troubles; her sleeping patterns are very unusual, so we rarely got any sleep. In the meantime, my father started having low blood pressure, and after a lot of deliberation we finally had to prepone our parents' departure tickets to the end of September. My MBA had started, and I have to spend almost 10 hours a week to keep pace with the submission of assignments and papers demanded by the course.

After my parents went back to India, we received more bad news. This time my maternal grandfather (90 years) passed away after a prolonged illness. As if that was not enough, my cousin sister's mother-in-law also passed away in December. Incidentally, my paternal aunt and my cousin's mother-in-law were best friends who had spent almost half their lifetimes together. Also in December, my uncle on my father's side passed away (~80 years) after an illness. This is the only year I know of in which I have seen so many deaths in our family. Death is the final reality of life, which we do not think about except when we see someone die. It is sometimes a point of spiritual thought as to how we can make the most out of our lives instead of doing normal chores and passing away.

On the good side, my father's health is back to normal, and we have got a little bit of our life back since our 2nd one is growing up and is almost 6 months now. My eldest is going to a Swedish school, so hopefully her future is secured. We have invested in a house coming up in 3 years' time. The MBA at BTH is going along at full speed; we are almost at the end of the semester, learning a lot of fundamentals which are far more valuable to know in theory, even when one has already applied them in practice. Work is good and I am enjoying my new team. I released a CM12.1 ROM for the Oppo Find5 on XDA and am now working on CM13.0 plus some other exquisite ROMs for the Find5. A lot of new projects were started on both github and bitbucket. Gitlab has some interesting projects ongoing which I will open source once I reach a certain usable point. Plus, I revived my interest in cricket and played quite a few games in summer with the Danish Cricket League (though our team didn't win any, but hey!). I am also playing badminton regularly, which has a very positive effect on my mental and physical health. The bad part is that after my 2nd kid was born and my MBA started, I have not been able to devote time to cardio, which I intend to start again this year. My Swedish is still worse than a "nollan".

As for the new year's resolutions, they are very simple.
1. Learn Swedish (carry forward from last year)
2. Finish all the different projects started in 2015
3. Keep up with the MBA, work and personal life starting with my exercises again

Hopefully, 2016 will be much better than 2015. I do hope that I can contribute to society in a way that satisfies my soul, and that I am able to make my family, and the world at large, a happier place. HAPPY NEW YEAR, everybody!