SSHFS on Linux & Windows

SSHFS comes into play when connecting to a remote client and/or working on a remote server. I am pretty sure most of you are aware of how to do this, but if not, the steps below can help a lot with your development using your favorite tools. The problem with remote servers is that you either do not have root access, or the software is not up to date, or most of our new-age tools do not work as expected (e.g. try running Atom or Sublime Text on one). Using NX is also a pain... You can of course run Linux, or an X server on Windows, and get native X forwarding, but losing the SSH session will invalidate your window and you might lose data. Worst of all, such X sessions are not restored (on Windows).

So one idea is to use SSHFS on both your Linux and Windows clients and mount your $HOME directory as a shared drive on which you can operate. If you want to read more about it, see http://www.linuxjournal.com/article/8904.

For Linux:
    - If you do not have sshfs installed, install it (sudo apt-get install sshfs)
    - sudo mkdir /mnt/ (create a mount point)
    - Mount your drive using: sudo sshfs -o allow_other <user>@remote-server_ip:/home/ /mnt/

Of course, I assume that on your local Linux machine you have sudo rights ;). Now you can work in /mnt/ and all the changes are reflected back on the remote server (so no more creating copies for those who were doing it). You can mount as many folders from the remote server as needed, replicate that environment on your local machine, and use your machine to build/compile/test, etc.
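The steps above can be wrapped in a small helper. Below is a minimal sketch, assuming placeholder user/host names and an extra reconnect option that I find useful for flaky links (the function name is mine, not part of sshfs):

```shell
#!/bin/bash
# Sketch of an SSHFS mount helper. The user, host and paths below are
# placeholders -- adjust them to your own setup.

build_sshfs_cmd() {
  # Build the sshfs command line; allow_other lets non-root users
  # access the mount, reconnect re-establishes dropped SSH sessions.
  local user="$1" host="$2" remote_dir="$3" mount_point="$4"
  echo "sshfs -o allow_other,reconnect ${user}@${host}:${remote_dir} ${mount_point}"
}

# Print the command that would be run (normally prefixed with sudo):
build_sshfs_cmd alice remote-server /home/alice /mnt/remote
```

To unmount later, use fusermount -u /mnt/remote (or plain umount on some systems).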

For Windows:
    - Download http://www.expandrive.com/download-expandrive/ or http://www.netdrive.net/download/download_click.html (both are similar)
    - Install and mount your Hercules repo as a drive on windows and operate on it.
    - Open source lovers can use dokany or tuissh GUI package 

Advantages:
    - No more NX problems
    - Use your favorite editors/analysis tools
    - No more copies of code.
    - FUN UNLIMITED

Hope it helps. ENJOY!

Angular2: Beyond the “Todo” list!

Angular2 is the latest buzzword in web programming and, frankly speaking, I am in love with it. It seems to be the “Docker” for the web! Well, I am also in love with “Docker”, and the good part is that both Angular2 and Docker do not mind me loving them at the same time ;). Jokes apart, I particularly like Angular2 because of the component concept and its inherent support for TypeScript (TS). Components truly allow for the creation of reusable web components since they also encapsulate the “view” aspect. Hence there is no more grappling with / switching between a model/view/controller: everything is inside a component. You use the component and you get an automatic binding with the needed controller and view. This takes reusability to the next level. Also, for someone like me whose background is systems programming, components can be seen as class objects exposing certain methods/properties (i.e. basic encapsulation). Components also allow for extension (i.e. inheritance) and can be tested individually, so once confirmed working, they keep working without any external dependency.

That said, there are quite a few tutorials out there which create a basic ToDo app with Angular2 and yes, it is pretty cool. But when you try to make something useful out of what has been learned from the ToDo app, the details bog us down. In any case, the #FreeCodeCamp (#FCC) project on the Pomodoro Clock was my trigger to use Angular2. I have the project hosted on Plunker (Pomodoro Clock), using Angular2, TS and Semantic-UI. I am also a big fan of Jade, so I have used it a little bit in the index.html, but using it inside a component template seems to be a bit awkward. Though we can do so, I have used normal HTML since the components are broken down and are quite simple. The Pomodoro clock is basically a countdown timer which alternates between a session time and a break time and increases productivity (Google it if you want to know more).

Enough talk, let's start with the code explanation. The first file in the Plunker is config.js. This file basically allows for using the TypeScript transpiler and tells our web application to use angular2 and rxjs. It will be automatically generated with the right parameters if you select a new AngularJS->2.0.x(TS) project in the Plnkr menu. So nothing interesting really. We skip it and go to the next file, index.jade. This is the jade file that I am using, i.e. it is the one which is compiled into HTML and rendered in the browser window and which also creates and loads the Angular2 application. The structure is pretty easy to follow if you have done any jade programming. There is a title and then a bunch of scripts which bring in jade, the jade runtime, zone, reflect, the TypeScript transpiler, semantic-ui, jquery (needed by semantic-ui) and rxjs. We also see our config.js being referenced and our ‘app’ imported. There is also a custom font that I generated from enjoycss.com, which I am using across my #FCC projects along with some very basic beautification CSS. The rest of the file is self-explanatory, but look particularly at line 33.

app

That is exactly where our src/app.ts is getting called. But, you say, app is in the src directory, so how does Angular find it? Look into config.js, which has a map that tells the system loader where to look for ‘app’. Before I forget, we see div.ui.grid.container.centered on line 31, which is using the UI classes from semantic-ui. I like that one too, since it provides a host of components out of the box. But before we go to app.ts, let's look at main.ts. This file is the one which bootstraps our App class, i.e. it loads/initializes/starts our Angular app. There is generally only one class that is bootstrapped (though I have played with multiples and it all seems to work; it can be considered bad design though, so avoid it). Main.ts imports the bootstrap function from angular2 and the App class from app.ts and bootstraps it. If there is an error, it is directly printed in the HTML.

Now we go to src/app.ts, which imports the Component class from Angular2, the Break and Session components (which I have written) and Observable from rxjs. The part to know about is @Component({…}). The selector in here defines the selector we should use inside our HTML to load the particular class. In our case, I have named it app, but you can name it whatever you like and call it in index.jade on line 33. We are not using any providers in this app. Then we have a template which shows our top-level UI, again using components from semantic-ui. Lines 25-26 are the important ones, which invoke our session and break components using the specified selectors.

<session-length [sessionTime]="sessionTime" (changedSessionTime)="sessionTimeChanged($event)" [sessionDisabled]="inputDisabled"></session-length>
<break-length [breakTime]="breakTime" (changedBreakTime)="breakTimeChanged($event)" [breakDisabled]="inputDisabled"></break-length>

We are passing some input parameters and also waiting for an event which we expect to be generated from the component. By the way, this code can be optimized to use only one component instead of break & session, but that we can do in the next post. For the template to work properly, we need to list Session and Break as directives of the App component. Then we have the App class exported, which has the logic to update the progress bar as well as manipulate the parameters which are passed to the break & session classes. Before going into details here, let's look at the break.ts file. Basically, it is a simple component wrapping a number input defining the minimum and maximum values. The class Break expects breakTime and breakDisabled as input values. The field is disabled while the timer is ongoing. And then we have a valueChanged function which emits the value of the component as an output to whoever would like to know the value of the field. Since the initial value is given as an input, we do not need to emit the value in the constructor. Note that we are using property binding in the template using []=””. More details on bindings are in the Angular2 documentation. This is a one-way binding. We also have a property (breakTime) and event (change) two-way binding. The session.ts is similar. So if we take the min and max as input values as well, and modify the emitted output to include some kind of indication on the output object, we can have just one component instead of two.

Now back in app.ts, we can see that we are using the semantic-ui progress bar. The weird part with this one is that one needs to call the progress() function to get the progress bar to move; just passing a value will not work. Do note that I can access my UI components in the class by using $(“#”) and call the resulting methods/properties. This took quite a lot of time for me to figure out. The rest of the code is pure logic which I will not explain. Give your comments and see if you can progress from here. This sample code paves the way for more complex applications by breaking them down into components as well as using classes, components, inputs, outputs and EventEmitter.

HAPPY CODING!

Get Started With Docker!

Docker is amazing! It is a gift to mankind. The next best thing after sliced bread! Let's start using it, yeah? OK, let's say you want to get started with Docker quickly and install a host of services on your local machine that enable you to get your big-data analytics software into a real-time visualization. Don't go searching the net for how to install the different servers. Instead, use Docker (I bet you knew this was coming)!

Docker runs great on Linux. Use:

    sudo apt-get install docker docker-compose

if you are using a flavor of Ubuntu/Debian. Otherwise browse the web and get a package suitable for your distro. Make sure you have proper hardware to go about doing this. Once done, enable the docker service using:

    sudo systemctl enable docker

That's pretty much it. Now Docker is running. The first thing one should do is to get the Simple Docker UI, which takes away the burden of remembering all those commands. It is also available as a Google Chrome extension. Click on it and Chrome downloads the application for you and creates a nice entry in your "start" (!!!) menu. Start the Simple Docker UI using:

    sudo docker run -d -p 9000:9000 --privileged -v /var/run/docker.sock:/var/run/docker.sock uifd/ui-for-docker

If everything is installed OK, you can now point your browser to localhost:9000 and see a nice Docker UI. Play around with it; it is really simple to understand. The next step is to run some servers. An example can be a RabbitMQ server, for instance. All the servers are available as either user-generated or official docker images on https://hub.docker.com/. Most have instructions on how to start the servers. But in this case, assume you need RabbitMQ. The installation and default configuration of the server is as simple as:

    sudo docker run -d -p 25672:25672 -p 4369:4369 -p 5671:5671 -p 5672:5672 --hostname mb-host rabbitmq:3.6.2

If you want to manage the RabbitMQ server you just started with the nice RabbitMQ management plugin, start that up as well using:

    sudo docker run -d -P --hostname mb-host rabbitmq:3.6.2-management

If data needs to be persisted across docker runs, create a volume and associate it with a directory on your local machine. Containers started by docker will then persist the data on that volume. You can look into the Docker UI to see which container is storing data where (I told you it is very simple). The following code creates a new data volume on /data/vidacdb with an associated MongoDB version; /bin/true is given as a no-op command, since this container exists only to hold the volume. We also bring up docker-compose just in case. Then we run an actual instance of MongoDB, giving it the volume to use. Make sure the --smallfiles flag is passed, which helps by using smaller data files.

    sudo docker create --name mongo-data-volume -v /data/vidacdb mongo:3.3.6 /bin/true
    sudo docker-compose up -d
    sudo docker run -d -p 27017:27017 --volumes-from mongo-data-volume mongo:3.3.6 --smallfiles
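Note that the commands run sudo docker-compose up -d without showing a compose file; docker-compose expects a docker-compose.yml in the current directory. A minimal sketch of what such a file could look like for the MongoDB service above (the service name and host path mapping are my assumptions, not something shown in the original setup):

```yaml
# docker-compose.yml -- hypothetical equivalent of the docker run
# commands above; the service name is illustrative.
version: '2'
services:
  mongo:
    image: mongo:3.3.6
    command: --smallfiles
    ports:
      - "27017:27017"
    volumes:
      - /data/vidacdb:/data/db
```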

That's pretty much it, folks. You are up and running with Docker. You can create your own docker images if need be. Run all the servers as needed, bunch them up in scripts and you are good to go. Personally, I use the following bash script to run the needed servers.

    #!/bin/bash

    # Start RabbitMQ from the official image https://hub.docker.com/_/rabbitmq/
    #sudo docker run -d -p 25672:25672 -p 4369:4369 -p 5671:5671 -p 5672:5672 --hostname mb-host rabbitmq:3.6.2
    # Also start the management plugin as needed
    #sudo docker run -d -P --hostname mb-host rabbitmq:3.6.2-management

    # Create a mongo-data-volume
    sudo docker create --name mongo-data-volume -v /data/vidacdb mongo:3.3.6 /bin/true
    sudo docker-compose up -d

    # Start MongoDB from the official image https://hub.docker.com/_/mongo/
    #sudo docker run -v /data/vidacDB -d -p 27017:27017 mongo:3.3.6
    sudo docker run -d -p 27017:27017 --volumes-from mongo-data-volume mongo:3.3.6 --smallfiles

    # --rest --auth

    # Start ElasticSearch needed for live data, from the official image https://hub.docker.com/_/elasticsearch/
    sudo docker run -d -p 9200:9200 -p 9300:9300 elasticsearch:2.3.3 -Des.node.name="es_host" \
        -Des.http.cors.enabled="true" -Des.http.cors.allow-origin="/https?:\/\/localhost(:[0-9]+)?/"

    sleep 5

    # Kibana from the official image https://hub.docker.com/_/kibana/ #4.5.1
    sudo docker run --name local-kibana -e ELASTICSEARCH_URL=http://136.225.119.130:9200 -p 5601:5601 -d kibana:4.5.1

    # Kibana from the official image https://hub.docker.com/_/kibana/ #4.5.1
    #sudo docker run --name edm-kibana -e ELASTICSEARCH_URL=http://150.132.77.249:9200 -p 5610:5601 -d kibana:4.5.1

    # Jenkins from the official image https://hub.docker.com/_/jenkins/ #1.651.3
    sudo docker create --name jenkins-data-volume -v /data/jenkins_home jenkins:1.651.3 /bin/true
    sudo docker-compose up -d
    sudo docker run -v /data/jenkins_home -d -p 8080:8080 -p 50000:50000 jenkins:1.651.3

ENJOY!

Install Linux on a Fresh Machine using LVM!

Okay, I know, the title says it all, right? And there are hundreds and thousands of tutorials out there that allow you to do this, right? Show me one tutorial which tells you what different partitions are needed when you are doing a fresh install of Linux, especially with installers that do not support LVM/LVM2 installations. Let's take the example of Manjaro, the latest talk of the town. The graphical installers do not have an option to support LVM. Even if you do manual partitioning, it does not want to install anything over LVM, and it wants the root and swap partitions. Besides, it will forget that we also need a smaller boot partition to install GRUB/SYSLINUX, right? Well, at least it did that to me, and either I am too stupid to understand it or I didn't read the instructions properly. In any case, I headed over to the CLI installer, which thankfully had an option to use LVM (which was misleading), since it wanted the partitioning done manually and did not provide any guidance as to what partitions should be created. Instead it felt more like a GUI menu reminding you of the steps, namely partitioning, creating physical volumes (PVs) and creating volume groups (VGs).

So without further ado, below are the things that you really need to do to use LVM properly, whether or not the installer supports it. My assumption is that you want to use all the disks in your computer inside the VG for LVM. Assuming two disks, below is the scheme.

/dev/sda should have the following partitions.

1. A boot partition of around 256MB (create a smaller one if you want; it will only be used to install GRUB/SYSLINUX/any other boot loader).  /dev/sda1
2. A swap partition double the amount of RAM that you have. /dev/sda2
3. An LVM partition covering the whole disk. /dev/sda3

/dev/sdb should have:

1. A swap partition double the amount of RAM that you have. /dev/sdb1
2. An LVM partition covering the whole disk. /dev/sdb2

Add as many disks as you need, but be sure to create the two varieties. Use any of fdisk, gdisk, partition managers, etc. to create the needed partitions. Now create the physical volumes (PVs) and volume groups (VGs) that you want. The following commands will help.

lvmdiskscan -> shows the partitions available for creating volumes

pvcreate, e.g. pvcreate /dev/sda3 /dev/sdb2 (for the above example) -> creates physical volumes from the needed partitions

pvdisplay -> lists the partitions added as physical volume(s)

vgcreate, e.g. vgcreate vgpool /dev/sda3 /dev/sdb2 -> creates a volume group with the name vgpool including the listed partitions

vgdisplay -> displays your volume group and included partitions

lvcreate, e.g. lvcreate -l 100%FREE vgpool -n lvhome -> creates a logical volume lvhome on vgpool consuming 100% of the free space in the VG

vgscan and vgchange -ay -> scan and activate the LVs.
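Putting the sequence together, here is a sketch of the whole setup as a script. The device and volume names follow the example scheme above, the swap-size helper just applies the double-your-RAM rule of thumb, and the destructive commands are kept inside a function so nothing is wiped by accident:

```shell
#!/bin/bash
# Sketch of the LVM setup described above. Device names (/dev/sda3,
# /dev/sdb2) and volume names (vgpool, lvhome) follow the example
# partitioning scheme -- adjust them to your own disks.

# Rule of thumb from above: swap should be double the installed RAM.
swap_size_kb() {
  local ram_kb
  ram_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
  echo $((ram_kb * 2))
}

setup_lvm() {
  # WARNING: destructive -- double-check the device names first!
  pvcreate /dev/sda3 /dev/sdb2           # create physical volumes
  vgcreate vgpool /dev/sda3 /dev/sdb2    # group them into one VG
  lvcreate -l 100%FREE vgpool -n lvhome  # one LV over all free space
  vgscan && vgchange -ay                 # scan and activate the LVs
}

echo "Suggested swap size: $(swap_size_kb) kB"
# Run setup_lvm manually once you have verified the devices.
```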

At this point you should be able to use your LVM for installing. In the installer, go to manual partitioning and select /dev/sda1 as the boot partition and /dev/sda2 as the swap, with /dev/vgpool/lvhome mounted as root. Hopefully this will spare you the number of formats and partition restructurings that I had to do. Enjoy your new Linux using LVM on this machine. PS: The latest kernels support LVM, but if not, run modprobe dm-mod and install lvm2 before doing all of the above.

Metrics & task boards in Scrum/Agile!

My thoughts on why and how metrics/measurements of ongoing tasks should be done.

I will start with the practice. The problems we are trying to tackle are:

1. How to make our team believe in their own estimations?
2. What is our cycle time?
3. How do we project the probability of fulfilling the sprint goals?
4. How do we track the state of the task?
5. How do we ensure continuous development of processes?

For the last three questions, I suggest referring to the self-explanatory Cumulative Flow Diagrams (http://www.slideshare.net/yyeret/explaining-cumulative-flow-diagrams-cfd). Martin Alaimo writes on measuring sprint progress in the Scrum Alliance community blogs (https://www.scrumalliance.org/community/articles/2011/may/measuring-sprint-progress). Essential Scrum: A Practical Guide to the Most Popular Agile Process by Kenneth S. Rubin (pp. 357-359) describes how task metrics can be visualized, though in a table format (https://books.google.se/books?id=3vGEcOfCkdwC&pg=PA357&lpg=PA357&dq=visualize+tasks+in+scrum+boards&source=bl&ots=-BBbkkfr_l&sig=KqO_9xWDIEM3hVqe-9QSi0IQKQQ&hl=en&sa=X&ved=0ahUKEwjZk9ztxurKAhWFs3IKHYioBLE4FBDoAQg9MAU#v=onepage&q=visualize%20tasks%20in%20scrum%20boards&f=false). There is an electronic task board showing progress at (https://www.targetprocess.com/content/uploads/2013/11/lists-sketch-for-Targetprocess-3.png), and a detailed article by Microsoft on the task board (of course tailored towards VS Team edition usage, but with lots of details) (https://msdn.microsoft.com/en-us/library/vs/alm/work/scrum/task-board). Another article (https://blog.taiga.io/q-id-like-to-measure-the-sprint-progress-through-closed-tasks.html) is very good on why sprint progress should be monitored regularly and not at the end of the sprint.

I know we all hate RallyDev, but there is still a nice article at (https://help.rallydev.com/task-board). There is also an MSDN article on the Scrum process workflow (https://msdn.microsoft.com/en-us/library/vs/alm/work/guidance/scrum-process-workflow), and a good read on effective visual management of Scrum/task boards with information radiators (https://agilepearls.wordpress.com/tag/scrum-boards/).

For this sprint, I will put information on my tasks which will help visualize the progress, track the state of each task, and project whether we can meet the sprint goals. It will also help in the retrospective and in the next sprint planning with estimation.