
DIY remote controlled pan & tilt CCTV using Raspberry Pi

RPi CCTV in action:

A simple web interface to control the RPi CCTV:

What you need to build this (hardware):

  1. Raspberry Pi (preferably a Raspberry Pi 3 for its on-board wireless, or a Raspberry Pi Zero W, although you can achieve a similar result without much effort using other boards)
  2. A pan & tilt frame; I used one available here on Amazon –
  3. Two servo motors. I used SG90 micro servos since I had them handy, but other similar ones will also work

I have recently started liking Go quite a bit, so I decided to make use of it for a small IoT project that I've been meaning to do for a long time. In my search I came across Gobot, a framework for building robots written in Go. It comes with support for a lot of development boards on the market, such as the Raspberry Pi, BeagleBone, Arduino, and several more. Not only that, it also has drivers available for a wide variety of devices, which can be used to create all sorts of nifty robots.

Here is the repository with all the source code:

NOTE: In order to generate PWM (Pulse Width Modulation) signals while using Gobot on a Raspberry Pi, you will need to install pi-blaster. This was required in my project since servo motors are controlled using PWM signals.

The best thing about Gobot is that with very little code I was able to expose the entire hardware (the two servo motors) via a JSON API. This meant that I could easily create user interfaces for the web or mobile, and by consuming the API I could control my CCTV camera remotely over WiFi!
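The original source isn't shown in this archive, but the setup described above can be sketched roughly as follows. This is a hedged reconstruction, not the project's actual code: the pin numbers ("11", "13") and the robot/driver names are my own assumptions, and it assumes the gobot.io/x/gobot packages with the pi-blaster-backed raspi adaptor.

```go
package main

// Rough sketch (not the original source) of exposing two servos through
// Gobot's JSON API. Pin numbers and names here are assumptions.

import (
	"gobot.io/x/gobot"
	"gobot.io/x/gobot/api"
	"gobot.io/x/gobot/drivers/gpio"
	"gobot.io/x/gobot/platforms/raspi"
)

func main() {
	master := gobot.NewMaster()
	api.NewAPI(master).Start() // serves the JSON API (port 3000 by default)

	r := raspi.NewAdaptor()
	pan := gpio.NewServoDriver(r, "11")  // pan servo, PWM via pi-blaster
	tilt := gpio.NewServoDriver(r, "13") // tilt servo, PWM via pi-blaster

	work := func() {
		// start with the camera centered
		pan.Center()
		tilt.Center()
	}

	robot := gobot.NewRobot("cctv",
		[]gobot.Connection{r},
		[]gobot.Device{pan, tilt},
		work,
	)
	master.AddRobot(robot)
	master.Start()
}
```

With this running, the servos show up as devices under the JSON API, so a web or mobile UI can drive the camera remotely by calling the API over WiFi. Note this needs pi-blaster and the actual hardware on a Pi, so it is not runnable as-is elsewhere.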


SSH Config Trick

Update 2: As I have since been informed, support for including external files in the ssh config (the Include directive) has recently been added. Refer to the ssh_config documentation for details.
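For reference, with OpenSSH 7.3 or newer the ssh config can include external files directly; paths without a leading / are resolved relative to ~/.ssh:

```
# at the top of ~/.ssh/config
Include configs/*
```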

Update: Modified the script for more flexibility and easier customisation.

I use SSH nearly every day to connect securely to remote servers. To simplify managing remote server configurations and avoid having to remember IP addresses and other server-specific details, I use an ssh config. Those who aren't familiar with it should read –

SSH configs are really neat: they let you give servers logical names, provide per-server configuration (for instance, if you use different ssh keys for different servers), and ultimately make using ssh hassle-free, without having to remember server-specific details each time you connect. It saves a lot of time!

I keep all my system configurations, along with my ssh config, in source control so that I can set up new systems very quickly (I have had to do that on several occasions). Previously I symlinked my ssh config into the ~/.ssh directory to set up the base config with my personal server details. However, since I work as a consultant, I often have to modify it to add confidential server configurations for the clients I work with.

The problem is that this often leaves me with uncommitted changes, which really annoyed me. I had been looking for a way to split my ssh config across multiple files, keeping confidential information in separate files without requiring any changes to my base config. Unfortunately, at the time there was no support for including / importing / referencing external files.

I came up with this simple solution:

Instead of symlinking, I created a simple shell script to generate my ssh config. Here is what it looks like:
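The script itself didn't survive in this archive. Given the description, a minimal sketch of what it plausibly looked like (the file locations and function name are my assumptions, not the original):

```shell
#!/bin/sh
# Hypothetical sketch (the original script is not shown in this archive):
# rebuild ~/.ssh/config by concatenating a base config with per-client files.

generate_ssh_config() {
  base="$1"        # base config, kept in source control
  configs_dir="$2" # directory holding client-specific config files
  out="$3"         # destination, normally ~/.ssh/config

  cat "$base" "$configs_dir"/* > "$out"
  chmod 600 "$out" # keep the generated config private
}

# Regenerate only if the pieces exist
if [ -f "$HOME/.ssh/config.base" ] && [ -d "$HOME/.ssh/configs" ]; then
  generate_ssh_config "$HOME/.ssh/config.base" "$HOME/.ssh/configs" "$HOME/.ssh/config"
fi
```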

Now all I do is create separate files in the ~/.ssh/configs directory to store the ssh configs of different clients, and simply re-run the above script to generate a freshly concatenated ssh config.

This way my base ssh config remains unchanged, and I can safely create multiple client-specific ssh config files in the ~/.ssh/configs directory without worrying about accidentally committing them and leaking them publicly.


Text Editor Choice & Peer Pressure

I have been a Vim user for quite a long time (~7 years). I am a Vim plugin author, and I also like to believe I have helped a lot of people switch to it and grok the core philosophy that, in my opinion, distinguishes it from all other editors and gives it an edge.

However, in this article I am not here to convince you to choose Vim as your text editor. On the contrary, I am going to suggest the opposite, especially if your only motivation to switch is some form of peer pressure.

I will confess, I have been in that camp in the past, where I would try to 'convert' other developers whom I saw struggling (a matter of perception) to do very basic tasks in other editors. Although my intentions were always to help them improve their development setup and productivity, I have come to realise that this is wrong, and perhaps even immoral.

When it comes to text editors, there is always a learning curve involved. The learning curve varies amongst different editors. For most modern editors this learning curve is often very low, however for more complex editors like VIM it is quite steep.

For a lot of people, this isn't worthwhile, and for good reason. There are far more important things than what text editor we use on a daily basis, be it for programming or otherwise. Moreover, if you are comfortable with your current choice and it does not hinder your thought or development process, you shouldn't have to think about switching.


Template with Context using ERB

Recently, I needed to build a template rendering component that renders within a dynamic context, i.e. besides what is passed in directly, methods and variables defined in a predefined context are also available during rendering.

For simplicity we chose ERB, since ERB templates can be placed within locale files for internationalisation, and designers can edit them with ease if the need arises.

Following is a very simple implementation. Since we extend OpenStruct, we can simply pass a hash to it during instantiation, and all the keys become accessible as methods, exposed directly during rendering.

The benefit of creating a separate abstraction is that it creates a sandbox environment for the template processing. This allows us to have more control over what gets exposed during processing and prevents accidental leakages into the context.

The main method here is the render method, where we utilise the ERB library. We create an ERB instance with the supplied template string and then call result, to which we supply the current execution context using the current binding.

The nifty trick here is the method_missing definition. Here we override the default definition, which OpenStruct uses behind the scenes to access the seed hash's keys as methods. We check for the key's existence and serve it from self (the hash) when available; otherwise we delegate the method to the special object stored at the :_object key, which serves as our 'dynamic context'.

The usage would look something like this:
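The code embeds didn't survive in this archive. Below is a hedged reconstruction of both the implementation and its usage based on the description above; the class name TemplateContext and the Helpers object are my own inventions, not the original code:

```ruby
require 'erb'
require 'ostruct'

# Hypothetical reconstruction of the described template context: seed keys
# become methods, and unknown methods are delegated to the object stored
# under the :_object key (the "dynamic context").
class TemplateContext < OpenStruct
  def render(template)
    # Evaluate the template within this object's binding, so the template
    # sees both the seed keys and the dynamic context as plain method calls.
    ERB.new(template).result(binding)
  end

  def method_missing(name, *args, &block)
    if @table.key?(name.to_sym)
      @table[name.to_sym]
    else
      @table[:_object].public_send(name, *args, &block)
    end
  end

  def respond_to_missing?(name, include_private = false)
    @table.key?(name.to_sym) || @table[:_object].respond_to?(name) || super
  end
end

# Usage: :_object supplies extra helper methods during rendering.
class Helpers
  def shout(str)
    str.upcase
  end
end

context = TemplateContext.new(name: 'World', _object: Helpers.new)
puts context.render("Hello <%= name %>! <%= shout('hi') %>")
# => Hello World! HI
```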

This gives us the flexibility of rendering the given template in any context, and hence the templates can be designed flexibly. Since we built this as a rubygem, the host application can build its own context with custom functions / attributes to be used during rendering.


Slack old school style

Slack has gained a lot of popularity in recent times as a tool for inter-team communication and for the right reasons. It offers a lot of features such as 3rd party software integrations, rich content communication, collaboration, bots etc. which when put together help create a rich collaborative user experience for teams to work together, especially when they are not co-located, which is fairly common these days.

If you're anything like me and have been around long enough to know what IRC is, then when you look at the various elements slack provides (channels, accounts / servers, private messaging, etc.), you can see that it draws a lot of parallels and inspiration from that old-school communication tool.

However, slack's desktop application, built using Electron, although very feature-rich, is more memory hungry than I'd like. I have also experienced extreme slowdowns / hangs at times. For that reason, I have always been on the lookout for a way to combine the rich user experience of slack with the simplicity of IRC, without compromising much of the goodness the app provides.

Slack has always offered an IRC gateway, which a team admin can enable so that you can connect with any regular IRC client. However, that means compromising on a lot of features, some very basic, such as editing your own messages; and since it needs to be enabled explicitly by an admin, it's not always available.

Recently I came across wee-slack, a slack client built as a weechat plugin.

Since I was already using weechat as my IRC client (still do), this was perfect for me. After using it for a significant amount of time, I can say it is superb. It gives me the best of both worlds without compromise. It uses slack's APIs, so it does not require the IRC gateway feature to be enabled by an admin; moreover, it gives a very text-friendly interface to leverage slack's features from within weechat. I would highly recommend it over the slack app, since it is extremely lightweight yet powerful.


WebRTC Broadcast

WebRTC is fun! I recently gave a quick session on WebRTC at Bangalore Ruby User Group (BRUG). Following are the slides of my presentation.

For the demo, I created an application in Elixir using the Phoenix framework, because it makes dealing with websockets really easy. It is used to create the custom signaling service required to facilitate such a broadcast. The source code is available at dhruvasagar/webrtc-broadcast.

In a nutshell, for the purposes of the demo, each 'host' (anyone who accesses the home page at '/') sends out a 'stream-ready' event notifying that its stream is ready to be shared, and that is communicated to all other hosts. This lets them initiate the basic WebRTC signaling required to learn about each other's audio / video capabilities via the offer / answer mechanism. Once that completes, they finally receive each other's streams, which are then added to their views dynamically.

For listeners (who simply receive streams without sharing their own, and access the site at '/radio'), the process starts by sending out a 'stream-request', which the signaling server forwards to all hosts. Upon receiving the 'stream-request', each host initiates the WebRTC signaling process to communicate its stream to the listener (by means of offer / answer). Once the connection is established, the listener receives the streams of all the hosts and adds them dynamically to its view.
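The fan-out described above is easiest to see as code. Below is a toy, in-memory model of just the signaling routing (the real demo uses Phoenix channels over websockets, and the actual WebRTC offer / answer exchange is omitted); all names here are illustrative, not taken from the project:

```javascript
// Toy model of the broadcast signaling described above. The handlers stand
// in for connected websocket clients; names are illustrative only.
class SignalingServer {
  constructor() {
    this.hosts = new Map();     // host id -> message handler
    this.listeners = new Map(); // listener id -> message handler
  }

  addHost(id, handler) { this.hosts.set(id, handler); }
  addListener(id, handler) { this.listeners.set(id, handler); }

  // A host announces its stream: every *other* host is told, so they can
  // start the offer / answer exchange with it.
  streamReady(fromId) {
    for (const [id, handler] of this.hosts) {
      if (id !== fromId) handler({ type: 'stream-ready', from: fromId });
    }
  }

  // A listener asks for streams: the request is forwarded to all hosts,
  // which would each respond by sending the listener an offer.
  streamRequest(fromId) {
    for (const [, handler] of this.hosts) {
      handler({ type: 'stream-request', from: fromId });
    }
  }
}

// Two hosts and one listener
const server = new SignalingServer();
const inbox = { a: [], b: [] };
server.addHost('a', msg => inbox.a.push(msg));
server.addHost('b', msg => inbox.b.push(msg));
server.addListener('radio1', () => {});

server.streamReady('a');        // only host b is notified
server.streamRequest('radio1'); // both hosts are asked for their streams
```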

I wanted to demonstrate that once you understand how to facilitate an offer / answer exchange to establish the initial connection between two peers using RTCPeerConnection, the rest of the signaling can be customised per situation to facilitate any kind of complex multi-peer communication.

NOTE: In this demo, a broadcast means each host shares its stream directly with every listener. So as the number of listeners (or hosts) grows, the number of peer connections required grows rapidly. To achieve a scalable peer-to-peer broadcast service, it may instead be better to have each listener also act as a proxy host, sharing the streams of other hosts onward to other listeners, reducing both the connection overhead on the hosts and their bandwidth requirements.


Pristine – ZSH Theme

A while back I posted about Amuse, my prompt theme for ZSH targeting oh-my-zsh. Since that theme was much appreciated, I would like to share another theme I have created: Pristine.

However, I recently switched to Prezto, because it felt like a more minimalistic configuration framework for ZSH and I wanted faster load times. I must say my experience has been quite good and I really like it. Obviously there are pros and cons, but I digress.

Needless to say, Pristine targets Prezto. It is influenced by, and has a lot of similarities with, Amuse. However, there are subtle changes that I feel make it cleaner and simpler. These are the highlights of Pristine:

  1. It tells you which branch you are on if the current working directory is a git repository.
  2. It indicates the git status of the repository to highlight whether there are any changes by using a green tick (✔) for no changes or a red cross (✗) for uncommitted changes.
  3. It keeps the prompt where you type on a new line with a preceding $ sign, to make more space for typing commands.
  4. It displays both ruby & node versions in use currently on the right. I find this very useful since I often work with both.
  5. It also modifies the spell correction prompt offered by ZSH to highlight the spelling error more prominently.



Google open sources its machine learning system

I am a huge proponent of open source technologies, and having worked predominantly with various web technologies over the course of my career, I work almost exclusively with open source stacks.

In the recent past I have been very keen to wrap my head around Artificial Intelligence, and more specifically Machine Learning. Although I am still barely scratching the surface of the field's nitty-gritty, especially the mathematical underpinnings, this is still a huge deal!

Google has open sourced its second generation machine learning system, TensorFlow!

It is a production-ready library with support for running numerical computations using data flow graphs on multiple CPUs, GPUs, or mobile devices! Data flow graphs, as explained here, represent mathematical computations as directed graphs, where each node corresponds to a numerical computation and the edges connecting the nodes represent the data communicated between them.
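The idea is easier to see with a toy example. The snippet below is not TensorFlow; it is a minimal illustration of the data flow graph concept, where each node performs a computation and the edges carry values between nodes:

```python
# A toy data flow graph (not TensorFlow): nodes are computations,
# edges are the values flowing between them.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # the computation this node performs
        self.inputs = inputs  # upstream nodes feeding it (incoming edges)

    def eval(self):
        # Pull values along the incoming edges, then apply this node's op
        return self.op(*(node.eval() for node in self.inputs))

# Build the graph for (a + b) * c
a = Node(lambda: 2.0)
b = Node(lambda: 3.0)
c = Node(lambda: 4.0)
added = Node(lambda x, y: x + y, a, b)
result = Node(lambda x, y: x * y, added, c)

print(result.eval())  # 20.0
```

Real systems like TensorFlow add to this skeleton things like automatic differentiation and placement of nodes across CPUs and GPUs, but the graph-of-computations structure is the same.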

The documentation looks really impressive, with lots of examples. I am really keen on trying something out with this in the near future.

Considering that it is open source, I strongly believe it will give rise to a lot of innovative and creative applications, and from the looks of it, drastically lower the entry barrier for applying machine learning to real-life problems and building intelligent systems!

The fact that it is also portable to mobile devices sounds like a game changer that should drive a lot of adoption within the machine learning community, and we should see many interesting applications leveraging mobile devices, which are packed with sensors collecting all kinds of data from our day-to-day lives!

Source: TensorFlow


Find huge files in Linux

Switching to SSDs for better performance while compromising on disk space can be a tough decision to make. Though nowadays, with SSD prices going down, it is something a lot of people are consciously doing.

However, when you switch from a 500GB (or 1TB) hard disk to an, albeit super fast, 128GB SSD, you will inevitably (having been spoilt by ample disk space in the past) run into situations where you need to clean out large files taking up space on your disk. I ran into this myself about a year back when I made the switch on my MacBook. Oddly enough, the problem turned out to be significant enough that I had to find a way to quickly identify the biggest files on my file system, so I could get rid of them or move them to an external hard disk.

Most Linux systems ship with a nifty little utility, du, which displays disk usage statistics. It's very good and fast; however, by itself it didn't quite meet my requirement of seeing the top few biggest files. So I combined it with a few other common utilities and came up with this little shell script I call findhuge, which, as the name suggests, helps me find huge files:
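The script embed didn't survive in this archive. Based on the description that follows, here is a plausible reconstruction (the variable names _COUNT / _TYPE / _DIR and their defaults come from the text; the exact wiring is my assumption, and human-readable sorting relies on GNU-style `sort -h`):

```shell
#!/bin/sh
# Hypothetical reconstruction of findhuge based on the post's description.
# Override _COUNT, _TYPE, or _DIR in the environment to change behaviour.

_COUNT="${_COUNT:-10}" # number of results to show
_TYPE="${_TYPE:-f}"    # find -type argument ('f' = regular files)
_DIR="${_DIR:-.}"      # directory to search

# find entries recursively, size them with du, sort biggest first, keep the top few
find "$_DIR" -type "$_TYPE" -exec du -h {} + 2>/dev/null | sort -rh | head -n "$_COUNT"
```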

This script lists the top _COUNT results (defaults to 10) of type _TYPE (defaults to 'f', for files) in _DIR (defaults to the current directory), which gives me exactly what I need.

The shell script uses find to locate files recursively in a directory, passes them through du to get their sizes in a human-readable format, then through sort to order the results by size in reverse, and finally head to show only the top results from that list. Neat!


Google Code Jam 2014 Qualifiers

Google Code Jam has always been a competition I cherish, simply because the problems they post are brilliant and the closest to reality compared to those of other competitions. They're always fun to read, dissect, and solve; more often than not the solutions are reasonably trivial (especially in the qualifiers), yet extremely subtle and deceptive.

Today I want to share my solutions to the 3 problems I was able to solve. The code should pretty much speak for itself, because none of it is very complex.

  1. Magic Trick

    This one was the easiest of all, more of a warmup I suppose. Here's the solution:

  2. Cookie Clicker Alpha

    Based on the Cookie Clicker game developed by Orteil, this one was slightly trickier and required a little bit of math. I solved it with a recursive algorithm first, then optimized it into an iterative one to pass the large input. Here's my solution with both recursive and iterative parts:

  3. Deceitful War

    This one was really subtle: easy enough to understand, but hard to implement. It took me a while to work out how much being deceitful at the game called War translated into points. Here too, I solved both the normal game (War) and Deceitful War recursively first (it's just easier to think recursively, thanks to Lisp), then translated those into their iterative counterparts. Here is my solution with both recursive and iterative parts:
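The solution embeds didn't make it into this archive. As a substitute, here are compact Python sketches of the standard approaches to the first two problems; these are not the author's original solutions, just illustrations of the algorithms described:

```python
# Standard approaches to problems 1 and 2 (not the original solutions).

def magic_trick(grid1, row1, grid2, row2):
    """Problem 1: the card must lie in both chosen rows, so intersect
    the chosen rows of the two 4x4 arrangements (rows are 1-indexed)."""
    common = set(grid1[row1 - 1]) & set(grid2[row2 - 1])
    if len(common) == 1:
        return str(common.pop())
    return "Bad magician!" if len(common) > 1 else "Volunteer cheated!"

def cookie_clicker(c, f, x):
    """Problem 2 (iterative): c = farm cost, f = extra cookies/sec per
    farm, x = goal; the base rate is 2 cookies/sec. Buy another farm only
    while saving for it and finishing at the faster rate beats finishing
    at the current rate."""
    rate, elapsed = 2.0, 0.0
    while c / rate + x / (rate + f) < x / rate:
        elapsed += c / rate  # time spent saving up for the next farm
        rate += f
    return elapsed + x / rate
```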
