Dora's world

A peek into my life

KiM - How to Write First Class Paper

These are some points to keep in mind (KiM) from the article "How to write a first-class paper", published in Nature. Keep your message clear: the most important information should be in the main text, and to avoid distraction, writers should put additional data in the supplementary material. Writers should put their results into a global context to demonstrate what makes those results significant or original. In the conclusion, include a one- or two-sentence statement on the research you plan to do in the future and on what else needs to be explored.

TL;DR - What Google Learned From Its Quest to Build the Perfect Team

A TL;DR of the article "What Google Learned From Its Quest to Build the Perfect Team": researchers from CMU, MIT and Union College identified two characteristics required for creating a good team. On the good teams, members spoke in roughly the same proportion, a phenomenon the researchers referred to as "equality in distribution of conversational turn-taking". On some teams, everyone spoke during each task; on others, leadership shifted among teammates from assignment to assignment.

2017 - Year Review

I spent the last day of 2017 sitting on the main road of Gangtok, calling and talking with the people who made my year one of the most memorable ones. I will remember 2017 as the year when I published my first research paper, received an invitation to speak at my first international conference, created a 2000+ strong AI community in Mumbai, took my first road trip, and left my first job.

A Glimpse of Re-Work Deep Learning Summit - Singapore, 2017

In the last week of April 2017, I attended the Re-Work Deep Learning Summit in Singapore, representing my team and speaking about some of the work we have been doing in this field. The conference had speakers from Google DeepMind, Baidu's Silicon Valley AI Lab, and Facebook, along with several startups and investor firms. In this post, I will introduce all the impressive ideas I learned about at the conference.

Memories and Learnings from my 1st year in IIT

This is a letter I wrote in my senior year to one of my juniors, describing my first year and what I learned from it. These are my views; I want you to comment and for us to discuss them, as that would enrich both of us. You know most of the things about my freshie year, so I won't go into things you already know; fill those in from your own memory. When I came into IIT, I was a completely ignorant child with just one thing in mind: "IITians God hote hai" (IITians are gods).

Tutorials in NIPS 2016

This is a collection of all the material I could find for the various tutorials of NIPS 2016, for people who were not able to attend and are excited to know what's going on without waiting for the official videos and slides.
Crowdsourcing: Beyond Label Generation (Link)
Variational Inference: Foundations and Modern Methods (Slides)
Deep Reinforcement Learning Through Policy Optimization (Slides)
Nuts and Bolts of Building Applications using Deep Learning, by Andrew Ng (Slides; a video of Andrew Ng giving the same talk elsewhere is here)

Deep Learning and AI videos

I like to have something playing in the background while I work. Music is pleasant, but sometimes it is also a distraction. At such times I play some good videos in the background; they help me gain new knowledge without distracting me from work. I have compiled a small list of such deep learning videos that I have listened to and loved so far.

The End of Deconvolutions

Deconvolutions were introduced in 2014 in "Fully Convolutional Networks for Semantic Segmentation" and have been used extensively in semantic segmentation and generative adversarial networks. But the technique has now saturated, and its problems, including checkerboard artifacts, account for a large share of the errors it produces. This blog post traces the journey of deconvolutions and the problems associated with them. It also suggests some solutions and shows how deconvolutions can be replaced by better alternatives such as subpixel-cnn.
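The core idea behind the subpixel-CNN alternative is the "pixel shuffle" step: instead of a strided deconvolution, an ordinary convolution produces C*r*r channels, which are then rearranged into a C-channel output that is r times larger in each spatial dimension. A minimal NumPy sketch of that rearrangement (the function name and shapes here are illustrative, not from any particular library):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) array into (C, H*r, W*r).

    Each group of r*r channels at spatial location (h, w) becomes
    an r-by-r patch in the upscaled output, so no interpolation or
    strided deconvolution is needed.
    """
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r*r"
    c = c2 // (r * r)
    # Split the channel axis into (c, r, r), then interleave the two
    # r-axes with the spatial axes: (c, h, r, w, r).
    x = x.reshape(c, r, r, h, w).transpose(0, 3, 1, 4, 2)
    return x.reshape(c, h * r, w * r)

# Example: 4 channels, 2x2 spatial, upscale factor 2 -> 1 channel, 4x4.
x = np.arange(16).reshape(4, 2, 2)
out = pixel_shuffle(x, 2)
print(out.shape)  # (1, 4, 4)
```

Because every output pixel is written by exactly one input channel, the overlapping-stride pattern that causes checkerboard artifacts in deconvolutions never arises.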

Hugo Larochelle TEDx

This TED talk takes you through the journey of deep learning over the last ten years, and it's amazing to see how the field has evolved: from a time when neural nets were not trusted and only a handful of people were working on them, to today, when there are so many people that you can find four research groups working on a similar idea by the end of the day. Pretty insightful and interesting, and in a way it shows how a new technology comes into play; we should keep looking for small kicks, as they may be the thing of the future.

Cool NN and DL applications

Deep learning and neural networks have some really cool applications (and ideas), and here are a few I have come across that are quite tangential to mainstream applications. Defeating Image Obfuscation with Deep Learning: this paper from Cornell and the University of Texas at Austin uses deep learning to reconstruct images that have been pixelated, blurred, or encrypted with the privacy-preserving photo (P3) algorithm. These techniques are often used to hide identities in sensitive videos and photos in the media.