Posts

Showing posts from May, 2019

How a Neural Network is like Munna Bhai & Classifying the Largest Number

It took me a good week or so after going through the first 3 videos of the FastAI course for it to click in my head as to what was going on in a neural network. And now, the basics feel almost foolishly simple. On the side, over the past few weeks, I've been doing algorithmic and data structure problems for fun*, and I thought it might be fun to see if a neural network could solve some of these problems. And so, I started out with trying to predict the largest contiguous sum in an array. To define it better, I'd make sure the array only had 100 numbers. But 10 numbers are easier for me to see visually, and so the array would only have 10 numbers. Shortly after, I realized that debugging this would take me too much time, so I had to think of an easier problem. Okay, how about finding the largest number in a list? But a regression problem is most likely going to give arbitrary numbers, and I'll have to use silly metrics like RMSE which don't make too much sense in a ...
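A rough sketch of the kind of classifier described here (the network shape, data sizes, and training details are assumptions for illustration, not the post's actual code): generate lists of 10 random numbers and train a small network to predict which index holds the largest one.

```python
# Minimal sketch (assumed details, not the post's code): classify which
# index of a 10-number list holds the largest number.
import torch
import torch.nn as nn

LIST_LEN = 10

# Random lists; the label is the index of the largest number in each list.
x = torch.rand(5000, LIST_LEN)
y = x.argmax(dim=1)

model = nn.Sequential(
    nn.Linear(LIST_LEN, 32),
    nn.ReLU(),
    nn.Linear(32, LIST_LEN),  # one logit per candidate "largest" position
)
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for epoch in range(300):
    opt.zero_grad()
    loss = loss_fn(model(x), y)  # how far the predictions are from the true indices
    loss.backward()
    opt.step()

print("training accuracy:", (model(x).argmax(dim=1) == y).float().mean().item())
```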

What 3 Weeks of Deep Learning Have Taught Me

I've almost completed the FastAI course for deep learning, and here's a list of things I've learnt: Deep learning is far from magic. Until now, I was convinced there was more to deep learning than just matrix multiplications and 11th-grade math; three weeks haven't shown me any sign of it. It's hard, or it feels like magic, because humans struggle to visualize beyond 3D space. And so, for my first project, I made a simple classifier that predicts the maximum number from a list of 2 numbers. This can be visualized in 3D. More on why I did this in another blog post, but in 3 lines here's what I learnt: [activation(input x weights + bias)] many times = output; loss = how off the output is from what it should have been; differentiate the loss with respect to the weights, and keep fixing the weights until you're happy. The truth is, you'll never re...
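Spelled out in code (an illustrative sketch with assumed details, not the post's own code), that 3-line summary on the "largest of 2 numbers" task looks roughly like this:

```python
# Sketch of the 3-line summary: layers = activation(input x weights + bias),
# loss = how off the output is, differentiate, keep fixing the weights.
import torch
import torch.nn.functional as F

x = torch.rand(1000, 2)                        # inputs: lists of 2 numbers
y = x.argmax(dim=1)                            # targets: index of the larger number

w1 = torch.randn(2, 8, requires_grad=True)     # layer 1 weights
b1 = torch.zeros(8, requires_grad=True)        # layer 1 bias
w2 = torch.randn(8, 2, requires_grad=True)     # layer 2 weights
b2 = torch.zeros(2, requires_grad=True)        # layer 2 bias
lr = 0.5

for step in range(500):
    hidden = torch.relu(x @ w1 + b1)           # activation(input x weights + bias)...
    out = hidden @ w2 + b2                     # ...applied again = output
    loss = F.cross_entropy(out, y)             # loss = how off the output is
    loss.backward()                            # differentiate loss w.r.t. the weights
    with torch.no_grad():                      # keep fixing the weights
        for p in (w1, b1, w2, b2):
            p -= lr * p.grad
            p.grad.zero_()

preds = (torch.relu(x @ w1 + b1) @ w2 + b2).argmax(dim=1)
print("training accuracy:", (preds == y).float().mean().item())
```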