A couple of months ago I wrote this post, which predicted: "Current techniques for Machine Learning are not going to produce human intelligence." That prediction was made after week three of the Intro to Machine Learning course on Coursera I was taking. Well, I finished the course last week, and in the final video, Professor Ng
Tag: neural networks
A few weeks ago I wrote a post about how tricky it is to understand exactly what is going on inside a neural network. By the time I hit “post”, I had edited out the part that said I understood the principle of how they worked, and so it looked like I just
I remember when, at school, they started wording problems in ways that tried to match the real world. Fill-in-the-blank questions like “6 – 2 – 2 = ?” were replaced with: “Billy has 6 sweets and gives 2 each to Bobby and Jill. How many sweets does Billy have left?” The same,
I finished week 5 of Machine Learning last week – it was part two of the introduction to neural networks. Although it starts with a visual representation of how the neurons in the brain work, course leader Professor Andrew Ng didn’t suggest this is really how the brain works. I’m no neuroscientist, but