
Backpropagation

How do neural networks learn?

Turns out, at a conceptual level, it’s gradient descent: backpropagation is just an efficient way to compute the gradients that gradient descent needs.

We take the difference between a network’s predictions and the “target” values, and use it to produce a cost function.
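A minimal sketch of such a cost function, using mean squared error (one common choice; the values below are hypothetical):

```python
import numpy as np

def mse_cost(predictions, targets):
    # Mean squared error: the average squared difference between
    # the network's predictions and the target values.
    return np.mean((predictions - targets) ** 2)

# Hypothetical predictions and targets for three examples
preds = np.array([0.8, 0.2, 0.6])
targets = np.array([1.0, 0.0, 1.0])

cost = mse_cost(preds, targets)  # → 0.08
```

The closer the predictions get to the targets, the smaller this number becomes, which is exactly what makes it useful to minimise.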

Then, by minimising that cost function with respect to the weights and biases, we learn how to tweak the linear regression models within our neural network.
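To make that concrete, here is a toy sketch of the idea for a single linear unit (one weight, one bias) trained by gradient descent on mean squared error. The data, learning rate, and step count are all assumptions for illustration:

```python
import numpy as np

# Toy data drawn from the assumed relationship y = 2x + 1
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 3.0, 5.0, 7.0])

w, b = 0.0, 0.0   # start with arbitrary weight and bias
lr = 0.05         # learning rate (assumed)

for _ in range(2000):
    pred = w * x + b
    error = pred - y
    # Gradients of the mean squared error with respect to w and b
    grad_w = 2 * np.mean(error * x)
    grad_b = 2 * np.mean(error)
    # Step downhill: nudge each parameter against its gradient
    w -= lr * grad_w
    b -= lr * grad_b

# After training, w ≈ 2 and b ≈ 1, recovering the underlying line.
```

A real network repeats this same update for every weight and bias across many layers; backpropagation is what supplies those gradients efficiently.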

Congratulations. You now know the basics of machine learning!

Which raises the question: what's next?