The slope of a line is called its gradient.
We can find the gradient at any given point on a graph (2D or otherwise), as long as the curve is smooth enough to have a slope there. And by knowing these slopes, we know which way the line/curve is heading at each point, and hence which direction to move in to bring its value down, closer to zero.
This is the essence of gradient descent.
Our goal here is to minimise the inaccuracy of our regression model by bringing the value of the cost function as close to zero as possible.
Consider a graph of the cost function plotted against the weight and/or bias.
We start at any point on the cost function’s graph, and follow the gradient down (hence “descent”) to the lowest point.
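In symbols, each of these downhill steps is usually written as an update rule, where α is a small step size (the "learning rate") that we choose and J is the cost:

w ← w − α · ∂J/∂w   (and likewise for the bias b)

Repeating this update over and over walks the weight towards the bottom of the curve.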
The value of the weight/bias that corresponds to this minimum tells us how to adjust the line that we draw over our datapoints, thus making our linear regression more accurate.
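To make the idea concrete, here is a minimal sketch of the procedure in Python for a single weight and bias, using a mean squared error cost. The toy data, learning rate and number of steps below are made-up values chosen purely for illustration:

```python
# Minimal gradient descent sketch for simple linear regression (one weight, one bias).
# Toy data: y is roughly 2*x + 1 with a little noise (illustrative values only).
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.1, 8.8]

w, b = 0.0, 0.0          # start anywhere on the cost surface
learning_rate = 0.01     # step size for each move downhill
n = len(xs)

for step in range(2000):
    # Errors of the current line against the data points.
    errors = [(w * x + b) - y for x, y in zip(xs, ys)]

    # Gradients of the mean squared error cost with respect to w and b.
    dw = (2 / n) * sum(e * x for e, x in zip(errors, xs))
    db = (2 / n) * sum(errors)

    # Step against the gradient, i.e. downhill on the cost surface.
    w -= learning_rate * dw
    b -= learning_rate * db

cost = sum(((w * x + b) - y) ** 2 for x, y in zip(xs, ys)) / n
print(f"w = {w:.2f}, b = {b:.2f}, cost = {cost:.4f}")
```

Running this should recover a line close to y = 2x + 1 for the toy data above, with the cost shrinking towards zero as the steps go on.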
Interesting, right?
And funnily enough, it has direct relevance to neurology.