How linear regression works intuitively and how it leads to gradient descent
35 comments
· May 5, 2025
easygenes
This is very light and approachable, but it stops short of building the statistical intuition you want here. They fixate on the smoothness of squared errors without connecting that to the Gaussian noise model and showing how that connection bears on predictive power for the kinds of data you actually encounter.
akst
It isn't too hard to find resources on this for anyone genuinely looking to get a deeper understanding of a topic. I think a blog post (likely written for SEO purposes, which is in no way a knock against the content) is probably the wrong place for that kind of enlightenment, but I also think there are limits to the level of detail you can reasonably expect from a high-level blog post.
And for introductory content there's always the risk that if you provide too much information you overwhelm the reader and make them feel like maybe this is too hard for them.
Personally I find the process of building a model is a great way of learning all this.
I think a course is probably helpful, but the problem with things like DataCamp is that they are overly repetitive and they don't do a great job of helping you look up earlier content, unless you want to scroll through a bunch of videos where the formula is on screen for 5 seconds.
Would definitely just recommend getting a book for that stuff. I found "All of Statistics" good; I just wouldn't recommend trying to read it cover to cover, but I have found it good as a manual where I could look up the bits I needed when I needed them. Though the book may be a bit intimidating if you're unfamiliar with integration and derivatives (as it often expresses the PDF/CDF of random variables in those terms).
jfjfjtur
Yes, and it seems like it could’ve been written in part by an LLM. But the LLM could take your criticism, improve upon the original, and iterate that way until you feel that it has produced something close to an optimal textbook. The one thing missing is soul. I noticeably don’t feel like there was anyone behind this writing.
easygenes
Ah, we’re resorting to ad machinum today. :)
BlueUmarell
Any resource/link you know of that further develops your point?
easygenes
CMU lecture notes [0] I think approach it in an intuitive way, starting from the Gaussian noise linear model, deriving log-likelihood, and presenting the analytic approach. Misses the bridge to gradient methods though.
For gradients, Stanford CS229 [1] jumps right into it.
[0] https://www.stat.cmu.edu/~cshalizi/mreg/15/lectures/06/lectu...
[1] https://cs229.stanford.edu/lectures-spring2022/main_notes.pd...
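To sketch the bridge myself (not from either set of notes, just a minimal numpy toy): fit the same Gaussian-noise linear model once with the closed-form normal equations and once with gradient descent on the squared-error loss the likelihood implies.

  import numpy as np

  rng = np.random.default_rng(0)
  n, p = 200, 3
  X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # intercept + features
  true_w = np.array([2.0, -1.0, 0.5])
  y = X @ true_w + rng.normal(scale=0.3, size=n)  # Gaussian noise model

  # Analytic route: maximizing the Gaussian log-likelihood is minimizing
  # ||y - Xw||^2, whose minimizer solves the normal equations.
  w_closed = np.linalg.solve(X.T @ X, X.T @ y)

  # Gradient route: descend the same loss, grad = -(2/n) X^T (y - Xw).
  w = np.zeros(p)
  lr = 0.1
  for _ in range(2000):
      w -= lr * (-2.0 / n) * (X.T @ (y - X @ w))

  print(w_closed, w)  # both land on (essentially) the same estimate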
c7b
One interesting property of least squares regression is that the predictions are the conditional expectation (mean) of the target variable given the right-hand-side variables. So in the OP example, we're predicting the average price of houses of a given size.
The notion of predicting the mean can be extended to other properties of the conditional distribution of the target variable, such as the median or other quantiles [0]. This comes with interesting implications, such as the well-known properties of the median being more robust to outliers than the mean. In fact, the absolute loss function mentioned in the article can be shown to give a conditional median prediction (using the mid-point in case of non-uniqueness). So in the OP example, if the data set is known to contain outliers like properties that have extremely high or low value due to idiosyncratic reasons (e.g. former celebrity homes or contaminated land) then the absolute loss could be a wiser choice than least squares (of course, there are other ways to deal with this as well).
Worth mentioning here I think because the OP seems to be holding a particular grudge against the absolute loss function. It's not perfect, but it has its virtues and some advantages over least squares. It's a trade-off, like so many things.
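A quick numeric illustration of the mean-vs-median point, with made-up numbers (a sketch, not from the article): minimizing squared error over a constant prediction recovers the sample mean, minimizing absolute error recovers the sample median, and only the former gets dragged around by the outlier.

  import numpy as np

  prices = np.array([300.0, 320, 310, 290, 305, 5000])  # one celebrity-home outlier
  grid = np.linspace(prices.min(), prices.max(), 100001)  # candidate constant predictions

  sq_loss = ((prices[None, :] - grid[:, None]) ** 2).mean(axis=1)
  abs_loss = np.abs(prices[None, :] - grid[:, None]).mean(axis=1)

  print(grid[sq_loss.argmin()], prices.mean())       # ~1087.5, pulled up by the outlier
  print(grid[abs_loss.argmin()], np.median(prices))  # ~307.5, barely moves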
easygenes
Yeah. Squared error is optimal when the noise is Gaussian because it estimates the conditional mean; absolute error is optimal under Laplace noise because it estimates the conditional median. If your housing data have a few eight-figure outliers, the heavy tails break the Gaussian assumption, so a full quantile regression for, say, the 90th percentile will predict prices more robustly than plain least squares.
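As a concrete sketch (toy data, with statsmodels chosen only because it has a quantile regression routine built in): fit the conditional mean with OLS and the conditional 90th percentile with quantile regression on heavy-tailed prices, and compare the two lines.

  import numpy as np
  import statsmodels.api as sm

  rng = np.random.default_rng(1)
  size = rng.uniform(50, 250, 500)
  price = 3.0 * size + rng.exponential(scale=40, size=500)  # heavy right tail
  X = sm.add_constant(size)

  mean_fit = sm.OLS(price, X).fit()           # conditional mean
  q90_fit = sm.QuantReg(price, X).fit(q=0.9)  # conditional 90th percentile

  print(mean_fit.params)  # intercept and slope of the mean line
  print(q90_fit.params)   # sits noticeably above it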
jampekka
The main practical reason why square error is minimized in ordinary linear regression is that it has an analytical solution, which makes it a bit of a weird example for gradient descent.
There are plenty of error formulations that give a smooth loss function, and many even give a convex one, but most don't have analytical solutions, so they are solved via numerical optimization like GD.
The main message is IMHO correct though: square error (and its implicit Gaussian noise assumption) is all too often used just out of convenience and tradition.
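To make that concrete (a minimal sketch, not from the article): squared error has a closed-form minimizer via the normal equations, while another smooth convex loss like Huber has no closed form and gets handed to a numerical optimizer instead.

  import numpy as np
  from scipy.optimize import minimize

  rng = np.random.default_rng(0)
  X = np.column_stack([np.ones(100), rng.normal(size=100)])
  y = X @ np.array([1.0, 2.0]) + rng.normal(size=100)

  # Squared error: solve the normal equations directly.
  w_ols = np.linalg.solve(X.T @ X, X.T @ y)

  # Huber loss: smooth and convex, but no closed form, so optimize numerically.
  def huber_loss(w, delta=1.0):
      r = y - X @ w
      return np.where(np.abs(r) <= delta, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta)).sum()

  w_huber = minimize(huber_loss, x0=np.zeros(2)).x
  print(w_ols, w_huber)  # similar here, since the noise really is Gaussian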
jbjbjbjb
I’ve always felt that ML introductions completely butcher OLS. When I was taught it in stats we had to consider the Gauss-Markov conditions and interpret the coefficients, and we would study the residuals. ML introductions just focus on getting good predictions.
orlp
This isn't true. In practice people don't use the analytical solution for efficient linear regression; they use stochastic methods.
Square error is used because minimizing it gives the maximum likelihood estimate under the assumption that observation noise is normally distributed, not because it has an analytical solution.
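Spelling out that equivalence (the standard derivation, nothing specific to this article): with y_i = w·x_i + eps_i and eps_i ~ N(0, sigma^2), the log-likelihood is

  log L(w) = -(n/2) log(2 pi sigma^2) - (1 / (2 sigma^2)) * sum_i (y_i - w·x_i)^2

The first term doesn't depend on w and the second is a negative constant times the squared error, so maximizing the likelihood over w is exactly minimizing sum_i (y_i - w·x_i)^2.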
jampekka
If by stochastic methods you mean something like MCMC, they are increasing in popularity, but still used a lot less than analytical or numerical methods, and almost exclusively for more complicated models than basic linear regression. Sampling methods have major downsides, and approximation methods like ADVI are becoming more popular. Though sampling vs approximations is a bit off topic, as neither usually has a closed-form solution.
Even the most popular more complicated models like multilevel (linear) regression make use of the mathematical convenience of the square error, even though the solutions aren't fully analytical.
Square error indeed gives maximum likelihood estimates for normally distributed noise, but as I said, this assumption is quite often implicit, and not even really well understood by many practitioners.
Analytical solutions for squared errors have a long history in more or less all fields using regression and related models, and there's a lot of inertia behind them. E.g. ANOVA is still the default method (although it is being replaced by multilevel regression) in many fields. This history is mainly due to the analytical convenience from the days when the solutions were computed on paper. That doesn't mean the normality assumption is not often justifiable. And when it isn't directly, the traditional solution is to transform the variables to get (approximately) normally distributed ones for analytical solutions.
em500
AFAIK using the analytic solution for linear regression (via lm in R, statsmodels in Python or any other classical statistical package) is still the norm in traditional disciplines such as the social (economics, psychology, sociology) and physical (bio/chemistry) sciences.
I think that as a field, Machine Learning is the exception rather than the norm, where people start off with or proceed rapidly to non-linear models, huge datasets and (stochastic) gradient-based solvers.
Gaussianity of errors is more of a post-hoc justification (which is often not even tested) for fitting with OLS.
easygenes
OLS is a straightforward way to introduce GD, and although an analytic solution exists it becomes memory and IO bound at sufficient scale, so GD is still a practical option.
jampekka
Computationally OLS is taking the pseudoinverse of the system matrix, which for dense systems has a complexity of O(samples * parameters^2). For some GD implementations the complexity of a single step is probably O(samples * parameters), so there could be an asymptotic benefit, but it's hard to imagine a case where the benefit is even realized, let alone makes a practical difference.
And in any case nobody uses GD for regressions for statistical analysis purposes. In practice Newton-Raphson or other more sophisticated schemes (with much higher computation, memory and IO demands) that have much nicer convergence properties are used.
easygenes
Mini-batch and streaming GD make the benefits obvious and trivial. Closed-form OLS is unbeatable so long as samples * params^2 sits comfortably in memory. You often lose that as soon as your p approaches 10^5, which is common these days. As soon as you need distributed or streaming computation, or your data is too tall and/or too wide, first-order methods are the first port of call.
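A rough sketch of the streaming case (synthetic batches standing in for reads from disk or a distributed store): mini-batch SGD only ever holds one batch of rows in memory, so it keeps working when the full design matrix, let alone samples * params^2, no longer fits.

  import numpy as np

  rng = np.random.default_rng(0)
  p = 1000
  true_w = rng.normal(size=p)

  def stream_batches(n_batches=2000, batch_size=256):
      # Stand-in for streaming rows off disk or across machines.
      for _ in range(n_batches):
          Xb = rng.normal(size=(batch_size, p))
          yb = Xb @ true_w + rng.normal(scale=0.1, size=batch_size)
          yield Xb, yb

  w = np.zeros(p)
  lr = 0.01
  for Xb, yb in stream_batches():
      grad = -(2.0 / len(yb)) * (Xb.T @ (yb - Xb @ w))
      w -= lr * grad

  print(np.abs(w - true_w).max())  # small: coefficients recovered without ever forming X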
brrrrrm
> When using least squares, a zero derivative always marks a minimum. But that's not true in general ... To tell the difference between a minimum and a maximum, you'd need to look at the second derivative.
It's interesting to continue the analysis into higher dimensions, where there are stationary points that require looking at the matrix properties of a specific type of second-order derivative (the Hessian): https://en.wikipedia.org/wiki/Saddle_point
In general it's super powerful to convert data problems like linear regression into geometric considerations.
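To make the second-derivative point concrete for least squares (a small sketch): the Hessian of the squared-error loss is (2/n) X^T X, which is positive semidefinite, so a zero gradient can only be a minimum; the saddle points only show up once you leave this convex setting.

  import numpy as np

  rng = np.random.default_rng(0)
  X = np.column_stack([np.ones(50), rng.normal(size=(50, 2))])

  hessian = (2.0 / len(X)) * X.T @ X  # second derivative of mean squared error w.r.t. w
  print(np.linalg.eigvalsh(hessian))  # all eigenvalues >= 0, so any stationary point is a minimum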
jascha_eng
The amount of em dashes in this makes it look very AI-written. Which doesn't make it a bad piece, but it makes me more carefully check every sentence for errors.
liamwire
I know this is repeated ad nauseam by now, but as an ardent user of em dashes for many years pre-LLM, I think this is a bad heuristic.
wodenokoto
Speaking of linear regression, can any of you recommend an online course or book that deep dives into fitting linear models?
lmpdev
Most intro to stats courses will do
I did the Stats I -> II -> III pipeline at uni, but you should be fitting basic linear models by the end of Stats I
reify
All that's wrong with the modern world
https://www.ibm.com/think/topics/linear-regression
> A proven way to scientifically and reliably predict the future
> Business and organizational leaders can make better decisions by using linear regression techniques. Organizations collect masses of data, and linear regression helps them use that data to better manage reality, instead of relying on experience and intuition. You can take large amounts of raw data and transform it into actionable information.
> You can also use linear regression to provide better insights by uncovering patterns and relationships that your business colleagues might have previously seen and thought they already understood.
> For example, performing an analysis of sales and purchase data can help you uncover specific purchasing patterns on particular days or at certain times. Insights gathered from regression analysis can help business leaders anticipate times when their company’s products will be in high demand.
uniqueuid
While I get your point, it doesn't carry too much weight, because you can (and we often read this) claim the opposite:
Linear regression, for all its faults, forces you to be very selective about parameters that you believe to be meaningful, and offers trivial tools to validate the fit (i.e. even residuals, or posterior predictive simulations if you want to be fancy).
ML and beyond, on the other hand, throws you in a whirl of hyperparameters that you no longer understand and which traps even clever people in overfitting that they don't understand.
Obligatory xkcd: https://xkcd.com/1838/
So a better critique, in my view, would be something that JW Tukey wrote in his famous 1962 paper (paraphrasing because I'm lazy):
"Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise."
So our problem is not the tools, it's that we fool ourselves by applying the tools to the wrong problems because they are easier.
alexey-salmin
That particular xkcd was funny until the LLMs came around
uniqueuid
Well I'd say that prompt engineering is still exactly this?
fph
Aren't LLMs also a pile of linear algebra?
Lirael
I really recommend this explorable explanation: https://setosa.io/ev/ordinary-least-squares-regression/
And for actual gradient descent code, here is an older example of mine in PyTorch: https://github.com/stared/thinking-in-tensors-writing-in-pyt...