Yes, we did discover the Higgs
157 comments
· October 23, 2024
lokimedes
louthy
> run a basic invariant-mass calculation and see the mass peak popping up.
For the idiots in this post (me), could you please explain what that entails and why it helps confirm the discovery?
fnands
Not the original commenter, but also ex-HEP person:
The invariant mass is the rest mass of the particle (i.e. its "inherent" mass). You can calculate it by taking the final-state decay products of the original particle (i.e. the particles that are actually observed by the detector), summing their four-vectors, and taking the (Minkowski) square of the sum.
You can plot the invariant mass calculated from any particular final state, and for a rare particle like the Higgs the majority of the contributions to your plot will be from background processes (i.e. not Higgs decays) that decay into the same final state.
If you have a lot of Higgs decays in your sample you should be able to see a clear peak in the distribution at the invariant mass of the Higgs boson, a clear sign that the Higgs (or something with the same mass) exists.
Often by the time the discovery has reached statistical significance, you might not really be able to see such a clear sign in the mass distribution. I.e. the calculations are telling you it's there but you can't see it that clearly.
I wouldn't really say this helps confirm the discovery in a scientific sense, just that it's reassuring that the signal is so strong that you can see it by eye.
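As a toy illustration of the calculation described above: in natural units (c = 1), the invariant mass is the Minkowski norm of the summed four-momenta of the decay products. A minimal Python sketch, with made-up photon four-momenta chosen to land at roughly the Higgs mass:

```python
import math

def invariant_mass(particles):
    """Invariant mass of a set of decay products, given four-momenta
    (E, px, py, pz) in GeV with c = 1.
    m^2 = (sum E)^2 - |sum p|^2, the Minkowski square of the summed four-vector."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    m2 = E**2 - (px**2 + py**2 + pz**2)
    return math.sqrt(max(m2, 0.0))

# Two back-to-back massless photons, 62.5 GeV each: the pair's
# invariant mass is 125 GeV even though each photon is massless.
photons = [(62.5, 62.5, 0.0, 0.0), (62.5, -62.5, 0.0, 0.0)]
print(invariant_mass(photons))  # 125.0
```

Plotting this quantity for many candidate events is what produces the histogram in which the Higgs peak appears over the smooth background.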
exmadscientist
> just that it's reassuring that the signal is so strong that you can see it by eye
It's really something when this happens. I worked on a big neutrino experiment searching for theta_13, where our goals were to (a) determine if theta_13 was dead zero or not (being truly zero would have a Seriously Major Effect in theories) and then (b) to measure its value if not.
Our experiment was big, expensive, and finely tuned to search for very, very small values of theta_13. We turned the thing on and... right there there was a dip. Just... there. On the plot. All the data blinding schemes needed to guarantee our best resolution kind of went out the window when anyone looking at the most basic status plot could see the dip immediately!
On the one hand, it was really great to know that everything worked, we'd recorded a major milestone in the field (along with our competition, all of whom were reading out at basically the same time), and the theorists would continue to have nothing to do with their lives because theta_13 was, in fact, nonzero. On the other hand... I wasted how many years of my life dialing this damned detector in for what now? (It wasn't wasted effort, not at all... but you get the feeling.)
Filligree
Squared four-vectors?
I'm only an amateur, but wouldn't that give different results depending on the choice of units? I.e., I usually use c = 1.
WalterBright
What about the loss of mass released as energy inherent to the decay process?
throwawaymaths
Yeah. The Higgs evidence is pretty convincing visually. I'm not so sure about LIGO. There is an extraordinary claim of noise reduction that requires extraordinary evidence, and it's all obfuscated behind adaptive machine-learning-based filtering, and the statistical analysis on that is unparseable to a non-expert (which is worrisome). The pulsar timing network, though, is easily believable.
Luckily, there's pretty simple statistics that one can throw at that once the third detector comes online. Hopefully that comes in before we spend too much money on LISA.
It's basically this, from the article, but from astro:
> Particle physics does have situations where the hypotheses are not so data driven and they rely much more heavily on the theoretical edifice of quantum field theory and our simulation of the complicated detectors. In these cases, the statistical models are implicitly defined by simulators; this is actually a very hot topic that blends classical statistics with modern deep learning. We often say that the simulators don't have a tractable likelihood function. This applies to frequentist hypothesis testing, confidence intervals, and Bayesian inference. Confronting these challenging situations is what motivated simulation-based inference, which is applicable to a host of scientific disciplines.
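A toy sketch of what simulation-based inference looks like in its simplest form, ABC rejection sampling (the coin-flip simulator and all numbers here are invented for illustration): when the likelihood is intractable, you can still draw parameters from a prior, run the simulator, and keep the draws whose simulated data land close to what was observed.

```python
import random

random.seed(0)

def simulator(theta, n=100):
    """A black-box simulator with no likelihood written down:
    tosses n coins with bias theta and returns the number of heads."""
    return sum(random.random() < theta for _ in range(n))

observed = 62  # pretend measurement: 62 heads in 100 tosses

# ABC rejection sampling: keep prior draws whose simulated summary
# statistic lands within a tolerance of the observed one.
accepted = []
for _ in range(20000):
    theta = random.random()                     # prior: Uniform(0, 1)
    if abs(simulator(theta) - observed) <= 2:   # accept if close to data
        accepted.append(theta)

# The accepted thetas approximate the posterior over the coin bias.
posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))
```

Modern simulation-based inference replaces the crude accept/reject step with learned surrogates, but the underlying idea is the same.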
polyphaser
There's no fancy machine learning needed to detect some "bright" LIGO signals (some of the first black hole-black hole mergers). Given a set of template signals, a matched filter tries to find the one that best matches the noisy signal your instrument recorded. In order for a MF to work, what you really need is a good understanding of the noise in your instrument's observations, and there are very few people in the world better at that than LIGO folks. LIGO spent almost 20 years in construction and R&D, and almost 30 in planning. When you hear stories about how they can detect trucks miles away, and detect the waves crashing on the coast, it's possible because of scores of PhD students who spent years characterizing each and every component that affects LIGO's noise levels. All of this to say that it's possible to download their data online and do a quick MF analysis (I did that), and with a little bit of work you get a blindingly bright statistical significance of 20-sigma or so. The actual result quoted in the papers was a bit higher. That's a testament to how well the instrument was built and its behaviour understood.
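A whittled-down version of the matched-filter idea (the toy chirp and all numbers are invented; real analyses whiten the data against the measured noise power spectrum rather than assuming white noise):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "chirp" template: frequency and amplitude rising with time,
# loosely mimicking an inspiral waveform. Normalized to unit energy.
t = np.linspace(0, 1, 4096)
template = t**2 * np.sin(2 * np.pi * (20 + 80 * t) * t)
template /= np.linalg.norm(template)

# Bury the template in white noise at a chosen amplitude.
data = rng.normal(0.0, 1.0, 16384)
inject_at = 6000
data[inject_at:inject_at + template.size] += 8.0 * template

# Matched filter (optimal for white noise): slide the template along
# the data and correlate at every offset.
mf = np.correlate(data, template, mode="valid")
snr = np.abs(mf) / mf.std()

print(int(np.argmax(snr)), float(snr.max()))
```

The peak of the filter output lands at the injection point, and its height relative to the fluctuation level is the detection statistic; with real detector noise the correlation is done in the frequency domain, weighted by the inverse noise spectrum.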
maxnoe
How do you explain the LIGO detection of a binary neutron star merger that was at the same time observed as a GRB by many, many other telescopes?
throwawaymaths
That's an n of one (I could be wrong, but it's the only multi-messenger GW event we've seen; with how many events are being generated, you'd have expected more). Could be a coincidence. The angular resolution of LIGO is not exactly amazing, and we don't have a real estimate of the distance to the source of the GW. In fact, IIUC the event could be in exactly the opposite direction too.
1 for 29 by 2019:
thaumasiotes
> In these cynical times, it may be that everything is relative and "post-modern subjective p-hacking", but sufficient data usually ends these discussions.
I don't think that's right. I think having an application is what ends the discussions.
If you have a group of people who think CD players work by using lasers, and a rival group who think they do something entirely different, and only the first group can actually make working CD players, people will accept that lasers do what group #1 says they do.
MajimasEyepatch
On the other hand, nearly everyone believes in black holes, and there's no practical use for that information. The difference is that "we pointed a telescope at the sky and saw something" is easier for a layman to understand and requires somewhat less trust than "we did a bunch of complex statistical work on data from a machine you couldn't possibly hope to understand."
6gvONxR4sf7o
I think this is where it's worth differentiating between different types of "believes in" (and why I think modal logics are cool). I can convince myself that a thing seems safe to believe, or I can tangibly believe it, or I can believe it in a way that allows me to confidently manipulate it, or I could even understand it (which you could call a particular flavor of belief). Practical use seems to fit on that spectrum.
I certainly don't believe in black holes in the same manner that I believe in the breakfast I'm eating right now.
hn72774
> there's no practical use for that information
The information paradox is closer to us than we think!
Joking aside, another perspective on practical use is all of the technology and research advances that have spun out of black hole research. Multi-messenger astronomy, for example: we can point a telescope at the patch of sky where two black holes merged.
throwawaymaths
There's a lot of (warranted, imo) skepticism there too. I'm sorry I can't find the citation, but there was a Japanese paper out this year that claimed the ML post-processing of the EHT data produces a qualitatively similar image given random data.
dspillett
> people will accept that lasers do what group #1 says they do
Most people. Some fringe groups will believe it is all a front, and they are only pretending that so-called “lasers” are what make the CD player work when in fact it is alien tech from Area 51 or eldritch magics neither of which the public would be happy about. What else would CDDA stand for, if not Compliant Demon Derived Audio? And “Red Book”. Red. Book. Red is the colour of the fires of hell and book must be referring to the Necronomicon! Wake up sheeple!
biofox
Counterpoint: Vaccines work, but far too many people think that COVID vaccines contain Jewish-made GPS tracking devices that act as micro-antennae to allow Bill Gates to sterilise them using 5G.
gpderetta
That's a common misunderstanding. The mind controlling COVID vaccines are being spread by chemtrails. The 5G signal is only used by pilots to decide when to start spraying.
thaumasiotes
[flagged]
nsxwolf
COVID vaccines may "work", but they're pretty lame compared to something like the varicella vaccine, where the disease basically disappears off the face of the earth.
whatshisface
By that line of reasoning, the moon does not exist.
dguest
I think the gigantic bumps that Kyle pointed to "discovered" the Higgs.
The statistical interpretation showing a 5 sigma signal was certainly essential, but I suspect it would have taken the collaborations much longer to publish if there wasn't a massive bump staring them in the face.
mellosouls
The article here is responding to an original blog post [1] that is not really saying the Higgs was not discovered (despite its trolling title), but raising questions about the meaning of "discovery" in systems that are so complicated as those in modern particle physics.
I think the author is using the original motivation of musing on null hypotheses to derive the title "The Higgs Discovery Did Not Take Place", and he has successfully triggered the controversy the subtitle ironically denies and the inevitable surface reading condemnations that we see in some of the comments here.
[1] https://www.argmin.net/p/the-higgs-discovery-did-not-take
noslenwerdna
He is implying that the scientists involved haven't thought of those questions, when in reality this field is one of the strictest in terms of statistical procedures like pre-registration, blinding, multiple hypothesis testing, etc.
Also he makes many factual claims that are just incorrect.
Just seems like an extremely arrogant guy who hasn't done his homework
ttpphd
A computer scientist/electrical engineer who is arrogant? I dunno, I need to see the statistical test to believe that's possible.
eightysixfour
Computers are a "complete" system where everything they do is inspectable and, eventually, explainable, and I have observed that people who work with computers (myself included) overestimate their ability to interrogate and explain complex, emergent systems (economics, physics, etc.) which are not literally built on formal logic.
BeetleB
> when in reality this field is one of the strictest in terms of statistical procedures like pre registeration, blinding, multiple hypothesis testing etc
I'm not in HEP, but my graduate work had overlap with condensed matter physics. I worked with physics professors/students in a top 10 physics school (which had Nobel laureates, although I didn't work with them).
Things may have changed since then, but the majority of them had no idea what pre-registration meant, and none had taken a course on statistics. In most US universities, statistics is not required for a physics degree (although it is for an engineering one). When I probed them, the response was "Why should we take a whole course on it? We study what we need in quantum mechanics courses."
No, my friend. You studied probability. Not statistics.
Whatever you can say about reproducibility in the social sciences, a typical professor in those fields knew and understood an order of magnitude more statistics than physicists.
noslenwerdna
As an ex-HEP, I can confirm that yes, we had blinding and did correct for multiple hypothesis testing explicitly. As Kyle Cranmer points out, we called it the "look elsewhere effect." Blinding is enforced by the physics group. You are not allowed to look at a signal region until you have basically finished your analysis.
For pre-registration, this might be debatable, but what I meant was that we have teams of people looking for specific signals (SUSY, etc). Each of those teams would have generated monte carlo simulations of their signals and compared those with backgrounds. Generally speaking, analysis teams were looking for something specific in the data.
However, there are sometimes more general "bump hunts", which you could argue didn't have preregistration. But on the other hand, they are generally looking for bumps with a specific signature (say, two leptons).
So yes, people in HEP generally are knowledgeable about stats... and yes, this field is extremely strict compared to psychology for example.
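The "look elsewhere effect" mentioned above can be sketched numerically. Under a crude independent-windows approximation (real analyses account for correlated, overlapping mass windows, so treat the numbers as illustrative only), a 3-sigma local bump becomes far less impressive once you count how many places it could have appeared:

```python
import math

def one_sided_p(sigma):
    """One-sided tail probability of a Gaussian z-score."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# Chance of a 3-sigma upward fluctuation in ONE mass window:
p_local = one_sided_p(3.0)

# Look-elsewhere (trials-factor) correction, assuming N independent
# windows were searched:
N = 100
p_global = 1 - (1 - p_local) ** N

print(round(p_local, 5))   # ~0.00135
print(round(p_global, 2))  # ~0.13: a "3-sigma" bump SOMEWHERE is not rare
```

This is part of why particle physics insists on 5 sigma locally: even after a large trials factor, the global false-alarm probability stays tiny.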
exmadscientist
> so complicated as those in modern particle physics
But... modern particle physics is one of the simplest things around. (Ex-physicist here, see username.) It only looks complicated because it is so simple that we can actually write down every single detail of the entire thing and analyze it! How many other systems can you say that about?
spookie
Other systems might not be part of a field as mature as yours, I would argue.
exmadscientist
It has nothing to do with "maturity" and everything to do with just hierarchy in general. There is something to the old XKCD joke: https://xkcd.com/435/ because the disciplines really are divided like that. You have to know physics to do chemistry well. You have to know chemistry to do biology well. You have to know biology to ... etc.
Whereas to do physics well you need only mathematics. Well, at least, to do the theories well. To actually execute the experiments is, ah, more challenging.
So I would argue the Standard Model is pretty much the only thing in all of human knowledge that depends on no other physical theories. It's the bottom. Shame it's pretty useless (intractable) as soon as you have three or more particles to calculate with, though....
jaculabilis
> I think the author is using the original motivation of musing on null hypotheses to derive the title "The Higgs Discovery Did Not Take Place",
It's probably a reference to "The Gulf War Did Not Take Place" by Jean Baudrillard, which took a similar critical view of the Gulf War as TFA takes of the Higgs discovery.
mellosouls
Possibly! I remember that but completely missed it as an inspiration here.
stephantul
I think it is good that this post was written (I learned a lot), but it makes me sad that it was prompted by such an obvious trolling attempt.
scaramanga
not to nitpick, but I think "reactionary" or "aspiring crank" are probably more descriptive :)
"This isn't music, back in my day we had Creedence"
ayhanfuat
Here is Ben Recht’s response: https://www.argmin.net/p/toward-a-transformative-hermeneutic...
12_throw_away
Oof.
A Berkeley academic invoking "it's actually your fault for believing the words that I wrote" and following it up with an "I'm not mad, I actually find this amusing" ... it's just disappointing.
dguest
Which is actually very reasonable; it ends with:
> In any event, I use irreverence (i.e., shitposting) to engage with tricky philosophical questions. I know that people unfamiliar with my schtick might read me as just being an asshole. That’s fair.
People are piling the hate on Ben Recht here. I appreciate that he's calling his post what it is rather than doubling down.
It's also a great chance to lecture people on 4-momentum, thanks everyone!
munchler
That is some fancy backpedaling.
dekhn
Every time I see a criticism like Recht's (and Hossenfelder's), I ask: could this theoretical scientist go into the lab and conduct a real experiment? I mean, find some challenging experiment that requires setting up a complex interferometer (or spectroscope, or molecular-biology cloning), collect data, analyze it, and replicate an existing well-known theory.
Even though I'm a theoretical physicist I've gone into the lab and spent the time to learn how to conduct experiments and what I've learned is that a lot of theoretical wrangling is not relevant to actually getting a useful result that you can be confident in.
Looking at Recht's publication history, it looks like few of his papers ever do real-world experiments; mostly, they use simulations to "verify" the results. It may very well be that his gaps in experimental physics lead him to his conclusion.
rob_c
Just to pile on 'Ben': sorry to break it to the machine learning enthusiasts, but we (particle physicists) have been performing similar, and in a lot of ways much more complex, analyses using ML tools in production for decades.
Please stop shrouding your new 'golden goose' of AI/ML modelling in mystery; it's 'just' massively multi-dimensional regression analysis, with all of the problems, advantages and improvements that brings...
Why is there some beef that nature is complex? If you had the same vitriol toward certain other fields, we'd be worrying about big pharma's reproducibility crisis, which is just the tip of the iceberg of problems in modern science, not that most people are illiterate when it comes to algebra...
rsynnott
Honestly, while it's an interesting article, I'm not sure why one would even give the nonsense it's addressing the dignity of a reply.
Hadn't realised Higgs boson denialism was really a thing.
thowfeir234234
The parent-poster is a very well known professor in ML/Optimization at Berkeley EECS.
TheOtherHobbes
One of the smaller trade journals of EE was Wireless World. (It closed in 2008.)
In its pages you could find EE professors and chartered engineers arguing that Einstein was so, so wrong, decades after relativity was accepted.
I'd trust an EE to build me a radio, but I wouldn't let an EE anywhere near fundamental physics.
MajimasEyepatch
I can't find the source at the moment, but I've seen it reported in the past that engineers are actually unusually likely to be fundamentalist Christians who believe in creationism. Engineers are also unusually likely to be Islamist terrorists, though there are many reasons for that. [1] There's a certain personality type that is drawn to engineering that believes the whole world can be explained by their simple pet model and that they are smarter than everyone else.
[1] https://www.nytimes.com/2010/09/12/magazine/12FOB-IdeaLab-t....
fecal_henge
All this suggests is that chartership, professorship and shitty journal authorship are poor metrics for credibility.
Keeping EEs and any E for that matter away from fundamental physics is a shortcut to producing a whole lot of smoke and melted plastic.
AnimalMuppet
Uh huh. And that makes said professor an expert in 1) epistemology, and/or 2) experimental particle physics? Why, no. No, it doesn't.
I mean, I'm as prone to the "I'm a smart guy, so I understand everything" delusion as the next person, but I usually only show it in the comments here. (And in private conversations, of course...)
hydrolox
To be fair, maybe there is a decent overlap between people who saw the original and people who see this. At least that might dispel the 'myths' raised in the original. Also, since this rebuttal article was written by a physicist (much more involved in the field), it's also defending their own field.
12_throw_away
The article this is responding to is some of the worst anti-science, anti-intellectual FUD I've seen in a while, with laughably false conceits like (paraphrased) "physics is too complicated, no one understands it" and thus "fundamental research doesn't matter".
Worse, the author of the original FUD is a professor of EE at Berkeley [1] with a focus in ML. It almost goes without saying, but EE and ML would not exist without the benefit of a lot of fundamental physics research over the years on things that, according to him, "no one understands".
KolenCh
Having been in his lectures in the past, I can say he is the kind of person who teaches you to question what you are told/taught. You should know he basically does the same thing to the field of ML (as this post does to HEP).
You know, when I first read this thread and the 3 posts involved, I found the original post Ben wrote arrogant and hard to swallow. But once I searched for who he is, and recognized him, knowing his character I immediately "got his point". While not an expert in the fields, I have graduate-level education in both HEP and ML. My point is that my conclusion is unlikely to be due to a lack of understanding of these fields, but more because of my understanding of who he is...
Admittedly, he should not have assumed people would read it as he intended. It took a lot of contextualization, including the expectation set by the title, which he explained in the later posts, to really take his posts seriously.
xeonmc
> ...is too complicated, no one understands it.
Quoth the AI researcher.
nyc111
This debate reminded me of Matt Strassler's recent post saying that most of the data observed in the accelerators is thrown away [1]:
> So what's to be done? There's only one option: throw most of that data away in the smartest way possible, and ensure that the data retained is processed and stored efficiently.
I thought that was strange. It's like there is too much data and our technology is not up to it, so let's throw away everything that we cannot process. Throwing data away "in the smartest way possible" did not convince me.
[1] https://profmattstrassler.com/2024/10/21/innovations-in-data...
elashri
I would like to jump in on this point, because this problem is a function of two things. One is the throughput at which your trigger (data acquisition system) can save the data and transfer it to permanent storage; this usually involves multiple steps, and most of them happen in real time. The other problem is the storage itself and how it is kept (duplicated and distributed to analysts), which at the scale we operate at is insanely costly. If we were to save, say, 20% of the generated collision data, we would fill the entire cloud storage in the world in a couple of runs. Also, the vast majority of the data is background and useless, so you would do a lot of work to clean it up and apply your selections, which we do anyway, but now you are dealing with another problem: the analysts would need to handle much more data, and trying new things (ideas and searches) becomes more costly, which discourages them. So you work in a very constrained way. You improve your capabilities in computing and storage, you present a good physics case for what data to keep (deploy a trigger line that picks out a physics signal the experiment is sensitive to), and then you let natural selection take place (metaphorically, of course).
Most of the experiments cannot save more because of these data acquisition constraints.
dguest
The technology really is not up to it, though.
To give some numbers:
- The LHC has 40M "events" (bunch of collisions) a second.
- The experiments can afford to save around 2000 of them.
This is a factor of 20k between what they collide and what they can afford to analyze. There is just no conceivable way to expand the LHC computing and storage by a factor of 20k.
Valid question would be why they don't just collide fewer protons. The problem is that when you study processes on a length scale smaller than a proton, you really can't control when they happen. You just have to smash a lot and catch the interesting collisions.
So yeah, it's a lot of "throwing away data" in the smartest way possible.
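The arithmetic can be put in storage terms (the ~1 MB raw event size is an assumed round number for illustration; the 40 MHz and ~2000 events/s figures are from the comment):

```python
# Back-of-the-envelope LHC trigger arithmetic.
collision_rate_hz = 40_000_000   # bunch crossings per second
saved_rate_hz = 2_000            # events actually written to storage per second
event_size_mb = 1.0              # ASSUMED raw event size, order 1 MB

rejection = collision_rate_hz // saved_rate_hz          # events discarded per event kept
raw_tb_per_s = collision_rate_hz * event_size_mb / 1e6  # if every event were kept
saved_gb_per_s = saved_rate_hz * event_size_mb / 1e3    # what is actually written

print(rejection)        # 20000
print(raw_tb_per_s)     # 40.0 TB/s hypothetical raw stream
print(saved_gb_per_s)   # 2.0 GB/s to storage
```

Tens of terabytes per second, sustained for years, is what "keeping everything" would mean, which is why the selection has to happen in the trigger.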
-------------------
All that said, it might be a stretch to say the data is "thrown away", since that implies that it was ever acquired. The data that doesn't get saved generally doesn't make it off a memory buffer on a sensor deep within the detector. It's never piped through an actual CPU or assembled into any meaningful unit with the millions of other readouts.
If keeping the data was one more trivial step, the experiments would keep it. As it is they need to be smart about where the attention goes. And they are! The data is "thrown away" in the sense that an astronomy experiment throws away data by turning off during the day.
aeonik
My 200 MHz oscilloscope "throws away" a lot of data compared to the 4 GHz scope I used at work, but for the signals I'm looking for, it doesn't matter at all.
RecycledEle
The team behind the LHC laid out the criteria for discovering the Higgs boson before beginning their experiments.
They never came close to what they said they needed.
But they now claim they succeeded in finding the Higgs boson.
And the paper setting out the criteria has been memory-holed.
I call BS on the Higgs boson team.
plorg
I think the person this article is responding to is just a crank, but it is interesting as a layperson to see the basic mechanisms for making this discovery laid out here.
scrubs
Good gracious! C'mon! ... science people want science, not nonsense, not cheap symbolism.
The article to which the link responds is cynical. And in my experience, cynical assessments are made by people more likely to engage in the cynical BS artistry they complain about. Moreover, social media in general is conducive to whining and what-about-ism, which detracts from what science and all natural philosophers take seriously.
We're trying really hard to get away from the shadows on the cave wall to the light, whenever possible and as often as possible.
And you know what else? The "rush" is huge when we do so. There's a difference.
Not long after the initial discovery, we had enough data for everyone at the experiments to simply run a basic invariant-mass calculation and see the mass peak popping up.
Once I could "see" the peak, without having to conduct statistical tests against expected background, it was "real" to me.
In these cynical times, it may be that everything is relative and "post-modern subjective p-hacking", but sufficient data usually ends these discussions. The real trouble is that we have a culture that is addicted to progress theater, and can't wait for the data to get in.