Now Fermilab is back with its latest update, using two additional years of MiniBooNE data. The excess is still there, and it has edged even closer to the statistical standards for discovery. If you combine the Fermilab and Los Alamos data, we're already there. It's looking more and more like another break in the Standard Model, and the possible explanations include an entirely new type of neutrino.
Given the publication dates, those other articles are almost certainly also based on https://arxiv.org/abs/1805.12028, which incorporates those two additional years of data.
Is there a guide to science for people like me who have barely grasped Special Relativity? I am a full century out of date, and I do wonder if I should be worried before we throw out the stuff I don't know and replace it with other stuff I don't know.
In which case, wouldn't it just be easier to wait until they figure out the new stuff?
More seriously, if it's any consolation, I spent six years in grad school physics, and I still feel like I don't understand all the particles and the changing landscape when it comes to particle physics.
It will probably be a small change. If you understand the current version, it will be very helpful for understanding the future version.
If you imagine that elementary particles are small balls of different sizes and colors, then it is not so difficult to understand. https://en.wikipedia.org/wiki/Standard_Model
There are a lot of technical details and math, but you can get a good general understanding with the ball model. [If you ever want to propose an improvement to the Standard Model, you must definitely learn the technical details and math first.]
That's pretty good... when I first saw e.g. "calculus of variations" I realized I was actually at least 2 more centuries behind than I had previously thought, which was rather humbling...
IIRC there is some experiment where a particle decays spontaneously into a neutrino-antineutrino pair, with each type of neutrino equally likely. In that experiment they find that only 1/3 of the decays produce "electron neutrinos", so they deduce that there are only three types of neutrino ("electron neutrinos" / "muon neutrinos" / "tau neutrinos"). So now they are measuring another neutrino that is "sterile" and for some reason doesn't appear in that experiment.
> With the constraints of the standard electroweak model, the number of light neutrino species is found to be N_ν = 3.27 ± 0.30. This result rules out the possibility of a fourth type of light neutrino at 98% CL.
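As a back-of-the-envelope check (assuming Gaussian errors on that measurement, which is my assumption, not something the quote spells out), you can see roughly where the 98% figure comes from:

    # Hypothetical sanity check, not the paper's actual statistical method.
    from scipy.stats import norm

    n_nu, sigma = 3.27, 0.30        # measured number of light neutrino species
    z = (4 - n_nu) / sigma          # distance of a 4th species, in sigma
    cl = 1 - 2 * (1 - norm.cdf(z))  # two-sided confidence level
    print(f"z = {z:.2f} sigma, CL ~ {cl:.1%}")  # z = 2.43 sigma, CL ~ 98.5%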
I've never seen any major institution/publication claim that the number of fundamental particles is finite. We discover new ones as we go. Nobody is really shocked these days.
I've never seen them say it explicitly, but every article I read seems to be along the lines of "physicists thought/hoped the Standard Model could be complete, but then this little particle came along...". Even the fact that they think a model with a finite number of particles is possible gives the impression that they consider this more likely than not to be the case.
Particles have energy levels, with a maximum and a minimum. We already know that not every energy level has a particle associated with it (they are quantized). Therefore there cannot be an infinite number of particles.
Wow. 1,959 detections out of 10^21 interactions. And they talk about statistical significance. In any other field that is so close to zero you would ignore the results.
> Wow. 1,959 detections out of 10^21 interactions. And they talk about statistical significance. In any other field that is so close to zero you would ignore the results.
1959/1e21 is not the measure of the significance in the study though. That total of 1e21 interactions is split into different types of behaviors (based on theory and previous results). It's deltas on those categorized interactions and comparisons with noise/background models/characterizations that are used to assess the statistical significance of the 1959 events. I am not saying this has to be a true result and not a statistical fluke, but 1959/1e21 is not the "sigma" of this experiment.
Imagine you are investigating a bug you find under rocks. You have a prediction about the subvarieties this species of bug comes in. But these bugs are really hard to find. You have to turn over 10^21 rocks to find just 1,959 bugs. And you find unexpected ratios of subvarieties! The statistical significance of this unexpected result is based not on the number of rocks you turn over but on the probability your theory assigns to the distribution discovered in your sample of 1,959.
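To make the bug analogy concrete, here is a toy Poisson-counting sketch. The observed count is the 1,959 from the article; the expected count is made up for illustration, and the real analysis folds in background models and systematic uncertainties:

    from math import sqrt
    from scipy.stats import norm

    observed = 1959  # events actually found in the category (from the article)
    expected = 1800  # events the theory predicts (hypothetical number)

    # Naive counting significance: excess over the Poisson fluctuation
    # of the expectation. Real analyses also include systematics.
    z = (observed - expected) / sqrt(expected)
    p = 1 - norm.cdf(z)  # one-sided p-value for an excess this large
    print(f"excess = {observed - expected}, z = {z:.1f} sigma, p = {p:.1e}")

The point: the significance comes from comparing 1,959 against what the background predicts in that category, not from dividing by the 10^21 rocks turned over.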
Would a biologist dismiss the evidence of having looked at say 100 cells in a human body because there are 10^13 of them in total? The latter number has no relevance in computing significance.
The absolute error will be low, but the relative error can still be high. E.g. if you're dealing with events that have a frequency of 1 in 2 million, then 1 million samples can't have exactly 0.5 occurrences, so the relative error is at least 100%.
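A quick simulation of that point, using the hypothetical 1-in-2-million rate from the example:

    import random

    TRUE_RATE = 1 / 2_000_000
    N_SAMPLES = 1_000_000

    # Counts are whole numbers, so 'hits' is usually 0 or 1, never the
    # "correct" expectation of 0.5 events.
    hits = sum(random.random() < TRUE_RATE for _ in range(N_SAMPLES))
    estimated_rate = hits / N_SAMPLES
    rel_error = abs(estimated_rate - TRUE_RATE) / TRUE_RATE
    print(f"hits = {hits}, rate = {estimated_rate}, relative error = {rel_error:.0%}")

With 0 hits the estimated rate is 0 (relative error 100%); with 1 hit it is double the true rate (also 100% off).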
The result seems to depend critically on the researchers' estimate of neutral-current production of neutral pions, which, if underestimated, could produce the bump observed in the data at low energies. I think that with further experiments this result may well fade away.
https://news.ycombinator.com/item?id=17210982
https://news.ycombinator.com/item?id=17225957
https://news.ycombinator.com/item?id=17253127