Antimatter falls down - with (0.75 ± 0.13 ± 0.16)g

  • #1
TL;DR Summary
ALPHA-g determines that antimatter falls down as expected.
Took longer than expected, but now we have a result.
Observation of the effect of gravity on the motion of antimatter

Antimatter falls down with (0.75 ± 0.13 (statistical + systematic) ± 0.16 (simulation)) g, compatible with the expected 1 g and ruling out -1 g.
 
  • #2
I'm antisurprised.
 
  • #3
Pity, there goes my free energy machine. :frown:
 
  • #4
A quick calculation shows this becomes interesting at 100x the sensitivity or so. A hydrogen atom's mass is 99% gluon, and the proton's and antiproton's gluon fields are the same by symmetry. So we expect, at most, a 2% deviation, which would correspond to antifermions falling up.

That would also make it competitive with the indirect result from MICROSCOPE.
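To make the arithmetic explicit, here is a rough sketch (my numbers: the 2% bound from the argument above and a ~0.21 g combined uncertainty from the quoted result):

Code:
# Rough sensitivity estimate. If only the ~1% of the atom's mass carried by
# the (anti)fermions could flip sign, the largest deviation from 1 g is
#   |a/g - 1| = 2 * 0.01 = 0.02.
fermion_fraction = 0.01       # assumed non-gluon share of the mass
max_deviation = 2 * fermion_fraction
current_sigma = 0.21          # this result's combined uncertainty, in g
print(current_sigma / max_deviation)      # ~10x improvement for 1 sigma
print(5 * current_sigma / max_deviation)  # ~50x for a clean 5 sigma test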
 
  • #5
mfb said:
compatible with the expected 1 g
But that conflicts with...
mfb said:
Took longer than expected, ...
Which is it?
:oldbiggrin:
 
  • #6
mfb said:
Antimatter falls down with (0.75 ± 0.13 (statistical + systematic) ± 0.16 (simulation)) g, compatible with the expected 1 g and ruling out -1 g.
If the uncertainties are combined [in quadrature] you get (0.75 ± 0.21)*g.

This is 1.2 sigma from the theoretically expected 1 g. It is conventional to call any discrepancy of less than 2 sigma "consistent" with the theoretical prediction.

This is inconsistent with -1 g at the 8 sigma level. It is conventional to call anything established or refuted at more than the 5 sigma level scientifically proven or disproven, as the case may be.
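For the record, the arithmetic behind those numbers, as a minimal sketch (the 0.13 and 0.16 come straight from the quoted result):

Code:
import math

central = 0.75                  # measured value, in units of g
sigma = math.hypot(0.13, 0.16)  # quadrature combination, ~0.21
print(f"combined uncertainty: {sigma:.2f} g")
print(f"distance from +1 g: {(1.0 - central) / sigma:.1f} sigma")  # ~1.2
print(f"distance from -1 g: {(central + 1.0) / sigma:.1f} sigma")  # ~8.5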
 
  • #7
Vanadium 50 said:
A quick calculation shows this becomes interesting at 100x the sensitivity or so. A hydrogen atom's mass is 99% gluon, and the proton's and antiproton's gluon fields are the same by symmetry. So we expect, at most, a 2% deviation, which would correspond to antifermions falling up.

That would also make it competitive with the indirect result from MICROSCOPE.
The obvious way to further explore that point would be to replicate it with muons and antimuons, or with other systems made up only of leptons such as muonium.
 
  • #8
ohwilleke said:
The obvious way
"Obvious" does not mean "correct".

Exercise for the student: how far does positronium fall from rest before it annihilates?
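For the curious, a back-of-the-envelope sketch of the answer (assuming the ~142 ns vacuum lifetime of ortho-positronium; para-positronium lives only ~125 ps):

Code:
# Free fall from rest under 1 g for one ortho-positronium lifetime.
g = 9.81        # m/s^2
tau = 142e-9    # s, ortho-positronium vacuum lifetime
d = 0.5 * g * tau**2
print(f"{d:.1e} m")  # ~1e-13 m, about a thousand times smaller than the atom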
 
  • #9
ohwilleke said:
If the uncertainties are combined you get (0.75 ± 0.21)*g.
How did you combine the uncertainties?
 
  • #10
strangerep said:
How did you combine the uncertainties?
You square them, add the squared values, and take the square root of their sum. Subject to some fairly weak assumptions about the independence of the errors (almost always satisfied for statistical vs. systematic errors), and the assumption, already built into the underlying numbers being combined, that the uncertainties are Gaussian, it is the correct method.
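A quick numerical check of that rule for independent Gaussian errors (a toy Monte Carlo of my own, not anything from the paper):

Code:
import math, random

# The spread of the sum of two independent Gaussian errors matches the
# quadrature combination of the individual spreads.
s1, s2 = 0.13, 0.16
samples = [random.gauss(0, s1) + random.gauss(0, s2) for _ in range(100_000)]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((x - mean) ** 2 for x in samples) / len(samples))
print(f"empirical: {std:.3f}  quadrature: {math.hypot(s1, s2):.3f}")  # both ~0.206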
 
  • #11
ohwilleke said:
You square them, add the squared values, and take the square root of their sum.
This is a common statistical method, yes?

Generally (i.e. in statistics), is it a mathematically rigorous result that can be shown to produce the correct error, or is it a fudge factor (albeit a well-validated one) that is accepted because it produces good results?

(I realize that's murky and poorly worded, sorry. Do you see what I'm asking? It's the difference between a mathematics solution and an engineering solution.)
 
  • #12
Except...

In this paper, the first error is the combined statistical and systematic errors and the second is the modeling error.

There is almost certainly some correlation between modeling errors and the systematic errors. It is also almost certainly the case that if the degree of this were known precisely, they would have corrected for it.

You can't really blame the authors for "fudging" when they were not the ones to have combined them.
 
  • #13
DaveC426913 said:
This is a common statistical method, yes?
Yes, it's called "root-mean-square", or "combination in quadrature". It's the standard method codified in the "GUM", the Guide to the Expression of Uncertainty in Measurement.

DaveC426913 said:
Generally (i.e. in statistics), is it a mathematically rigorous result that can be shown to produce the correct error, or is it a fudge factor (albeit a well-validated one) that is accepted because it produces good results?
It's mathematically rigorous, but only if you understand exactly what meaning of "uncertainty" is being used. E.g., "uncertainty" is not necessarily the same as "error". If you google for "uncertainty vs error" you'll get lots of explanations.

The correct meaning of "uncertainty" here relates to the standard deviation of a Gaussian distribution, via the Central Limit Theorem.
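A toy illustration of the Central Limit Theorem point (nothing to do with the paper's actual error model): the average of even crude uniform noise quickly looks Gaussian.

Code:
import random, statistics

# Average 12 uniform(0, 1) draws; the CLT predicts an approximately Gaussian
# result with mean 0.5 and standard deviation (1/sqrt(12))/sqrt(12) = 1/12.
means = [statistics.mean(random.random() for _ in range(12))
         for _ in range(100_000)]
print(f"mean: {statistics.mean(means):.3f}")  # ~0.500
print(f"std:  {statistics.stdev(means):.3f}")  # ~0.083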
 
  • #14
strangerep said:
Yes, it's called "root-mean-square", or "combination in quadrature". It's the standard method codified in the "GUM", the Guide to the Expression of Uncertainty in Measurement.
I see. It's what produces the familiar bell curve.

strangerep said:
It's mathematically rigorous, but only if you understand exactly what meaning of "uncertainty" is being used. E.g., "uncertainty" is not necessarily the same as "error". If you google for "uncertainty vs error" you'll get lots of explanations.
I was thinking more in terms of the geometry than the meaning.
strangerep said:
The correct meaning of "uncertainty" here relates to the standard deviation of a Gaussian distribution, via the Central Limit Theorem.
A Gaussian distribution gives a bell curve. I think that's what I was looking for.

I get how the distributions of many real-world parameters (ideally) exhibit a bell curve, like height, weight, etc. (I know that's not the same thing as uncertainty.)

I should withdraw before we get too far off on a side-quest.
 
  • #15
Except...

In HEP "error" and "uncertainty" are treated as synonyms. It's even in multiple experiments' style guides.

The Central Limit Theorem does not say that a finite number of non-Gaussian uncertainties always combine to form a perfectly-Gaussian uncertainty.

Unfortunately, the assumptions of classical statistics only partially apply. It's a valuable tool, but it's not a model of perfect rigor.

The authors could have combined the two errors in quadrature themselves, like they did with statistical and systematic to form the first. They chose not to. They surely had a reason for their decision. We should be careful in substituting our judgment for theirs.
 
  • #16
But is this HEP? I mean, under all the bells and whistles they're dropping the particles and testing how fast they fall*. Isn't that essentially bog-standard classical physics?

*or is that an egregious oversimplification?
 
  • #17
Are you arguing that HEP scientists in a HEP lab publishing a paper with HEP readership shouldn't follow HEP conventions? Well, I suppose everyone has his own opinion.
 
  • #18
Good point.
 
  • #19
DaveC426913 said:
This is a common statistical method, yes?

Generally (i.e. in statistics), is it a mathematically rigorous result that can be shown to produce the correct error, or is it a fudge factor (albeit a well-validated one) that is accepted because it produces good results?

(I realize that's murky and poorly worded, sorry. Do you see what I'm asking? It's the difference between a mathematics solution and an engineering solution.)
The biggest problem with using this method is that even though HEP physicists almost always model uncertainty distributions as if they were Gaussian, there is plenty of research showing that, empirically, systematic uncertainty distributions in HEP have fatter tails than a Gaussian distribution (i.e. outlier events are more probable than a Gaussian calculation of the likelihood of outliers sourced from systematic error would predict).

HEP deals with this by establishing a 5 sigma threshold for a scientific discovery, which wouldn't be necessary if the systematic uncertainties in HEP were actually Gaussian. If systematic uncertainties were really Gaussian, a result with 3-4 sigma significance would be a sufficient threshold for scientific discovery.

But Gaussian uncertainty distributions are so much easier to deal with mathematically (e.g. when combining uncorrelated errors, but also in lots of other ways) than the non-Gaussian Student's t-distributions that more accurately represent the probable systematic uncertainty distribution (the statistical uncertainties are indeed Gaussian) that physicists treat uncertainties as Gaussian anyway, even though the more statistically savvy among them are well aware of this issue. HEP deals with it by setting high thresholds of statistical significance for physics results, to compensate for this intentional use of a flawed statistical representation of uncertainty, and by not taking the Gaussian-implied probabilities of outliers very seriously (e.g. by treating 3 sigma discrepancies as mere "tensions" in the data rather than true one-in-a-thousand events).
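To put rough numbers on the fat-tail point, a sketch using SciPy (the 3 degrees of freedom for the Student's t-distribution are my illustrative choice, not anything from HEP practice):

Code:
from scipy.stats import norm, t

# Two-sided probability of a fluctuation beyond n sigma under a Gaussian
# versus a fat-tailed Student's t with 3 degrees of freedom (assumed).
for n in (3, 5):
    p_gauss = 2 * norm.sf(n)
    p_fat = 2 * t.sf(n, df=3)
    print(f"{n} sigma: Gaussian {p_gauss:.1e} vs t(3) {p_fat:.1e}")
# At 3 sigma the fat-tailed model produces outliers roughly 20x more often,
# which is part of the motivation for the 5 sigma discovery convention.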
 
  • #20
Vanadium 50 said:
In HEP "error" and "uncertainty" are treated as synonyms. It's even in multiple experiments' style guides.
Absolutely true.

Personally, I prefer "uncertainty" to "error" in my own writing. One reason for this preference is that some of the uncertainty in quantum physics is intrinsic to the processes involved, which are inherently probabilistic. It is not always actually "error."
 
  • #21
Vanadium 50 said:
In this paper, the first error is the combined statistical and systematic errors and the second is the modeling error.

There is almost certainly some correlation between modeling errors and the systematic errors. It is also almost certainly the case that if the degree of this were known precisely, they would have corrected for it.
The correlation between modeling errors and systematic errors probably isn't so great that it is a big problem to use the combined error, although the correlation may indeed be hard to quantify.

The reason to combine the uncertainties anyway is to get a usable result whose overall significance you can evaluate. You can combine them explicitly and get a concrete statement about the significance, or you can combine them intuitively, which leaves you with a mushier sense of the result's significance that is systematically biased by how people intuitively combine uncertainties without doing the math (they tend to overestimate the combined uncertainty, especially when the components are similar in magnitude).

I would suggest that the biggest reason not to combine them in a paper isn't that it can't be done in a scientifically defensible way. Instead, one of the important reasons to break out different kinds of uncertainties is to focus attention on what is most important to change in the experiment to get more precise results.

If your modeling uncertainty is big, the message is to improve the model.

If your statistical uncertainty is big, the message is to run more iterations of the experiment.

If the systematic uncertainty is big, the message is to look at the chart in the paper showing the different line items that contributed to the systematic uncertainty, and then to consider, for each one, how easy it would be to improve that line item and how much of a difference it would make if you did.

Breaking out sources of uncertainty has more impact on fellow HEP scientists than the usual paragraph or two of the conclusion to an experimental HEP paper talking about what direction the authors suggest for further research and to improve the experiment, because HEP physicists are numbers people and not words people.

Vanadium 50 said:
You can't really blame the authors for "fudging" when they were not the ones to have combined them.
Certainly. Any "blame" for combining the (statistical + systematic) uncertainty and the modeling uncertainty in quadrature is mine in this case.
 
  • #22
Vanadium 50 said:
Except...

In HEP "error" and "uncertainty" are treated as synonyms. [...]
Sad.

Vanadium 50 said:
The authors could have combined the two errors in quadrature themselves, like they did with statistical and systematic to form the first. They chose not to. They surely had a reason for their decision.
Indeed, which is why I wanted to explore this aspect a bit deeper. Presumably the nature of their model uncertainty is too far from an ordinary Gaussian-type standard deviation for combination-by-quadrature to be valid(?)
 
  • #23
If you have Model X return 10 quatloos, and Model Y return 11, what does the Gaussian look like? Is 10.5 even more likely than 11.5?

Depending on the models and where they sample the space of possibilities (extremes? randomly? something else?) I can see arguments for lots of things - I might support 10.5 +/- 0.3 and I might support 11 +/- 2.

That's why I think it's a bad idea to apply some statistical procedure to a paper when the authors themselves surely thought about it and rejected it.

It would be nice if we lived in a world where every measurement had nice Gaussian uncertainties. That's not the world we live in.
 
  • #24
Vanadium 50 said:
The authors could have combined the two errors in quadrature themselves, like they did with statistical and systematic to form the first. They chose not to. They surely had a reason for their decision. We should be careful in substituting our judgment for theirs.
Apart from the question of how to combine them: it's likely that the simulation uncertainties can be reduced with future work, leading to an updated measurement with improved precision. Quoting them separately tells us how much room for improvement they have.
 
