Dang I read the first paragraph of the article and immediately went searching for the real papers since I didn't expect any media outlet to include them at the bottom, but here they are for anyone who made the same mistake I did! https://arxiv.org/abs/1709.05024 and https://arxiv.org/abs/1709.10378
Not a cosmologist, but here's my go at the de Graaff paper. (Let's get this out of the way: the title is click-bait, and the paper/researchers make no such claims as to anything near 50%. New Scientist is trolling for hits with the word "half", or the journalist is fundamentally misunderstanding the work.) In de Graaff et al., they claim 30% of "90% of the missing baryonic matter [that composes the ~25% of our total universe observable from within our light cone]" has been found in the CMB, structured as filaments between galaxies. They claim there's effectively a planar network layered on top of Minkowski space, composed of this baryonic matter. The temperature sits in a "Goldilocks" midrange no one had previously analyzed (ranging from 10^5 to 10^7 K). This wasn't previously found because people were searching "only the lower and higher temperature end of the warm-hot baryons, leaving the majority of the baryons still unobserved(9)". [See "Warm-hot baryons comprise 5-10 percent of filaments in the cosmic web", Nature, Eckert et al., for more about baryons of this composition.]
Additionally, these baryons have 10x the density of what we observe (so this could potentially be evidence for the first stable baryonic matter composed of second-generation quarks, or more likely the binding energies are different from our standard uud/udd nucleon quarks) permeating the universe, and where the roads in the network meet ("dark matter haloes"), you have embedded galaxies and galaxy clusters. They continue with their analytic methods on the CMASS data, and claim that within this framework 30% of the total baryonic content (which, again, all analytical methods put at no more than ~25%) is composed of this form of matter. I skimmed their methods and they seemed to at least logically hold -- they are using the appropriate data (SDSS DR12) and didn't cherry-pick their galaxy pairs (so, no p-hacking here!).
From what I can tell, this basically also proves that large-scale plasma exists between all bodies at any scale (planetary to systems to galaxies to clusters), and that universe-sized Birkeland currents exist; which is something cosmologists have been trying to prove/disprove for a while.
So, not only did they find some of the missing matter, they found some of the missing energy, too. This does, however, screw some of the more classical cosmologists.
> Not sure who you're referring to. These results are completely consistent with the standard model of cosmology.
Not if what is being detected is simply mass associated with filamentary currents of energy (with attendant magnetic fields) rather than particular particles.
They're modelling filaments as cylindrical tubes of hot electrons connecting pairs of galaxies. I don't know what you mean by "mass associated with filamentary currents of energy", but while the electrons are hot (~million Kelvin), they're non-relativistic and their kinetic energy is negligible.
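The non-relativistic claim is easy to sanity-check: compare the thermal energy k_B·T to the electron rest-mass energy. A rough back-of-the-envelope sketch using standard constants, not numbers from the papers:

```python
# Ratio of thermal energy to electron rest-mass energy at ~1e6 K.
# If k_B*T << m_e*c^2, the electrons are safely non-relativistic.
K_B = 8.617e-5   # Boltzmann constant in eV/K
M_E_C2 = 511e3   # electron rest-mass energy in eV

def thermal_to_rest_ratio(temperature_k):
    """Return k_B*T / (m_e*c^2) for a gas at the given temperature."""
    return K_B * temperature_k / M_E_C2

ratio = thermal_to_rest_ratio(1e6)
print(f"k_B T / m_e c^2 ~ {ratio:.1e}")  # ~ 1.7e-4, thoroughly non-relativistic
```

At a million Kelvin the thermal energy is a few thousandths of a percent of the rest-mass energy, which is why the kinetic contribution to the mass budget is negligible.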
> Additionally, these baryons have 10x the density of what we observe (so this could potentially be evidence for the first stable baryonic matter composed of second generation quarks, or more likely the binding energies are different from our standard uud/udd nucleon quarks)
No, these results are not evidence for exotic matter. They measured the over-density of the filaments relative to the average background density of the universe.
> Let's get this out of the way, the title is click-bait and the paper/researchers makes no such claims as to anything near 50%. New Scientist is trolling for hits with the word "half" or the journalist is fundamentally misunderstanding the work.
At first I agreed with you, but I've dug into the papers and re-read the New Scientist article to make sure, and it seems the story is a bit more complicated than it at first appears (caveat: I'm also not a cosmologist). They should have clarified that this research does not involve dark matter, though.
Part of the confusion stems from losing context and awareness of implicit limits to the claims when translating exact cosmological terms to popular science. "Baryonic matter" means nothing to the average person, and calling it "observable matter" could also be confusing to lay-people, since this matter isn't actually directly observable:
> “There’s no sweet spot – no sweet instrument that we’ve invented yet that can directly observe this gas,” says Richard Ellis at University College London. “It’s been purely speculation until now.”
However, the researchers do seem to claim they solved the mystery of the missing observable matter by detecting gas filaments:
> “The missing baryon problem is solved,” says Hideki Tanimura at the Institute of Space Astrophysics in Orsay, France, leader of one of the groups. The other team was led by Anna de Graaff at the University of Edinburgh, UK.
Whether that claim should be translated as "finding the missing 50% of observable matter" depends on whether those baryons are in fact 50% of the missing observable matter. To make things more confusing for non-cosmologists here, the two papers tell slightly different stories, because they don't do the exact same thing. De Graaff's paper mentions a much lower number than 50%, as you stated, but the introduction of Tanimura's paper mentions:
> At high redshift (z ≳ 2), most of the expected baryons are found in the Lyα absorption forest: the diffuse, photo-ionized intergalactic medium (IGM) with a temperature of 10⁴ – 10⁵ K (e.g., Weinberg et al. 1997; Rauch et al. 1997). However, at redshifts z ≲ 2, the observed baryons in stars, the cold interstellar medium, residual Lyα forest gas, OVI and BLA absorbers, and hot gas in clusters of galaxies account for only ∼50% of the expected baryons – the remainder has yet to be identified (e.g., Fukugita & Peebles 2004; Nicastro et al. 2008; Shull et al. 2012). Hydrodynamical simulations suggest that 40–50% of baryons could be in the form of shock-heated gas in a cosmic web between clusters of galaxies.
It looks like this is where that half in the New Scientist title comes from: 40-50% of the missing baryons should be in these gas filaments. This might appear to contradict de Graaff et al., but the latter mention Tanimura et al. in the conclusions of their paper:
> Similar conclusions to this work have been independently drawn by Tanimura et al. (...) who announced their analysis (...) at the same time as this publication. (my summary: We used different, independent but complementary galaxy pair catalogues). Despite the differences, we achieved similar results in terms of the amplitudes and statistical significances of the filament signal. (...) The fact that two independent studies using two different catalogues achieve similar conclusions provides strong evidence for the detection of gas filaments.
So given that these two groups seem to be in agreement with each other's conclusions, and that Tanimura himself was quoted (so presumably consulted for the article), it seems that the main clickbait aspect of the New Scientist article is that they did not clarify that no dark matter is involved in this story.
And the combination of your observation and DiabloD3's is an interesting one. The papers on Birkeland currents in the context of a galaxy-spanning plasma make for some fun conjecturing. A coulomb of charge moving in a million-light-year-long filament of plasma is a lot of energy.
> The approximate distribution in the Universe is 5% regular matter, 25% Dark Matter, and 70% Dark Energy. Half of that 5% was missing, and now found.
> Regular matter makes stars and visible galaxies, so it is "bright". Dark Matter is so named because it does not make things we can see with telescopes directly - it is "dark". We can see the effects it makes with gravity, such as the rotation curves of galaxies, and gravitational lensing. So we know something is there, just not what it is made of. Dark Energy was invented to solve a couple of mysteries. One is the geometrical "flatness" of the Universe, and the other is the apparent acceleration of the Universe's expansion. Like Dark Matter, we don't yet know what it is. But something is causing the flatness and acceleration, so we gave it a name as a place-holder for theories.
> A similar situation happened a century ago, with the precession (shift) of Mercury's orbit over time. We thought it was caused by a planet inside of Mercury's orbit that we hadn't found yet. It was named Vulcan, after the Roman god of fire (not Spock's home planet). It turns out relativity was the right answer - the Sun's gravity bends space near it, and causes the orbit to shift. Vulcan was just "a name we gave to whatever causes the observed effect".
> Dark Matter and Dark Energy could turn out to be something entirely different from types of matter and energy, but in the meantime it gives them names we can attach theories about them to.
"...it gives them names we can attach theories about them to." Sometimes I think this gets lost in conversations about scientific topics. It's easy to be flippant about what we know in relation to scientists of the past, and miss out on an appropriate sense of awe about how much in the universe there is to discover.
I have real trouble figuring out how they could say 2.5% was missing when 95% of everything in the universe is just a theory, according to those figures.
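For what it's worth, the 2.5% figure is just arithmetic on the budget quoted above -- a minimal sketch, assuming the rounded 5/25/70 split:

```python
# Rounded energy-budget fractions from the comment above.
ordinary_matter = 0.05   # baryons: 5% of the total
dark_matter = 0.25
dark_energy = 0.70

# "Half of that 5% was missing": the missing baryons as a share of everything.
missing_baryons = 0.5 * ordinary_matter
print(missing_baryons)  # 0.025, i.e. 2.5% of the total budget

# The three fractions account for the whole budget (~1.0).
print(ordinary_matter + dark_matter + dark_energy)
```

So "2.5% of the universe was missing" really means "half of the 5% that is ordinary matter"; the dark 95% is a separate question.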
> At the largest size, Google image search tells me that it looks exactly like foam rubber. Foam rubber is created by combining a chemical agent that glomps together through the wonder of polymerization with another chemical agent that delivers lots of gas bubbles to make space between the polymers. Universes are created by rapidly expanding a superdense plasma that glomps together through the wonders of gravity, while lots of expanding vacuum makes space between the galaxy clusters.
OK, I should have pointed to observations which demonstrate that what we can see of the universe does indeed look like foam, at the level of galaxy clusters and superclusters. I'll do it later, if anyone expresses interest.
Is it due to gravity of the filament molecules pulling those molecules together into larger 'strings' or 'planes' which leave 'holes'? Or some other mechanism?
Imagine you have a meter-stick. It has uniform graduation markings to show you centimeters and millimeters.
Now imagine that it is getting longer. No matter how hard you look at it, the stick is still one meter long. That's what the markings on it say, anyway. It's actually getting longer at the same rate as all the other meter-sticks. That's the expansion of the universe.
Now imagine you balanced two marbles on the stick. They bow the stick ever so slightly, and roll together at the 50 cm mark. They are not expanding--or at least not expanding like the meter-stick expands. But they will continue to roll together. They are detectable masses acting under the effect of gravity. If you look closely, they appear to be shrinking. But it's actually the meter-stick expanding.
When you expand this out to galaxy scale, two galaxies that are not moving with respect to each other are getting further apart. The mass in the galaxies has enough intra-galactic gravitational attraction to keep them from flying apart along with the vacuum, but not enough to keep their neighbor galaxies close. The inter-galactic gravitation doesn't pull hard enough to overcome the expansion. So they surf away from each other, becoming further apart.
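That "surfing apart" can be put in rough numbers with Hubble's law, v = H0·d. A sketch; the H0 value is an illustrative approximation, and the linear law only holds at modest distances:

```python
# Hubble's law: apparent recession speed grows linearly with distance, v = H0 * d.
H0 = 70.0  # Hubble constant in km/s per megaparsec (approximate)

def recession_speed_km_s(distance_mpc):
    """Apparent recession speed of a comoving galaxy at the given distance."""
    return H0 * distance_mpc

# Two galaxies at rest in comoving coordinates, 100 Mpc apart, still
# recede from each other at ~7000 km/s as space expands between them.
print(recession_speed_km_s(100))  # 7000.0
```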
At least, that's how I understand it. I could be wrong.
> At least, that's how I understand it. I could be wrong.
General Relativity deals in spacetime, but it is often convenient to slice spacetime into spacelike volumes in which every point in the volume ("the spacelike hypersurface") is at the same timelike coordinate. One important consideration is that no timelike axis is any more preferred by nature than any other, and one can always find an observer who disagrees with any choice of splitting one happens to make. In particular, two inertial observers related by a Lorentz boost generally will not agree on what event (e.g. two bits of matter colliding) is at what time coordinate, and thus this type of 3+1 slicing will result in different spacelike hypersurfaces for each such observer.
The cosmological frame picks out a frame that a family of special observers can agree on: these observers agree that at the largest scales, the universe is homogeneous and isotropic, that they themselves are moving inertially, and can agree on a function relating a time coordinate to the average density of matter in a universe-sized spacelike hypersurface at that time coordinate. Each such observer is assigned a spacelike location which persists into the infinite future: the observers are all stationary at a constant set of three spatial coordinates. The centres of mass of practically all galaxy clusters are essentially this type of observer, so those remain at the same spatial coordinates at all times too.
We then take this set of coordinates and apply it to a universe described by a Robertson-Walker metric. Our universe approximately obeys the Robertson-Walker (R-W) metric outside of galaxy clusters; more on that in a moment. The R-W metric relies on a 3+1 slicing of a homogeneous and isotropic universe, and uses two coefficients r and k to determine respectively the radius and shape of each spacelike 3-hypersurface. If we knock out a spacelike dimension, we can think of an R-W universe as a stack of infinitesimally thin plates where each plate at time t is related to its infinitesimally earlier predecessor and infinitesimally later successor plate by a function giving its radius r. (It is perfectly reasonable to rotate the axes so that you stack the plates vertically from the floor upwards, where we substitute a height coordinate for the timelike coordinate).
In an expanding-with-a-cosmological-constant R-W universe, r shrinks smoothly towards the past and grows smoothly towards the future. The coefficient k determines whether the 2-d planes give globally Euclidean geometry (k=0), globally hyperbolic geometry or globally spherical geometry. Finally, if r is vastly larger than the Hubble radius, there may be no practical way of recovering k != 0 observationally; in other words, the local geometry of an R-W slice of a Hubble volume may be flat even if the global geometry is not.
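For reference, this is the line element being described, in the usual textbook notation where the scale factor is written a(t) (what's called r in the plate picture above), and where the r below is instead a comoving radial coordinate:

```latex
ds^2 = -c^2\,dt^2 + a(t)^2\left[\frac{dr^2}{1-kr^2} + r^2\left(d\theta^2 + \sin^2\theta\,d\phi^2\right)\right]
```

The curvature constant k = 0, +1, -1 gives the flat, spherical, and hyperbolic spatial slices respectively, and all of the expansion lives in the single function a(t) multiplying the spatial part.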
On this R-W universe we apply the coordinate system above, but remember that our observers stay at fixed spacelike coordinates. We need to notice here that our coordinates do not determine distances by light-travel-time. That's fine: we can use arbitrary measures of distance in General Relativity, and can practically always find a consistent and useful transformation from a description of physics in one system of coordinates to another. We just have to be careful either to work only in generally covariant formulations, or to recognize that some systems of coordinates entice one into the use of fictitious forces that disappear in other systems of coordinates. In this case, an observer at the centre of mass of our galaxy, using spherical coordinates centred on herself, would naturally say that distant galaxy clusters on this spacelike hypersurface now will be at a larger radial coordinate in future spacelike hypersurfaces; in our cosmological coordinates, however, she and the distant galaxies are all working in coordinates comoving with r, so their spatial distance is constant at all times.
The metric expansion of space is just that: r increases.
> The inter-galactic gravitation doesn't pull hard enough to overcome the expansion
This gets trickier. In the Friedmann-Lemaître-Robertson-Walker model we treat the sources of the matter tensor as a set of perfect fluids with some pressure and density, and the fluids dilute away with the expanding Robertson-Walker universe. We ignore the local overdensities of matter ("galaxies" and "people" and so on), and at the largest scales that's reasonable.
However inside galactic clusters and galaxies, in the standard model there is no expansion at all; gravity doesn't work against it, it just isn't there in the first place. From a technical perspective what we do is treat galaxies as approximate sources of a Schwarzschild metric up to some boundary enclosing the galaxy, and then we embed that into Robertson-Walker space. This is certainly not faultless, but it matches observation extremely well. What does not match observation extremely well is naive quintessence models where the metric expansion works within galaxies and is simply checked by the gravitational interactions of the matter within them, but acts as a cosmological constant outside galaxy clusters.
Likewise, comoving galaxy clusters are just drifting along inertially into the future; even under a different system of coordinates there are no extra forces working to separate them -- throwing away the 3+1 slicing with its preferred timelike axis, in the spacetime view galaxy clusters' timelike worldlines converge near the hot dense phase of the universe.
If you took your second paragraph's metre stick[1] and put it in space as a comoving observer well outside galaxy clusters, it would still be a metre long in the far future, whether measured locally or with a really really really good telescope. "Rulers" aren't expanding in the metric expansion; instead the cosmological coordinates on each spacelike hypervolume are adjusted, and in general coordinates while useful are not themselves physical while an actual metre stick is. Physical objects themselves do not change when we change coordinates; distant galaxies can have no idea that you're putting cosmological or spherical or cartesian or conformal coordinates on them, or calculating their movements against those coordinates.
I'm afraid I don't understand the point in your second paragraph.
> two galaxies that are not moving with respect to each other
They aren't moving against comoving coordinates. But if you choose other coordinates (e.g. spherical coordinates with the origin at the centre of one of the galaxies) they can move against those. We can do various transformations to convert the descriptions of the motions of these galaxies (and any fictitious forces relating to coordinate motion, and other coordinate-dependent quantities) in one set of coordinates into another set of coordinates. The trick is finding a set of coordinates in which one can extract some intuitions about observables like the cosmological redshift, the dark night sky (cf. Olbers's paradox), or the details of the cosmic microwave background.
- --
[1] in principle, and with some care, you could line up a hundred 1 cm objects (e.g. ball bearings) and they would not separate from each other with the expansion of the universe (one has to be careful about other things that may cause them to move relative to one another, such as radionuclide decay within the objects, interactions with cosmic rays or other particles, and so on; but in standard General Relativity they should continue on their parallel timelike worldlines indefinitely).
> What does not match observation extremely well is naive quintessence models where the metric expansion works within galaxies and is simply checked by the gravitational interactions of the matter within them
Or write some additional details about it. I know that the "popular" explanations claim that "everything" expands, and I understood from your reply that what we see can technically be modelled in equations as if nothing inside whole galaxies expands, but what is the actual proof that there are no expansion forces inside galaxies at all? Thanks in advance.
Starting with the "gimme hardcore!" end, I was thinking of how to construct an argument using vierbeins and then how to boil it down to something accessible (or at least representable on LaTeX-free HN), and then remembered that it had already been done by Cooperstock et al.: http://xxx.lanl.gov/abs/astro-ph/9803097 The tl;dr is that if the cosmological expansion induces strain on matter, the strain is too small to be measurable.
Retreating from the hardest of answers, Peter Coles has an old moderate-detail article on this at https://telescoper.wordpress.com/2011/08/19/is-space-expandi... and he refers to both Peacock's and Harrison's textbooks which give greater detail (I recommend the latter if you can get your hands on it at a library).
His approach to the question you're asking ("roughly, does the solar system expand with the universe?") is how I'd go about it too, following on from the comment you replied to. My central point would be that in General Relativity we use exact solutions of the Einstein Field Equations because they're well understood, not because they're accurate. Natural systems don't source e.g. the exact Schwarzschild spacetime for several reasons, including lack of perfect spherical symmetry, lack of perfect vacuum to infinity outside the source, and nonzero angular momentum. Yet we get good approximate results when we use Schwarzschild to model the Earth or the Sun or the Milky Way, and usually the bad results are fixable with linear corrections. But the real picture is that each of these bodies sources an unknown metric that is slightly different from Schwarzschild, and additionally one has to stitch together two metrics sourced by two bodies each sourcing (different, unknown) Schwarzschild-ish metrics into an (unknown) expanding background.
Numerical relativity has opened up the study of approximate solutions which give better results for real physical systems than the toolkit of known exact solutions (plus linear in v/c corrections), so one could argue that the central research programme in classical General Relativity is the study of the mechanisms that generate the (true) metric.
All that said, we can be much more confident (because of analyses under e.g. the parameterized post-Newtonian formalism and experimental data from gravitational probes of many varieties) about the fit of exact solutions to the bodies in our solar system than the fit of any metric to the cosmos-in-the-large. For the bodies in hydrostatic equilibrium that means Schwarzschild to at least the first order in v/c [in linearized gravity]. If you accept that Schwarzschild is an excellent substitute for the unknown real metric, then you must have a very close fit to a static, asymptotically flat spacetime. Around that you can establish a boundary condition. Outside the boundary is the expanding spacetime; inside is asymptotically flat (i.e., not expanding). Coles discusses some of this (as does Hossenfelder at http://backreaction.blogspot.com/2017/08/you-dont-expand-jus... ).
Alternatively, you can argue that the real metric sourced by e.g. the Earth (or yourself at a distance where you subtend a very small angle on an observer's sky, or one of your molecules) is less close to Schwarzschild. In that case, Coles takes us back to Cooperstock via Ned Wright's Cosmology FAQ: you will get bad results with poor choices of coordinates (so use e.g. Fermi coordinates because then you know exactly where you have valid and comparable results, and you are forced to consider whether and where geodesics drift apart[1]).
Finally, if you were asking about "naive quintessence models" and their problems, ch. 3.2 in Elise Jennings's _Simulations of Dark Energy Cosmologies_ (Springer, 2012) is a decent overview (it contains material from her Ph.D. thesis; she is now at KICP/FNAL).
> "actual proof"
This would make an excellent postdoc research project !
Linked with [1] above, on proving the conjecture, the soft underbelly is the behaviour of geodesics: in an expanding universe, comoving geodesics drift apart. The geodesics in Schwarzschild spacetime do not drift apart. Geodesics in the solar system do not drift apart, and haven't over the course of a few billion years. Geodesics at cosmological scales clearly do drift very noticeably apart over the same period of time. Worse, evidence suggests that the Hubble constant isn't constant in time, so where are the matching perturbations in the orbits of various bodies in the solar system? However, I'm not sure this is the right path to a definitive answer, since one is likely to be able to claim that your atoms in general are not following geodesics; their free-fall is interrupted by the surface of the Earth.
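For completeness, the drifting-apart of nearby geodesics mentioned above is governed by the standard geodesic deviation equation:

```latex
\frac{D^2 \xi^{\mu}}{d\tau^2} = -R^{\mu}{}_{\alpha\nu\beta}\, u^{\alpha}\, \xi^{\nu}\, u^{\beta}
```

where ξ is the separation vector between neighbouring geodesics, u is the four-velocity along them, and R is the Riemann curvature tensor; in an expanding spacetime the curvature term drives ξ to grow for comoving worldlines, while in (static) Schwarzschild spacetime bound orbits show no such secular drift.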
Many thanks, your answer covers everything I wanted to ask you. You recognized that there were two directions in which I wanted to ask you (the second being why you mentioned the "naive quintessence models") and you covered both. Thanks for all the links.
"Both teams took advantage of a phenomenon called the Sunyaev-Zel’dovich effect that occurs when light left over from the big bang passes through hot gas."
That just seems unreal to me.
Edit: To clarify, I'm not accusing it of being made up. I just think it's amazing.
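For the curious, the thermal Sunyaev-Zel'dovich effect mentioned in the quote is conventionally quantified by the dimensionless Compton-y parameter, an integral along the line of sight through the hot electron gas:

```latex
y = \int \frac{k_B T_e}{m_e c^2}\, n_e\, \sigma_T\, dl
```

where T_e and n_e are the electron temperature and number density and σ_T is the Thomson cross-section. CMB photons that scatter off these hot electrons gain a little energy, distorting the CMB spectrum in proportion to y, which is how both teams traced the gas between galaxy pairs.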
They can tell that the big bang is the light source because of Hubble's Law[0]. In short, the universe itself (so space itself) expands[1], and it does so fast enough to create a Doppler effect in light, causing redshift[2].
When we look further out into space, we effectively look "back" into time because of the limit of the speed of light. The older the light, the more it has redshifted because the more space has expanded while it was travelling. So this redshift increases very predictably the further we look out into space and back into time.
As a result, when observing space we can say when light is left-over from the big bang, because we can tell the age of the light by how much its spectral lines have redshifted.
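The stretching described above comes straight from the scale factor: 1 + z = a_now / a_emitted. A minimal sketch (the CMB redshift of roughly 1100 is the standard textbook value, not something taken from these papers):

```python
# Cosmological redshift: wavelengths stretch with the scale factor a,
# so 1 + z = a_now / a_emitted (with a_now normalized to 1).
def redshift(a_emitted):
    """Redshift of light emitted when the universe had scale factor a_emitted."""
    return 1.0 / a_emitted - 1.0

# CMB photons were emitted when the universe was ~1/1100 of its present size,
# which is why 'light left over from the big bang' now arrives as microwaves.
z_cmb = redshift(1.0 / 1100.0)
print(f"z ~ {z_cmb:.0f}")  # z ~ 1099
```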
Just as a heads up, if you ever read stuff about the 'Cosmic Microwave Background' or 'Cosmic Background Radiation', that's the 'light left over from the big bang' bit.
Yeah, totally. The other mindblowing thing is that since photons move at the speed of light, time for them pretty much stands still.
Thus, if a photon was created at the moment of the big bang and happens to "bump" into something else today then from the standpoint of the photon, it only "lived" for an instant.
(based on my level of physics understanding, so please do correct me if I'm wrong, thanks)
As a former physics student, I see your point but disagree.
Apologies for the wall of text about just this one choice of wording, but discussing why science communication is a very difficult trade-off takes a bit of space.
One reason to stick to "baryons" is that it is the term used by the researchers in their papers. So it is more true to the research that is being summarised.
The start of the article (unlike its title...) mentions that this research is about normal matter quite clearly, so this should give enough context to avoid confusion:
> This is the first detection of the roughly half of the normal matter in our universe – protons, neutrons and electrons – unaccounted for by previous observations of stars, galaxies and other bright objects in space. You have probably heard about the hunt for dark matter, a mysterious substance thought to permeate the universe, the effects of which we can see through its gravitational pull. But our models of the universe also say there should be about twice as much ordinary matter out there, compared with what we have observed so far.
Baryons are subatomic particles[0][1], not atoms, which is what most people think of when discussing "matter". If we say "normal matter", some people will think we're talking about chunks of dirt floating between galaxies. I'm not joking, have you ever been to a public physics lecture? And mistakes like this are not the audience's fault - it's the expert's fault for assuming non-experts know what they know.
For people with the right background (chemists, physicists, astronomers, etc) it is clear this is "just normal matter". For everyone else - so most people - "normal matter" means matter in earth-typical circumstances of pressure and temperature, in a solid/liquid/vapour phase.
The papers describe intergalactic plasma filaments with temperatures between 10⁵ and 10⁷ K. That is pretty exotic by normal-people standards.
And yes, fire is a plasma, but for most people fire is just very hot gas, and the article even builds on that:
> Because the gas is so tenuous and not quite hot enough for X-ray telescopes to pick up, nobody had been able to see it before.
So the other reason to stick to "baryons" is that this is exotic matter from the point of view of non-experts. When describing research you have to start from their model of the physical world, and build your way back to the research.
I think the choice of wording is appropriate here. The other paragraphs make it clear this is about normal matter, while this sentence makes it clear it's not normal matter in Earth-like circumstances, which is how the average human thinks about it if not corrected.
You're very welcome, and don't be hard on yourself!
It is easy to underestimate how difficult it is to give good explanations to non-experts, regardless of the topic at hand. I mean, just look at how much I wrote about the choice of one word.
When writing for non-experts, an expert must imagine what it is like to not know the right answer. Our brains seem to be terrible at that: most people are unable to "not see" the cow in that famous gestalt picture once spotted[0], and I think the same thing applies to many kinds of thinking.
The only way out for an expert seems to be a lot of exposure to "dumb questions" (there is no such thing, of course) from non-experts, to figure out what logic lies behind the latter's wrong conclusions. It doesn't help that there are many more ways to come to perfectly logical but wrong conclusions based on wrong premises than there are ways to come to the right conclusion with the right premises.
And even once you are aware of these mismatches, you still have to explain yourself in such a way that there is a path from the incorrect interpretation to the more correct interpretation, without getting lost in trying to explain everything.
So no wonder that even professional educators and science writers, who are supposed to be skilled at this, tend to do a poor job!
Thanks a lot! As a non-expert trying to give baryon plasma clouds a place in my imagination: Since they're in plasma form, I imagine them as a very hot gassy cloud, though the gas is so thin it would probably not heat up any spacecraft flying through? From the Wikipedia article I gather it is absent of leptons, does that mean that if one would capture a few grams of it, compress it and cool it down, it would form hydrogen?
Funnily enough, cosmology's use of "baryon" predates the Standard model (it even predates Politzer, Gross & Wilczek 1973) and leans heavily on the Greek root "barus", meaning heavy i.e. having a significant rest mass (and thus implying "nonrelativistic", more below). Since then we've developed the Standard Model and its baryons have many properties that are essentially irrelevant at cosmological scales after reionization.
More technically (and all to the first order), in \Lambda-CDM the first Friedmann equation can be written as H(a) \equiv H_0 {\sqrt {(\Omega_c + \Omega_b) a^{-3} + (\Omega_{rad+hdm}) a^{-4} + ... }}, the \Omega_{...} being density parameters. Although in this form "c" stands for Cold Dark Matter (CDM) and "b" stands for baryons, the important common attribute here is that the rest mass is high enough that absent highly atypical kicks they move slowly compared to photons[1]. I explicitly added "+hdm" (hot dark matter) to the radiation term, where hdm is mainly relativistic neutrinos, which aren't heavy and thus usually move at speeds closely approaching that of photons.
The relativistic/nonrelativistic split is important in structure formation, roughly because the former do not stick around during gravitational collapse.
One can consider here that \Omega_c is (still) likely to be mainly nonrelativistic heavy uncharged leptons and that if we lump nonrelativistic charged leptons separately into \Omega_{b+c+l} term they would be a tiny contribution to the density.
Relativistic baryons and relativistic charged leptons could be lumped in with \Omega_{rad}; it wouldn't make much difference. Experimental values for \Omega_{rad} are very small (~ 10^-4 vs \Omega_{b+c} ~ 3 x 10^-1, i.e. about 1:3000) and are often dropped for ease of calculation.
Bound states are either relativistic or not, such things would just take the place of relativistic neutrinos and photons from \Omega_{rad}.
Doing any of this is just fiddling at the margins, however. Cold Dark Matter and nuclei simply dwarf any other matter component across most of the universe's history.
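To make that first Friedmann equation concrete, here's a quick numerical sketch. The density parameters are roughly Planck-like values I'm plugging in for illustration, not figures from this thread:

```python
import math

# Assumed, roughly Planck-like density parameters (illustrative only)
H0 = 67.7              # Hubble constant today, km/s/Mpc
Omega_b = 0.049        # baryons
Omega_c = 0.266        # cold dark matter
Omega_rad = 9.0e-5     # radiation + relativistic species (photons, hdm)
Omega_lambda = 1.0 - (Omega_b + Omega_c + Omega_rad)  # flatness assumed

def hubble(a):
    """H(a) = H0 * sqrt((Omega_c + Omega_b) a^-3 + Omega_rad a^-4 + Omega_lambda)."""
    return H0 * math.sqrt((Omega_c + Omega_b) * a**-3
                          + Omega_rad * a**-4
                          + Omega_lambda)

# At a = 1 (today) this recovers H0. At small a the a^-4 radiation term
# overtakes the a^-3 matter term, which is why the relativistic vs.
# nonrelativistic split matters so much in the early universe.
```

You can see the ~1:3000 point from above directly: dropping Omega_rad barely changes hubble(a) anywhere near a = 1.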
--
[1] In the FLRW model the components of the \Omega_{...} densities (i.e., matter in the large) are homogeneous, compressible fluids of constant density at rest under a constant spatially isotropic tension in an "equatorial" slicing of the RW spacetime, so we don't even have relativistic baryons (for example): everything is moving inertially as spontaneous symmetry breaking freezes them out of the hotter denser earlier phase closer to the big bang. This is a reasonable approximation at the largest scales.
> Half the universe’s missing matter has just been finally found
The first paragraph clarifies that this headline means "The missing (regular) matter comprising half the (regular matter in the) Universe has been finally found" rather than "Half of the missing (regular) matter in the Universe has just been found (and where could the other half be?)".
EDIT: My understanding is incorrect. See the replies to the post for a correction.
No. They're saying that 50% of the "dark matter" isn't dark, it's just regular matter that we weren't able to see.
The density of this stuff (filaments of gas stretching between galaxies) is so low that it is really, really, really hard to detect.
"Dark matter" theories posit that the missing matter isn't just regular matter that's hard to see; they posit that this dark matter is a fundamentally different... something.
No, this has nothing to do with dark matter. Dark matter has to be in galaxies because the reason we think it exists at all is because galaxies spin 'too fast', so there is more mass in them and therefore more gravity than can be accounted for. This article is talking about gas between galaxies.
This theory (MOND, when appropriately hacked) describes rotation curves of galaxies extremely well. Unfortunately it doesn't work at larger scales. There are many nice videos on YouTube by Sean Carroll on MOND, which explain its limits, albeit for the fairly educated layperson.
It's possible, but there is a lot of indirect evidence for dark matter; it fits observation well even if we can't prove it. When gravity calculations give an answer that conflicts with observation, sometimes the equation for gravity ends up being wrong (the orbit of Mercury) and sometimes you find missing mass (the discovery of Neptune). I don't know what "more probable" means in this context; the experts are at least as smart as you and me, and they favor some kind of missing mass.
But we know they "spin too fast", by measuring the amount of observable matter and then measuring the "rotation curve" of the galaxy. However, the basic point that this is not half the dark matter, is correct, (based on our current understanding). Even if we can't see ordinary matter, we should be able to see its effect in some way other than gravitational interaction. Dark matter is only seen by its gravitational interaction. It doesn't interact electromagnetically.
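The "spins too fast" argument can be sketched with plain Newtonian orbits (a toy illustration of the rotation-curve logic, not a calculation from this thread):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def circular_speed(enclosed_mass_kg, radius_m):
    """Newtonian circular-orbit speed: v = sqrt(G * M(<r) / r).

    If the observed v(r) stays flat as r grows past the visible disk,
    M(<r) must keep growing ~linearly with r, i.e. there is more mass
    out there than the visible matter accounts for.
    """
    return math.sqrt(G * enclosed_mass_kg / radius_m)

# Sanity check: Earth around the Sun should come out near 29.8 km/s
v_earth = circular_speed(1.989e30, 1.496e11)
```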
Not the same as dark matter. This is normal matter that is just very hard to see because it is laid out in thin tendrils of dense subatomic particles that are stretched between galaxies.
Dark matter is supposed to only have weak interactions with other particles, basically only gravity and not EM or other forces. Which is why we haven't been able to really detect it and can only deduce its existence.
> basically only gravity and not EM or other forces
I don't understand how physicists make any sense of this in any kind of theory. If you had enough dark matter sitting in some spot that could turn into a star, suddenly the claim is any ordinary matter around it would stay near absolute zero no matter how much nuclear fusion was going on at the same spot? How does that work? Or would dark matter just somehow resist even interacting with itself to create friction, etc.?
> If you had enough dark matter sitting in some spot that could turn into a star, suddenly the claim is any ordinary matter around it would stay near absolute zero no matter how much nuclear fusion was going on at the same spot?
You can't. Because becoming a star (initiating nuclear fusion) requires nongravitational interaction between nucleons, which are normal, not dark, matter.
IIUC, if it interacts with itself through other forces, those can't be the forces that act on normal matter (which it would then interact with), including the forces involved in nuclear fusion. They'd have to be dark-matter-exclusive forces.
Interesting, thanks. So what's preventing "dark matter" from simply being, say, lots of photons traveling through intergalactic space then? That seems like the next obvious candidate after ordinary matter.
If dark matter were a lot of photons, then we could detect it. For example, the experiments that detect the cosmic microwave background radiation https://en.wikipedia.org/wiki/Cosmic_microwave_background can pick up the very faint light left over from the radiation produced soon after the big bang. We have very good estimates of how many photons are out there.
I don't understand, how would you detect photons that are going in a direction orthogonal to you in intergalactic space? They have to be coming toward you for you to detect them, right?
An implicit assumption is that the universe is isotropic and homogeneous at that scale. Meaning, it looks the same from any point and in any direction.
So you might not detect photons in one particular direction, but we should assume that they should be going in all directions. Otherwise that theory is just moving the goal from "where's this energy" to "why is it pointing towards that".
The CMB is consistent with every direction being the same, so you would have to say "photons are going everywhere in this range, but towards that in this segment of the spectrum".
But isn't it a hell of a lot less of a stretch to imagine there are more photons in other parts of the universe than here, than to imagine there is some sort of mysterious dark matter permeating all of the universe? If we're going to be unable to interact with whatever dark matter is to test any hypothesis, we might as well propose the hypothesis that requires the least radical changes to our fundamental theories of physics, no?
Assume we have an equation like `Energy = 0`, where `Energy = Mass + Photons`. It works great for almost everything; we can get the curvature of space from `f(Energy)`, where `f(Photons) = 0`.
However there's this one experiment that says the answer is `Energy = 42`. `f(x)` is still useful, everything else works.
We can just amend the equation to be: `Energy = Mass + Photons + DarkMatter` where `DarkMatter = -42` and `f(DarkMatter) = 0`.
Or we can keep the E=M+P but change the way f works to be: `f(Photons, X) = Y` where for almost any value `f(Photons, X') = 0` and in this one case `f(Photons, X") = 42`.
To me it's clear that the first approach is less complicated. The first approach raises one question: "why -42?" The second approach raises "why does X exist?" and "why 42?"
======
By the way, you are mistaken: we do interact with Dark Matter; that's how we detected it (Bullet Cluster). Gravity does interact with it.
Well I read [1] [2] that they do have gravity... which is probably a more direct answer to what you're asking. I'm not sure what counts as "mass" for a photon per se, except that I assume there's an effective mass via the energy.
Not exactly. As photons are constantly moving at the speed of light (citing the theory of relativity, which I don't really understand), they have no rest mass. But they have momentum and exert a force on impact (used in solar sails), from which a theoretical notion of mass (kinetic, perhaps?) can be derived. I'd like to know how it was originally derived, too.
edit: if E=mc² with E(photon)=f·h and f=c/λ, then m=h/(λc) in vacuum. ... I hope that's correct. What's interesting, the units check out: [m] = J·s/(m·(m/s)) = J·s²/m² = kg.
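Plugging numbers into that m = h/(λc) relation (a quick check I'm adding, with CODATA values for the constants):

```python
# Effective (not rest) mass of a photon: m = E/c^2 = h*f/c^2 = h/(lambda*c)
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light in vacuum, m/s

def photon_effective_mass(wavelength_m):
    """Mass-equivalent of a photon's energy; photons still have zero rest mass."""
    return h / (wavelength_m * c)

m_green = photon_effective_mass(500e-9)  # a 500 nm photon: roughly 4.4e-36 kg
```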
Normally, people don't discuss mass; rather, things are simply heavy. Most don't even really care why, and those who do often enough conflate the idea with weight.
We don't know much about what it does do, only what it doesn't do. Scientists have deduced its existence by measuring the speed of the expansion of the universe and determining that the mass that we can see based on the rotation of galaxies, etc. can not account for the speed of the expansion. There should be way more mass out there than there is.
We also know that it doesn't seem to react with light, otherwise it would block out the stars from other galaxies as well as the cosmic background radiation. It's difficult to tell how it reacts with ordinary matter because we can't see it, but the assumption is that, since it's got such a large gravitational effect, it must not react much at all, otherwise it would dominate everything visible, since it's about 90% of the matter in the known universe.
It's difficult to make sense of what this means, but all of the other theories that explain the rotation of galaxies and the expansion of the universe don't fit very well either.
Just a clarification: galaxy rotation curves and the observed expansion of the universe are not really related.
The empirically observed expansion of the Universe is the motivating evidence for the existence of Dark Energy. Dark Matter, which is motivated (in part) by the inconsistencies in the rotation curves of galaxies, is a separate topic. Despite sharing similarities in their names and the fact that they make up the two biggest chunks of energy/mass in the Universe, Dark Energy and Dark Matter are not actually related. The word 'dark' is really just implying that we have not yet observed anything to explain these two phenomena.
A star begins as a huge lump of gravitationally bound gas. The gas particles run into each other; that's how the gas has pressure and temperature, even when it is as diffuse as a proto-stellar nebula (which can be on the order of tens to hundreds of atoms per cubic centimeter).

The nebula goes through cycles of compression and radiation: as the gas collapses due to gravitation it heats up, raising the pressure and halting the collapse. But then the heat is radiated away, the gas cools, and it continues to collapse. This process takes a chunk of gas that is on the order of a light-year across and follows it down and down and down as it contracts to the size of a solar system, then to the size of a star.

Over time the globule gets denser and denser, and thus the force of gravity pulling matter down toward the center gets stronger and stronger. Which means that the amount of pressure necessary to counteract that pull gets higher and higher, and thus the temperature of the proto-star as it collapses goes up. Stars are born hot and bright, even before conditions in their core are hot enough to ignite fusion reactions. Ignition is the inevitable result of a mass of gas that is dense enough to undergo collapse and massive enough to produce a body over about 75 times the mass of Jupiter (the lower limit of a red dwarf star). It is the energy from those fusion reactions which provides the temperature and pressure increases that ultimately halt further gravitational collapse.
That's how stars are formed.
Absolutely none of this is applicable to dark matter. Dark matter isn't made up of atoms, it doesn't bump into and bounce off of other particles of matter. It doesn't maintain a temperature and pressure the way a gas does. Dark matter interacts extremely weakly. Neutrinos are an example of dark matter, but a type that we know doesn't make up most of the mass of dark matter in the Universe. A neutrino will pass through a chunk of lead a light year thick and then only have a 50/50 chance of being stopped. Dark matter is even more weakly interacting (with ordinary matter and itself). Particles of dark matter are zooming about in orbits around the center of mass of our galaxy. They zip through almost everything they touch without interacting, the exception being black holes, which they simply fall into like everything else, of course. They are like an enormous parade of ghosts that can only interact with other matter through gravitation, meaning orbital dynamics. Because of this they have no way of condensing into forms of higher density like nebulae, stars, or planets.
Imagine a giant ball of yarn larger than our galaxy, except each thread is a flow, a river of huge numbers of ghostly dark matter particles. Except there are many balls of yarn overlayed on top of one another and many flows going through any one point, since they don't interact with each other. Some are traveling around in orbits around the Milky Way in the same direction as our Solar System, some are going the opposite way, some are in orbits at various inclinations to the galactic plane, some are in circular orbits, some are in eccentric orbits and the part of the galaxy where we are is their highest distance from the galactic center, for others it's the closest distance to the galactic center, and so on.

All of these flows, this ghostly wind of insubstantial but massive dark matter particles, is what together makes up the "dark matter halo" around our galaxy and around typical galaxies. In any given section of the galaxy, say a typical cubic light year, the total amount of dark matter isn't that great, it's vastly lower than the mass of any star that would happen to be there, for example. But the dark matter is everywhere, it flows throughout a region that extends well beyond the edge of the visible galaxy, and it has roughly the same density everywhere, so over huge volumes that mass adds up, and up, and up, and turns out to be, in aggregate, greater than the mass of all of the ordinary matter in our galaxy, by a factor of about 5:1.
How seriously taken are theories that posit the existence of extra spatial dimensions as an explanation for dark matter? E.g. dark matter could be "ordinary" matter but separated from our 3 spatial dimensions by a discrete spatial dimension or dimensions.
Friction, and really any kind of 'contact' between things is just electrons repelling. Some ideas for dark matter don't interact in any way other than gravity, so your idea is pretty mainstream on that, but there's basically no evidence that dark matter exists in some specific form, and in fact every experiment seems to constrain the possible forms it could have more and more. It's still possible it exists, but I would suspect that whatever the explanation is for the 'missing' matter will be pretty revolutionary.
Adding to the other comments: this "regular" matter wasn't really missing. All the models predicted that it was there in the filaments between galaxies, but we had no way to detect it. These experiments are just a confirmation of the current consensus and strengthen trust in our models.
No, this is about baryonic (normal) matter entirely. We basically have different ways of determining what the matter makeup of the Universe is which are independent of direct observations. Those techniques actually indicated a higher proportion of atomic (baryonic) matter in the Universe than had been directly detected so far, while still indicating that the vast majority of the matter/energy balance of the Universe is in the form of dark energy (whose composition we are pretty much in the dark on) and "cold dark matter" (which has been narrowed down to being primarily made up of "weakly interacting massive particles" of a type or types that are currently outside our understanding of particle physics). Specifically, those lines of evidence say that the expected balance of the density of matter/energy on cosmological scales comes out to 69% dark energy, 26% cold dark matter, and the remainder (actually a bit under 5%) being baryonic matter.
However, we've only detected about half of that expected 5% density of matter (in the form of stars & planets, black holes, giant interstellar gas and dust clouds, etc.) This recent discovery shows that much of the undetected baryonic matter is in the form of huge warm gas clouds surrounding galaxies.
I was also confused about this. However the article states that half of the normal matter (not dark matter) thought to exist has been detected. Basically we just found the missing normal matter and have not yet found dark matter.
"You have probably heard about the hunt for dark matter, a mysterious substance thought to permeate the universe, the effects of which we can see through its gravitational pull. But our models of the universe also say there should be about twice as much ordinary matter out there, compared with what we have observed so far."
This could have been worded slightly differently to emphasize that the second sentence is contrasting with the first, but no; this is about regular matter, not Dark Matter.
I think it's actually saying that we discovered missing matter that isn't dark matter. So now all that's remaining unaccounted-for is dark matter (and dark energy)
Can I just check - my understanding is that we can only see 5% of the expected mass in the universe - so we have just found another 5%? meaning dark matter needs to account for 90%?
Plus, how awesomely beautiful is the idea of tendrils of gas connecting the galaxies through space?
That's not it. We can figure out what the mass makeup of the Universe is through various independent means, and those are also independent of direct observation. Those lines of evidence tell us the Universe is a 69:26:5 split between dark energy, dark matter, and ordinary atomic (baryonic) matter. None of that has changed.
However, we've only been able to actually detect much less than that 5% contribution of the atomic matter in the Universe, meaning that we expect there's a lot more ordinary matter out there that we can't see. This research indicates that a big chunk of that missing ordinary matter is in the form of huge warm gas clouds on galactic scales.
> You have probably heard about the hunt for dark matter, a mysterious substance thought to permeate the universe, the effects of which we can see through its gravitational pull. But our models of the universe also say there should be about twice as much ordinary matter out there, compared with what we have observed so far.
My understanding is that, up until now, we could only observe half of the 5% expected mass, e.g. 2.5%, and we can now see the rest of it. The rest of the universe is still made up of 25% dark matter and 70% dark energy.
Like many of us, I'm no astrophysicist, but one of the linked papers [1] notes that we had thus far been able to locate 10% of the baryon content, with this discovery bringing that number up to 30%.
Assuming I'm interpreting this correctly, this means that there's still around 70% of expected baryonic matter unaccounted for.
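A quick back-of-the-envelope on those fractions, taking the thread's rough 69/26/5 split and the 10% → 30% figures at face value (my arithmetic, not the paper's):

```python
# Rough energy/mass budget quoted elsewhere in the thread (approximate)
dark_energy, dark_matter, baryonic = 0.69, 0.26, 0.05

located_before = 0.10 * baryonic   # ~10% of baryons previously located
located_after = 0.30 * baryonic    # ~30% located after this detection
missing_fraction = 1.0 - 0.30      # ~70% of expected baryons still unaccounted for

# As a share of the *total* budget, the newly found gas is only ~1% of everything
new_gas_share = (0.30 - 0.10) * baryonic
```

Which is part of why the "half the universe's missing matter" headline overreaches: even on the most generous reading this moves about 1% of the total budget from "expected but unseen" to "seen".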
Tendrils between galaxies is mind-blowing. Aren't galaxies very very far apart? And there is basically nothing in between? How are subatomic particles - baryons - able to survive the constant expansion? Do the tendrils keep getting thinner? Can they be a medium to transmit intergalactic information someday?
Also, to expand on that (I have no knowledge, just making assumptions), the space between galaxies would need to either create new matter, or the matter would get infinitely spread out over time?
So, if the stretching thing is true, would there be areas with absolutely no matter or anything at all?
I too love the idea of there being filaments of matter connecting distant galaxies. I can only imagine the artistic renderings will make the universe look like a neural network.
Can anyone clarify how these filaments remain “hot”?
> I too love the idea of there being filaments of matter connecting distant galaxies. I can only imagine the artistic renderings will make the universe look like a neural network.
There are lots of "cosmic web" renderings from large-scale cosmological simulations of dark matter and the formation of structure in the universe. A major one, now over a decade old, is the "Millennium Simulation": http://wwwmpa.mpa-garching.mpg.de/galform/millennium/
Whee, the best thing to say about "... just because something is moving relative to another thing does not necessarily mean it is hot ..." is that this is unsettled. ;)
Ott 1963 shows T' = T / {\sqrt{1 - v^2 / c^2}}, or for short T' = \gamma T, where \gamma is the Lorentz factor, T' is the temperature calculated by an observer in a set of coordinates Lorentz boosted with respect to the coordinates in which the system is at rest with a temperature T.
Asymptotically one would expect that as \gamma -> 1, T' -> T, and as \gamma -> \infty, T' -> \infty, assuming one is measuring temperature using photocalorimetry (consider a blackbody radiator...).
However, there are different proposals that depend on how and if one makes the first or second laws of thermodynamics Lorentz covariant, and how one goes about measuring a distant object's temperature. All of this is also in the context of Special Relativity; a general relativist could say that the whole matter comes down to a choice of gauge anyway, and temperatures are only directly comparable for two objects at the same point in spacetime (and even there you get a can of worms, because they would then be "in contact" at the point p on the manifold, and even then there is a choice of gauge and some interesting cut-offs in which they could equilibrate (T_obj1 = T_obj2), dissipate excess stress-energy (T_{p} > T_obj1 + T_obj2), or collapse gravitationally).
Unfortunately the matter is not wholly settled and there is an absence of firm experimental evidence, and moreover the debate is confined to flat spacetime.
However, one has to do quite a bit of twisting to assume that T = T' (Landsberg did this in 1966 & 1967).
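For concreteness, here's the Ott transformation quoted above as a tiny sketch; which convention to use is exactly the unsettled part:

```python
import math

def lorentz_gamma(v_over_c):
    """Lorentz factor gamma = 1 / sqrt(1 - v^2/c^2), with v given as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

def ott_temperature(T_rest, v_over_c):
    """Ott (1963) convention: the boosted observer computes T' = gamma * T.

    Rival conventions exist: Landsberg takes T' = T (invariant), and the older
    Planck-Einstein convention takes T' = T / gamma (moving bodies look cooler).
    """
    return lorentz_gamma(v_over_c) * T_rest
```

So a 10^6 K filament seen from a frame boosted to 0.6c would read 1.25 × 10^6 K under Ott, 10^6 K under Landsberg, and 0.8 × 10^6 K under Planck-Einstein, which is a decent illustration of why "is it hot?" is frame-convention-dependent.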
Temperature, in my simple understanding, is Brownian motion. Whatever that means in the end, we know that vacuum is a pretty good thermal insulator, so the counter-question should be: why should it cool down? Sure, heat can be given off as radiated photons, which is measured coming from the filaments, but you probably don't even begin to understand why, i.e. why the T is hot; I sure don't, so the question seems like stabbing in the dark (pun not intended).
I'm confused about baryons. They say that it is a particle (presumably like an electron or photon or other particle), but then they go on to say that it's a gas, not a particle. Very confusing.
Baryons are types of subatomic particles. They include protons and neutrons as well as other kinds that only exist briefly in an atom smasher. In this case, they are using a generic term because they have no way of knowing whether it's primarily protons/neutrons or what else it could be made of.
A gas in this sense is really a very loose collection of light particles, mainly hydrogen/helium or just free, separate protons and neutrons. As opposed to heavier atoms that are the product of stellar fusion and form more solid matter.
If you think about this in terms of cosmology, the universe was originally a dense, uniform blob of high-energy particles. As it cooled, some of those particles grouped together to form stars and galaxies. However, there are parts in between galaxies that clumped together a bit but didn't really have enough mass to form anything. So it's really just made of tendrils of basic atomic building blocks.
When physicists say "baryonic matter" that's more or less just shorthand for "atomic matter".
Atoms are made up of protons, neutrons, and electrons. Between the 3 types of particles, the mass of the protons and neutrons outweighs that of the electrons by about 3 orders of magnitude; almost all of the mass is in the nuclear particles. And those particles are baryons; they are, in fact, the lightest baryons, each made up of 3 quarks. Only the proton and anti-proton are stable in isolation, but in nuclei protons plus neutrons can be stable together, and anti-protons plus anti-neutrons can also be stable together.
A baryon is a subatomic particle that is composed of three quarks, such as a proton or a neutron. An electron would be categorized as a lepton, rather than a baryon as it is not made of multiple quarks and is rather an elementary particle, for reference.
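The classification in the last couple of comments, sketched as data (just restating the above, nothing new):

```python
# Quark content of the lightest baryons, per the comments above
BARYONS = {
    "proton": ("u", "u", "d"),    # stable in isolation
    "neutron": ("u", "d", "d"),   # stable only when bound in nuclei
}

# Leptons are elementary: not made of quarks at all
LEPTONS = {"electron", "muon", "tau", "neutrino"}

def is_baryon(quark_content):
    """A baryon is any bound state of exactly three quarks."""
    return len(quark_content) == 3
```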
Is it just me, or does anyone else get put off by hyperbolic scientific news coverage? Even if the result is interesting, the framing of one result as representative of all unknown normal matter makes me not want to bother reading the article. It's hard enough splitting out facts from bullshit when you have a decent grounding in a subject; it's even harder when your knowledge is almost nonexistent.
Less weird. A large amount of the missing matter, but not all of it, has been observed where it was theorized. The issue was that we haven't had reliable ways to measure that specific matter in the regions where it is.
We should assume by default that every headline is clickbait now; unfortunately this is the business model of the internet. It's not good for accuracy, and most articles are just a headline and some inane filler words. I hope we'll smarten up and find a better model for news soon. :/
Ok, it seems like a lot of people here think dark matter is a “thing” but it’s not. Dark matter is simply unaccounted for mass in the universe. When we discover mass that we didn’t know about before, the amount of “dark” matter decreases. The word “dark” here merely means “known unknown” because we observe the effect but not the cause. This discovery has unveiled some of the dark matter as baryon particles in a plasma. So some of that dark matter is now known.
If you read vanderZwan's analysis in another comment thread, it seems this discovery did not involve dark matter. It was well below 50%, and the baryon particles were always predicted to be there, they were just finally observed directly.
* 5% "ordinary", baryonic matter <-- all now accounted for
* 27% dark matter
* 68% dark energy
Second, I don't think it's embarrassing. I think the other explanations, if it does _not_ exist, are even more convoluted. At least the dark matter idea is only saying there is stuff that interacts poorly with other matter. That doesn't sound anywhere near as crazy as explanations like "despite all extremely precise evidence, the theory of gravity must still be somehow wrong on larger scales!!" (math is not exactly known for sudden special cases), or having to create all-new specific theories on why something else must be at play on a cosmic scale rather than simply (yes, simply) a dumb weakly interacting particle.