Greetings to all readers and subscribers, and special greetings to the paid subscribers!
Here’s a very early draft of Chapter 11 of my new book “Irrational mechanics: Narrative sketch of a futurist science & a new religion” (2024).
Note that this and the other draft chapters are very concise. At this point I only want to put down the things I want to say, one after another. Later on, when the full draft for early readers is complete, I’ll worry about style and all that.
11 - Zooming in and out (II)
The vacuum of empty space is not a nothing, but a something “filled with fluctuating quantum phenomena” [Wheeler 2000], where things happen.
Back in the 1960s, the development of quantum field theory was accelerated and brought out of a stall by a suggestion put forward by Philip Anderson [Close 2011]. In a plasma, certain properties of photons are modified, and only photons above a certain energy (set by the plasma frequency) can pass through. Photons, which have zero mass in empty space, behave like massive particles in a plasma. Similar effects are found in superconductors.
Anderson noticed analogies with problems that were stalling the development of quantum field theory and suggested that the vacuum of empty space acts like plasma or superconducting matter. Anderson’s suggestion triggered important developments in quantum field theory, including the prediction of the Higgs field and the associated Higgs boson (aka “God Particle” [Lederman 2006]) that was found at CERN in 2012 [Close 2022].
The Higgs field is not directly associated with a force, but it gives mass to massive particles via a mechanism conceptually similar to Anderson’s suggestion. Massive particles get stuck in the Higgs field when they try to accelerate, and therefore acquire mass, which is resistance to acceleration. Think of a fly that moves freely through air but gets stuck in oil.
Science describes matter at different scales. There is a mathematical framework called “renormalization” [Feynman 2006, Laughlin 2006] that often permits deriving effective mathematical models of matter, valid at a certain scale, from known models that are assumed to be valid at a smaller scale.
A coarse effective model is, so to speak, squeezed out of a finer model. In many cases of interest the renormalized effective model turns out to be similar to the underlying fine model, but with different values of certain parameters. The small-scale physics is lost in translation, or more precisely absorbed into the renormalized parameters. It’s worth noting that, in renormalization methods, scale acts as a new dimension along which these parameters change.
To visualize renormalization in action, think of spins (little arrows that point either up or down) arranged in a grid. Each spin interacts with its neighbors and can flip with a probability that depends on the temperature. At low temperatures the spins tend to align with their neighbors, while high temperatures tend to randomize them. This simple model (known as the Ising model) is good enough for some magnetic materials [Wilson 1979].
Now change the scale of the model: group the spins into blocks (for example, blocks of three spins on a side) and treat each block as a single spin that points in the direction of the majority of the spins in the block. Then repeat this process again and again to move toward larger and larger scales. The renormalized models still work, but the temperature must be replaced by a new renormalized temperature that depends on the scale.
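For readers who like to see the idea spelled out, here is a minimal sketch of one block-spin step in Python (the function name, the 3-by-3 block size, and the random starting grid are my illustrative choices, not anything prescribed by the sources cited here):

```python
import numpy as np

def block_spin(spins, b=3):
    # One coarse-graining step: replace each b-by-b block of +1/-1 spins
    # with the majority spin of the block (b is odd, so there are no ties).
    n = (spins.shape[0] // b) * b        # trim the grid so it divides evenly
    trimmed = spins[:n, :n]
    blocks = trimmed.reshape(n // b, b, n // b, b).sum(axis=(1, 3))
    return np.sign(blocks).astype(int)   # sign of the block sum = majority

# Start from a random (high-temperature) 81x81 grid of spins...
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(81, 81))
# ...and coarse-grain twice: 81x81 -> 27x27 -> 9x9.
coarse = block_spin(block_spin(spins))
print(coarse.shape)  # (9, 9)
```

The sketch shows only the coarse-graining; in a full renormalization-group calculation one would also track how the effective temperature changes from one step to the next.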
Seen from the other side (from large scales toward small scales), renormalization allows us to ignore things below a certain cutoff scale. What happens below the cutoff scale may be unknown, but it is of little relevance anyway. Appropriate values of the renormalized parameters that absorb the small-scale physics can be determined experimentally, and the accuracy of calculations increases as the cutoff scale is made smaller.
In quantum field theories, renormalization works around certain infinities that pop up in calculations. Renormalization seems “not mathematically legitimate” [Feynman 2006] (it reminds me of emergency surgery), but it gets the job done and has been put on firmer mathematical ground by Ken Wilson’s work on the renormalization group.
We can start from quantum field theories in empty space (assumed to be known and fundamental) and use renormalization methods to derive effective field theories that describe what happens inside matter.
In a material substrate, besides the particles that form the substrate, there are things called quasiparticles. A quasiparticle is an elementary disturbance of matter that behaves and moves in a particle-like way. For example, elastic vibrations (aka sound) in matter can be described in terms of quasiparticles called phonons. In many ways, phonons behave much like photons.
The similarities between empty space and condensed matter “are legendary in physics,” says Robert Laughlin [Laughlin 2006]. Low-energy quasiparticles in condensed matter such as superconductors and superfluids behave like particles in empty space, and can be described by quantum field theories.
At low energies, the behavior of quasiparticles depends only on some general properties of the microscopic physics of the substrate, not on the details. At higher energies, however, the details become critical.
To visualize a quasiparticle, think of a row of dominos: if you knock the first domino down, a particle-like disturbance propagates along the row in a definite direction and at a definite speed.
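As a toy version of the same picture (my own sketch, not anything from the sources cited here), a few lines of Python make the point:

```python
# Toy domino row: once the first domino is knocked over, the falling
# front advances one site per time step, like a particle moving along
# the row at a fixed speed.
N = 10
fallen = [False] * N
fallen[0] = True                  # knock over the first domino
for t in range(1, N):
    fallen[t] = fallen[t - 1]     # a fallen neighbor topples this one
    front = max(i for i, f in enumerate(fallen) if f)
    print(f"t={t}: falling front at site {front}")
```

The “front” that the loop tracks is the quasiparticle: it has a position and a speed, even though no individual domino goes anywhere.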
Of course, the quasiparticles disappear if the particles that form the matter substrate are removed. No domino effect without dominos! But perhaps there are dominos in empty space? Perhaps elementary particles are really quasiparticles in some kind of substrate that permeates empty space?
This seems to be the case indeed. We can, and we should, “consider ‘empty space’ itself to be a material, whose quasiparticles are our ‘elementary particles’,” says Frank Wilczek [Wilczek 2021].