Anti-Matter Antidote

On my New Physics tab, I have a set of links that document some important facts that are unexplained by modern particle theory. These aren’t obscure curiosities. Rather, they include facts such as “the proton weighs 50 times as much as it should” and “quasars precede galaxy formation.” They are “first order” facts that should cause every particle theorist to blush in shame.

Experimenters at CERN have now magnified the problem.

The reigning theory of the universe holds that it formed from a super-hot gas – so hot that the very fabric of space contained more energy than the existing particles. As the universe cooled, that energy was converted to particles.

One problem with this theory is that energy is converted to matter through a process called “pair production.” You can’t make only one particle – you have to make two.

Specifically, the particle comes with an “anti-particle” with equal mass and opposite charge. The conundrum is that those particles attract, and when they meet, they annihilate each other. The matter and anti-matter convert back to pure energy.

This leads physicists to wonder: how did we end up with a universe composed only of matter? In principle, there should be equal amounts of matter and anti-matter, and every solid object should have been annihilated.

The answer proposed by the theorists was that matter and anti-matter are slightly different – most importantly, in their stability. Anti-matter must disappear through some unknown process that preserves matter.

The experiment reported today attempted to measure differences between the most important building-block of matter – the proton – and its antiparticle. None was detected.

In consequence, everything created by the Big Bang (or the Expansive Cool – take your pick) should have disappeared a long time ago. There should be no gas clouds, no galaxies, no planets, and no life.

If that’s not a reason to be looking for new theories of fundamental physics, then what would be?

Reconciling Scripture and Evolution

Posted in a discussion of our symbiotic relationship with mites, this summarizes my position succinctly:

Biologists who rely upon strictly biochemical processes of evolution will never be able to calculate rates, because the forcing conditions have been lost in prehistory. I found it interesting to ask “why does every civilization develop the concept of a soul”, and eventually concluded that Darwin was half right: life is the co-evolution of spirit with biological form. The addition of spirit influences the choices made by living creatures, and so changes the rates.

Given this, I went back to Genesis and interpreted it as an incarnation (“The SPIRIT of God hovered over the waters” – and then became God for the rest of the book), with the “days” of creation reflecting the evolution of senses and forms that enabled Spirit to populate and explore the material conditions of its survival (photosensitivity, accommodation of hypotonic “waters above”, accommodation of arid conditions on the “land”, accommodation of seasons with sight (resolving specific sources of light), intelligent species in the waters and air, and mammals on earth (along with man)).

Couple this with the trumpets in the Book of Revelation, which pretty clearly parallel the extinction episodes identified by paleontology – including injection of the era of giant insects – and it looks like science and scripture actually support each other.

The only point of significant disagreement is spirit itself. Given my knowledge of the weaknesses of modern theories of cosmology and particle physics, I found myself considering the possibility of structure inside of the recognized “fundamental” particles. It became apparent to me that it wouldn’t be too difficult to bring spiritual experience into particle physics. To my surprise and delight, I became convinced that this reality is constructed so that love inexorably becomes the most powerful spiritual force.

A Massive Mystery

Quantum Mechanics describes particles as vibrations in time and space. The rapidity of the vibration in time (i.e. – when it is) reflects the particle’s energy; the rapidity of the vibration in space (i.e. – where it is) reflects its momentum.
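
In standard notation, these two statements are just the Planck and de Broglie relations, quoted here for reference (conventional symbols, nothing specific to this post):

E = \hbar\omega (energy set by the frequency of the vibration in time)

p = \hbar k = h/\lambda (momentum set by the wavelength of the vibration in space)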

In large-scale reality, such as baseballs and buildings, those vibrations are far too small to influence the results of experiments. In studying these “classical” systems, physicists discovered certain mathematical laws that govern the relationship between momentum (p) and energy (E). Believing that these rules should still hold in the quantum realm, physicists used them as guidelines in building theories of vibration.

In Special Relativity, that relationship is (m is the mass of the particle, in units where the speed of light c = 1):

m² = E² – p²

In the case of electromagnetic waves, we have m = 0. Using a fairly simple mathematical analogy, the equation above becomes a wave equation for the electromagnetic potential, A. An electric field (that drives electricity down a wire) arises from the gradient of the potential; a magnetic field (that causes the electricity to want to turn) arises from the twisting of the potential.
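
For readers who want the standard symbols, here is a minimal sketch of what that paragraph describes (conventional electromagnetism, in units where c = 1 and in the Lorenz gauge; none of this is specific to my own model):

(\partial_t^2 - \nabla^2)\,A^\mu = 0 (the m = 0 case of the relation above, written as a wave equation for the potential A)

\mathbf{E} = -\nabla\phi - \partial_t\mathbf{A} (electric field from the gradient of the potential)

\mathbf{B} = \nabla \times \mathbf{A} (magnetic field from the twisting, i.e. the curl, of the potential)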

The contribution of P.A.M. Dirac was to find a mathematical analogy that would describe the massive particles that interact with the electromagnetic potential. When the meaning of the symbols is understood, that equation is not hard to write down, but explaining the symbols is the subject of advanced courses in physics. So here I’ll focus on describing the nature of the equation. Let’s pick an electron for this discussion. The electron is a wave, and so is represented by a distribution ψ.
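
For reference, the equation itself is compact; this is its textbook form, with the γ symbols standing for the 4×4 matrices taught in those advanced courses:

(i\gamma^\mu \partial_\mu - m)\,\psi = 0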

Physically, the electron is like a little top: it behaves as though it is spinning. When it is moving, it is convenient to describe the spin with respect to the motion. If we point our right thumb in the direction of motion, a “right-handed” electron spins in the direction of our fingers; a “left-handed” electron spins in the opposite direction. To accommodate this, the distribution ψ has four components: one each for right- and left-handed motion propagating forward in time, and two more for propagation backwards in time.

Dirac’s equation describes the self-interaction of the particle as it moves freely through space (without interacting with anything else). Now from the last post, we know that nothing moves freely through space, because space is filled with Dark Energy. But when Dirac wrote his equation, Einstein’s axiom that space was empty still ruled the day, so it was thought of as “self-interaction”. That self-interaction causes the components of the electron to mix according to m, E and p. When the self-interaction is applied twice, we get Einstein’s equation, relating the squares of those terms.
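
Written out, that “apply it twice” statement is the textbook observation that Dirac’s operator squares to Einstein’s relation (standard material, no assumptions of mine):

i\gamma^\mu \partial_\mu \psi = m\psi, applied twice, gives (\partial_t^2 - \nabla^2 + m^2)\,\psi = 0, which for a simple plane wave is just E² – p² = m².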

So what does the mass term do? Well, it causes right-hand and left-hand components to mix. But here’s the funny thing: imagine watching the electron move in a mirror. If you hold up your hands in front of a mirror with the thumbs pointed to the right, you’ll notice that the reflection of the right hand looks like your left hand. This “mirror inversion” operation causes right and left to switch. In physics, this is known as “parity inversion”. The problem in the Dirac equation is that when this is applied mathematically to the interaction, the effect of the mass term changes sign. That means that physics is different in the mirror world than it is in the normal world. Since there is no fundamental reason to prefer left or right in a universe built on empty space, the theorists were upset by this conclusion, which they call “parity violation”.
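
The “mixing” part of that story has a standard one-line form, which I quote only to make the handedness language concrete (ψL and ψR are the left- and right-handed pieces of ψ):

m\,\bar{\psi}\psi = m(\bar{\psi}_L \psi_R + \bar{\psi}_R \psi_L)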

Should they have been? For the universe indeed manifests handedness. This is seen in the orientation of the magnetic field created by a moving charged particle, and also in the interactions that cause fusion in the stars and radioactive decay of uranium and other heavy elements.

But in purely mathematical terms, parity violation is a little ugly. So how did the theorists make it go away? Well, by making the mass change sign in the mirror world. It wasn’t really that simple: they invented another field, called the Higgs field (named after its inventor), and arbitrarily decided that it would change sign under parity inversion. Why would it do this? Well, there’s really no explanation – it’s just an arbitrary decision that Higgs made in order to prevent the problem in the Dirac equation. The mass was taken away and replaced with the Higgs density and a random number (a below) that characterized its interaction with the electron: m ψ was replaced with a H ψ.
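
In symbols, and sticking with the simplified notation of this post (conventional treatments carry extra factors and indices), the substitution reads:

m\,\psi \;\to\; a\,H\,\psi, so that the observed mass comes out as m = a\,v once H settles to a constant background value v.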

Now here’s a second problem: if space were empty, why would the Higgs be expected to have a non-zero strength so that it could create mass for the electron? To make this happen, the theory holds that empty space would like to create the Higgs field out of nothingness. This creation process was described by a “vacuum” potential which says that when the Higgs density is zero, some energy is available to generate a density, until a limit is reached, and then increasing the density consumes energy. So space has a preferred density for the Higgs field. Why should this happen? No reason, except to get rid of the problem in the Dirac equation.
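
The usual way of drawing that “vacuum” potential is the so-called Mexican-hat form. Treating H as a single real number for illustration (the full theory uses a complex doublet), it looks like:

V(H) = -\mu^2 H^2 + \lambda H^4, with its minimum at H_{\min} = \mu/\sqrt{2\lambda}

The minimum sits at a non-zero density, which is the “preferred density” referred to above.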

And what about the other spinning particles? Along with the electron, we have the muon, tau, up, down, strange, charm, bottom, top and three neutrinos, all with their own masses. Does each particle have its own Higgs field? Or do they each have their own random number? Well, having one field spewing out of nothingness is bad enough, so the theory holds that each particle has its own random number. But that raises the question: where do the random numbers come from?

So now you understand the concept of the Higgs, and its theoretical motivations.

Through its self-interaction, the Higgs also has a mass. In the initial theory, the Higgs field was pretty “squishy”. What does this mean? Well, Einstein’s equation says that mass and energy are interchangeable. Light is pure energy, and we see that light can be converted into particle and anti-particle pairs. Those pairs can be recombined to create pure energy again in the form of a photon. Conversely, to get high-energy photons, we can smash together particles and anti-particles with equal and opposite momentum, so that all of their momentum is also converted to pure energy (this is the essential goal of all particle colliders, such as those at CERN). If the energy is just right, the photons can then convert to massive particles that aren’t moving anywhere, which makes their decay easier to detect. So saying that the Higgs was “squishy” meant that the colliding pairs wouldn’t have to have a specific energy to create a Higgs particle at rest.
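
A rough worked example of the energy bookkeeping (illustrative numbers only, for an idealized collider where the particle and anti-particle meet head-on with equal energies; in a proton machine like the LHC the colliding constituents carry only a fraction of the beam energy):

\sqrt{s} = 2E_{\mathrm{beam}}, and making a particle of mass M at rest requires E_{\mathrm{beam}} = M/2. For M ≈ 125 GeV (the particle reported at CERN), that is about 62.5 GeV per beam.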

Of course, there’s a lot of other stuff going on when high-energy particles collide. So a squishy Higgs is hard to detect at high energies: it gets lost in the noise of other kinds of collisions. When I was in graduate school, a lot of theses were written on computer simulations that said that the “standard” Higgs would be almost impossible to detect if its mass was in the energy range probed by CERN.

So it was with great surprise that I read the reports that the Higgs discovered at CERN had a really sharp energy distribution. My first impression, in fact, was that what CERN had found was another particle like the electron. How can they tell the difference? Well, by looking at the branching ratios. All the higher-mass particles decay, and the Higgs should decay into the different particle types based upon their masses (which describe the strength of the interaction between the Higgs field and the particles). The signal detected at CERN was a decay into two photons (which is also allowed in the theory). I am assuming that the researchers at CERN will continue to study the Higgs signal until the branching ratios to other particles are known.
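
For readers unfamiliar with the term, a branching ratio is simply the fraction of decays that go to a particular final state (standard definition, with Γ denoting the partial decay rates):

\mathrm{BR}_i = \Gamma_i / \sum_j \Gamma_j, so that the branching ratios over all final states sum to 1.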

But I have my concerns. You see, after Peter Higgs was awarded the Nobel Prize, his predecessor on the podium, Carlo Rubbia (leader of the collaboration that reported the top particle discovery) was in front of a funding panel claiming that the Higgs seemed to be a bizarre object – it wasn’t a standard Higgs at all, and the funding nations should come up with money to build another, even more powerful machine to study its properties. Imagine the concern of the Nobel committee: was it a Higgs or not? Well, there was first a retraction of Rubbia’s claim, but then a recent paper came out saying that the discovery was not a Higgs, but a “techni-Higgs”.

One of the characteristics of the scientific process is that the human tendency to lie our way to power is managed by the ability of other scientists to expose fraud by checking the facts. Nobody can check the facts at CERN: it is the only facility of its kind in the world. It is staffed by people whose primary interest is not in the physics, but in building and running huge machines. That’s a really dangerous combination, as the world discovered in cleaning up the mess left by Ivan Boesky and his world-wide community of financial supporters.

Generative Orders Research Proposal – Part V

Research Program

In this section, we suggest a research program, motivated by a strategy of incremental complexity. The initial steps of the program focus on the characteristics of the lattice. As these are resolved, the parameter space of the theory is reduced, helping to focus analysis of fermion dynamics in the later stages.

The reference model as described suggests that in theories of generative order, many of the intrinsic properties of the standard model may be extrinsic properties of one-dimensional structures. If this is so, ultimately theorists should be able to calculate many of the fundamental constants (G, h, c, α, etc.), and to establish correspondence theorems to existing models of gravitation, quantum electrodynamics (QED), quantum chromodynamics (QCD) and the weak interactions. It is the opinion of the author that the single-component reference model is unlikely to satisfy these requirements.

Conversely, the isotropy of space suggests that a single-component lattice is likely to be sufficient to explain gravitation. The initial work therefore focuses on creation of models that can be used to assess the stability of behavior against segmentation length and interaction models. The program will also help scope the computational resources necessary to analyze fermion kinematics.

The steps in the research program are described in successively coarser terms. Particularly when the program progresses to consideration of fermion kinematics, years of effort could be invested in analysis of a single particle property, such as spin. Considering the history of quantum mechanics and relativity, the entire program can be expected to take roughly a century to complete.

Given the resources available to the proposer, funding of the research program for the first year focuses on two goals.

  1. Re-analysis of the spectra of side-view elliptical galaxies with the goal of establishing a high-profile alternative to the Hubble expansion.
  2. Identifying research teams that would be capable and interested to pursue the modeling effort.
A successful effort would culminate in an invitation to a symposium considering new horizons in the theories of cosmology and particle physics.

Modeling Program

Exposure of generative orders to the community of particle theorists is going to result in a large body of objections to the reference model. To avoid these being raised as impediments to obtaining research funding for interested theorists, we list the challenges that must be overcome in elaborating a satisfactory model, and consider possible mechanisms that might lead to the observed behavior of known physical systems.

The program is presented in outline form only. If desirable, elaboration can be provided.

  1. Precession of perihelion – due to “drag” between the inconsistent lattice configurations of bound gravitational bodies. Explore parameterization of lattice structure – sub-unit modes, lattice shear and compressibility. Again, a fairly approachable study that could stimulate further work by demonstrating the feasibility of explanations of large-scale phenomena, with correspondence to parameterization at the lattice scale. Success would begin to break down the resistance rooted in the belief that a preferred reference frame is disproven by the observations that support Einstein’s theories of Special and General Relativity.
  2. Dynamics of the formation of galactic cores, parthenogenesis, black hole structure – Relative energetics of lattice cohesion vs. encapsulation of lower-dimensional structures. Initial density and uniformity of 1-dimensional structures (also considering the observed smoothness of lattice compression – i.e. “dark energy” distribution). Success would be a clear demonstration of worthiness as an alternative to the Big Bang theory.
  3. Superfluid transport of particles through the lattice.
    1. Fiber characteristics
    2. Intrinsic angular momentum
    3. Virtual photon / gluon emission as an analog of lattice disruption
    4. Conservation of momentum
    5. Theory of kinetic energy (scaling of lattice distortion with particle velocity)
    6. Effect of sub-unit distortion on particle propagation
  4. Gravitation/QED/QCD
    1. Equivalence of gravitational and kinetic masses
    2. Electric charge signs. Note that 2 sub-units with 1 thread each are not equivalent to one sub-unit with 2 threads.
    3. Thread dynamics and interaction with lattice
  5. Lattice distortion and correspondence to quantum-mechanical wave function
    1. Pauli exclusion principle
    2. Wave-particle duality (theory of diffraction)
    3. Hydrogen energy levels
    4. Wave-function collapse
  6. Weak interactions
    1. Thread transfer processes
    2. Temporary creation of unstable higher-dimensional structures.
  7. Theory of light
    1. Electric field as a mode of coupling due to lattice disrupted by thread oscillations
    2. Magnetic fields as a special mode of particle coupling with a lattice distorted by motion of threads
    3. Speed of light
    4. Light decay during lattice propagation
    5. Theory of microwave background radiation (lattice relaxation or light decay)
  8. Theory of anti-particles
    1. Sub-unit chirality and lattice coherence
    2. Annihilation as synthesis of higher-order structures
    3. Pair production as decomposition of higher-order structures
    4. Meson theory

Generative Orders Research Proposal – Part II

Assessment of GI Theories

The principle of gauge invariance has underpinned the development of theoretical physics for almost a century. Application of the principle is conceptually simple: experimental data is analyzed to propose “invariants” of physical systems. The (generally simple) equations that describe these properties (energy, momentum, mass, field strength) are then subjected to classes of transformations (rotations, velocity changes, interactions with other particles), and the equations are manipulated until the proposed invariants are maintained under all transformations.
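
A textbook illustration of the principle, from electromagnetism (QED), of what “maintained under all transformations” means in practice (standard conventions; signs vary between books):

\psi(x) \to e^{i\alpha(x)}\,\psi(x), \qquad A_\mu(x) \to A_\mu(x) - \tfrac{1}{e}\,\partial_\mu\alpha(x)

The equations of motion keep their form because every derivative appears in the combination D_\mu = \partial_\mu + ieA_\mu, so the arbitrary phase α(x) drops out of anything measurable.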

Having recognized gauge invariance as an organizing principle, theorists beginning with Einstein have sought to construct “Grand Unified Theories” that unite all of the invariants in a single framework. There is no fundamental reason for this to be so – the proposition was motivated by reasonable success in explaining experimental studies performed at particle colliders.

Two kinds of difficulties have arisen for the proponents of these theories. First, particle colliders have become enormously expensive and time-consuming to construct. That has been ameliorated somewhat by the introduction of astrophysical data, and the attempts to connect the history of the early universe to the properties of the gauge theory. In the interim, however, the enormously prolific imaginations of the particle theorists were insufficiently checked by experimental data. This led them to emphasize numerical tractability in constructing their theories.

Given this situation, we should perhaps not have been surprised to learn that as astrophysics observatories proliferated, the theorists faced intractable difficulties in reconciling predictions with data.

As if these problems were not serious enough, the focus on explaining observations of the behavior of particles under unusual conditions has led to a certain myopia regarding the intractability of what I would call “first order” phenomena: things that are obvious to us in our every-day lives, but have yet to be satisfactorily explained by theory.

We continue with an enumeration of defects.

The Supremacy of Formalism

The program for constructing a Grand Unified Theory of physics is a theoretical conceit from the start. There is no a priori reason to doubt that simply summing the effects of independent forces is a satisfactory and accurate means of describing the universe.

Once the program is undertaken, however, every term in every equation falls under the microscope of formal criteria. For example, the Higgs field was motivated as a means of restoring parity invariance to the Dirac equation. Similarly, sparticles were introduced to eliminate the distinction between particles with Bose and Fermi statistics. The “strings” of super-string theory were invented to cut off integrals that produce infinities in calculations of particle kinematics. Although these innovations are sufficient to achieve consistency with phenomenology, there is absolutely no experimental evidence that made them necessary. They were motivated solely by abstract formal criteria.

The tractability of formal analysis also has a suspicious influence over the formulation of particle theories. The dynamics of the one-dimensional “strings” in super-string theory are susceptible to Fourier analysis. However, Fourier modes are normally far-field approximations to more complex behavior in the vicinity of three-dimensional bodies. In a three-dimensional manifold such as our reality, it would seem natural that particles would manifest structure as toroids, rather than as strings. Unfortunately, the dynamics of such structures can be described only using computational methods, making them an inconvenient representation for analysis.
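
To make the contrast concrete: the kind of mode expansion meant here is the familiar one for a vibrating string of length L with fixed ends (a schematic of the classical case, not the full super-string machinery; v is the wave speed on the string):

X(\sigma, t) = \sum_{n=1}^{\infty} A_n \sin(n\pi\sigma/L)\,\cos(\omega_n t), with \omega_n = n\pi v/L

Each mode is exact for the idealized one-dimensional string; for an extended three-dimensional body, a sum of such modes only approximates the behavior far from the body.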

Finally, while the Large Hadron Collider (LHC) is now marketed principally as a Higgs detector, the original motivation for its construction was a formal problem in particle kinematics: the Standard Model predicted that certain reaction probabilities would exceed unity (a violation of unitarity) in the vicinity of momentum transfers of 1 TeV. Something truly dramatic was expected from the experimental program, which, at least from the press reports, appears not to have manifested.

Violations of Occam’s Razor

A widely held principle in the development of scientific theory, Occam’s Razor recognizes that a simpler theory is more easily falsified than a complex theory, and so should be preferred as a target for experimental verification. By implication, theorists faced with unnecessary complexity (i.e. – complexity not demanded by phenomenology) in their models should be motivated to seek a simpler replacement.

The most egregious violation of the principle is the combinatorics of dimensional folding in “big bang” cosmologies derived from super-string theories. There are tens of millions of possibilities, with each possibility yielding vastly different formulations of physical law. In recent developments, the Big Bang is considered to be the source of an untold number of universes, and we simply happen to be found in one that supports the existence of life.

The Higgs as a source of mass is also an apparent superfluity. In the original theory, each particle had a unique coupling constant to a single Higgs field. The number of parameters in the theory was therefore not reduced. More recently, theorists have suggested that there may be multiple Higgs fields, which is certainly no improvement under the criteria of Occam’s Razor.

The vastly enlarged particle and field menageries of GUTs are also suspicious. There are roughly ten times as many particles and fields as are observed experimentally; the addition of seven extra spatial dimensions is also of concern.

Unverifiable Phenomena

Particularly in the area of cosmology, the theories take fairly modest experimental results and amplify them through a long chain of deduction to obtain complex models of the early universe. Sadly, many of the intermediate steps in the deduction concern phenomena that are not susceptible to experimental verification, making the theories unfalsifiable.

The point of greatest concern here is the interpretation of the loss of energy by light as it traverses intergalactic space. In the reigning theory, this is assumed to be due to the special relativistic “red shift” of light emitted from sources that are moving away from the Earth at a significant fraction of the speed of light. Of course, no one has ever stood next to such an object and measured its velocity. In fact, the loss of energy is interpreted (circularly) as proof of relative motion.
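
For concreteness, the standard bookkeeping being criticized here is the textbook definition of red shift and its Doppler interpretation (with β = v/c the inferred recession speed):

z = (\lambda_{\mathrm{observed}} - \lambda_{\mathrm{emitted}})/\lambda_{\mathrm{emitted}}, interpreted via 1 + z = \sqrt{(1+\beta)/(1-\beta)}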

The “red shift” interpretation is the principal justification of the “Big Bang” theory, which again is a phenomenon that cannot be directly verified. There are difficulties in the theory concerning the smoothness and energy density of the observable universe; the observed values are purported to be side-effects of “inflationary” episodes driven by symmetry breaking of Higgs-like fields. No demonstrated vacuum potential manifests a sufficient number of e-foldings of space, and the relevant energy scales are many orders of magnitude beyond the reach of our experimental facilities.
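
For readers unfamiliar with the jargon: an “e-folding” simply counts how many factors of e the scale factor a of space grows by during inflation (standard definition; the commonly quoted requirement is roughly N ≳ 60):

N = \ln(a_{\mathrm{end}}/a_{\mathrm{begin}})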

Finally, the 2011 Nobel prize was awarded for studies that indicated that the expansion of the universe is accelerating. The inference was achieved by looking at the spectra of distant light sources, and determining that they no longer followed the predictions of Hubble’s law. However, extrapolating from those measurements into the distant future is troubling, as even in the context of the Big Bang model, this opens the door to additional effects that may mitigate or reverse the predicted acceleration. Obviously, since these effects would occur on time-scales exceeding the existence of the Earth (which will be vaporized when the sun’s photosphere expands), they will never be verified.

Axiomatic Contradictions

As discussed in the previous section, the lack of an explanation for mass verges on an axiomatic need in the theory. That is to say, it appears to require a fundamental reevaluation of the abstract principles used to construct physical theories.

There are at least two other phenomena that directly violate fundamental axioms in the theory. The first is the existence of non-uniformity in the structure of space-time that is not associated with matter (so-called “dark energy”). Special relativity and all of its dependent theories (i.e. – all of particle physics) rest upon the assumption that space is empty. In the era in which special relativity was formulated, evidence (the Michelson-Morley experiment) suggested that there was no “luminiferous ether” – no medium in which electromagnetic radiation propagated. Dark energy is in fact an ether, and its existence requires a dynamical explanation for the Michelson-Morley results. (It is my opinion that this is why Einstein called the vacuum energy the worst idea he ever had – the existence of such a term undermines all of special and general relativity.)

Finally, the work of the Princeton Engineering Anomalies Research team demonstrated couplings between psychological states and the behavior of inanimate objects that are outside of the modes of causality allowed by existing physical theory. The rejection of these findings by the mainstream physics community indicates that accommodating them is going to require rethinking of the axioms of the theory. The most extreme examples concern the structure of time – the standard model allows non-linear causality only at quantum scales, and some studies of the “paranormal” appear to indicate non-causal behavior (information preceding effects) on macroscopic scales.

Sorry to Get All Technical on You…

To this point, I’ve been writing about spirituality with a certain confident imprecision. That confidence is backed by a model of physics that I believe can overcome many of the difficulties in modern particle theory. I wrote a Templeton Fund proposal a couple of years back, and sent it around to my erstwhile peers in the community. Response was tepid, at best.

Having published The Soul Comes First, I’m getting ready to put the research proposal back around in the community. I thought that it wouldn’t hurt to serialize it first here, as that may reach people with an interest in these matters that I can’t contact directly. I’ll start that tonight. It will run for the next two weeks. Then I’ll get back to moral philosophy, starting with the matter of death.

This will be fairly technical. If any of you readers know some science buffs, you might have fun getting them to read through it.