# Pure Derivation of the Exact Fine-Structure Constant, and As a Ratio …

Theorists at the Strings Conference in July of 2000 were asked what mysteries remain to be revealed in the 21st century. Participants were invited to help formulate the ten most important unsolved problems in fundamental physics, which were finally chosen and ranked by a distinguished panel of David Gross, Edward Witten and Michael Duff. No questions were more worthy than the first two problems, respectively posed by Gross and Witten: #1: *Are all the (measurable) dimensionless parameters that characterize the physical universe calculable in principle, or are some merely determined by historical or quantum mechanical accident and incalculable?* #2: *How can quantum gravity help explain the origin of the universe?*

A newspaper article about these millennial mysteries offered some interesting comments on question #1. Perhaps Einstein indeed “put it more crisply: *Did God have a choice in creating the universe?*” – which summarizes quandary #2 as well. While certainly the Eternal One ‘may’ have had a ‘choice’ in Creation, the following arguments will conclude that the reply to Einstein’s question is an emphatic “No.” For more certainly still, a complete spectrum of unheard-of, precise fundamental physical parameters is demonstrably calculable within a *single dimensionless Universal system* that naturally comprises a literal “*Monolith*.”

Likewise the article went on to ask whether the speed of light, Planck’s constant and the electric charge are arbitrarily determined – “or do the values have to be what they are because of some deep, hidden logic. These kinds of questions come to a head with a conundrum involving a mysterious number called alpha. If you square the charge of the electron and then divide it by the speed of light times Planck’s (‘reduced’) constant (multiplied by 4π times the vacuum permittivity), all the (metric) dimensions (of mass, time and distance) cancel out, yielding a so-called “pure number” – alpha, which is just over 1/137. But why is it not precisely 1/137 or some other value entirely? Physicists and even mystics have tried in vain to explain why.”

Which is to say that while constants such as a fundamental particle mass can be expressed as a dimensionless relationship relative to the Planck scale, or as a ratio to a slightly more precisely known or obtainable unit of mass, the inverse of the electromagnetic coupling constant alpha is uniquely dimensionless as a pure *‘fine-structure number’ a* ~ 137.036. However, even assuming a rare, invariantly discrete or *exact* fine-structure numeric exists as a “literal constant,” the value must nevertheless be empirically confirmed as a ratio of two *inexactly* determinable ‘metric constants,’ h-bar and the electric charge e (light speed c being exactly *defined* in the 1983 adoption of the SI convention as an integer number of meters per second).

So though this conundrum has been deeply puzzling almost from its inception, my impression upon reading this article in a morning paper was utter amazement that a numerological issue of invariance merited such distinction by eminent modern authorities. For I’d been obliquely obsessed with the fs-number in the context of my colleague A. J. Meyer’s model for a number of years, but had come to accept its experimental determination in practice, pondering the dimensionless issue periodically to no avail. Gross’s question consequently served as a catalyst out of my complacency; I recognized a rare position as the only fellow who could provide a categorically complete and consistent answer in the context of Meyer’s main fundamental parameter. Nevertheless, my pretentious instincts led to two months of inane intellectual posturing until I sanely repeated a simple procedure explored a few years earlier. I merely **looked** at the result using the 98-00 CODATA value of *a*, and the following solution hit home with complete heuristic force.

For the fine-structure ratio effectively quantizes (via h-bar) the electromagnetic coupling between a discrete unit of electric charge (e) and a photon of light; in the same sense that an *integer is discretely ‘quantized’* compared to the ‘fractional continuum’ between it and 240 or 242. One can easily see how this method works by considering another integer, 203, from which we subtract the base-2 logarithm of the square of 2π. Now add the inverse of 241 to the resultant number, multiplying the sum by the natural log of 2. It follows that this pure calculation of the fine-structure number exactly equals

**137.0359996502301…** – which is given here to 13 decimal places, but is calculable to any number of decimal places.
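The recipe above can be checked numerically; a minimal sketch in Python, reproducing the quoted value:

```python
import math

# The 'pure' fine-structure number described above:
# ln(2) * (203 - log2((2*pi)^2) + 1/241)
alpha_inv = math.log(2) * (203 - math.log2((2 * math.pi) ** 2) + 1 / 241)

print(f"{alpha_inv:.13f}")
```

Note that since ln 2 × log₂(x) = ln x, this is algebraically the same as ln 2 × (203 + 1/241) − ln((2π)²).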

By comparison, given the experimental uncertainty in h-bar and e, the NIST evaluation varies up or down around the middle ‘6’ of the ‘965’ sequence in the invariant number defined above. The following table accordingly gives the values of h-bar, e, their calculated ratio *a*, and the actual NIST choice for *a* in each year of their archives, as well as the 1973 CODATA, where the standard two-digit ± experimental uncertainty is in bold type within parentheses.

| Year | *ħ* (×10⁻³⁴ J·s) | *e* (×10⁻¹⁹ C) | calculated *ħ*/*e*² ⇒ *a* | NIST value ±(**SD**) |
|------|------------------|-----------------|---------------------------|----------------------|
| 2006 | 1.054 571 628(**53**) | 1.602 176 487(**40**) | 137.035 999 **6**61 | 137.035 999 679(**94**) |
| 2002 | 1.054 571 68(**18**) | 1.602 176 53(**14**) | 137.035 999 **0**62 | 137.035 999 11(**46**) |
| 1998 | 1.054 571 596(**82**) | 1.602 176 462(**63**) | 137.035 999 **7**79 | 137.035 999 76(**50**) |
| 1986 | 1.054 572 66(**63**) | 1.602 177 33(**49**) | 137.035 9**8**9 558 | 137.035 989 5(**61**) |
| 1973 | 1.054 588 7(**57**) | 1.602 189 2(**46**) | 137.03**6** 043 335 | 137.036 04(**11**) |
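The third column can be reproduced from the first two. Under the pre-2019 SI convention, c is exact and ε₀ = 1/(μ₀c²) with μ₀ = 4π×10⁻⁷ exactly, so the inverse coupling α⁻¹ = 4πε₀ħc/e² depends only on the measured ħ and e. A sketch using the 2006 values:

```python
import math

c = 299_792_458.0          # m/s, exact by the 1983 SI definition
mu0 = 4 * math.pi * 1e-7   # exact in the pre-2019 SI
eps0 = 1 / (mu0 * c**2)    # vacuum permittivity

hbar = 1.054_571_628e-34   # J*s, CODATA 2006
e = 1.602_176_487e-19      # C,   CODATA 2006

alpha_inv = 4 * math.pi * eps0 * hbar * c / e**2
print(alpha_inv)  # ~137.036, cf. the 2006 row above (137.035 999 661)
```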

So it seems the NIST choice is approximately determined by the measured values for *h*-bar and e alone. However, as explained at http://physics.nist.gov/cuu/Constants/alpha.html, by the 1980s interest shifted to a new approach that provides a direct determination of *a* by exploiting the quantum Hall effect, independently corroborated with both theory and experiment of the electron magnetic-moment anomaly, consequently reducing its already finer-tuned uncertainty. Even so, it took 20 years before an improved measure of the magnetic-moment *g*/2-factor was published in mid 2006, where this group’s (led by Gabrielse at Harvard) first calculation of *a* was (A:) 137.035 999 710(0**96**) – explaining the much-reduced uncertainty in the new NIST list, as compared to that in *h*-bar and e. More recently, however, a numeric error in the initial QED calculation (A:) was discovered (we’ll refer to the second paper as B:), which shifted the value of *a* to (B:) 137.035 999 070(0**98**).
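How large that revision is relative to the quoted uncertainties can be checked with simple arithmetic; a sketch:

```python
# g-2 determinations of the inverse fine-structure constant, as quoted above
a_first = 137.035_999_710    # (A:) the mid-2006 result, u = 0.000 000 096
a_revised = 137.035_999_070  # (B:) after the corrected QED calculation
u = 0.000_000_098            # uncertainty quoted for (B:)

shift = a_first - a_revised
print(f"shift = {shift:.9f} ({shift / u:.1f} standard deviations)")
```

The shift is several times the stated standard deviation, which is why the revised value falls clearly outside the earlier concordance.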

Though it reflects a nearly identically small uncertainty, this assessment is clearly outside the NIST value concordant with the estimates for h-bar and elementary charge, which are independently determined by various experiments. The NIST has three years to sort this out, but meantime confronts an embarrassing irony: at minimum, the 06-choices for h-bar and e seem to be slightly skewed toward the expected fit for *a*! For example, adjusting the last three digits of the 06-data for h and e to accord with our pure fs-number yields an imperceptible adjustment to e alone: the ratio becomes h…628/e…487.065. Had the QED error been corrected prior to the actual NIST publication in 2007, the pair rather easily could have been uniformly modified to h…626/e…489; though that would question its coherency in the last 3 digits of *a* with respect to the comparative 02 and 98 data. In any case, far vaster improvements in multiple experimental designs will be required for a comparable reduction in error for h and e in order to settle this issue for good.
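The claimed adjustment to e can be sketched numerically: holding ħ at its 2006 value and solving α⁻¹ = 4πε₀ħc/e² for e at the pure number above, the charge moves only in digits far below its quoted (**40**) uncertainty. A minimal sketch:

```python
import math

c = 299_792_458.0                       # m/s, exact
eps0 = 1 / (4 * math.pi * 1e-7 * c**2)  # pre-2019 SI, exact
hbar = 1.054_571_628e-34                # J*s, CODATA 2006
e_2006 = 1.602_176_487e-19              # C, CODATA 2006, u = 0.000 000 040e-19

target = 137.0359996502301              # the pure fs-number derived above
e_adj = math.sqrt(4 * math.pi * eps0 * hbar * c / target)

print(e_adj)                # agrees with e_2006 through all quoted digits
print(abs(e_adj - e_2006))  # far smaller than the 4.0e-27 C uncertainty
```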

But then again, no matter how ‘precisely’ a metric measure is maintained, it is nevertheless infinitely short of ‘literal exactitude,’ while our pure fs-number fits the present values of h…628/e…487 quite precisely. In the former regard, I recently discovered that a mathematician named James Gilson (see http://www.maths.qmul.ac.uk/%7Ejgg/page5.html ) also devised a pure numeric = 137.0359997867… nearer the revised 98-01 standard. Gilson further contends he has calculated numerous parameters of the standard model, such as the dimensionless ratio between the masses of the Z and W weak gauge bosons. But I know he could never construct a single Proof employing equivalences capable of *deriving Z and/or W masses per se from the precisely confirmed masses of heavy quarks and Higgs fields* (see essay referenced in the resource box), which themselves result from a single over-riding dimensionless tautology. For the numeric discreteness of the fraction 1/241 allows one to __construct__ *physically meaningful dimensionless equations*. If one instead took Gilson’s numerology, or the perfected empirical value of Gabrielse et al., for the fs-number, it would destroy this discreteness, precise self-consistency and the ability to even *write* a meaningful dimensionless equation! By contrast, perhaps it’s then not too surprising that after I literally ‘found’ the integer 241 and derived the exact fine-structure number from the resultant ‘Monolith Number,’ it took only about two weeks to calculate all six quark masses employing real dimensionless analysis and various fine-structured relations.

But as we now aren’t really talking about the fine-structure number per se, any more than the integer 137, the result *definitively answers* Gross’s question. For those “dimensionless parameters that characterize the physical universe” (including alpha) are ratios between selected metric parameters that lack a single unified dimensionless system of mapping from which metric parameters like particle masses can be calculated from set equations. The ‘standard model’ gives one a single system of parameters, but **no** method to calculate or __predict__ any one and/or all within a single system – consequently the experimental parameters are put in by hand, without any underlying order.

A final irony: I’m doomed to be demeaned as a ‘numerologist’ by ‘experimentalists’ who continually fail to recognize a hard empirical proof for quark, Higgs or hadron masses that may be used to exactly calculate the present standard for the most precisely known and heaviest mass in high-energy physics (the Z). Au contraire, foolish ghouls: empiric confirmation is just the final cherry the chef puts on top before he presents a “Pudding Proof” no sentient being could resist just because he didn’t assemble it himself, and so instead makes a mimicked mess the real deal doesn’t resemble. For the base of this pudding is made from melons I call Mumbers, which are really just numbers, pure and simple!