Today marks 50 years since Steven Weinberg submitted his famous paper, “A Model of Leptons”, to Physical Review Letters. In this short paper (of just 3 pages) he proposed a theoretical framework within which the electromagnetic force and the weak nuclear force could be understood as two different aspects of a single underlying force, dubbed “electroweak”.

The paper was actually published on 20 November 1967, and it has had an interesting history. At first it was ignored, garnering just 2 citations in the first 3 or 4 years. When this neglect changed, it changed dramatically, and for several decades “A Model of Leptons” was the most cited paper in the literature on high-energy, fundamental physics. It won Weinberg the Nobel Prize in Physics (shared with Salam and Glashow) in 1979.

The idea that the fundamental forces of nature (electromagnetism, weak, strong, and gravitational) might be different aspects of a single, simpler force that could be described more economically has been, it is fair to say, one of the leading ideas in physics in the last 100 years or so. Einstein tried to unify electromagnetism and gravity, and huge piles of ink were lavished on the effort to unify electromagnetism with the weak and the strong forces into a “grand unified theory”. String theory, for several decades now the sexiest branch of theoretical physics, is another example of this same ambition.

But it is noteworthy that Weinberg’s proposal is the only successful example of unification that we have managed to find.

Here is a good history of the paper’s composition and reception from CERN Courier.

A few interesting items I’ve stumbled upon in the last few weeks:

When Mother Teresa was canonized last year, I missed this superb reflection on her life by Fr George Rutler, who knew her personally. “The canonization of Teresa of Calcutta gives the kind of satisfaction that comes from having your mother declared Mother of the Year.” It’s a quite beautiful tribute to her and her significance for the rest of us.

Bob Dylan’s Nobel lecture finally appeared, and it’s well worth a listen (or, if you must, a read). Fr Schall has interesting things to say about it, both for better and worse, although I think he underestimates the degree to which Dylan’s body of work has a transcendent dimension.

Speaking of Dylan, one of the best things I’ve read about him since he won the Nobel last year is this essay by Carl Eric Scott, published in Modern Age. Scott selects “To Ramona” as one of Dylan’s most underrated songs, a judgement with which I heartily agree.

Ben Blatt has written a book called Nabokov’s Favorite Word is Mauve: What the Numbers Reveal About the Classics, Bestsellers, and Our Own Writing, in which he subjects famous works of literature to statistical analyses. It prompted one of the most enjoyable scathing reviews that I’ve seen in a long while, from Matthew Walther: “Never, I think, has a purported piece of ‘literary criticism’ been so disconnected from literature and non-suggestive of all the things that might, and very frequently do, induce people to read.” The review was so withering that I actually got the book, just to see how bad it was. It’s tremendously bad.

In the midst of a stew of troubles, Anthony Esolen wrote a graceful critique of illiberal habits of education. It was an elegant farewell note to Providence College.

And finally, from New Criterion, a very interesting biographical essay about Fr Reginald Foster, an American priest who was for many years the Vatican’s chief Latinist.

For an envoi, here is Bob Dylan singing “To Ramona”, live in Manchester in 1965:

Uncertainty: The Soul of Modeling, Probability & Statistics. William Briggs (Springer, 2016). 278 p.

Being something of a beginner in the art of statistical analysis, I thought this book on the philosophical and conceptual underpinnings of statistical methods would be instructive, and I was right. I learned so much I’m not sure I want to learn any more.

In a nutshell: Briggs is critical of most of the standard apparatus of statistical methods, both technical and interpretive. Hypothesis testing, regression, data smoothing, quantification of everything, and, above all, p-values he condemns to perdition. The problem is not that such methods have no value, but that they are widely misunderstood and misapplied, with the result that the conclusions drawn from statistical analyses are often either simply wrong or afflicted by uncertainty that is underestimated (and by an unknown amount). He gives many examples of ways in which standard techniques lead to spurious “significant” results.

Given these criticisms of standard statistical methods, one might get the impression that Briggs is a lone voice crying in the wilderness, but he has plenty of citations to offer for most of his arguments. He belongs to an alternative tradition: a minority one, but not a negligible one.

Some of the important points he makes:

Probability is logical. Logic concerns relationships between propositions, and so does probability, except that in the latter case the logic is extended to propositions the truth of which is uncertain. This point was made lucidly and rather beautifully by Jaynes, and reading Briggs has made me want to return to that book to read more of it.

Probability is not a cause. Probability can tell us about correlations, but nothing at all about causes. The habit of inferring causes from statistical correlations, absent a corresponding causal model, is a bad habit that leads many astray. In general, uncertainty reflects our ignorance of causes rather than our knowledge of them.

Probability is conditional. Probability statements are always conditional on a set of premises. There is no such thing as Pr(X), but only Pr(X|Y) — that is, the probability of X given some set of premises Y. If the premises change, the probability of X will, in general, change. Thus Briggs, while not quite a Bayesian, does think the Bayesians have it over the frequentists when it comes to the debate over whether probability is objective (i.e. out there) or subjective (i.e. in the mind). Probabilities reflect the uncertainty in propositions given what we know; they do not exist outside our minds, and they change when our knowledge changes. A corollary is that one should never say, “X has a probability of Z”. Nothing has a probability. Probability does not exist. One should only say, “Given premises Y, the probability of X is Z.”

Probability is often not quantifiable. If we know “Most people like ice cream and Sue is a person”, the probability that Sue likes ice cream cannot be naturally or unambiguously quantified unless the meaning of “most” is clarified. Moreover, it is often a mistake to force probabilistic arguments into a quantified form. Briggs argues that the habit of doing so (as with “instruments” for assessing subjective attitudes about politics or emotional responses to stimuli, for instance) often leads to misleading results and promotes the vice of scientism.

Statistical significance is not objective. No probability model can tell one whether a given result is significant or not. This is an extra-statistical, and often an extra-scientific, question. Whether it is judged significant is a matter of prudential judgment based on the specific question at issue and the decisions to be made about it. Thus he would like to disrupt the “turn the crank” model of statistical analysis in which “significant” results pop out of the sausage-maker, returning such questions to spheres of deliberation and judgment.

Probability models should be predictive. Briggs’ principal constructive suggestion (apart from shoring up our understanding of what probability is) is that statistical models should be predictive. They should state their premises in as much detail as possible, and should predict observations on the basis of those premises (taking into account uncertainties, of course). If the models fail to predict the observables, they are not working and should be amended or scrapped. As I understand it, he is proposing that fields which lean heavily on statistics should, by following his proposals, become more like the hard sciences. True, progress will be slower, and (acknowledged) uncertainties larger, but progress will be surer and causes better understood.

***

Briggs has some fun pointing out common fallacies in statistical circles. There is, for instance, the We-Have-To-Do-Something Fallacy, in which a perceived imperative to do something about something (usually something political) leads to the employment of some defective or fallacious statistical method, the defectiveness or fallaciousness of which is then ignored. Or the Epidemiologist’s Fallacy, in which a statistician claims “X causes Y” even though X was never measured and though statistical models cannot in any case discern causes. (This fallacy is so-called because without it “most epidemiologists, especially those in government, would be out of a job”.) Or the False Dichotomy Fallacy, which is the foundational principle of hypothesis testing. Or the Deadly Sin of Reification, whereby statisticians mistake parameters in their statistical models for real things. And so on.

***

Much of this might seem rather obvious to the uninitiated. I’m not an adept of the standard techniques, so I was at times a little puzzled as I tried to discern the particular bad habit Briggs was criticizing. But, as is increasingly appreciated (here and here, for instance), the use and abuse of the standard techniques have led wide swathes of the scientific community into error, most commonly the error of over-certainty, which is actually an uncertainty about what is true. An audience for this book clearly exists.

He argues that, were his recommendations to be followed, the effect would be

a return to a saner and less hyperbolic practice of science, one that is not quite so dictatorial and inflexible, one that is calmer and in less of a hurry, one that is far less sure of itself, one that has a proper appreciation of how much it doesn’t know.

But, on the other hand, it would reduce the rate at which papers could be published, would make decisions about significance matters of prudential judgment rather than scientific diktat, and would make scientific conclusions more uncertain. He is fighting an uphill battle.

Briggs is an adjunct professor at Columbia, and has done most of his scientific work in climate science (and is, as you would expect, skeptical of the predictions of statistical climate models, which provide a few of his case studies). He seems to be something of an atypical academic: this book, for instance, includes approving reference to Aristotle, Thomas Aquinas, and even John Henry Newman (whose Grammar of Assent he cites as an example of non-quantitative probabilistic argumentation). It’s quite a rollicking read too. Briggs has a personality, and doesn’t try to hide it. Personally I found the tone of the book a little too breezy, the text sometimes reading almost as if it were transcribed lecture notes (I make no hypothesis), but overall the book is smart and clear-eyed, and I’m glad to have read it. Now back to Jaynes.

***

I found a good video which illustrates the problem with relying on p-values to determine statistical significance. When I consider that many of the findings of the social sciences are based on this criterion I’m not sure whether to cringe or weep. No wonder there is a replication crisis. Witness the dance of the p-values:
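You don’t need the video to see the effect. Here is a small sketch of my own (plain Python, standard library only, using a simple two-sided z-test rather than whatever method the video uses): the same simulated experiment, with the same modest true effect, is run twenty times, and the p-values swing from “highly significant” to nowhere close.

```python
import math
import random

def z_test_p(group_a, group_b, sigma=1.0):
    """Two-sided p-value for a difference in means, assuming known sigma."""
    n = len(group_a)
    z = (sum(group_b) / n - sum(group_a) / n) / (sigma * math.sqrt(2.0 / n))
    # Two-sided tail probability of the standard normal distribution.
    return math.erfc(abs(z) / math.sqrt(2.0))

random.seed(1)
n, true_effect = 30, 0.5  # a real, fixed effect in every replication
p_values = []
for _ in range(20):  # twenty replications of the "same" experiment
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(true_effect, 1.0) for _ in range(n)]
    p_values.append(z_test_p(a, b))

for p in p_values:
    print(f"{p:.4f}", "significant" if p < 0.05 else "")
```

Nothing changes between replications except the sampling noise, yet the verdict delivered by the p < 0.05 criterion flips back and forth: the dance.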

Here is a short video illustrating why it is reasonable to doubt the putative findings of many (and perhaps most) published research papers employing statistical methods. This argument and others are set forth in detail by Ioannidis.

These short “lessons” were originally serialized in the Italian press, and are here collected and rendered into elegant English. Rovelli is an eminent physicist who gives us a series of meditations on developments in physics since 1900.

They are arranged in order of increasing speculation: he begins with general relativity and quantum mechanics, presenting in non-technical language the main points — space and time are dynamic and responsive, and are filled with a restless boil of quantum fields. He proceeds to give brief — and I do mean brief — overviews of modern cosmology and the Standard Model of elementary particles. All of this is solid science; questions linger, of course, and he draws attention to those loose threads and nagging problems, but basically he is describing successful theories.

In the last three sections of the book he moves to topics of greater uncertainty. The outstanding problem of how to reconcile general relativity with quantum mechanics he broaches with a very interesting discussion of theories of loop quantum gravity, the basic postulate of which is that space-time is quantized. (Rovelli is himself one of the architects of this theory.) Amazingly, and rather gratifyingly, he doesn’t even mention the other principal effort to solve this problem: string theory. This is unquestionably the book’s finest witticism, one that I imagine has raised a few consternated eyebrows in faculty lounges.

The last section specifically about physics tackles the vexing puzzles that arise at the intersection of gravity, quantum mechanics, and thermodynamics. Laying great stress on the time-irreversibility of thermodynamic processes, he argues that thermodynamics has something crucial to tell us about the uni-directionality of time itself. This is a common trope in physics circles, but, correlation not being causation, it seems to me suggestive at best. But then he reminds us of Hawking radiation, in which quantum effects near black holes actually cause them to radiate heat, and one feels a chill of delight running up the spine.

Alas, the same cannot be said of the book’s final chapter, in which Rovelli takes a step back to ponder the implications of all this for human self-understanding. He emphasizes that modern physics has revealed the world to be radically different from the way we intuitively think of it, which is fair enough, and then argues that more such intuitions — those pertaining to human freedom, for instance, or consciousness — are due to be superseded by counter-intuitive scientific explanations. There appears to be nothing more to his argument than the power of analogy. He tries to declare a peace between his commitment to the power of physics to completely describe the world, on one hand, and his commitment to the legitimacy of humanistic values, on the other, but it is far from convincing. And he is rather dispiritingly emphatic in his devotion to immanence:

“Immersed in this nature that made us and that directs us, we are not homeless beings suspended between two worlds, parts of but only partly belonging to nature, with a longing for something else. No: we are home.”

Nothing new here, of course, and this view does have about it a certain poetry — he even cites Lucretius, the patron poet of materialism — but there are such a host of issues being passed over in silence that such poetry as it possesses sounds rather hollow.

The book is written in a lyrical tone, and would be accessible, I imagine, to anyone who has an interest in the subject matter. There is only one equation — Einstein’s field equation for general relativity, which he describes as “the most beautiful of theories,” and I’ll not argue with that.

Jonathan Haidt is an unusually interesting academic. He is a psychologist who has in recent years turned his attention to matters of public import, and has especially emerged as an advocate of greater “viewpoint diversity” in the academy. To that end, he has founded Heterodox Academy, a forum for highlighting findings that run counter to received opinion in academic disciplines, particularly in the social sciences.

Earlier this month he gave the keynote address at the annual meeting of the American Psychological Association. His lecture is entitled “What’s Happening to Our Country? How Psychology Can Respond to Political Polarization, Incivility and Intolerance”, and in it he considers a number of long-term polarizing trends in American society and what to do about them.

He’s an engaging speaker. If you’re interested in understanding the Trump phenomenon, or fancy the thought of seeing a crowd of left-wing academics called out for bias by one of their own guild, this lecture might be for you. If you’re of conservative temperament, you might be pleasantly surprised to hear that an eminent academic considers you anything other than roadkill on the upward way of enlightenment. As he says in the lecture, every healthy society needs a party of order and stability as well as a party of change and progress. It sounds sensible to me (except the bit about change and progress). The lecture is about 50 minutes long, once the introductions are over.

If you enjoy this talk, you might also enjoy a TED talk he gave on the respective moral motivations of liberals and conservatives.

A few quick notes on books I’ve read recently. The theme for today is Books With Subtitles:

Medieval Literature: A Very Short Introduction. Elaine Treharne (Oxford, 2015). 144 p.

I acquired this book in the hopes that it would help extend my list of “to read” medieval literature. I was interested in learning about medieval masterworks a little off the beaten trail (viz. not Dante, Chaucer, or Malory). As such, I was fairly disappointed with the book, which makes only brief mention of particular works. Instead, the book takes a wide view of medieval literature, discussing its social context, some principal themes, methods of book production, and so on. It has a rather academic tone (“Literary spaces, literary identities” is the title of one chapter, for instance). This is fine; no doubt it was what the author was going for. It just wasn’t what I was looking for.

Classical Literature: A Very Short Introduction. William Allan (Oxford, 2014). 135 p.

This is more like it. In an effort to organize my Greco-Roman reading lists, I nabbed this brief volume to get a bird’s eye view. I could hardly have done better. Allan gives a brief introduction to the historical and social context for classical literature, and then proceeds by genre — epic, lyric poetry, drama, historiography, oratory, pastoral poetry, satire, and novel — summarizing the principal features of each literary type and highlighting a few of the principal works. I didn’t need him to tell me about Homer or Herodotus, but I’m happy to have a better understanding of where Horace and Juvenal fit into the picture, not to mention Plautus and Petronius. My reading list is now in pretty good shape, I’d say. I faint to think how long it will take to get to all these books, but it is nice that there is always more to look forward to.

What Makes It Great? Short Masterpieces, Great Composers. Rob Kapilow (Wiley, 2011). 314 p.

Rob Kapilow takes about twenty short pieces of music, mostly excerpted from larger works, and examines each of them in detail, highlighting the compositional techniques and describing the musical structure. The pieces are presented chronologically, beginning with the early eighteenth century (“Spring”, from Vivaldi’s Four Seasons) and finishing with the early twentieth century (“…Des pas sur la neige”, from Debussy’s Preludes). They range in length from about one minute (the ‘Trepak’ movement from Tchaikovsky’s Nutcracker Suite) to ten minutes (Wagner’s prelude to Tristan und Isolde), with the median being around 3-4 minutes. The book is written at a level appropriate for a general reader with some musical education: each piece is illustrated with numerous excerpts from the score, so he assumes a competence with musical notation, and he uses some, but not too much, technical language to describe what the composer is doing. This is just the sort of mini-listening project that I relish, and it was enjoyable for me to see the musical logic of the various pieces unveiled. For instance, the piece by Debussy is one that I’ve heard numerous times, but I’d never stopped to appreciate the fact that it is based on such a simple musical idea, suitably varied and lavishly harmonized. Likewise, I’d not discerned the musical reasoning informing a little piece like Schumann’s Träumerei, or the way in which Bach’s Prelude in C grows from a tiny musical seed. The book is full of little insights into the craft of musical composition, and I found it very enjoyable.

Physics on Your Feet: Berkeley Graduate Exam Questions. Dmitry Budker & Alexander O. Sushkov (Oxford, 2015). 216 p.

This was great fun. I did not have to take many oral exams during my graduate studies, but this book gives a flavour (minus the stress) of what I might have encountered. The authors have selected about sixty oral exam questions given to Berkeley grad students over the years, providing both the questions and, on the flip side, the solutions. The questions are drawn from across the spectrum: mechanics, fluids, electromagnetism, squalid state, nuclear & particle, astrophysics, optics, and molecular physics. Because of the oral exam context, none of the questions can call for lengthy calculations; more often they lean on physical intuition, approximations, and a basic (but wide) knowledge of principles. Which is not to say that they are easy! Admittedly, I have been out of the game for a decade now, and I am getting rusty, but these questions are meant to be challenging, and they succeed. Still, I enjoyed trying my hand at one question each day. I’d like to find another such book and continue the practice.

There’s a very exciting announcement today from the LIGO experiment: they are reporting the first ever direct observation of gravitational waves. Read all about it.

The existence of gravitational waves — which are “ripples” in spacetime produced by catastrophic astrophysical events like black hole collisions or supernovae — is one of the most important predictions of general relativity. Today’s discovery will go into every future textbook on the subject, and the scientists involved go straight to the Nobel shortlist.

The LIGO experiment (LIGO = Laser Interferometer Gravitational-Wave Observatory), if you haven’t heard of it, is one of the most amazing physics experiments ever conceived. Gravitational waves travelling through the detector change the lengths of its arms by a minuscule amount, and so the experiment consists of making continual, very precise measurements of distance. The sensitivity is exquisite: they can detect a change in length of a fraction of the radius of a proton.

The particular observation reported today is of a collision of two black holes at an estimated distance of 1.3 billion light years. Here is the technical paper describing the discovery.

One hundred years ago today, on 25 November 1915, Einstein first presented the field equations for General Relativity in a lecture to the Prussian Academy of Sciences in Berlin. GR is regarded, with justice, as among the most beautiful and creative achievements in the history of science. I know of none greater, and I am thankful to have had the opportunity to spend many happy hours working with the field equations — and some unhappy ones too, of course, because they are fiendishly difficult to solve!

On the same date, 25 November 1915, Einstein’s paper on the perihelion advance of Mercury was published in the proceedings of the Königlich Preußische Akademie der Wissenschaften. This was the first, and is still one of the most important, experimental tests of General Relativity.

The recent Synod on the Family in Rome hasn’t, by and large, been a laughing matter, so this provides welcome comic relief.

Fr Longenecker, a long-time blogger at Standing on my Head, has recently launched a new blog: The Suburban Hermit. If you’ve an interest in things Benedictine, or like to look at old abbeys and read old books, it might be for you. Just today he wrote about our sort-of patroness, St Julian of Norwich.

Canada has a new Prime Minister, and he’s setting a new tone in international affairs.

Janet Cupo is planning to host an online book club during Advent this year; we’ll be reading Caryll Houselander’s The Reed of God. There’s probably still time to get a copy if you’re interested; mine arrived in the mail today.

Did you know there is an animal that can survive being dehydrated for 10 years, being kept at 200 degrees below freezing, and going to outer space? Meet the mightiest wee bit of them all: the tardigrade.

You might have learned the “divisibility trick” in grade school. It says that if you want to know whether a number is divisible by 3, there is a shortcut: if the sum of its digits is divisible by 3 then the number itself is divisible by 3. For example, is 459 divisible by 3? Well, 4 + 5 + 9 = 18, which is divisible by 3, so 459 is divisible by 3 as well.

This trick also works with the number 9. Again, you can try it with 459.
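For readers who like to check such things mechanically, here is a minimal sketch in Python (my own, not from the original post) that verifies the trick for 3 and 9 on a few sample numbers:

```python
def digit_sum(n):
    """Sum of the base-10 digits of a non-negative integer."""
    return sum(int(d) for d in str(n))

# The trick: n is divisible by 3 (or by 9) exactly when its digit sum is.
for n in (459, 1234, 81, 2017):
    for d in (3, 9):
        assert (n % d == 0) == (digit_sum(n) % d == 0)

print(digit_sum(459))  # 4 + 5 + 9 = 18, divisible by both 3 and 9
```

Since 18 is divisible by both 3 and 9, so is 459 (indeed, 459 = 9 × 51).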

A week or two ago I was reading Anthony Esolen’s “Word of the Day” blog in which he stated a result about the divisibility trick generalized to a base-X number system; namely, in a base-X number system the divisibility trick works for X-1 and its factors. I was intrigued, and, as I had given some thought to the divisibility trick a few years ago and had some notes on it, I sat down last night and came up with what I think is a sound proof of the claim.

I am sure there is a nice way to formulate the argument — my approach leans heavily on modular arithmetic, which is closely related to the elegant theory of cyclic groups — but I went about it in the most simpleminded way imaginable. You can read my argument here:

An amusing application of this result is in a binary (base-2) number system. The claim simply says that any binary number for which the sum of its digits is divisible by 1 (which is all of them, since every positive integer is divisible by 1) is itself divisible by 1 (which is all of them, for the same reason). So the claim is almost empty in that case.
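The general claim is also easy to spot-check by brute force. The sketch below (again plain Python, my own) confirms, for a handful of bases, that divisibility by any factor d of X-1 agrees with divisibility of the base-X digit sum; the reason, in the modular-arithmetic terms mentioned above, is that X ≡ 1 (mod d), so every power of X is likewise ≡ 1 (mod d), and hence a number is congruent to its digit sum mod d.

```python
def digits(n, base):
    """Base-`base` digits of a non-negative integer n, least significant first."""
    ds = []
    while n:
        n, r = divmod(n, base)
        ds.append(r)
    return ds or [0]

# In base X, n and its digit sum are congruent mod every divisor d of X-1.
for base in (2, 8, 10, 16):
    for d in range(1, base):
        if (base - 1) % d != 0:
            continue  # the trick only applies to divisors of X-1
        for n in range(500):
            assert (n % d == 0) == (sum(digits(n, base)) % d == 0)

print("digit-sum trick verified for bases 2, 8, 10, 16")
```

In base 16, for instance, the trick tests divisibility by 15, 5, and 3: the number 255 is FF in hexadecimal, its hex digit sum is 15 + 15 = 30, and both 255 and 30 are divisible by 15.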