Posts Tagged ‘Science’

Susskind & Friedman: Quantum Mechanics

February 22, 2018

Quantum Mechanics
The Theoretical Minimum
Leonard Susskind and Art Friedman
(Basic, 2014)
xx + 364 p.

Books on quantum mechanics tend to come in two main varieties: introductions for non-scientists, which normally focus on the conceptual underpinnings of the subject and avoid mathematics, and technical books written for advanced undergraduates or higher. This book, however, doesn’t quite fall into either camp. It spends a good deal of time carefully exploring the conceptual foundations, but it also contains enough mathematics — all of it fairly gentle, but pertinent — that the reader is not only told, but is also able to see, how some of the most famous predictions of quantum mechanics follow from those foundations.

Susskind and Friedman begin with a simple quantum system, a single quantum spin, and use it to lay out the unusual logic of quantum states, emphasizing how it differs from the logic of classical physics. They discuss both time-independent and time-dependent quantum mechanics, emphasizing the value of the former for deducing the energy states of a system and of the latter for deducing how the system evolves in time. About one-third of the book is devoted to an exploration of entanglement, traditionally one of the strangest aspects of the quantum world, but they take some pains to argue that entanglement does not imply any sort of non-locality, as is sometimes claimed. Later sections of the book transition to the topic of wavefunctions and particles, and creep right up to the edge of quantum field theory, so as to peer over for a moment. At the end, they give a nice treatment of the quantum mechanical harmonic oscillator, one of the simplest but most important quantum systems.
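Just to give a flavour of how little machinery is actually needed, here is a quick sketch of my own (not taken from the book) of the simplest system they treat, a single spin-1/2, done in Python with NumPy and SciPy. The time-independent problem amounts to diagonalizing a 2x2 Hamiltonian to get the energy levels, and the time-dependent problem to applying the resulting evolution to a state.

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices for a single spin-1/2 (the simplest quantum system)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# A toy Hamiltonian: a spin in a field tilted between z and x (units with hbar = 1)
H = 0.5 * (sz + 0.3 * sx)

# "Time-independent" quantum mechanics: the allowed energies are the eigenvalues of H
energies, eigenstates = np.linalg.eigh(H)
print("energy levels:", energies)

# "Time-dependent" quantum mechanics: a state evolves as |psi(t)> = exp(-iHt)|psi(0)>
psi0 = np.array([1, 0], dtype=complex)   # start with spin up along z
t = 2.0
psi_t = expm(-1j * H * t) @ psi0
print("probability of measuring spin up at time t:", abs(psi_t[0]) ** 2)
```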

Susskind is one of the best-known physicists of his generation. It is not all that common, I think, for such an eminent scientist to have a passion for teaching his subject to beginners, so I very much admire what he is doing here. The development of the subject is clear, with intermediate steps worked out, and the significance of conclusions is emphasized. The book has a welcoming tone, and the enthusiasm of the authors is evident. They do not refrain from an occasional joke. The book is, apparently, derived from an online course which Susskind has given at Stanford. (Friedman is one of his students, and it is unclear to me exactly what his role was in writing the book.) The book has been typeset with LaTeX, as is right and just.

It’s a little hard to say who the target audience is. It would be accessible, certainly, to an interested reader who had been trained in, for instance, engineering. The mathematics required doesn’t extend much beyond complex numbers, basic calculus, and linear algebra, and even these are given quick explanations in the text. I could see it being a very good read for an ambitious high school student or beginning undergraduate who has an interest in the subject. Or, for that matter, the target might be me: a trained physicist who has been out of academia for a while and would enjoy a trip down memory lane.

The book is part of a series, in fact, that goes under the title “The Theoretical Minimum”. It was preceded by a book on classical mechanics (to which this volume makes occasional reference), and has now been succeeded by a recent book on special relativity and classical field theory. I’ve not read either of those, but I had enough fun with this one that I might.

Way over yonder

June 16, 2017

A few interesting items I’ve stumbled upon in the last few weeks:

  • When Mother Teresa was canonized last year, I missed this superb reflection on her life by Fr George Rutler, who knew her personally. “The canonization of Teresa of Calcutta gives the kind of satisfaction that comes from having your mother declared Mother of the Year.” It’s a quite beautiful tribute to her and her significance for the rest of us.
  • Bob Dylan’s Nobel lecture finally appeared, and it’s well worth a listen (or, if you must, a read). Fr Schall has interesting things to say about it, both for better and worse, although I think he underestimates the degree to which Dylan’s body of work has a transcendent dimension.
  • Speaking of Dylan, one of the best things I’ve read about him since he won the Nobel last year is this essay by Carl Eric Scott, published in Modern Age. Scott selects “To Ramona” as one of Dylan’s most underrated songs, a judgement with which I heartily agree.
  • At City Journal, John Tierney writes about something we don’t hear much about: the left-wing war on science.
  • Ben Blatt has written a book called Nabokov’s Favorite Word is Mauve: What the Numbers Reveal About the Classics, Bestsellers, and Our Own Writing, in which he subjects famous works of literature to statistical analyses. It prompted one of the most enjoyable scathing reviews that I’ve seen in a long while, from Matthew Walther: “Never, I think, has a purported piece of ‘literary criticism’ been so disconnected from literature and non-suggestive of all the things that might, and very frequently do, induce people to read.” The review was so withering that I actually got the book, just to see how bad it was. It’s tremendously bad.
  • In the midst of a stew of troubles, Anthony Esolen wrote a graceful critique of illiberal habits of education. It was an elegant farewell note to Providence College.
  • And finally, from New Criterion, a very interesting biographical essay about Fr Reginald Foster, an American priest who was for many years the Vatican’s chief Latinist.

For an envoi, here is Bob Dylan singing “To Ramona”, live in Manchester in 1965:

Briggs: Uncertainty

November 4, 2016

Uncertainty
The Soul of Modeling, Probability & Statistics
William Briggs
(Springer, 2016)
278 p.

Being something of a beginner in the art of statistical analysis, I thought this book on the philosophical and conceptual underpinnings of statistical methods would be instructive, and I was right. I learned so much I’m not sure I want to learn any more.

In a nutshell: Briggs is critical of most of the standard apparatus of statistical methods, both technical and interpretive. Hypothesis testing, regression, data smoothing, quantification of everything, and, above all, p-values he condemns to perdition. The problem is not that such methods have no value, but that they are widely misunderstood and misapplied, with the result that the conclusions drawn from statistical analyses are often either simply wrong or less certain than they are claimed to be (and by an unknown amount). He gives many examples of ways in which standard techniques lead to spurious “significant” results.
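To give a sense of the sort of thing he has in mind, here is a toy example of my own (not one of his): run enough perfectly ordinary significance tests on pure noise and the standard p < 0.05 criterion will reliably “discover” effects that are not there.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 100 "studies", each comparing two groups drawn from the SAME distribution,
# so every true effect is exactly zero.
false_positives = 0
for _ in range(100):
    a = rng.normal(size=30)
    b = rng.normal(size=30)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:
        false_positives += 1

print(f"'significant' results found among 100 null comparisons: {false_positives}")
# Typically around 5 -- spurious "discoveries" produced by the method itself.
```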

Because he criticizes standard statistical methods so broadly, one might get the impression that Briggs is a lone voice crying in the wilderness, but he has plenty of citations to offer for most of his arguments. He belongs to an alternative tradition: a minority one, but not a negligible one.

Some of the important points he makes:

Probability is logical. Logic concerns relationships between propositions, and so does probability, except that in the latter case the logic is extended to propositions whose truth is uncertain. This point was made lucidly and rather beautifully by Jaynes in Probability Theory: The Logic of Science, and reading Briggs has made me want to return to that book to read more of it.

Probability is not a cause. Probability can tell us about correlations, but nothing at all about causes. The habit of inferring causes from statistical correlations, absent a corresponding causal model, is a bad habit that leads many astray. In general, uncertainty reflects our ignorance of causes rather than our knowledge of them.
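A small illustration of my own (again, not from the book): two quantities driven by a common cause will be strongly correlated, yet neither causes the other, and nothing computed from the two columns of numbers alone can tell you that.

```python
import numpy as np

rng = np.random.default_rng(1)

# A hidden common cause Z drives both X and Y; X and Y never influence each other.
z = rng.normal(size=10_000)
x = 2 * z + rng.normal(size=10_000)
y = -3 * z + rng.normal(size=10_000)

print("correlation between X and Y:", np.corrcoef(x, y)[0, 1])
# Strongly (negatively) correlated, but the correlation says nothing about
# whether X causes Y, Y causes X, or (as here) neither.
```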

Probability is conditional. Probability statements are always conditional on a set of premises. There is no such thing as Pr(X), but only Pr(X|Y) — that is, the probability of X given some set of premises Y. If the premises change, the probability of X will, in general, change. Thus Briggs, while not quite a Bayesian, does think the Bayesians have it over the frequentists when it comes to the debate over whether probability is objective (i.e. out there) or subjective (i.e. in the mind). Probabilities reflect the uncertainty in propositions given what we know; they do not exist outside our minds, and they change when our knowledge changes. A corollary is that one should never say, “X has a probability of Z”. Nothing has a probability. Probability does not exist. One should only say, “Given premises Y, the probability of X is Z.”
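A trivial worked example of my own: the same proposition carries different probabilities under different premises, and neither number is more “real” than the other.

```python
from fractions import Fraction

# The same proposition X = "the next card drawn is an ace", under different premises Y.

# Premise 1: a standard, well-shuffled 52-card deck.
p_given_full_deck = Fraction(4, 52)

# Premise 2: the same deck, but we also know the ace of spades has been removed.
p_given_ace_removed = Fraction(3, 51)

print(p_given_full_deck, p_given_ace_removed)   # 1/13 vs 1/17
```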

Probability is often not quantifiable. If we know “Most people like ice cream and Sue is a person”, the probability that Sue likes ice cream cannot be naturally or unambiguously quantified unless the meaning of “most” is clarified. Moreover, it is often a mistake to force probabilistic arguments into a quantified form. Briggs argues that the habit of doing so (as with “instruments” for assessing subjective attitudes about politics or emotional responses to stimuli, for instance) often leads to misleading results and promotes the vice of scientism.

Statistical significance is not objective. No probability model can tell one whether a given probability is significant or not. This is an extra-statistical, and often an extra-scientific, question. Whether it is judged significant is a matter of prudential judgment based on the specific question at issue and the decisions to be made about it. Thus he would like to disrupt the “turn the crank” model of statistical analysis in which “significant” results pop out of the sausage-maker, returning such questions to spheres of deliberation and judgment.

Probability models should be predictive. Briggs’ principal constructive suggestion (apart from shoring up our understanding of what probability is) is that statistical models should be predictive. They should state their premises in as much detail as possible, and should predict observations on the basis of those premises (taking into account uncertainties, of course). If the models fail to predict the observables, they are not working and should be amended or scrapped. As I understand it, he is proposing that fields which lean heavily on statistics should, by following his proposals, become more like the hard sciences. True, progress will be slower, and (acknowledged) uncertainties larger, but progress will be surer and causes better understood.
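Here is a very crude sketch, my own and much simpler than anything Briggs proposes, of what a predictive check might look like: fit a model on past observations, state a predictive interval for new ones, and then see whether new observations actually fall where the model says they should.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# "Past" observations used to build the model, and "new" ones held out to test it.
past = rng.normal(loc=10.0, scale=2.0, size=200)
new = rng.normal(loc=10.0, scale=2.0, size=50)

# A simple predictive model: future observations are normal with the fitted mean and sd.
mu, sigma = past.mean(), past.std(ddof=1)
predictive = stats.norm(loc=mu, scale=sigma)

# Check the model against reality: what fraction of new observations fall inside
# its central 90% predictive interval? It should be roughly 0.9 if the model works.
lo, hi = predictive.ppf(0.05), predictive.ppf(0.95)
coverage = np.mean((new >= lo) & (new <= hi))
print("claimed coverage: 0.90, observed coverage:", coverage)
```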

***

Briggs has some fun pointing out common fallacies in statistical circles. There is, for instance, the We-Have-To-Do-Something Fallacy, in which a perceived imperative to do something about something (usually something political) leads to the employment of some defective or fallacious statistical method, the defectiveness or fallaciousness of which is then ignored. Or the Epidemiologist’s Fallacy, in which a statistician claims “X causes Y” even though X was never measured and though statistical models cannot in any case discern causes. (This fallacy is so-called because without it “most epidemiologists, especially those in government, would be out of a job”.)  Or the False Dichotomy Fallacy, which is the foundational principle of hypothesis testing. Or the Deadly Sin of Reification, whereby statisticians mistake parameters in their statistical models for real things. And so on.

***

Much of this might seem rather obvious to the uninitiated. I’m not an adept of the standard techniques, so I was at times a little puzzled as I tried to discern the particular bad habit Briggs was criticizing. But, as is increasingly appreciated (here and here, for instance), the use and abuse of the standard techniques have led wide swathes of the scientific community into error, most commonly the error of over-certainty: being more confident about what is true than the evidence warrants. An audience for this book clearly exists.

He argues that, were his recommendations to be followed, the effect would be

a return to a saner and less hyperbolic practice of science, one that is not quite so dictatorial and inflexible, one that is calmer and in less of a hurry, one that is far less sure of itself, one that has a proper appreciation of how much it doesn’t know.

But, on the other hand, it would reduce the rate at which papers could be published, would make decisions about significance matters of prudential judgment rather than scientific diktat, and would make scientific conclusions more uncertain. He is fighting an uphill battle.

Briggs is an adjunct professor at Columbia, and has done most of his scientific work in climate science (and is, as you would expect, skeptical of the predictions of statistical climate models, which provide a few of his case studies). He seems to be something of an atypical academic: this book, for instance, includes approving references to Aristotle, Thomas Aquinas, and even John Henry Newman (whose Grammar of Assent he cites as an example of non-quantitative probabilistic argumentation). It’s quite a rollicking read too. Briggs has a personality, and doesn’t try to hide it. Personally I found the tone of the book a little too breezy, the text sometimes reading almost as if it were transcribed lecture notes (I make no hypothesis), but overall the book is smart and clear-eyed, and I’m glad to have read it. Now back to Jaynes.

***

I found a good video which illustrates the problem with relying on p-values to determine statistical significance. When I consider that many of the findings of the social sciences are based on this criterion, I’m not sure whether to cringe or weep. No wonder there is a replication crisis. Witness the dance of the p-values:

Here is a short video illustrating why it is reasonable to doubt the putative findings of many (and perhaps most) published research papers employing statistical methods. This argument and others are set forth in detail by Ioannidis.
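For anyone who would rather see the “dance” for themselves, here is a rough simulation of my own devising: the same modest, genuine effect studied twenty times over, each study perfectly well conducted, yields p-values that jump all over the place.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# The same experiment repeated 20 times: two groups of 30, with a genuine but
# modest difference between them (effect size 0.5 standard deviations).
p_values = []
for _ in range(20):
    control = rng.normal(loc=0.0, size=30)
    treated = rng.normal(loc=0.5, size=30)
    _, p = stats.ttest_ind(treated, control)
    p_values.append(p)

print(" ".join(f"{p:.3f}" for p in p_values))
# Identical experiments, identical true effect -- yet some p-values fall well below
# 0.05 and others well above it. That is the "dance".
```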