Richard Feynman on Explaining Things Clearly

My dad is a physicist who idolized Richard Feynman, which meant that I idolized Feynman as well growing up. One thing that made a huge impression on me was how simply and clearly Feynman explained things. He was a genius. If he could explain physics clearly to anyone, then no one had any excuse to put on airs.

The Long Now Foundation recently republished one of my favorite essays about Feynman, Danny Hillis’s “Richard Feynman and the Connection Machine.” (Hat tip to William Barnhill.) It not only shares many great anecdotes about Feynman, it is also an oral history of the early days at Hillis’s groundbreaking parallel computing company, Thinking Machines.

The whole essay is fantastic, but I like this excerpt about Feynman’s explaining prowess in particular:

In the meantime, we were having a lot of trouble explaining to people what we were doing with cellular automata. Eyes tended to glaze over when we started talking about state transition diagrams and finite state machines. Finally Feynman told us to explain it like this:

“We have noticed in nature that the behavior of a fluid depends very little on the nature of the individual particles in that fluid. For example, the flow of sand is very similar to the flow of water or the flow of a pile of ball bearings. We have therefore taken advantage of this fact to invent a type of imaginary particle that is especially simple for us to simulate. This particle is a perfect ball bearing that can move at a single speed in one of six directions. The flow of these particles on a large enough scale is very similar to the flow of natural fluids.”

This was a typical Richard Feynman explanation. On the one hand, it infuriated the experts who had worked on the problem because it neglected to even mention all of the clever problems that they had solved. On the other hand, it delighted the listeners since they could walk away from it with a real understanding of the phenomenon and how it was connected to physical reality.

We tried to take advantage of Richard’s talent for clarity by getting him to critique the technical presentations that we made in our product introductions. Before the commercial announcement of the Connection Machine CM-1 and all of our future products, Richard would give a sentence-by-sentence critique of the planned presentation. “Don’t say ‘reflected acoustic wave.’ Say [echo].” Or, “Forget all that ‘local minima’ stuff. Just say there’s a bubble caught in the crystal and you have to shake it out.” Nothing made him angrier than making something simple sound complicated.

But what Richard hated, or at least pretended to hate, was being asked to give advice. So why were people always asking him for it? Because even when Richard didn’t understand, he always seemed to understand better than the rest of us. And whatever he understood, he could make others understand as well. Richard made people feel like a child does, when a grown-up first treats him as an adult. He was never afraid of telling the truth, and however foolish your question was, he never made you feel like a fool.
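The imaginary particle in Feynman’s explanation is what is now called a lattice-gas automaton: identical particles hopping at a single speed along the six directions of a hexagonal grid. Here is a minimal, illustrative Python sketch of the idea (the names and the single two-body collision rule are my simplification, not Thinking Machines’ actual code):

```python
# Axial-coordinate offsets for the six hexagonal directions; direction i
# is exactly opposite direction (i + 3) % 6.
DIRS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

def stream(state):
    """Move every particle one site along its direction.

    state maps a site (q, r) to the set of direction indices occupied there.
    """
    new = {}
    for (q, r), dirs in state.items():
        for d in dirs:
            dq, dr = DIRS[d]
            new.setdefault((q + dq, r + dr), set()).add(d)
    return new

def collide(state):
    """Two-body rule: a head-on pair scatters by 60 degrees."""
    for site, dirs in state.items():
        if len(dirs) == 2:
            a, b = sorted(dirs)
            if (a + 3) % 6 == b:  # exactly opposite directions
                state[site] = {(a + 1) % 6, (b + 1) % 6}
    return state

def step(state):
    return collide(stream(state))
```

Streaming moves every particle one site per tick; the collision rule scatters head-on pairs while conserving particle number and momentum, which is what lets the large-scale flow mimic a real fluid.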

Folded Corners

I was recently reminded of a Richard Feynman anecdote I once read in James Gleick’s biography, Genius: The Life and Science of Richard Feynman. I had given the book to my Dad as a gift almost 20 years ago, and — as is tradition for me when I give books to family — I borrowed and read the book immediately.

Yesterday, I borrowed the book from my Dad again to see if I could find the passage. I vaguely knew where the anecdote was, but it was going to require some flipping and scanning, as the book is over 500 pages long with dense type.

To my delight, the corner of the page I was seeking was folded over! It was the only corner folded in the whole book. Apparently, I had been so struck by that very anecdote when I first read the book 20 years ago that I folded the corner to hold the place.

I stopped folding corners years ago (switching instead to Post-its), and I barely even read paper books anymore, thanks to my Kindle. This discovery was a nice, visceral reminder of the joys of paper artifacts that you can touch and feel and fold.

For those of you curious about the anecdote that has stuck with me all these years, it was about an informal lecture Feynman gave to his peers while he was at Los Alamos working on the Manhattan Project. Here’s the passage, from pages 181-182 of the original hardback edition:

Meanwhile, under the influence of this primal dissection of mathematics, Feynman retreated from pragmatic engineering long enough to put together a public lecture on “Some Interesting Properties of Numbers.” It was a stunning exercise in arithmetic, logic, and — though he would never have used the word — philosophy. He invited his distinguished audience (“all the mighty minds,” he wrote his mother a few days later) to discard all knowledge of mathematics and begin from first principles — specifically, from a child’s knowledge of counting in units. He defined addition, a + b, as the operation of counting b units from a starting point, a. He defined multiplication (counting b times). He defined exponentiation (multiplying b times). He derived the simple laws of the kind a + b = b + a and (a + b) + c = a + (b + c), laws that were usually assumed unconsciously, though quantum mechanics itself had shown how crucially some mathematical operations did depend on their ordering. Still taking nothing for granted, Feynman showed how pure logic made it necessary to conceive of inverse operations: subtraction, division, and the taking of logarithms. He could always ask a new question that perforce required a new arithmetical invention. Thus he broadened the class of objects represented by his letters a, b, and c and the class of rules by which he was manipulating them. By his original definition, negative numbers meant nothing. Fractions, fractional exponents, imaginary roots of negative numbers — these had no immediate connection to counting, but Feynman continued pulling them from his silvery logical engine. He turned to irrational numbers and complex numbers and complex powers of complex numbers — these came inexorably as soon as one faced up to the question: What number, i, when multiplied by itself, equals negative one?
He reminded his audience how to compute a logarithm from scratch and showed how the numbers converged as he took successive square roots of ten and thus, as an inevitable by-product, derived the “natural base” e, that ubiquitous fundamental constant. He was recapitulating centuries of mathematical history — yet not quite recapitulating, because only a modern shift of perspective made it possible to see the fabric whole. Having conceived of complex powers, he began to compute complex powers. He made a table of his results and showed how they oscillated, swinging from one to zero to negative one and back again in a wave that he drew for his audience, though they knew perfectly well what a sine wave looked like. He had arrived at trigonometric functions. Now he posed one more question, as fundamental as all the others, yet encompassing them all in the round recursive net he had been spinning for a mere hour: To what power must e be raised to reach i? (They already knew the answer, that e and i and π were conjoined as if by an invisible membrane, but as he told his mother, “I went pretty fast & didn’t give them a hell of a lot of time to work out the reason for one fact before I was showing them another still more amazing.”) He now repeated the assertion he had written elatedly in his notebook at the age of fourteen, that the oddly polyglot statement e^(πi) + 1 = 0 was the most remarkable formula in mathematics. Algebra and geometry, their distinct languages notwithstanding, were one and the same, a bit of child’s arithmetic abstracted and generalized by a few minutes of the purest logic. “Well,” he wrote, “all the mighty minds were mightily impressed by my little feats of arithmetic.”
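The arithmetic Gleick describes is easy to retrace. Here is a hedged sketch in Python of two of the lecture’s signature moves, taking successive square roots of ten to approximate a logarithm, and raising e to an imaginary power (my reconstruction, not Feynman’s blackboard work):

```python
import cmath
import math

# Successive square roots of ten: after n halvings the exponent h = 2**-n
# is tiny, and 10**h is close to 1 + h * ln(10), so (10**h - 1) / h
# approximates the natural logarithm of 10.
x, h = 10.0, 1.0
for _ in range(20):
    x = math.sqrt(x)
    h /= 2
approx_ln10 = (x - 1) / h  # close to math.log(10) = 2.302585...

# Complex powers of e oscillate like a sine wave, and at pi the
# "most remarkable formula" closes the loop: e**(i*pi) + 1 = 0.
euler = cmath.exp(1j * math.pi) + 1  # magnitude ~1e-16, zero up to rounding
```

The residual in `euler` is floating-point noise; in exact arithmetic it vanishes, which is precisely the identity Feynman first wrote in his notebook at fourteen.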