Talk at “2nd annual phenomenological approaches to physics” (Sept 2019)

A talk given at the “2nd annual phenomenological approaches to physics” conference (Stony Brook University) on 28th September 2019, based on the paper “Persistence and nonpersistence as complementary models of identical quantum particles” (2019).

Please note that the transcript has been lightly edited for clarity.


Slide 1

Slide 2

According to our everyday conception of the physical world, the world out there consists of things or objects…

Slide 3

…and these things persist over time, so that the appearances that we see in the moment are, as it were, temporal cross-sections of the life history of objects. And the idea of persistence is what underwrites the possibility of us saying that this thing that I’m seeing now is the same as the object that I saw earlier.

Slide 4

And these objects possess properties—shape, colour, position, and so on.

Slide 5

And these objects are re-identifiable, which means that there are some properties of these objects which are relatively stable over time, and these are the perceptual handles by which we can re-identify the objects.

Slide 6

So in other words, I want to distinguish re-identifiability from persistence. Persistence is an underlying theoretical assumption, if you like; re-identifiability is what allows agents in particular to actually say “this is the same as something else.”

Slide 7

And this everyday conception of reality is abstracted by classical physics.

Slide 8

There, the physical world consists, if you like, of point particles. I’m talking of classical mechanics.

Slide 9

And these particles are persistent, so they can be theoretically labelled.

Slide 10

And they possess properties. So here the classical framework makes a sharp distinction between static properties like mass and charge, which characterise the object, and dynamic properties, such as position and velocity.

Slide 11

And finally these things can be re-identified.

Slide 12

Now the classical conception introduces the possibility that two particles could be entirely identical with one another. In other words, they could have the same static properties: the same masses and the same charges, for example.

This is obviously not something that has any correspondence in our everyday experience, but it gives rise to the following puzzle: that we could have two particles at the beginning here, supposed to be identical particles, which I can arbitrarily label A and B at some initial time.

Slide 13

And then I may see them at some other locations later on. And because of my underlying assumption of persistence, I know that one of them is A and one of them is B, but by measuring their masses at this time, I cannot tell which is which.

So there’s a gap here between my theoretical assumption of persistence, on the one hand, and my ability as an agent to really re-identify, on the other.

Slide 14

But here the classical framework saves the agent by saying that the particles travel along continuous trajectories, and it’s possible in principle, for an ideal observer, to track these particles over time without causing disturbance.

So in other words, there’s no problem. Re-identifiability is still possible by an ideal agent in this context. So the bottom line here is that the possibility of identical things in classical physics is a new possibility that arises because of this abstraction, but on the other hand, it doesn’t cause any particular theoretical challenge… we can deal with a bunch of identical particles, in an ordinary way, we can label them over time, and we can in principle, re-identify them.

Slide 15

But this situation fundamentally changes with quantum theory.

Slide 16

And this first came to light through a derivation by Bose of the Planck blackbody radiation curve. So this diagram here is supposed to represent cells in phase space, and these are representing, let’s say, photons. And, in Bose’s calculation, what he had to do is to calculate the number of ways of organising a number of photons amongst these cells in phase space.

And remarkably, to get the right answer, really to get the mathematically right answer, he effectively assumed that all that matters is the number of photons in each box.

So in other words, it’s just like dollars in your bank account. You don’t care which dollars are in your bank account, just that there are dollars in your bank account. So this is the remarkable thing.
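To make the counting difference concrete (a toy illustration of my own, not one from the talk): take two photons and two cells. If the photons are treated as labelled individuals \(A\) and \(B\), there are \(2^2 = 4\) arrangements; if only the occupation numbers of the cells matter, there are just 3:

\[
\underbrace{AB \mid \varnothing,\quad A \mid B,\quad B \mid A,\quad \varnothing \mid AB}_{\text{labelled photons: } 4 \text{ ways}}
\qquad\text{vs.}\qquad
\underbrace{(2,0),\quad (1,1),\quad (0,2)}_{\text{occupation numbers only: } 3 \text{ ways}}
\]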

Slide 17

And this was interpreted almost immediately afterwards by Langevin at the famous 1927 Solvay Conference as follows:

“Formally one decomposed the phase space into cells, attributing an individuality to each constituent of the system."—This is the procedure that was developed by Boltzmann.

“It seems today that one must modify this method by suppressing the individuality of the constituents of the system.”

Slide 18

And interestingly, considerably later, Schroedinger, in the last decade of his life, made this very powerful statement:

“If I observe a particle here and now, and observe a similar one a moment later, at a place very near the former place, not only cannot I be sure whether it is the same, but that statement has no absolute meaning.”

So it’s a very radical re-conceptualization of what ‘object’ might mean. However, the situation is not that simple. It never is in quantum theory, it seems.

Slide 19

Very shortly after Bose presented this calculation, Heisenberg and Dirac, who were developing the quantum formalism that we now use, incorporated Bose’s procedure via a new mathematical rule called the Symmetrization Postulate.

Slide 20

And roughly speaking, Dirac argued as follows. He said: imagine you’re trying to describe two electrons in a helium atom, and you write down a state like this, where one and two—Dirac implicitly assumed—refer to this electron and that electron. So the labels are referring to specific electrons.

Slide 21

He said that for an observer that cannot re-identify the electrons, this [second] state should yield the same predictions as the first state. So here we have two states, but they would yield exactly the same predictions for all of these observers, which means that we have a formal redundancy in the theory.

Slide 22

And we can remove that redundancy by simply disallowing either of these states individually, and actually saying that the only allowable states are things like this, symmetrized states.

Slide 23

And he noticed that there was another possibility, where you have a minus sign in there…these would both satisfy his criterion.

And he showed that indeed, if you impose this rule, it [the symmetrized state] recovers essentially the results of Bose’s calculation. And the other [antisymmetric] case corresponds to what we call fermions.
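In standard textbook notation (mine, not necessarily the slide’s): the two product states Dirac starts from are

\[
\psi_a(1)\,\psi_b(2)
\qquad\text{and}\qquad
\psi_a(2)\,\psi_b(1),
\]

and the symmetrized combinations that his criterion allows are

\[
\Psi_{\pm} = \tfrac{1}{\sqrt{2}}\bigl[\psi_a(1)\,\psi_b(2) \pm \psi_a(2)\,\psi_b(1)\bigr],
\]

with the plus sign corresponding to bosons and the minus sign to fermions.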

And I just want to emphasise that this rule underpins everything we understand about the periodic table: you cannot understand the behaviour of atoms heavier than hydrogen without using these rules. You cannot understand the behaviour of metals, the conductivity of metals, or the stability of matter, without these rules. So they’re absolutely fundamental.

Slide 24

And this is how Dirac interpreted or read his rule, he said, “If a system in atomic physics contains a number of particles of the same kind, the particles are absolutely indistinguishable from each other.”

I want to emphasise that, in the language I’m using, he’s assuming that the particles are persistent at the theoretical level, but they are not re-identifiable by any observer. So it’s a very different position.

Slide 25

So just to emphasise, there are two interpretational options, the first arising more from the Bose way of thinking. It also, by the way, for those interested, is what’s suggested by a reading of the quantum field theoretic formalism.

So, on this view, what we call identical particles are not persistent individuals. They lack persistence, so they’re not really even individuals as we ordinarily understand that word.

Slide 26

The other option is that identical particles are persistent at a theoretical level, but they’re not re-identifiable by any observer.

And the second view, I think, is probably dominant amongst physicists, based on the textbook descriptions. Even in analytical philosophy, I found that the vast majority of people subscribe to the second view, often not even acknowledging that the first view exists.

But Steven French for example, wrote an article, a very nice article, where he lays out these two metaphysical packages, as he called them.

Slide 27

However, the problem with both of these interpretational options is twofold. First of all, if you actually look at what physicists do when they interpret experimental data, they routinely assume persistence and re-identifiability.

Slide 28

So for example, here’s a bubble-chamber image, showing, as you can see, what look like particle tracks. That’s what physicists say: “This is a track created by a particle.”

Slide 29

If you zoom in, you see that this track here looks like a sequence of bubbles. So what we have is the raw data as a sequence of bubbles, but it is we who impose this additional idea and say, “Well, no, this sequence of bubbles is created by the same object.”

So we are, as a matter of fact, assuming persistence, and we are also saying that these particles are re-identifiable, at least in this context. And this is the very bedrock of our theorising, so we have to take this rather seriously.

But neither of these interpretational options really gives any kind of account of how we could reconcile the practice and the interpretation.

Slide 30

The second difficulty is that neither of these interpretations, when mathematized in a natural way, actually give rise to the mathematical form of the symmetrization postulate.

In other words, a certain bar that you might try to impose on any interpretation of any part of quantum theory is to say, “Okay, it’s all very well you say this… that this is how we should conceptualise what’s going on, but can you mathematize what you’re thinking in a natural way, and then systematically derive the mathematics that you’re trying to interpret?”

Slide 31

If we can do that, that gives us a lot more confidence about the interpretation. However neither of the interpretations I described give rise to such an understanding of the symmetrization postulate. Dirac was well aware of this in 1930.

One of the unfortunate things is that in almost all textbooks of quantum theory I’ve seen, they give the impression that it’s very trivial to derive the symmetrization postulate via the second interpretation, which is not true.

So, the question then is, well, how do we proceed? How do we come up with a better interpretation of identical particles?

Slide 32

I want to zoom out here slightly and say that the challenge that we face in understanding identical particles is just a special case of the problem we have of interpreting quantum theory as a whole, which we’re all, in one way or another, talking about.

So I want to ask the question, what is it about quantum theory that makes it so difficult? One answer is that, well, it’s counterintuitive. It’s not conforming to our everyday experiences. There’s truth in that, but that doesn’t particularly help us, I think, in making progress. What I’d like to propose here is another way of thinking about why it’s difficult.

Slide 33

If you take, for example, classical physics, which we say we generally do understand—even though we may disagree with some of its presumptions, like determinism—we can understand what it’s saying when it’s expressed in natural language.

The classical conception of reality is essentially based on this clockwork universe, to put it in a nutshell.

Slide 34

And that really lies at the heart of classical physics. So there’s a classical conception of reality [bottom], and then there’s a classical mathematical framework [middle], which arises as a natural mathematization of this conception.

So what I mean by the framework here is things like, the system is described by states, and the dynamics is a one-to-one map over state space, and measurements in principle are possible that read out the state without changing it.

Slide 35

And then specific theories of classical physics [top] are built within that framework by making more precise what we mean by the state in that particular case, what we mean by the dynamics in that particular case.

So this flow from a conception of reality all the way up is very elegant and gives you a sense of conceptual understanding of the mathematics, no matter how complex it might be.

Slide 36

So what’s the situation in quantum theory? Well this is what we’d like, I think. We would like some sort of quantum conception of reality. But the situation historically was that the quantum mathematical framework…

Slide 37

…by which I mean this abstract quantum formalism, with the idea of states being represented by complex vectors, measurements represented by self-adjoint operators. Outcome probabilities governed by the Born rule. And unitary dynamics, which has been mentioned. And the idea of composite systems. By the way, this is what directly gives rise to the possibility of entanglement.

…All of this was arrived at through a rather ad hoc process, involving some new physical ideas like de Broglie’s wave particle duality, and a lot of mathematical guesswork, and induction.
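For reference, and in my own notation rather than the slide’s, the framework just listed can be summarised roughly as:

\[
|\psi\rangle \in \mathcal{H},
\qquad
A = A^{\dagger},
\qquad
p(a) = |\langle a|\psi\rangle|^{2},
\qquad
|\psi(t)\rangle = U(t)\,|\psi(0)\rangle,
\qquad
\mathcal{H}_{AB} = \mathcal{H}_{A} \otimes \mathcal{H}_{B},
\]

that is: states as vectors in a complex Hilbert space, observables as self-adjoint operators, the Born rule for outcome probabilities, unitary dynamics, and the tensor-product rule for composite systems (which is what opens the door to entanglement).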

Slide 38

So this framework was arrived at in a rather ad hoc way. And interpretations of quantum theory, pretty much until the present day, largely take the quantum framework as a given, and then try to append an interpretation to it.

They do that in all kinds of different ways, often by reformulating the framework, in a different mathematical form or language.

But there are a lot of problems with this, and what often happens is that the interpretation explains one part of the mathematics but neglects the rest.

For example: in the many-worlds interpretation, even if you accept its derivation of the Born rule, it presumes the rest of the mathematical structure of quantum theory.

Slide 39

Same thing with the pilot wave model. It accepts the basic form of the Schroedinger equation. That’s taken as a given. And so then, what are you explaining? Well you’re explaining only a little piece of that mathematics.

So, this is the problem with this interpretational approach. We have many different interpretations on the table. They disagree, and there’s no way to really adjudicate between them.

So the idea that I’ve been pursuing for the last 15 or so years, is to basically raise the bar of what it means to give an adequate interpretation of quantum theory. And the idea is that what we should do, is follow a two-step process…

Slide 40

…that rather than try to interpret in one step, what we should do is instead to formulate a set of physical principles, which are somewhat transparent to us conceptually, when expressed in natural language, analogous to the postulates underlying classical physics, and then derive the quantum mathematical framework from that… that’s step one.

Slide 41

And that’s known as the reconstruction programme of quantum theory.

Slide 42

And the second step would be interpretation. So once we’ve got the physical principles which distill the full essence of quantum theory down to some statements that we can grasp conceptually, then we interpret, we interpret those. We don’t interpret the mathematics directly.

And this has a two-fold benefit. First of all, your interpretation is, as it were, battle-tested against the whole of the quantum formalism, and not just one little piece. And the second thing is that the physical principles are much more philosophically digestible. We can reflect on them because they’re expressed in natural language rather than some arcane mathematical formalism, which is difficult to grasp conceptually.

Slide 43

I should just say that there are a number of reconstructions that have been created over the last 10 or so years, but there’s been very, very little work that I’m aware of, that tries to say, “okay, here’s a reconstruction. Now let’s interpret it.” So it’s very much an open area of research.

So let me just say very briefly how you reconstruct, because it perhaps even sounds magical: how do you reconstruct this mathematics? Isn’t that difficult? I mean, how do you even begin?

Slide 44

The approach that I take, and that many of my colleagues take (for example Lucien Hardy, who has come up with another reconstruction which is well known), is the informational approach.

Slide 45

And I think it’s very nicely captured by John Wheeler, who was mentioned earlier, in his slogan, “It from bit.”

Slide 46

So the idea is to hold our theories and the concepts within them very lightly, and pay great attention to the bare facts of experience obtained by observers (agents) interacting with measurement apparatuses.

Slide 47

I cannot get into the details of this, but I just want to give you a sense of how I proceed and how many of my colleagues proceed.

We consider measurement setups like this. On the left is a source of electrons. The measurement device itself is a Stern–Gerlach device. These are detectors, and we just imagine detections, data. Here, the possible outcomes are labelled one and two.

Slide 48

One can then allow for the possibility that the detector is much larger (coarse-grained), so that (roughly speaking) it can’t distinguish between these two options.

Slide 49

And then we can imagine experimental setups like this, where a series of outcomes is obtained. And our job from a theoretical point of view is to say, “what’s the probability of that sequence of outcomes?”… or rather, “what’s the conditional probability of the second and third outcomes, given the first?”
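In symbols (my notation): if the three recorded outcomes are \(o_1, o_2, o_3\), the quantity the theory is asked to supply is

\[
\Pr(o_2, o_3 \mid o_1).
\]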

Slide 50

And in the same experiment, there are different possible sequences of outcomes…

Slide 51

…and the job of the calculus of quantum theory is to relate the probabilities of these together.

So this is just a quick illustration of what I’m talking about. In my previous work, which I won’t be talking about today, I derived the abstract quantum formalism, which I showed to you earlier, from a particular perspective, known as the Feynman perspective.

Slide 52

And the summary of the reconstruction, if I expressed it in words, is given here. As I say, I won’t go into the details of this, but I just thought it’d be interesting to mention this to you, and focus on the implications which are very interesting.

You see, because you reconstruct the quantum theory formalism step by step, you know exactly what you’re putting in.

Slide 53

And the extraordinary thing for me is that there is no mention of space, of things in space, the dimensionality of space, the topology of space. There’s no concept of matter. There’s no concept of energy or momentum going into the derivation.

So the abstract quantum formalism, which I showed you earlier, can be derived as a kind of free-standing structure, which stands independently of many of the fundamental concepts that underlie classical physics, like space and matter and energy and momentum.

So this is I think, a very striking implication, and we can make this confident statement only because of the reconstruction… we know exactly what we’ve put in.

Slide 54

Another implication follows from the fact that the abstract quantum formalism is thereby derived as a whole—the unitary evolution, the measurement process, and the Born rule are derived as one whole thing.

Slide 55

What this means is that, from this point of view, quantum states arise directly as a way of connecting the observer’s experiences. So, the idea of collapse is certainly not as puzzling as it normally is.

I hedge what I say here, because it’s still a puzzle as to where you make the cut between the observer and the quantum world.

All right? So why do we describe certain physical processes unitarily, while of others we say, “This physical process is a measurement”? What grounds that distinction? That’s an unanswered question.

So the measurement problem, as it were, doesn’t go away, but I think we can make it more precise, and be clearer about what really needs to be explained.

Slide 56

Now, before I go on to talk about identical particles, as an interlude, I need to talk about the Feynman formulation of quantum theory. This is the formalism in which I reconstruct the symmetrization postulate, and I also recommend it to you if you’re not familiar with it, because it’s a very clean and simple way of expressing quantum theory, without Hilbert spaces, and unitary operators and so on.

Slide 57

So the way it works is this. Imagine a particle that’s moving from A to B. This is moving in one spatial dimension for simplicity. Classically, it moves along a definite path. So then how do we create a quantum model of this system?

Slide 58

Feynman says, well, what we should do is consider all possible paths from A to B, and assign each of them a complex number—an amplitude. And then the total amplitude for going from A to B, we shall say, is the sum of those amplitudes. That’s the sum rule.

Slide 59

And if you want to work out the amplitude of a particular path, what we can do is to break it up into sub-paths, work out the amplitudes of those, and multiply them together—that’s the product rule.

Slide 60

And finally, we have a probability rule that links this abstract level of amplitudes with something that we can actually access experimentally.

So this is the probability rule, which says that the conditional probability of B given A is equal to \(|z|^2\), the modulus squared of the total amplitude \(z\).
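Gathered together in symbols (my notation, but consistent with what has just been said):

\[
z_{A\to B} = \sum_{\text{paths } i} z_i \quad\text{(sum rule)},
\qquad
z_{\text{path}} = z_{\text{sub-path 1}}\, z_{\text{sub-path 2}} \cdots \quad\text{(product rule)},
\qquad
\Pr(B \mid A) = |z_{A\to B}|^{2} \quad\text{(probability rule)}.
\]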

Slide 61

And that’s it. Those are the key rules in Feynman’s perspective of quantum theory.

And remarkably it only takes one additional assumption to derive the abstract formalism I showed you earlier, minus the tensor product rule.

And so how do we deal with identical particles? We haven’t said anything about that yet.

Slide 62

Well imagine I’ve got two non-identical particles…

Slide 63

…that scatter off each other like that. Let’s call the amplitude of that \(\alpha_{12}\).

Slide 64

But they could also scatter in a different way…

Slide 65

…where the blue one goes up. Let’s call the amplitude of that \(\alpha_{21}\).

Slide 66

Now we imagine two identical particles, each of which behaves just like either of the non-identical particles.

Slide 67

And they scatter. And now there’s only one way that can happen, from an observer’s point of view. The amplitude of this, according to Feynman’s way of thinking, is \(\alpha_{12} \pm \alpha_{21}\).

So this is the symmetrization postulate in Feynman’s language, and that is the form in which I’ll derive it.
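Spelt out in symbols (my notation): for the two-particle scattering just described,

\[
z_{\text{total}} = \alpha_{12} \pm \alpha_{21},
\qquad
\Pr(\text{outcome}) = |\alpha_{12} \pm \alpha_{21}|^{2},
\]

with the plus sign for bosons and the minus sign for fermions.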

Slide 68

Let me show you how this is done. I’m going to omit a lot of technicalities so I can focus on the concepts.

Slide 69

So imagine that I’ve got two identical particles, let’s say. What we do is we focus on the data, the raw data. And when I say we focus on the raw data, there are no objects yet. There are just flashes, if you like, to make it more picturesque.

So you imagine two flashes at some initial time…

Slide 70

…and two flashes later on. That’s all we have. That’s the raw data.

Slide 71

Now there are two kinds of models that one can create of this flash data. One is what I call the persistence model, where we assume that there are individual persistent entities that underlie each of the flashes. This would be the most natural or obvious model to make.

Slide 72

And in that case, we can tell two different stories about what’s happened here. Either the particles—now the objects—have moved or somehow transitioned like this. I call that the direct transition.

Slide 73

Or they’ve gone like this. And what allows us to tell these stories is the underlying assumption of this persistence model, that there are individual persistent entities that underlie these flashes.

Slide 74

However, there is another model, and in that model we refrain from assuming that there are individually persistent entities. We just imagine that there’s one abstract entity which manifests itself as two flashes.

It’s a different way of thinking. It’s perfectly reasonable. And I want to emphasise that, in everyday life, I believe we actually move between these models all the time.

So imagine that you’ve come to this room, and it’s completely dark. And suddenly you start seeing flashes of light. Imagine that there are two flashes of light at each moment.

Slide 75

One possible model is that you’ve been trapped in the room with two synchronously flashing fireflies. And so you’ve got two flashes of light every time. So that’s the persistence model.

But the other possibility is that there’s some sort of Star Trek-like holographic projector in the room that’s projecting two flashes of light at every moment. And something like that would be the nonpersistence model, where we’re saying there’s one entity which is generating both flashes. There’s nothing else going on. There aren’t any fireflies; there are just two flashes at every moment.

And we switch back and forth between these models. When we’re looking at a computer screen, we know that all of the pixels of light there are being generated by one underlying object. We certainly aren’t surprised when a little object apparently just disappears from the screen.

So I contend that we actually use these models, but we switch back and forth.

Slide 76

What I’m going to say about identical particles, is that we can’t do without either of these models. We actually have to synthesise both of them.

Let me show you how I propose to do that.

Slide 77

We can describe each of these models within the Feynman framework that I’ve just sketched. What that means is that, in this case, we can assign an amplitude, \(\alpha_{12}\), to this process…

Slide 78

…and likewise an amplitude to the indirect transition.

Slide 79

And we could also then say in the nonpersistence model there’s a certain amplitude associated with this process—two flashes, two flashes—being noncommittal about what’s going on. We just say there’s a single, persistent system that’s generating these two flashes.

Slide 80

And then the key idea is to actually synthesise these, mathematically, and say that the amplitude in this picture here, in the nonpersistence picture, is some unknown function of these two amplitudes.

So that’s the idea of fusing these two pictures at a mathematical level.
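In symbols (using \(H\) for the unknown function, as it is referred to later in the talk):

\[
z_{\text{nonpersistence}} = H(\alpha_{12}, \alpha_{21}),
\]

where \(\alpha_{12}\) and \(\alpha_{21}\) are the amplitudes of the direct and indirect transitions in the persistence model, and the form of \(H\) is, at this stage, left completely open.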

Slide 81

I then make an analogous assumption about what happens if you have flash, flash, flash—three sets of flashes.

Slide 82

And so something analogous happens here: in a persistence model, there are four possible stories that the model tells about what may have happened. Each of these has an amplitude. And we assume that the amplitude in the nonpersistence model is some function of those four.
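Schematically, and with hypothetical labels of my own: if the four persistence-model stories have amplitudes \(\gamma_1, \gamma_2, \gamma_3, \gamma_4\), the assumption is that the nonpersistence amplitude for the three-time process is some unknown function of them,

\[
z'_{\text{nonpersistence}} = G(\gamma_1, \gamma_2, \gamma_3, \gamma_4),
\]

\(G\) being the function referred to below.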

And there’s one more technical assumption which I won’t go into, but I’ll just mention it: it’s called the Isolation Condition. And remarkably, that’s enough. Astonishingly, at least to me, once you require consistency of these assumptions with the Feynman formalism, it actually determines what \(H\) is.

Slide 83

So now I’m going to get a little technical, but I didn’t want it to appear just like magic, so I’m going to give you one example of how this is done.

The essential idea is that it’s sometimes possible to calculate the amplitude of a certain process in two different ways, according to these rules. And for consistency they must agree. Each such consistency requirement gives rise to what’s called a functional equation. If you formulate enough of these equations—as it turns out, three are sufficient—they determine \(H\).

Slide 84

I’m just going to give you a quick example of how this is done, just a graphical illustration.

So I want to calculate the amplitude of this process. I can do it, according to the Feynman rules, in two ways.

Slide 85

First, I can work out the amplitude of the bottom process…

Slide 86

…which itself I could calculate using the H function.

Slide 87

I can do the same for the top process.

Slide 88

And then, by Feynman’s product rule, I can just multiply those amplitudes together. That’s the amplitude of that process.

Slide 89

However, I could’ve calculated that amplitude using this G function instead. So rather than break this process up into two parts, I can say, “Well let me just directly work with what’s on the left.”

Slide 90

We can again use the Feynman product rule…

Slide 91

…to work out the amplitudes of each of these four processes in terms of these quantities here.

Slide 92

And likewise for all the others.

Slide 93

And then we can use this G function to combine those amplitudes together.

Slide 94

All right, and so… the consistency requirement is that, well, if everything I’ve said is internally consistent, these two expressions had better be equal.
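Very schematically, and with hypothetical labels of my own rather than those on the slides: if \(a_1, a_2\) are the two story amplitudes of the bottom sub-process and \(b_1, b_2\) those of the top sub-process, the first route gives \(H(a_1, a_2)\,H(b_1, b_2)\) by the product rule, while the second route applies \(G\) to the four composite amplitudes built from the same quantities. The consistency requirement is then a functional equation of roughly the form

\[
H(a_1, a_2)\, H(b_1, b_2) = G(a_1 b_1,\; a_1 b_2,\; a_2 b_1,\; a_2 b_2).
\]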

So, excuse the technicality, but I hope I’ve given you a feeling for how it’s actually done in practice.

Slide 95

When you solve the functional equations—there are three of them—you get the function \(H\), and this is the symmetrization postulate in Feynman’s form.
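That is, in the notation used above, the solution is

\[
H(\alpha_{12}, \alpha_{21}) = \alpha_{12} \pm \alpha_{21},
\]

exactly the symmetrization postulate in the Feynman form stated earlier.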

Slide 96

It generalises to any number of particles. Just for completeness’ sake, this is what it looks like. So you get bosons and fermions as the two possibilities.
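For reference, the general form is presumably along these lines (my rendering, not the slide’s, and up to an overall normalisation): for \(N\) identical particles, with \(\alpha_\sigma\) the amplitude of the persistence-model story labelled by the permutation \(\sigma\),

\[
z_{\text{total}} = \sum_{\sigma \in S_N} \alpha_\sigma \quad\text{(bosons)},
\qquad
z_{\text{total}} = \sum_{\sigma \in S_N} \operatorname{sgn}(\sigma)\, \alpha_\sigma \quad\text{(fermions)}.
\]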

Slide 97

So what’s the interpretation of this? What I’ve just described is the reconstruction part, which I completed a few years ago, but it’s taken time for the penny to drop, and for me to understand, at least in a way that convinces me, what this really means. How do I interpret this?

Slide 98

Let me summarise: the key feature of the reconstruction is that we have two different models of the same bare data, but they’re incompatible. They’re incompatible in terms of what they assert about the underlying reality.

Slide 99

And yet—and this is the extraordinary thing, I think—you can synthesise them, mathematically. Now the question is what does that mean, and how is it possible to synthesise two things that are incompatible? Does that actually make any sense?

I would say that the way to understand it is that each of these models is telling a story that goes beyond the data. So it’s quite possible for them to be inconsistent if taken literally, and yet still be able to be synthesised.

Slide 100

And on this basis, I believe that it’s correct to call these complementary models of the same situation in the sense of Bohr.

Slide 101

Now, Bohr and complementarity have been mentioned many times. Let me just give a quote here. He is here talking about the nature of light.

Slide 102

For example, here’s a double slit experiment, where you have a screen, and you actually see this thing being built up in real-time. It’s a beautiful experiment.

And so you can see that the detections are particle-like, which naturally leads us to think, “Oh, these are particles.” Right, that would be the particle model. But the distribution of these detections is wave-like, which leads us to a very different model.

And so we have good grounds for each of these models, and yet they’re incompatible.

Slide 103

And so the way I understand this—well, this is my current understanding—the complementarity of, as it were, particle and wave, is that the particle model is a classical type of model: it says the electron has the property of position in space, and that is all. It’s not moving anywhere. It just has the property of position. In other words, it’s just half of the classical model.

Slide 104

The wave model… I think the term “wave” is unfortunate because all the waves we know of in nature are locally instantiated, so when we say “wave,” that is just a collective noun for local disturbances. So the notion of wave is a higher-level notion, which is essentially dispensable. Right? But when we talk about the wave model of an electron, we’re actually saying it’s not located anywhere. It’s not in space. It doesn’t have the property of being located.

What does it have? Well, from what I can see, there are two levels of model that one can create for the electron. The minimal requirement is directionality. It has a directionality.

Slide 105

The more comprehensive model will say that it also has a wavelength. It actually has some additional property.

Slide 106

And most importantly, as I said, it’s not located anywhere. And just by the way, when you go to many electrons, then you have a wave function in configuration space, so this wave, whatever it is, doesn’t even exist in ordinary space. So this wave model is very abstract. It abstracts the wave from our ordinary space perceptions.

Slide 107

And one can see Feynman’s sum rule as an expression, a formal expression, of this wave–particle complementarity. So here is a double-slit experiment. Emission of electrons at A, detection here at C. In between are the double slits.

On the wave picture, all we can say is that there’s a directionality. There’s a moving from A to C. And we want to know the amplitude of this process, but we’re not thinking of this electron as located between detections, so we cannot say that it passed through one slit or the other.

So there’s only an amplitude for this process. That’s all we can say.

Slide 108

However there’s another model, which is the particle model, where we presume the particle is always located. According to that model, we can tell two possible stories. The top story and the bottom story, and we can associate an amplitude with each of them.

And we can then regard the Feynman sum rule as a way of synthesising these two pictures, particle and wave. And the reconstruction of the Feynman rules that I briefly mentioned essentially carries this through mathematically, although it’s actually a lot more complicated than the derivation of the symmetrization postulate.
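In symbols (my notation): if \(z_{\text{top}}\) and \(z_{\text{bottom}}\) are the amplitudes of the two particle-model stories, passing through the top or the bottom slit, then

\[
z_{A\to C} = z_{\text{top}} + z_{\text{bottom}},
\qquad
\Pr(C \mid A) = |z_{\text{top}} + z_{\text{bottom}}|^{2},
\]

which is just the sum rule and the probability rule of the Feynman formulation applied to this setup.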

Slide 109

So, just to conclude with some basic thoughts about this… I think, certainly for me as a physicist, the reason I’ve invested so much energy in this is that I do want a sense of what aspects of my ordinary understanding of the physical world I can keep, and what I need to give up (or modify) in the face of quantum theory. I do want a sense of that continuity.

I don’t want to just say this is some sort of black box, and it somehow describes a reality that’s simply beyond my ordinary perception and comprehension.

So… connecting this to the everyday conception I mentioned at the beginning… we tacitly assume that objects persist over time. In other words, we assume that the appearances are generated by persistent objects. That’s another way of putting it.

Slide 110

We also assume that objects take the same form between their appearances that they do during their appearances. So, this [object in hand], when I’m not looking at it, takes the same form.

But that gets us into direct trouble with the double-slit experiment. So that’s something we should be very cautious about.

Slide 111

The two complementarities I talked about directly challenge these assumptions. The idea now is that the persistence of individual objects is a modelling assumption that’s somehow hardwired into our perception. It’s something that children learn over the first 18 months of life, as Piaget showed, so it’s something that seems to be very deeply part of our way of engaging with the world, but it’s still really an assumption.

And there’s another, alternative assumption, at least theoretically, which is that the appearances all around us are manifestations of one thing, one system.

Slide 112

The second is the more familiar one, which is that the object’s appearance depends critically on the choice of measurement that we make. And between appearances, we cannot even say, in general, that an object has a particular property.

At most we can say—and this is echoing Michael Epperson’s work—that it has the potentiality of realising one of a set of possible properties upon measurement.

Slide 113

One of the things I want to just impress upon you is that Bohr’s formulation of the concept of complementarity was essentially descriptive. The formalism was already given. He’s providing an interpretation.

But what’s possible now through the reconstruction programme is to turn this complementarity idea into a constructive means to derive this mathematics. And so what I’m saying is that complementarity can be turned into a procedure for actually synthesising incompatible models.

Slide 114

And so, for me, the important aspect here is that we can now move beyond the everyday conception of reality in a very controlled way. We can say, “Here are two different pictures that we can make, two different classical type of models that we can make of the same data, and yet we can synthesise them.”

And then we thereby get a model which is more abstract than any everyday model, and yet we can still grasp it in some definite way.

Slide 115

Finally, I’m very much learning about phenomenology, but I see many points of contact. I’d just like to make a couple of simple points.

The first is that this reconstruction programme is very much based on the idea of focusing on primary data, and the motivation is the same as the notion of immediate givenness of sensory perception in phenomenology: what is immediately given to us has an immediate justifiability to us.

We don’t feel we need to justify why we see two flashes; we just say, “Well, there are just two flashes.” And these are intersubjectively communicable. That’s critical. So that comes in through the emphasis on looking at measurement devices and clicks.

Slide 116

And in the whole programme, as Wheeler’s quote brought out very elegantly, the assumptions that we make in everyday life, and that we make, for example, in classical physics, are very much bracketed. The idea is that we want the data to breathe. We want to give it space to speak to us, and we need to bracket if we’re going to do that.

Slide 117

The other idea I want to bring out—this is a question, really, to the phenomenologists, about whether this is interesting or could be developed in some way—is that we can view these complementary pictures as perspectives, as different perspectives or stories about the basic fact of experience, or what we perceive.

But, interestingly enough, the perspectivity here is quite different from, for example, that found in the phenomenology of perception. The perspective here doesn’t arise because the observer, who’s spatially located, changes his or her spatial relationship to an object, or sees the object at different moments in time. This shift of perspective arises due to a shift of mental model, a change of mental model from one to the other. So it’s a more internal shift that’s happening.

Slide 118

We commonly have the idea that if you have a physical object, you can have different perspectives on it—spatial perspectives. We can also have different temporal perspectives–we can observe it at different times. But they’re all consistent with one another, and can be fused together into a single picture. That’s our everyday experience.

But here we have different perspectives which cannot be fused together at the same level. We have to go to a completely different level if we’re going to synthesise them.

Slide 119

So, that’s the end of my talk. I’ll just leave you with a couple of references. The first is the main paper, and it’s written very conceptually, and so I think it ought to be rather accessible.

The second paper is where the actual mathematics of reconstruction is done. And finally, this is my website on which you can find for example, recorded talks, which show how to reconstruct the Feynman rules of quantum theory.

Thank you very much for your attention.

[Michel Bitbol]: Thank you, it’s going to be a very short question, but first of all, I would like to say something about your work. I thought that your paper, the paper you sent me, was a wonderful paper, and I think your talk was really a wonderful talk, so thank you, first of all.

Second of all, so, very short question. Is this nonpersistence model simply the field theoretic model?

[Philip]: That’s a great question. The way I see it is the following. In a field theoretic model, the basic concept is Fock space. So in other words, all we care about is how many excitations there are, as it were, in each cell. So on the surface it looks like, “Oh, this is just a nonpersistence model.”

However, you then assume the formal structure of the commutation and anti-commutation relations, which bring in the bosonic and fermionic behaviour.
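For reference, these are the standard relations being referred to (in conventional notation): for mode operators \(a_i, a_i^\dagger\),

\[
[a_i, a_j^\dagger] = \delta_{ij} \quad\text{(bosons)},
\qquad
\{a_i, a_j^\dagger\} = \delta_{ij} \quad\text{(fermions)},
\]

with the remaining commutators (for bosons) and anticommutators (for fermions) among the \(a\)’s and among the \(a^\dagger\)’s vanishing.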

And so the question is: what does that formalism mean? We have here a classic situation: if you look at non-relativistic [quantum] physics and the symmetrization postulate, the natural reading of that formalism is to say the identical particles are persistent, but not re-identifiable.

But there’s actually a one-to-one correspondence between a quantum field theory with a constant number of particles and the non-relativistic formalism. So you’ve got here a situation where two formalisms that actually map to one another can be naturally read in two different ways. And you can see here the danger of reading the formalism.

So what happens with a quantum field theory formalism is you see these quanta, and you think, “Oh, this is nonpersistence,” but then you forget about explaining, or accounting for, the commutation and anti-commutation relations. And that’s where we smuggle in, from my point of view, the persistence model.

[Philipp Berghofer]: I guess I was just thinking about your talk, and I personally want to emphasise that I think this project of reconstructing physics, reconstructing quantum physics… quantum mechanics… is really super interesting from a phenomenological point of view, particularly if we consider what you could do with the kind of fieldwork that was mentioned by Harald [Wiltsche].

So, secondly, do you think that the work you are doing also has some direct consequences concerning our picture of quantum field theory? How would you consider the relationship between quantum mechanics and quantum field theory?

‘Cause if you think about particles from the perspective of classical mechanics versus the perspective of quantum field theory (you’ve mentioned this briefly), particularly if you subscribe to the field interpretation, according to which the fundamental objects are not particles but fields, and particles are excitations of fields and so on, then we would say, “Okay, we should subscribe to the—” (movement drowns out speaker)…according to which it is not some solid stuff, but is sitting over time, more as an analogy to a wave in the sea (inaudible)…one particular wave over that time.

I’m a little confused. So how do you estimate the relationship between quantum mechanics and quantum field theory according to your view?

[Philip]: Yeah, so it’s a great question, and it’ll take a little bit to get through this. So as I was saying to Michel, if you restrict quantum field theory to a given, constant number of particles, then you can make a mapping between the quantum theoretic description… quantum mechanical description, of a set of identical particles, and the quantum field theoretic description.

So at that level, the fact that one would be inclined to read these mathematical formalisms differently—the first being interpreted on the basis of the idea of persistence but not re-identifiability, the second as nonpersistence—that is really a kind of trick. We’re essentially being fooled, because we’re not being careful enough, in my view.

So what I’m claiming here is that, in both cases you need the idea of persistence and nonpersistence. So that’s the first part of it.

But then I suppose I also want to talk about what quantum field theory brings in when you allow the number of quanta to vary… so you could have three flashes, and then two, then six.

Right? It can vary. That then, of course, brings in a further layer of complexity. Right, there’s another layer that we’ve introduced into the formalism, the possibility that the number of flashes can vary. That obviously then drives home the idea that these particles aren’t continuously persistent, so that the number, from the particle point of view, is varying.

So that’s an embellishment on top, and that’s something that I do want to elaborate. I want to understand better what conceptual innovation is involved in that step. So it’s very much an open question for me.

[Vincent]: So when we break a particle down into these two complementary ways of representing it, the position and something that more or less looks like momentum, with… direction. What do you make of the fact that these things are related by boosts? So, if we are talking about a single particle in particular, we can always boost into a frame where there’s no directionality.

And is there some kind of pointer to a fully relational description? In other words, if we have just the one particle, then one of those things doesn’t really exist; there’s no state of affairs with respect to the momentum. And if we have two, the logical way of thinking about it is that there’s only some smaller state of affairs, with respect to the relative momenta of the two particles, but not the thing itself. So is there some way for us to start connecting quantum mechanics up to special relativity in this particular context?

[Philip]: The simple answer is I don’t know. Those are two very interesting thoughts, and the second one, about the relationalism, is particularly…

The relational ideas have, it feels like, been more directly explored in, for example, shape dynamics, which you mentioned, and they’ve been explored in different ways by a number of different people.

But yeah, in this context… remember the basic focal point, is here the abstract quantum formalism, which is kind of an abstract shell in which we build quantum theories. And at that level—the abstract formalism—there’s no concept of particle in space, with momentum or position. That hasn’t been introduced yet.

So, I think the question that you’re asking is kind of at the next level, which is to say: when you try to actually build explicit quantum theories of stuff in space, is there any benefit to formulating a relational quantum theory of stuff in space, rather than what we normally do? I don’t really know the answer to that.