What explains that we have records of the past but not of the future?
Submitted for PH373 (The Philosophy of Time) Summer Exam
In this essay, I shall consider the conditions under which a record can convey information, and show that low entropy is one necessary condition. Accordingly, I consider Boltzmann’s arguments about entropy and statistics, which explain records of the past by appeal to an increase in entropy in one direction, namely from the past towards the present. I then elucidate the criticism that, since the laws of physics are time-symmetric, the statistical argument does not provide robust grounds for thinking the increase is so unidirectional, and suggest that the past hypothesis ultimately grounds the entropic asymmetry that explains why only records of the past, and not of the future, can exist.
By a “record” (or “measuring device”) I mean any system which begins in some ‘ready’ state and which, after interacting with another system, reliably transitions into a ‘record-bearing’ state, so that one can infer from the latter, with high probability, the nature of the interaction – some event in the past. By contrast, observing a record can never give one knowledge of the future, since any observed state is compatible with a large number of possible future events. I shall henceforth refer to this condition as ‘epistemic asymmetry’.
Ludwig Boltzmann argued that this epistemic asymmetry is grounded in entropic asymmetry, which can in turn be explained by statistical mechanics. Entropy is a measure of disorder in a system. Any system can be described at two levels of granularity: its ‘macrostate’, an overall high-level description of the system’s behaviour; and its ‘microstate’, the exact configuration of the units that make it up (such as atoms or molecules). When we speak of a system having high or low entropy, we are referring to its macrostate being disordered or ordered respectively – equivalently, to its being realisable by many or by few microstates. Since the vast majority of possible microstates correspond to macrostates that appear random, and only a small number of special microstates make the system appear ordered, there is a statistical tendency for the constituent elements to become disordered. This is the second law of thermodynamics: the overall entropy of a closed system always either increases or stays the same.
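To make the statistical point concrete, consider a toy illustration (not part of the original argument): a ‘gas’ of one hundred two-state units – coins that are each heads or tails – where the macrostate is simply the total number of heads. On Boltzmann’s definition, the entropy of a macrostate is proportional to the logarithm of the number of microstates W that realise it:

```latex
% Boltzmann entropy of a macrostate realised by W microstates
S = k_B \ln W

% Toy system of 100 coins; macrostate = number of heads
W(\text{100 heads}) = \binom{100}{100} = 1,
\qquad
W(\text{50 heads}) = \binom{100}{50} \approx 1.0 \times 10^{29}
```

The perfectly ‘ordered’ macrostate is realised by a single microstate, while the maximally ‘disordered’ one is realised by roughly 10^29 of them; under any dynamics that wander among microstates indifferently, the system is therefore overwhelmingly likely to drift towards disorder.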
The relevance of this to measuring devices is that, for a record to provide us with knowledge, it must be in a state of low entropy. It cannot have arisen spontaneously: we must have some reason for thinking that the record-bearing state would be improbable but for the interaction that led to it. For instance, a footprint on a beach indicates that someone has trodden on the sand only because sand very rarely organises itself into that shape of its own accord. Thus the low-entropy condition of a record must itself have arisen from a state of even lower entropy. Absent Boltzmann’s idea, this threatens an infinite regress; but once we understand that entropy increases over time for statistical reasons, we see that there are very few situations that could have led to the observed state of the measuring device, but many situations to which it could lead. A footprint tells us someone stepped on the beach; it does not preview who will do so tomorrow.
But this explanation is not without flaws. For one thing, the laws of physics are time-symmetric, so Boltzmann’s statistical argument applies equally in both temporal directions: it seems to imply that entropy should increase towards both the past and the future. If we cannot properly ground the entropic asymmetry of time, it will hardly suffice as an explanation for epistemic asymmetry. Moreover, although low-entropy states are statistically improbable, they are not impossible, so on this view it should sometimes be possible for records to transmit information about the future.
Boltzmann’s statistical response has it that the universe – for the statistical reasons given – is almost always in a state of maximal or near-maximal entropy. But since time ‘extends’, as it were, for a very long while, there will inevitably be occasions in the lifespan of the universe when entropy fluctuates downwards before returning to the maximum. Since the very possibility of life presupposes low entropy, we only ever observe such instances. Thus life necessarily takes place on an entropic gradient: on one side entropy decreases, as the universe fluctuates down from its usual maximum condition; on the other it increases, as the universe returns to that state. On this view, time has no ‘intrinsic’ direction; it is simply a matter of how creatures on the entropic gradient perceive it.
As discussed, a record can only transmit information from a low-entropy state. For us to have confidence in what it tells us, it might seem that we need knowledge that it was previously in the ‘ready’ condition, which would require yet another record to tell us that it was so. But this is absurd, for it leads to an infinite regress. So we must instead have an independent reason for thinking that the record-bearing state was not simply the initial state, but a consequence of some event. It follows that we can only have knowledge of events on the side of the entropic gradient on which entropy was lower. Definitionally, to creatures such as ourselves, that direction will be called ‘past’; the other direction, of which we cannot similarly obtain knowledge from measuring devices, we call ‘future’. That we can have records of the past but not the future is thus a direct consequence of how these terms are defined by creatures like us. The direction of time is not an objective feature of the universe, but a matter of how we perceive the entropic gradient. There may be other creatures whose perception is ‘reversed’, who have knowledge of what we call the future, but which they call the past.
Huw Price has noted a flaw in this reasoning: fluctuations from the ordinary maximum state of entropy are statistically far more likely to be very small than very big. Thus, conditional on our existing and having perceptions, we should expect to be located on a very slight dip rather than a very deep one. But if that is so, the evidence we have from the direction of lower entropy – what we call ‘the past’ – would in fact be the result of random fluctuation. It would, in other words, not reflect actual events from that time, but be wholly illusory. On Boltzmann’s view, then, we must expect with high probability that our records of the past, and thus any knowledge we have of the past, are mistaken.
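Price’s worry can be made quantitative using a standard result of statistical mechanics, which the essay does not state: the probability of a spontaneous fluctuation that lowers the entropy of a system by ΔS below its equilibrium value falls off exponentially in the depth of the dip,

```latex
% Probability of a spontaneous fluctuation of depth \Delta S > 0
P(\Delta S) \propto e^{-\Delta S / k_B}
```

so a fluctuation just deep enough to produce an observer with apparent records is exponentially more probable than one deep enough to produce the entire low-entropy past those records seem to represent.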
To solve this problem, several philosophers have argued for the necessity of an additional cosmological assumption, the “past hypothesis”, which posits that the universe began in a condition of extraordinarily low entropy. On this account, it is much easier to explain why entropy statistically increases only towards the future. Knowledge of the past encoded in records need not, then, be abandoned as probably illusory, but can be taken legitimately to reflect the way records have interacted with events as entropy has increased through time – still, no doubt, with fluctuations, but with a genuine overall direction that is not merely a matter of perspective (in the sense that all creatures at any point in time will see entropy increasing in only one direction).
Although critics have objected to the sheer improbability of the universe’s having begun in such a low-entropy condition, Callender and others have defended the hypothesis as a ‘brute fact’: if we accept it, then we can make sense of the several temporal asymmetries that are otherwise perplexing, including the epistemic asymmetry that has been the subject of this essay.
Result
Mark: 76% (Low First), averaged across this and another answer.