When Science Doesn't Know: Uncertainty, Probability, and Complexity
Physics and mathematics are commonly regarded as complicated domains of research. They are also regarded as exact. How can that be, when Heisenberg’s uncertainty principle sits at the heart of modern physics, and mathematics has been working with probabilities and complex systems for decades?
“It is impossible not to know in mathematics!”, said the German mathematician David Hilbert at the International Congress of Mathematicians in 1900. His conviction was also meant as an exhortation to researchers, given that at the same Congress he had presented a list of mathematical problems he considered essential for progress in the field. Although Hilbert’s list yielded mixed results over the years (some problems were deemed unclear, others impossible), he kept his confidence until the end of his life. In 1930, he declared:
“We must not believe those who, in a superior philosophical tone, foresee the decline of culture and accept ignorance. For us [mathematicians] there can be no ignorance, and in my opinion none at all for natural science either. Against the foolish ignorabimus, our slogan shall be: «We must know ― we will know.»”
Hilbert was referring to the Latin saying ignoramus et ignorabimus, “we do not know and we will not know”, which the physiologist Emil du Bois-Reymond had put forward in 1872, in a speech on the limits of science. Du Bois-Reymond maintained that there is an ultimate limit to our knowledge of nature, which is why complete knowledge is impossible. At the same time, consciousness, which is necessary for knowledge, is itself an enigma. A few years later, he proposed a list of things that will remain forever unknown, among them the ultimate nature of matter and energy, the origin of motion, and the origin of sensations.
For their time, both researchers had solid arguments to support their beliefs. But within a few decades, things changed. We now know more about biology and neurology than du Bois-Reymond could ever have imagined, including on topics such as consciousness. In mathematics, even during Hilbert’s lifetime, the Austrian Kurt Gödel showed that, within any sufficiently rich formal system, there will always be statements that are true yet impossible to prove. Problems of this kind turned up even on Hilbert’s own list.
Hilbert’s confidence and du Bois-Reymond’s resignation relied on a kind of scientific or philosophical optimism and pessimism, respectively, but the truth is that both natural science and mathematics have their own methods for working with the unknown directly. Or rather, with the impossible-to-know.
The Past, Present, and Future of an Equation
The use of mathematics to describe and then predict Nature’s behavior feels like a superpower. If you understand the equations that describe the movement of a body under given conditions, then you can build an essentially mathematical theory that tells you precisely what that body will be doing in a million years, and what it was doing twenty seconds ago. The mathematical model, when it is detailed enough, gives not only a description of the current situation, but also a prediction and an archive. When the speed of an object is described by a mathematical law that depends on time, the “time” variable can refer equally to the present instant, the distant past, or the distant future. This is the superpower of mathematics: universality, through which a law that describes a behavior is eternal and general.
In philosophical terms, such a belief has accompanied the use of mathematics to express physics since its early days, especially with Isaac Newton, and it came to be called determinism. An equation doesn’t just describe the behavior of a physical system: it determines it completely, past, present, and future.
Determinism allows for completely controlled simulations, for example. If you know the equations that describe the evolution of a system and set an initial configuration, then the future states of the system are determined by those equations; you know exactly what to expect.
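As a minimal sketch of this idea (with made-up numbers, not tied to any particular system), here is the closed-form law of motion for a body under constant acceleration, written in Python; the same function answers questions about the past and the future alike:

```python
# Determinism in miniature: a body with initial position x0, initial velocity v0,
# and constant acceleration a. One closed-form law covers every moment in time.

def position(t, x0=0.0, v0=3.0, a=-9.81):
    """Position of the body at time t (in seconds), for any t, negative or positive."""
    return x0 + v0 * t + 0.5 * a * t ** 2

print(position(-20.0))  # where the body was twenty seconds ago
print(position(0.0))    # where it is right now
print(position(60.0))   # where it will be in a minute
```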
Surprise seemed to have vanished from science, at least once a theory had been formulated. The scientific method still involves experiments, trials, errors, and corrections, but once the law of motion is found, for example, it contains all the information about the system.
Towards the beginning of the twentieth century, however, progress in chemistry, in the young atomic theory, and then in nuclear and quantum physics contradicted far more expectations than existing theories could account for.

In 1927, the German physicist Werner Heisenberg introduced the uncertainty principle of the subatomic world. When you want to measure the position, mass, and velocity of a particle, the accuracy of that measurement has a mathematically established threshold. Mass and velocity are usually studied together through momentum, their product, a quantity known at least since Newton. Heisenberg showed that the uncertainties in the position and momentum of a subatomic particle are, roughly speaking, inversely related to one another. Moreover, the product of the two uncertainties is never smaller than a fixed threshold set by a universal constant (half of the reduced Planck constant). The more precisely you want to know where an object is, the less precisely you can measure its mass and velocity, and vice versa. No measurement, and no mathematical computation, can give total accuracy: error is mandatory, and it has a minimum value.
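In its standard textbook form, the principle is a one-line inequality:

$$ \sigma_x \, \sigma_p \;\geq\; \frac{\hbar}{2} $$

where σ_x and σ_p are the spreads (standard deviations) of position and momentum, and ħ is the reduced Planck constant. No experiment, however carefully designed, can push both spreads below this bound at the same time.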
Heisenberg’s uncertainty principle is much more than a mathematical inequality, and it arrived when physics was already grappling with the dual nature of particles. Wave functions were being used more and more, and Newton’s “point-like body” approximation was dropped when studying, say, electrons.
Almost at the same time as Heisenberg, the Austrian Erwin Schrödinger proposed a fundamental equation of the subatomic world, which now bears his name. Schrödinger’s equation, published in 1926, replaces the fundamentals of Newton with a completely new mathematical framework, one fit for describing the dual nature of particle-waves. One of the main changes is that instead of variables like position, velocity, momentum, or energy, Schrödinger works with probability functions. The variables of physics become functions that can take many values, none of them certain, but whose possible values obey precise laws. You no longer speak of the value of, say, a particle’s velocity, but of the probability that it has a given value.
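For reference, the time-dependent form of the equation is usually written as

$$ i\hbar \, \frac{\partial}{\partial t} \Psi(x, t) \;=\; \hat{H} \, \Psi(x, t) $$

where Ψ is the wave function and Ĥ is the Hamiltonian operator, which encodes the total energy of the system. The probabilities mentioned above are read off from |Ψ|².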
This highly sophisticated theoretical apparatus, for which mathematicians took their cues from physicists, together with increasingly strange experiments such as the double-slit experiment, made determinism seem obsolete. When nature is theorized through probabilities and through mandatory errors (that is, through “true values” that are impossible to know), and when such theories are experimentally verified, claiming that an equation could completely describe or prescribe the past, present, and future of a physical system is itself a form of ignorance.
Complex or Just Complicated?
The equations of quantum mechanics did not do away with classical, Newtonian methods altogether, but they did narrow their field of application. Quantum theory simply showed that the claim to universality of mathematical and physical laws was too bold, given how many phenomena defy intuition.
But we don’t even need individual objects that behave unpredictably, such as subatomic particles or black holes. Complexity and surprising results can emerge from a great number of components that one can understand individually, but not in aggregate. Quantum physics itself took inspiration from thermodynamics and statistical mechanics. Even a glass of water contains so many molecules (on the order of 10²³, the order of magnitude of Amedeo Avogadro’s constant) that it is impossible to study them individually. Hence, statistical methods are in order.
A glass of water is not the best example of a complex system, though, since its individual molecules don’t interact in significant ways. Take instead a mathematical object that took its inspiration from nature: a cellular automaton. Perhaps the best known is the Game of Life, introduced by the British mathematician John H. Conway in 1970. This automaton is also called a zero-player game, because it needs only an initial configuration and a set of rules by which the game then evolves with no further intervention. The rules are inspired by the evolution of populations and by biology, so after its introduction as a fun activity (like much of Conway’s work), it found many applications.
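As a rough sketch (not Conway’s original presentation, just the standard rules on an unbounded grid: a live cell survives with two or three live neighbors, a dead cell comes alive with exactly three), the whole game fits in a few lines of Python:

```python
from collections import Counter

def step(live_cells):
    """Advance one generation; live_cells is a set of (x, y) coordinates."""
    # Count, for every cell adjacent to a live cell, how many live neighbors it has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth with exactly 3 live neighbors; survival with 2 or 3.
    return {
        cell
        for cell, count in neighbor_counts.items()
        if count == 3 or (count == 2 and cell in live_cells)
    }

# A "glider": five cells whose pattern reappears, shifted diagonally, every four steps.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    cells = step(cells)
print(sorted(cells))  # the same shape, moved one cell diagonally
```

No one intervenes after the initial configuration is set, which is exactly what “zero-player” means.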

In theory, the Game of Life can run with as many cells as needed. The greater their number, the more spectacular the evolution; beyond a certain point, it becomes almost unpredictable. For example, researchers have found many special configurations, such as the so-called Gardens of Eden. Such a configuration cannot be reached through the evolution of the game, so, in a sense, it is primordial: you can only obtain it by setting it up manually at the start of the game.
But perhaps the most important contemporary example of a complex system is artificial intelligence.
Unlike “complicated” systems, which involve many parameters but can still be analyzed deterministically with a sufficiently powerful computer, a complex system has not only many components, but also a great number of interactions between them. There is no rigorous definition of a complex system, but some essential features are recognized. For example, non-linear interactions between components are a requirement. In mathematical terms, two pieces of information (such as an input and an output) are linearly dependent if they are related by some proportionality rule. For example, if you know that the amount of coffee your machine produces is always half the amount of water you pour in, you use 500 ml of water every morning to get 250 ml of coffee. Non-linear interactions, on the other hand, follow more complicated formulas, which are sometimes probabilistic, so you can never know precisely what you’re going to get, even with repeated identical inputs.
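A minimal sketch of the contrast, with made-up numbers (the coffee ratio and the noisy formula below are purely illustrative):

```python
import random

# Linear: the output is always a fixed multiple of the input.
def coffee(water_ml):
    return 0.5 * water_ml               # 500 ml of water -> 250 ml of coffee, every time

# Non-linear and noisy: doubling the input does not double the output,
# and two identical inputs need not give identical outputs.
def murky(x):
    return x ** 2 / (1 + x) + random.gauss(0, 0.1 * x)

print(coffee(500), coffee(1000))    # 250.0 500.0 -- perfectly predictable
print(murky(500), murky(500))       # two different values for the same input
```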
Complex systems also require a significant amount of feedback. The output is reused as input, so the system processes it again, then again, and so forth. In artificial intelligence, for example, feedback is very important because it models learning, based on reward or penalty, which reflects the quality of the result.
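As a toy illustration of feedback, not drawn from any particular AI system: even a one-line update rule, fed its own output over and over, can be hard to foresee. The logistic map is the classic example.

```python
# Feedback in its simplest form: the output of one step becomes the input of the next.
# With the nonlinear update x -> r * x * (1 - x) and r = 4, the sequence looks
# erratic even though every single step is perfectly deterministic.

x, r = 0.2, 4.0
for step in range(10):
    x = r * x * (1 - x)      # reuse the previous output as the new input
    print(f"step {step}: x = {x:.6f}")
```

Change the starting value from 0.2 to 0.2000001 and, within a few dozen steps, the two runs disagree completely; this sensitivity is one face of the unpredictability described above.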
From nonlinear interactions and the reuse of output, complex systems produce emergent behavior, in which some kind of order or self-organization appears. In the Game of Life, there are configurations that are reached again and again, periodically. Other spectacular examples of self-organization are found in flocks of birds in flight or in swarms of bees, which are known to have sophisticated hierarchies and social rules; so much so that emergent behavior is also known as swarm behavior.
It is features such as these that make it impossible to study complex systems, such as artificial intelligence models, with certainty and perfect precision.
Science is Not Folklore
The arguments for complexity, uncertainty, and probabilistic or statistical results are contained in science and mathematics themselves. Add philosophical applications and interpretations, plus faulty public communication or science journalism, and you get a dangerous picture of misinformation, conspiracy, and “hidden truths”. The fact that science and mathematics are not, at their core, democratic, and that beyond a certain point they are not for everybody, is a hard truth to swallow, but a truth nonetheless. Mathematical proofs are not settled by debate, and once a result is established, debates can at most serve to socialize (or rather to divide). Whoever doesn’t understand this misses the whole point of scientific and research rigor. But none of this means that mathematics or science hold “hidden truths” reserved for some “initiates”, unless by “initiation” one means rigorous academic study over years or decades.
Many of the results we treat casually today, for example when we use the adjective “quantum” out of place, rely on very abstract research. Scientific fields have grown so specialized that experts in a domain only really know a few narrow topics; about the domain at large, an honest researcher declares their own incompetence.
A mathematical or scientific result should never be confused with its consequences, to say nothing of the explanations or interpretations offered by researchers inside or outside the field. From this point of view, theory can never be subjective or emotion-driven. The result of an experiment doesn’t change because it fails to meet expectations, or because the financial future of the laboratory depends on the outcome.
However, in most societies, if not all, the general public’s physical and intellectual access to research facilities and institutes is limited, and academic publications are far from accessible. Perhaps such things are inevitable, even necessary, in a society with a high standard of development and education. But since people respond first to emotion and only then to reason, one consequence is that opinions, commentaries, and interpretations can threaten the facts themselves: not their existence, but their prevalence and spread.
Public communication cannot replace academic courses, but that doesn’t mean it shouldn’t aim to educate. The main condition is that the speakers themselves be educated enough to understand what they are communicating, and to whom, and that they have the best intentions.
Finally, I think the ultimate argument belongs to honesty, including honesty of the intellectual kind. Non-specialists who are certain they know are as wrong as researchers who don’t dare to say they don’t know, whether because they cannot know or because they’re still working on it.