2nd Workshop

Our 2nd workshop took place as two symposia at the 10th ESHS (European Society for the History of Science) Conference in Brussels, 7–10 September 2022, under the title “The Perils and Promises of Prediction in Science and Science Policy”.

 

The program of the conference can be found here.

“The Perils and Promises of Prediction
in Science and Science Policy”

Abstract:

In the realm of public policy, decisions rely heavily on the ability to predict future events. Since scientific knowledge is the primary tool for making predictions, scientists often take upon themselves the role of advisor to politicians, policymakers, and the wider public as well.

Prediction, however, is not a concept with a fixed meaning. Rather, it has a historical character and has evolved over time. Moreover, the meaning of ‘prediction’ varies across different scientific fields. Although the role of prediction in the sciences has been a subject of considerable historiographical and philosophical debate, the historicity of prediction has not been systematically addressed. Furthermore, few studies have delved into the ways scientists derive and use predictions or into the particularities of predictions in different scientific fields and their role in policymaking.

The aim of this symposium is to address these neglected issues historically. It is part of a project on the perils of prediction in the physical sciences, whose aim is to investigate the many faces of prediction in scientific practice and public policy, to explore its significance in different fields, and to historicize its character and epistemic value. The questions addressed by the contributors include the following:

• How is prediction defined/understood in different scientific fields and in different historical periods?
• What counts as an adequate/successful prediction in different sciences? How are criteria set for evaluating the quality of a prediction?
• How does prediction bear upon policymaking? What happens when different scientific communities make competing claims of exclusive expertise on the prediction of a phenomenon?

These questions will be addressed through historical case studies of prediction in meteorology, volcanology, ecology, and seismology.

Who Predicts? Dutch Storm Warnings and the Nature of Weather Forecasting 1860-1920
David Baneke

If the weather forecast predicts an 80% chance of rain, few people will be able to tell exactly what that means, but we do know how to act. We bring an umbrella and make sure we have a plan B for outdoor activities. And if it does not rain, we may make cynical jokes about the weather service, but we generally accept that this is no reason to write off meteorology as a whole. In other words: weather forecasts are a remarkably successful example of how people can deal with uncertainty in a fairly robust way. In this paper, I want to examine how we have learned to do this collectively. I will do so by analyzing the development of storm warnings in the Netherlands from 1860 to about 1920. The storm warnings were started by C.H.D. Buys Ballot, who had very specific ideas about the nature and practice of weather forecasting. I will analyze his ideas and follow the development of the storm warnings from the perspective of scientists as well as users: sailors and fishermen. It took time for the storm warnings to gain the trust of sailors, and even more time before they paid routine attention to them. Less obviously but equally importantly, it also took time for meteorologists to find a suitable format for the storm warnings and to learn what kind of expertise they could expect from their users. Studying this case will contribute to our understanding of how scientific weather forecasts became part of everyday life.

 

Untamable Volcanoes:
The Place of Non-predictive Sciences in the Public Arena
Joseba Pascual Alba

The geological sciences deal with vast time-scales and model natural ‘epistemic things’ (Rheinberger, 1997) that are extremely random and unpredictable. As historical scientific objects, volcanoes have a hybrid nature: they are fleeting but stable, observable and hidden (Arabatzis, 2011). They are a particular case of untamable phenomena (Hacking, 1990). On September 19th 2021, at lunchtime, the people of the Canary Island of La Palma witnessed a volcanic eruption. Some geologists had previously said that there was only a 20% chance of a volcanic reactivation. While people are used to trusting in numbers (Porter, 1995) and the media and policy makers need predictions, volcanologists actually move through the quicksand of unpredictability (Kampourakis & McCain, 2020). Geological models seem to be heuristic, rather than predictive, tools (Oreskes et al., 1994). During the ‘Cumbre Vieja’ eruption, volcanologists were asked to explain why the eruption was happening, why it had not been accurately predicted, and why they could not foresee when the end would come. And yet, they were constantly in the media. My general aim is to use the tools of integrated HPS and cognitive and ecological history in order to better understand the ‘Cumbre Vieja’ episode. In particular, I aim a) to enquire how scientists dealt with this complex and unpredictable phenomenon: I will argue that interdisciplinarity and the superposition of styles of thinking and doing were simultaneously shaping volcanological practice (Hacking, 2012; Ruphy, 2016); and b) to shed light on the relations between Spanish scientists, policy makers, and ordinary people during the eighty-five days of volcanic activity.

Prediction in the Wild
Alkistis Elliott-Graves

The traditional approach to prediction, established in 20th-century philosophy of science, does not work as well for disciplines such as ecology and climate science, which have a high frequency of applied predictions, that is, predictions aimed at intervening on the world so as to prevent, mitigate, or solve a certain problem. Applied predictions tend to fall short of the criteria historically associated with predictive success, in the sense that they are not sufficiently accurate sufficiently often. The aim of this talk is to re-examine the historical account of prediction and how it relates to the context of applied science. I will examine the case of the kōkako, a landmark example of successful conservation in New Zealand, and show that the scientists involved in this project were able to increase the predictive accuracy of their models by decreasing their level of precision. I will argue that this example shows that imprecise predictions can overcome two traditional criticisms levelled against them: (i) that they cannot aid in model selection and improvement, and (ii) that they cannot support effective interventions in practical decision making.

Scientific Uncertainty and Decision Making:
The Case of Earthquake Prediction
Iraklis Katsaloulis

Scientists have put a lot of effort into mitigating earthquake hazards, and they have done so in various ways. One of those ways is earthquake prediction, a scientific practice with a rather recent history, commencing in the 1960s. In the six decades that earthquake prediction has been practiced, the interaction between scientists, politicians, and the wider public has been tumultuous. The feasibility and utility of earthquake prediction have been heavily debated. A number of questions regarding earthquake prediction research and political decision making have been raised: How can decisions be taken in light of scientific uncertainty? What is the required accuracy for an earthquake prediction to be useful? Are there two distinct stages of decision making, one related to risk assessment and another related to risk management, or are the boundaries between the two blurred in real life? At the beginning of the 1980s a candidate method for short-term earthquake prediction was proposed by a group of Greek scientists. It soon became the subject of a fierce controversy which lasted for more than two decades and unfolded in the scientific community as well as in the public sphere. Due to the public character of this debate, numerous interventions by scientists, politicians, and policy makers in newspapers and on TV and radio broadcasts have been recorded. I will use these sources to investigate historically the questions raised above. I will argue that the Greek case is particularly suited for understanding the difficulties involved in earthquake prediction policy.

The Perils and Promises of Prediction in Science and Science Policy – 2

The Role of Theoretical Predictions in Designing a Big Science Infrastructure: The Case of the Large Hadron Collider
Grigoris Panoutsopoulos


The LHC went down in history for its legendary discovery of the Higgs boson in 2012. This discovery popularized the view that the enormous scientific machine had been constructed for the purpose of testing the theoretical prediction of the Higgs particle. Was that truly the case, however? In this paper, by utilizing the technical surveys and the minutes of the meetings of the various councils and committees behind the implementation of the LHC project during the 1980s and 1990s, I will attempt to show that the initial design period of the collider followed an independent trajectory, unbound by theoretical predictions. This was due to the fact that Big Science machines such as the LHC, owing to their cost as well as their scientific, technological, and political complexities, demand long-term planning that can span up to 30 years. Across such a timescale there is a very high probability that, by the time a machine becomes operational, the majority of any pre-existing theoretical predictions will have been discarded, disproved, or established. This is why the LHC was not conceived as a machine for the validation of a particular theory, or for the discovery of a predicted particle, but as a multipurpose exploratory machine, a “logical” next step in the chain of HEP accelerators and colliders that had been aiming at ever higher energy frontiers. This case study will allow us to contemplate the role that predictions actually play in the construction of HEP infrastructures and in the formulation and execution of the science policy of organizations such as CERN.

 

Technological Predictions in Optics and Astrophysics
Gauvain Leconte-Chevillard


Historians and philosophers of science have vigorously debated the importance of some paradigmatic cases of successful novel predictions. For instance, the prediction of a bright spot at the center of a circular shadow, derived from Fresnel’s wave optics in 1818, has raised several questions: did this prediction carry more weight in the acceptance of Fresnel’s theory than other evidence already known at the time? How was this successful prediction deduced from hypotheses now seen as false or highly idealized? Should we take this predictive success as a proxy for the truth of Fresnel’s theory? In this talk, I claim that we can answer these questions if we endorse Popper’s notion of “technological predictions”, i.e., “constructive [predictions], intimating the steps open to us if we want to achieve certain results”. First, using Woodward’s concept of intervention, I define technological predictions as schematic experimental protocols. Second, I argue that this notion sheds new light on the case of the bright spot: it highlights the role played by the physicist Arago, who translated the bright spot prediction into an experimental protocol and tested it. His work showed the experimental fruitfulness of Fresnel’s theory, which contributed heavily to its acceptance by contemporary French scientists. This new narrative also explains how the theory exhibited new causal links between theoretical properties and, therefore, why the bright spot prediction represents such a paradigmatic case for scientific realists. Finally, I argue that, contrary to Popper’s claim, one can also find cases of technological predictions in the recent history of astrophysics and cosmology, showing that those sciences have acquired an experimental character.

 

“A Minor Chemical Revolution”: Models, Predictions and Quantum Chemical Computations as a Primary Exploratory Tool in Chemistry
Stylianos Kampouridis

The emergence of quantum chemical computations is usually attributed to the growing speed and availability of electronic computers. However, this view distorts a complex historical trajectory that lasted many decades and entailed unprecedented transformations in the discipline of chemistry. If the introduction of physical-spectroscopic methods in chemistry in the post-World War II period can be described as a revolution, the spread of quantum chemical computations can also be so regarded. The cornerstone of chemical practice, experiment, has not only been challenged as the sole source of genuine knowledge in chemistry but has also been superseded by theoretical computations in a growing number of cases. The history of computational quantum chemistry can be divided into two periods. The first signalled the emergence of “ab-initio methods”; it started in the 1950s with a predictive turn that raised ontological questions for the emerging field and concluded in the 1980s with the acceptance of these methods by the broader chemical community. The second started in the early 1990s, when density functional theory (DFT), a drastic simplification of computational problems in electronic structure, proved to be feasible. In less than a decade, DFT dominated the field and changed the dynamics between experiment and theoretical predictions. Computational chemists faced criticism from traditional chemists while also engaging in internal controversies over the identity of their field. I will argue that the central issue of contention was the place of computational quantum chemistry on the map of chemical practices: Was it meant to replace experiment or to complement it?

 

Foresight in Science Policy:
Science, Prediction, and Neoliberalism in British Government
Jacob Ward

This paper explores the emergence of foresight in science policy in British government in the 1990s, contextualising it against the longer history of futurology and the rise of neoliberal modes of governance. The established narrative of futurology in this period is that its commercialization enabled governments to outsource prediction (Andersson 2014). This mirrors histories that depict managerial and policy outsourcing as a key neoliberal practice (Boltanski and Chiapello 2014; Jackson 2012). This perspective, however, frames the relationship between neoliberalism, prediction, and government purely in negative terms, as taking functions away from the state. Yet neoliberalism is best viewed as a process of state transformation that facilitates market creation and global capital flows. This involves shrinking the state in some ways, such as through privatisation, but also expanding it in others. In the case of Britain, this involved the creation in 1994 of a new centralised futures office, the Foresight Unit, attached to the Government Office for Science. Foresight had its origins in Sussex University’s Science Policy Research Unit and the nascent field of STS. Foresight went on to become highly influential within British and other European governments as a way to build new forms of public-private collaboration for the future of science and science policy. This paper revisits the history of foresight and the Foresight Unit to explore how and why government re-centralised prediction in science policy in the 1990s, and how this can help us better understand the mutually constitutive relationships between prediction, science policy, and neoliberal governance.