The conference brings together researchers from the sciences as well as from history, philosophy, social studies of science, and neighboring disciplines to discuss the practices and theories associated with large-scale experiments in science.
The conference is organized as a mixture of plenary sessions, multiple symposia, and sessions with contributed papers.
Confirmed plenary speakers
The conference is organized by the interdisciplinary research unit "The Epistemology of the Large Hadron Collider".
Organizers: Sophia Haude (Bonn and RWTH Aachen), Rafaela Hillerbrand (KIT), Daria Jadreškić (AAU Klagenfurt), Marianne van Panhuys (KIT), Michael Schmidt (KIT), Michael Stöltzner (South Carolina), Christian Zeitnitz (BU Wuppertal)
Scientific breakthroughs are complicated phenomena that are difficult to predict. Yet, certain predictors, such as team structure or interactions within a team, can be identified. This is particularly relevant in the context of big science, since large collaborations require coordination and thorough team management. I will present the results of two studies in which we analyzed how team dynamics affect epistemic efficiency in high energy physics (HEP). The first focused on team structure (Sikimić and Radovanović, 2022), while the second examined working conditions in HEP (Sikimić et al., 2022).
For the purpose of the first study, we conducted a data-driven analysis of 67 projects run at the HEP laboratory Fermilab between 1974 and 2003. More specifically, we used Data Envelopment Analysis (DEA) to determine their efficiency based on the available parameters: citations per paper, project duration, number of subteams, and subteam size. The results suggest that projects divided into a smaller number of subteams perform better. A shorter project duration also usually indicates higher efficiency.
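The abstract does not specify which DEA formulation was used; as a rough, illustrative sketch (not the authors' code, and with made-up toy numbers rather than the Fermilab data), an input-oriented CCR efficiency score can be computed with one small linear program per project, treating duration and number of subteams as inputs and citations per paper as an output:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input_oriented(X, Y):
    """Input-oriented CCR efficiency for each unit (row).
    X: (n_units, n_inputs)  e.g. project duration, number of subteams
    Y: (n_units, n_outputs) e.g. citations per paper
    Returns an array of efficiency scores in (0, 1]."""
    n, m = X.shape          # units, inputs
    _, s = Y.shape          # outputs
    scores = np.empty(n)
    for o in range(n):
        # decision variables: [theta, lambda_1, ..., lambda_n]
        c = np.zeros(n + 1)
        c[0] = 1.0                                    # minimise theta
        # input constraints: sum_j lambda_j x_ij - theta * x_io <= 0
        A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
        b_in = np.zeros(m)
        # output constraints: sum_j lambda_j y_rj >= y_ro
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[o]
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([b_in, b_out]),
                      bounds=[(0, None)] * (n + 1),
                      method="highs")
        scores[o] = res.x[0]
    return scores

# Illustrative toy data (hypothetical projects, not the Fermilab dataset):
# inputs: duration in years, number of subteams; output: citations per paper
X = np.array([[3.0, 2], [6.0, 5], [4.0, 3]])
Y = np.array([[120.0], [80.0], [100.0]])
print(dea_ccr_input_oriented(X, Y))
```

A project with a score of 1 lies on the efficiency frontier spanned by the other projects; lower scores indicate that the same output could, in principle, be obtained with proportionally fewer inputs.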
Studies of team structures could be useful when it comes to understanding and optimizing the epistemic utility of a complex project. However, they still do not give us answers about the inner team dynamics and epistemic motivation of researchers.
In order to address the latter question, we conducted an empirical survey among physicists (N=122). We measured their general job satisfaction as well as their satisfaction with the academic system. Our sample included researchers working in large laboratories, universities, and independent institutes. The scale measuring satisfaction with the academic system was developed by us. It performed in a statistically satisfactory way and can be represented in terms of three factors: experience of research autonomy, opportunities to use one's knowledge, and appreciation of the research by the general public. Some of the items that physicists evaluated were:
- At times, I do not know the exact amount I contribute to the final results or the way in which I do so.
- I could contribute substantially more to the design than I am presently contributing.
- The general public and political decision-makers have a poor understanding of scientific and engineering efforts.
Researchers expressed lower satisfaction with the academic system than with general working conditions such as pay and freedom at the workplace. Interestingly, we did not detect a significant effect of the type of academic institution on overall job satisfaction. Participants also felt that the general public does not comprehend their work well, which is yet another sign of frustration among researchers. Most importantly, both early-career researchers and female scientists scored lower on both scales.
Our findings highlight the necessity of creating an inclusive and supportive research environment for disadvantaged groups in physics. To improve epistemic efficiency, it is not sufficient to create optimally structured teams; it is also necessary to attend to the academic culture within them. Only with a shift in academic culture can we expect the cognitive diversity of large teams to reach its full epistemic potential.
Sikimić, V., Damnjanović, K., & Perović, S. (2022). (Dis)satisfaction of Female and Early-Career Researchers with the Academic System. Journal of Women and Minorities in Science and Engineering. https://doi.org/10.1615/JWomenMinorScienEng.2022038712.
Sikimić, V., & Radovanović, S. (2022). Machine learning in scientific grant review: algorithmically predicting project efficiency in high energy physics. European Journal for Philosophy of Science. https://doi.org/10.1007/s13194-022-00478-6.
Although physics has long been deemed an experimental science, the experiment has long been considered secondary to theory. This is because postpositivist ideas justified the theory-ladenness of experimental results and, thus, the subordination of the experimentalist to the theorist. However, since the 1970s, the context of a large scientific project has differed from the experiments of the past in that the scientific community divided into subcommunities of theorists, experimentalists, and instrumentalists. By the end of the 20th century, in large accelerator laboratories, it became possible to identify two large tiers of physicists: accelerator physicists and detector physicists. Each of these tiers included its own theorists, experimentalists, and instrumentalists. But the role of experimentalists has changed most dramatically. Unlike in the past, when the experimentalist was, in part, both a theorist and an engineer, the contemporary experimentalist is no longer a phenomenological theorist. They are increasingly excluded from theoretical discourse, spending most of their time designing, modeling, and constructing their respective apparatuses, accelerators and detectors. From the end of the 20th century to the beginning of the 21st century, the actual experimental verification of theoretical models has occupied only a small part of the contemporary experimentalist’s career. Even though they work for high-energy physics (HEP) institutions, they work not on HEP problems directly but rather for HEP. It is the construction of complex detector (or accelerator) units that constitutes their main task and purpose of activity, although their existential goals may be more global. In such a situation, the following question arises: should the experimentalist be considered an engineer? In this talk, I will discuss how such differences are related to certain forms of epistemic injustice. By analyzing the scientific practice of large laboratories, I found that the traditional definition of an engineer in the context of large-scale science is incomplete. While an experimentalist holding the status of “scientist” performs their research under the general supervision of a scientist of higher status (such as a university professor) or independently (often called undirected research), an engineer is rarely accorded such a privilege; their effort is usually rigorously directed by scientists and engineering managers. Such direction does not allow any significant research flexibility, such as participation in the analysis of experimental data. Thus, the difference largely comes down to relations of power and subordination as well as academic freedoms. The talk will delve into the problems of subaltern communities’ access to highly valued epistemic practices in HEP. I will argue that this constrained access to epistemic practices violates the existential interests of the HEP community at large.
 Pronskikh, V. (2021). Blurred Engineering Identities in Megascience: Overcoming Epistemic Injustice. International Journal of Technoethics (IJT), 12(2), 35–47.
 Grasswick, H. (2017). Epistemic Injustice in Science. In I. J. Kidd, J. Medina, & G. Pohlhaus Jr., (Eds.), Routledge Handbook of Epistemic Injustice (pp. 13–26). Routledge.
On June 19, 2020, CERN announced the resolution to build a new and bigger particle accelerator: the Future Circular Collider (FCC). Its 100 km circumference and its 100 TeV range of energies are designed to probe the limits of the Standard Model, looking for deviations and anomalies that could hint at “new physics”. Some high-energy physicists and historians of science contend that the lack of clear theoretical predictions should not be an excuse to put the design of the FCC on hold. Rather, its experimental exploratory role “may assist in pulling the stuck wagon of High Energy Physics (HEP) out of the mire” (Panoutsopoulos, 2019).
But how pressing is it to unstick the wagon of HEP in the first place? And how sure are we that a new, bigger particle collider is the best way to do it? Theoretical physicist Sabine Hossenfelder, an outspoken opponent of the FCC, contends that the arguments given by the CERN leadership are based on false promises and particle physicists’ grandiose rhetoric (Hossenfelder, 2020).
In this talk, I want to concentrate on critical positions towards the FCC, such as Hossenfelder’s, and compare them to another episode in which similar attacks on the hegemony of particle and high-energy physics were raised: the case of the Superconducting Super Collider (SSC) in the United States. After almost ten years of ongoing scientific and political debates and nearly 5 billion dollars spent on its construction, the particle accelerator was cancelled in 1993 (Hoddeson et al., 2015). While the FCC and the SSC have important dissimilarities, I find there are relevant lessons to be drawn from the case of the SSC thirty years ago that are applicable to current debates concerning the necessity of the FCC. In particular, I want to address the historical debates of opposition to the hegemony of particle physics using gender as an analytical category.
Building upon gender studies on “hegemonic masculinity” (which designates the form of masculinity that is the most honoured way of being a man, and that establishes a hierarchy amongst different kinds of men; Connell and Messerschmidt (2005)) and on “scientific masculinities” (which refers to the ways in which being a scientist and being a man are identities that form in mutual relation with each other; Milam and Nye (2015)), I propose that high-energy physicists not only constitute a type of scientific masculinity, but compose the hegemonic type of scientific masculinity: those at the top of the hierarchy of the sciences, and of scientists.
Physicist and feminist Barbara Whitten was the first to address the case of the SSC from a feminist perspective (Whitten, 1996). For her, the controversy around the SSC played an important role in questioning the hegemony of particle physicists. She advocates for a “less reductionist, more holistic, more human-centered feminist physics” that would replace the idea of a linear hierarchy of sciences with an interconnected web of disciplines, in which those at the centre are the most relevant to other sciences and society.
My aim with this talk is to re-engage with the feminist critique of physics, showing how debates of opposition to particle accelerators like the SSC thirty years ago, and the FCC nowadays, are enclaves in which the culture of physics can be analysed, questioned and potentially changed in favour of a less hierarchical discipline, more guided by human needs.
Investigators in a variety of research domains conduct large-scale experiments that directly involve large populations. Examples include studies of influences on voter turnout, emotional contagion on social media platforms, and real-time predictive policing experiments within public space. Such experiments may be conducted on a population that is potentially dis-empowered in terms of knowledge, control, and benefits. Not all large-scale experiments directly involve human subjects, yet all have the capacity to unfold in an unpredictable manner and affect non-target populations.
In recent years, researchers have questioned to what extent existing ethical frameworks for human subject research can accurately capture the ethical concerns that large-scale field experiments may bring about. This concern builds on the idea that the ethics regulating human subject research is primarily designed to protect individual research participants. An aggregation of individual rights is believed to be unable to sufficiently capture the moral concerns that large-scale interventions might present to groups or populations. In response, political scientists have suggested ‘respect for societies’ as an action-guiding and protecting principle in the design and execution of large-scale experiments within society. This suggestion stands in a longer line of critical scholarship that has argued for a theoretical deficiency plaguing principlism, the applied-ethics approach that advocates the principles of beneficence, non-maleficence, justice, and respect for persons.
The paper aims to theoretically clarify the principle of ‘respect for societies’. Currently, it remains unclear what would satisfy this principle, what its exact scope would be, and to what extent it would cover a theoretical deficiency. Large-scale experiments might bring about ethical concerns that are due to individuals having individual rights against a certain treatment, such as not consenting to research participation, or due to the experiment changing features of the world that individuals generally have rights against, such as the non-target population being exposed to increased risks to their welfare. However, such concerns can be covered by existing principles. In these instances, large-scale experiments do not produce unique normative challenges. Instead, their ‘largeness’ simply aggravates the same risks and harms that subjects have rights against in "small-scale" experiments.
I will argue that a principle of ‘respect for societies’ should be specifically concerned with protecting groups against collective wrongs, that is, morally wrong acts pertaining to the moral status of groups. Additionally, this paper points out future conceptual and practical challenges for a principle of 'respect for societies', such as demarcating it from concepts like solidarity and communalism, and determining the scope of groups that should reasonably be included in investigators' ethical reflection.
A personal perspective: an attempt to understand whether we can improve our structures, ways of working, and societal impact through an analysis of how our field has reached its present status.
Since the late 1940s, the quest for ever-higher energies in particle physics has led to the development of ever-larger experiments and collaborations involving an ever-larger number of scientists. This evolution was accompanied by the advent of new methods and techniques, among which the Feynman diagrams notably played a central role. With them, virtual particles have become an essential element of the conceptual apparatus of quantum field theory, whose physics is tested at particle accelerators. Nevertheless, being short-lived, off-shell—i.e., not satisfying the energy-momentum relation—and consequently unobservable, virtual particles are entities that in many respects occupy a special position in modern physics. From an interpretative point of view, the questions they raise remain largely open. Thus, while in high-energy-physics practices the concept of virtual particles is commonly applied, there are strong differences of opinion on various issues related to its foundations. These divergences must be taken into consideration because it is not to be excluded that within a broad but strongly interconnected community, they can, in the long term, lead to important epistemic consequences.*
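For reference, the energy-momentum (mass-shell) relation invoked here reads, in standard notation with c = 1:

\[
  \text{on-shell:}\quad p^{\mu}p_{\mu} \;=\; E^{2} - |\vec{p}\,|^{2} \;=\; m^{2},
\]

whereas a virtual particle on an internal line of a Feynman diagram has its four-momentum conserved at each vertex but need not satisfy \(p^{\mu}p_{\mu} = m^{2}\); it is in this sense "off-shell".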
Virtual particles are notably the subject of a dispute over their ontological status. To give a simplified overview of the question, part of the scientific community maintains that they are real entities, since they are collectively responsible for different effects, while others argue that they are nothing more than mathematical tools or artifacts resulting from perturbation-theory approximation processes. In the minds of many physicists, such a question is partly related to whether virtual particles conserve energy or not, the second option very often serving as an argument against realism. On this specific point, a plethora of divergent discourses reveals a confusing situation. To illustrate, one can hear, curiously, either that virtual particles are off-shell in order to satisfy the conservation of energy or that off-shellness is nothing more than another way of expressing the non-conservation of energy.
In this work, we propose to rely on the history of physics to clarify these debates and explain these divergences. The development of the concept of virtual particles since its beginnings at the end of the 1920s is characterized by episodes that have influenced its current perception. Among them are its formal introduction by Dirac in the late 1920s and the developments in the second half of the 1930s of the meson theory of nuclear forces by Yukawa. The introduction of Feynman diagrams at the end of the 1940s is another key moment. The analysis of these episodes then helps us to redefine the meaning taken by the notion of virtuality when applied to particles. It highlights the central role initially played by the non-conservation of energy in the formation of the concept of virtual particles, before its reinterpretation within the work of Feynman led to the blurring of such understanding and the progressive rise of ontological debates.
*One can recall here that the development of the field of quantum foundations has been boosted by various interpretation debates in quantum mechanics.
Large scale experiments are expensive. How scientists were able to garner the necessary resources, and why their sponsors were willing to bear the cost, are crucial questions for understanding the social conditions that made large scale experiments possible. Japan is one of the few areas in the world where high-energy physics flourished during the twentieth century. This paper asks how this happened. According to Daniel Kevles (1995), the allocation of resources to this type of “Big Science” in the United States can only be understood in the context of the Cold War, when reserving a large group of scientific talents in a relevant field of research made sense as preparation for a wartime crash program. This reasoning does not apply to Japan's development of high-energy physics, where any type of nuclear weaponization was politically and diplomatically unthinkable. Nor did the political situation in Japan bear any resemblance to the one in Europe, where the politics of European integration played a significant role in securing support for the creation and operation of CERN (Herman et al., 1987-1996; Mobach and Felt, 2022). More generally, the Cold War context is sometimes considered relevant to the emergence of Big Science because of its direct lineage from the Manhattan Project. Yet this kind of explanation does not work well in the Japanese case; while there were wartime nuclear programs in Japan, and one of them was linked to early Japanese attempts to construct accelerators, these wartime programs were nothing comparable to the Manhattan Project, and their connections to high-energy physics in the 1970s were far from straightforward. This paper attempts to provide an alternative explanation of the emergence of high-energy physics in Japan by examining the political culture surrounding Japanese academia in the late 1960s. As in other countries, student movements and left-wing activism were rampant in Japan in the late 1960s. Ever since the Allied occupation period, left-wing intellectuals had held political and cultural influence in academia, and the government of Japan had been trying to decrease their influence over time. The emergence of high-energy physics provided an opportunity for the government to support and strengthen apolitical physicists and to increase their influence in the Japanese physics community. The high cost of creating a high-energy physics laboratory and conducting large-scale experiments created a convenient venue for the government to intervene in the internal politics of a scientific community. I therefore argue that the rise of high-energy physics in Japan was, at least partially, due to the Japanese government’s attempt to tame leftist political activism in the scientific community by providing financial support to less political segments of the community and creating a dependency of researchers on government-controlled funding. In this way, the Cold War context was highly relevant to the rise of high-energy physics in Japan, though in a quite different way than in the United States or Europe.
Since the early 2010s, artificial intelligence (AI) research has been characterized by the rise of a subset of machine learning methods, called "deep learning", which relies on deep neural networks. This rise follows several decades of domination by so-called "symbolic" approaches, while the methods described as "connectionist" (including machine learning and deep learning) were marginalized due to their opacity and lack of solid theoretical foundations (Cardon et al., 2018). The most prominent example of this paradigm shift is the victory of AlexNet, a deep convolutional neural network, at the ImageNet Large Scale Visual Recognition Challenge in September 2012 (Krizhevsky et al., 2012). AlexNet won with a top-5 error rate of 15.3%, more than 10 percentage points ahead of the second-place entry. This episode alone illustrates how the rise of deep learning methods has accentuated trends already present in AI research.
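For orientation, the top-5 error metric can be sketched in a few lines; the following is an illustrative computation of the standard definition, not the actual ImageNet evaluation code:

```python
import numpy as np

def top5_error(scores, labels):
    """Fraction of samples whose true class is NOT among the model's
    five highest-scoring predictions.
    scores: (n_samples, n_classes) array of class scores
    labels: (n_samples,) array of true class indices."""
    top5 = np.argsort(scores, axis=1)[:, -5:]        # indices of the 5 highest scores per sample
    hits = (top5 == labels[:, None]).any(axis=1)     # is the true label among them?
    return 1.0 - hits.mean()

# Toy example with 10 classes and 3 samples (illustrative data only)
rng = np.random.default_rng(0)
scores = rng.random((3, 10))
labels = np.array([2, 7, 5])
print(top5_error(scores, labels))
```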
At the end of the first AI winter in the early 1980s, US government agencies such as the Advanced Research Projects Agency and the National Institute of Standards and Technology promoted the practice of benchmarking within AI research in order to measure progress in the performance of AI models and to focus funding on the most promising research avenues. Since then, the main measure of progress within the two major fields of AI research, natural language processing and computer vision, has been performance metrics against which models are evaluated on language tasks (machine translation, sentiment analysis) or vision tasks (scene recognition, object detection), materialized by datasets acting as benchmarks. Obtaining the best performance on one of these benchmark datasets means reaching what AI researchers call the state of the art (SOTA), and therefore having a good chance of publishing the experiment in a recognized journal or conference.
As illustrated by AlexNet's 2012 score on the ImageNet dataset, deep learning methods achieve significantly better SOTA performance than traditional symbolic AI approaches on the major NLP and CV benchmarks created in the 2010s and 2020s. Nevertheless, this presentation aims to show that the success of deep neural networks can be analyzed as a transformation of AI research into a Big Science. More specifically, we wish to highlight the fact that the paradigm shift from symbolic approaches to connectionist approaches corresponds to a change in what we call the political economy of the scalability of AI research. This change concerns three main technical aspects: the increase in the volume of datasets, the increase in the number of model parameters, and the increase in the computing power required for the development and deployment of AI systems. Because of this trend towards the scaling up of datasets, models, and compute, the research labs capable of achieving SOTA performance are primarily those of companies such as OpenAI, DeepMind, or Meta.
Based on twenty semi-structured interviews with French AI researchers and a documentary and bibliographic analysis of the recent history of deep learning, this presentation aims to highlight the epistemic risks of such a transformation of the political economy of the scalability of AI research.
Cardon, D., Cointet, J., Mazières, A. & Carey Libbrecht, L. (2018). Neurons spike back: The invention of inductive machines and the artificial intelligence controversy. Réseaux, 211, 173-220. https://www.cairn-int.info/journal—2018-5-page-173.htm.
Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25.
The cosmologist Pedro Ferreira describes fundamental physics as seemingly confronted by what he terms a “cosmic chasm”: the irreconcilability between verified physical models and phenomena observed but not understood. Recent attempts to bridge this chasm have failed, raising the possibility that future experiments may only succeed in providing 'a much more precise statement about our ignorance but nothing more.' With no clear sense of how best to proceed, physicists appear left with little option but to repeat similar approaches, only bigger and at greater cost, in the hope that something new may be found. In this reading, the continued reliance on ever larger experiments is not indicative of a field of research in ascendancy, but of one increasingly short on ideas.
In this paper, I report on my recent engagement with researchers in cosmology and astroparticle physics confronted by this chasm. Working from ethnographic fieldwork and interviews conducted while embedded in a research community involved in the large-scale effort to directly detect dark matter, I explore the sense of anxiety that pervades current research into dark matter as a particle. As I discuss, despite dark matter being largely assumed by researchers in astrophysics and cosmology, its existence remains thoroughly problematic for astroparticle physicists, as recent experiments suggest their once-favoured candidates are increasingly unlikely. This, I suggest, is symptomatic of a broader sense of crisis pervading fundamental physics: following the completion of the standard model of particle physics in 2012, there is no clear indication of where else to go for a more fundamental theory.
Beginning from recent decolonial perspectives that seek the reanimation of indigenous cosmologies in what is called A World of Many Worlds, one in which divergent knowledges and practices make worlds, I argue that in addressing the future of fundamental physics the task may not in fact be to ask what place there is for the many practices, beings, and peoples excluded by modern science. Instead, faced with the cosmic chasm, I argue we should inquire into the place for fundamental physicists in theirs, as physics appears to be increasingly starved of challenge from the empirical world it has faithfully attended to. In what I call the “end of the universe,” I explore the possibility of fundamental physics reaching its limits in its ongoing pursuit of unity and finality of explanation.
In response, I call for a renewed critical engagement with the practices of fundamental physics, one that aims to tell new stories about how problems come (in)to matter for the peoples, practices, and worlds that belong to them. Working from my fieldwork, I explore how such speculative, yet nevertheless empirical, stories may be told. It is this approach that, I conclude, holds promise for the development of a “pluralistic” cosmology, that is, a cosmology that admits a plurality of divergent knowledges and practices into its explanations, ensuring that physicists’ “end of the universe” need not imply the end of their world.
Scaling up Space Biology: The Concept of Reference Experiment
This paper examines the concept of reference experiment as a way to “scale up” biological research into microbes, plants, and animals in the space environment. We argue that the implementation of this concept would have significant epistemic and social benefits, particularly in contrast with the currently prevailing model of experimentation in space biology.
Over the last decade, the key site for space biology has been the International Space Station (ISS). All studies funded by NASA on this platform have been conducted with limited opportunities, under operational constraints, and under the same model: hypothesis-driven experiments led by a single PI. While results are communicated in academic publications, the experimental data obtained has typically stayed with the researchers who conducted the studies. In 2018, responding to a US-government mandate requiring public access to data and publications from federally funded research, NASA implemented GeneLab, an open science data repository and analysis platform for spaceflight omics data. GeneLab’s original strategic plan was published in 2014 and included the idea of large-scale reference experiments: exploratory experiments on a wide variety of organisms, designed by multiple investigators, and with immediate release of data under the principles of open science. This idea was recently revisited in two White Papers by teams of GeneLab researchers, submitted to the upcoming Decadal Survey in Biological and Physical Sciences in Space 2023-2032.
In this paper, we provide a characterization of reference experiments and lay out the potential benefits and challenges of their implementation. Our focus is on four primary areas. First, we consider the various ways in which reference experiments would transform the process of experimental design. Of particular interest here are (i) the shift from primarily hypothesis-driven to exploratory experiments, (ii) the expansion of (types of) organisms under study, and (iii) the standardization of sample collection practices, sample analysis, and specimen preservation. Second, we ask what the consequences of this mode of experimentation would be for shaping and expanding the theoretical expectations coming out of such experiments. So far, most experiments in this field have investigated gravity sensing mechanisms and gravity effects, yet there is increasing recognition that this focus might have led to a neglect of issues that ought to be regarded as equally central, such as radiation, CO2 concentration, and light and temperature cycles. Third, we discuss the institutional requirements for implementing this new model in terms of funding, open science infrastructures, and modes of ‘experimental ownership’. Lastly, we address the potential effects of reference experiments in diversifying the community of space biology researchers beyond the limited group of laboratories that have hitherto dominated funding in this field. We conclude that, despite significant challenges, scaling up space biology in this way has the potential to transform the research landscape in this field.
The current version of the standard cosmological model, ΛCDM, has included hypothetical dark matter since the 1980s. Several candidates for dark matter have been proposed, the most famous being sterile neutrinos, axions, and WIMPs (Weakly Interacting Massive Particles). Efforts for the independent, “direct” detection of these and other candidates have been active since 1996. After the first experiments yielded no detections, more and more sensitive detectors have been constructed in an attempt to confirm the standard model.
In this talk, I will overview the role that detection technology plays in these efforts. I will describe detection methods and the experimental setup, focusing on XENON experiments. The XENON Collaboration series of experiments began in 2006 with the installation of XENON10 containing only 25 kilograms of liquid xenon, whereas the detector built for XENONnT will contain an impressive 8 tons of liquid xenon.
Because of a complex, theoretically laden method, the “raw” data produced by the experiment requires such a degree of interpretation that the bar for an unambiguous result is extremely high. Even though the aim of the overall project of detecting dark matter has not yet been achieved, technology for detecting it has been advancing non-stop. Still, a justification for continuing such expensive detection experiments is needed.
One way of justifying XENON experiments is to appeal to the superior sensitivity of the detectors, which is claimed to represent an impressive scientific and engineering achievement in its own right. Another way is to appeal to by-product results: e.g., in 2019 the XENON collaboration reported the first observed two-neutrino double electron capture of a xenon-124 atom, an extremely rare event in the universe. The question I ask is whether this can indeed justify continuing otherwise fruitless research. I argue that a fruitful approach to assessing this question should involve the consideration of possible epistemic risks arising from such large-scale scientific research.
The search for dark matter has been an important concern of experimental physicists for the last 20 years. WIMPs are expected to face a reckoning in the next round of experiments, including XENONnT. If WIMPs are eliminated as a candidate for dark matter, ALPs (axion-like particles) may be next in line for the chopping block. How many candidate particle models will be eliminated before the funding bodies of science decide to pull the plug? Overall, the future of dark matter research remains uncertain. It may turn out that all the patience and investment were warranted all along; there is precedent for this, as the decades of research that preceded the detection of the Higgs boson in 2012 clearly demonstrate. However, if the search for dark matter turns out to be a wild-goose chase, this will surely lead not just to a revision of the standard model of cosmology, but may also impact the funding and research practices of physics and other scientific fields more broadly.
“Big Science” typically refers to large-scale, technologically-intensive, and centrally-managed research projects, supported by massive government funding and involving large numbers of technical and support staff. Existing scholarship focuses on astronomical observatories, national physical and biological laboratories, space extravaganzas, and secret military projects conducted at specially constructed, limited-access facilities. Meteorology and climatology, although large in scale and import, for the most part do not fit these categories. Climate engineering, however, does.
This presentation locates recent speculative proposals involving climate engineering within a long history of big meteorology and climatology. The atmosphere is big in scale, big in extent, and big in social import. It always has been. Atmospheric science and services have expanded from local observations to global coverage; in altitude from mountain tops to satellites and space probes; and conceptually from the whims of the gods and deductions of natural philosophers to ideas about fluid dynamics, chaos theory, and climate intervention. The state and behavior of the atmosphere is linked to the economy, governance, military action, and, above all, the everyday life of everyone, everywhere, always.
Since the second half of the 20th century, developments in technology (specifically aviation and rocketry, satellite surveillance, cloud seeding, and computer modeling), combined with growing concerns about climate change and the health and future of the planet, have encouraged a small but vocal group of climate engineers to propose imaginative technological “fixes” for undesirable changes in the climate system. Most of their focus has been on techniques that might attenuate sunshine and sequester carbon dioxide sufficient to provide a degree or two of global cooling, with less attention given to issues involving fresh water resources, food security, and social justice.
In response to the question regarding climate engineering: “Can We Do It?,” many national and international studies have responded, “Perhaps,” but the uncertainties are daunting and the consequences of triggering an adverse response in the climate system would be devastating. The issue of “Should We Try?” raises the issue of governance of climate engineering, which, although more important, has garnered less attention.
It is clear that questions involving the manipulation of the planetary atmosphere cannot be left to a small cadre of engineers with no training or experience outside of their specialties. Continued research into climate geoengineering will provide policymakers with useful scientific information on the feasibility and safety of large scale projects. It remains necessary, however, to ensure that the decisions surrounding research and use (or prohibition) of such technologies be made with democratic input under the auspices of an international agency.
Recent international conventions and protocols represent geopolitical, if somewhat ineffective, interventions in the climate system. Many more policy initiatives are under way. Economics has also begun to play a role, as taxes and incentives are put in place to reduce unwanted emissions. Meanwhile, green social engineers are attempting to convince the general public to live sustainably, while "geoengineers" hold in reserve massive technical fixes for the climate system. Big and speculative issues indeed!
'Geoengineering' has come to refer to massive technological interventions into fundamental earth systems on a planetary scale, often with the aim of counteracting human-induced climate change. Despite a burgeoning literature, some ethical issues surrounding geoengineering remain under-analyzed, barely identified, or in effect ignored. In this paper, we explore one such issue, the threat of generationally parochial geoengineering (GPG), and identify some early warning signs in the current discourse, focusing on stratospheric sulfate injection, a form of solar radiation management. Our emphasis is on motivating the claim that generationally parochial geoengineering is a threat that should be taken seriously at all levels of work on geoengineering, including research, development, and deployment. We also propose some initial guidelines and recommendations for future research and governance.
The Stratospheric Controlled Perturbation Experiment (SCoPEx) is the most recent effort of a group of researchers at Harvard University to conduct a field experiment connected to stratospheric aerosol injection (SAI) in Kiruna, Sweden. Although the envisioned experiment is small in scope and strictly speaking not a large-scale experiment, its connection to solar geoengineering, with its global implications, has triggered considerable controversy. After heavy resistance by the Saami Council, an indigenous peoples’ organization, the SCoPEx Advisory Committee recommended postponing the first test flight, which was originally scheduled for June 2021. Currently the experiment is suspended, and it is unclear whether it will take place or not.
In my talk I will present findings from my master's thesis, in which I use document analysis and qualitative interviews to analyse how different stakeholders frame the SCoPEx project and its implications. Positioning my empirical work within literature from science and technology studies, I argue that the SCoPEx experiment is a fascinating case study in which the political dimensions of experiments, the worth and purpose of public engagement, and global power relations are debated.
Big science budgets, at first glance, look among the more democratically accountable of science budgets. Big science projects draw the attention of elected politicians, and big science usually requires at least some support from elected politicians. While such accountability in the initial set-up of big science is important and valuable, big science can be difficult to fairly evaluate and shut down once it is up and running. The local economic infusion from big science budgets is a key reason why some politicians support the projects, and once sited, big science is hard to let go of for the local politicians who benefit. This is further complicated by the inherent conflict of interest among scientists whose work would be supported by big science funding, who understandably see ongoing support of big science facilities as central to the success of their students and research projects. The experts who know the project best also have the most to lose if a project is canceled. As a result of the combination of local political support and subfield-specific scientific support, big science projects can acquire an inertia of their own, making it difficult to pull back from or redirect funding, even when there are clear reasons to do so. Reasons to cease or redirect big science funding can come from discoveries made in the pursuit of big science, from reasons external to it, or both. Lessons learned in the practice of the big science pursuit can mean some big science projects should be shut down, such as the ill-fated Mohole project, which continued to waste money well after it was clear it would not be scientifically productive. Recognition of social injustices aggravated by big science projects (such as the conflict over indigenous rights on Mauna Kea) can generate reasons to shift how big science is pursued. And the combination of external conditions and scientific discovery can make some big science projects an unwise expenditure, such as the pursuit of the follow-up to the ITER fusion project in Europe, given the timing of any practical assistance with power production (too late). The difficulty of redirecting scientific funding and effort in these cases is palpable, and requires a clearer mechanism for the assessment of ongoing big science. This talk will propose one such mechanism, a public science court, in which arguments for and against continued funding of a big science project are required to be made in an oppositional format, and decisions about whether to continue supporting such projects are made by a jury of non-conflicted citizens. Making decisions about closing down big science is practically and democratically challenging. We need new institutional structures in place to aid us with this, ones that recognize the complex combination of specific expertise and broader contextual knowledge needed, and that make such judgments public and democratically accountable.
When the Event Horizon Telescope (EHT) Collaboration revealed the first-ever ‘direct’ image of a black hole in April 2019, the general press designated it as a photograph, tacitly implying the slightly blurry image was a straightforward representation of a mysterious cosmic object. By contrast, in one of their papers, the EHT team designated their initial images of a black hole as agnostic empirical results, enabling them to identify previously unexpected features of the thus visualised cosmic object. But far from being self-evident, these initially semantically opaque images had to be subjected to complex, multistage interpretational processes to yield new insights into the physical properties of black holes. Drawing on the German media theorist Ludwig Jäger’s concept of semantic transcriptivity, which he defines as a dynamic process of meaning ascription through media-specific operations, I will analyse how the interpretation of the EHT images required comparisons with simulated images derived from theoretical models of black holes. In doing so, I will argue that the ability of EHT images to produce new insights into the physical features of a black hole was constrained by the quality and complexity of these theoretical models.
With the advent of the LHC era, experimental high-energy physics has seen an unprecedented growth not only in the size of machinery, but also in the membership of the research collaborations running detectors and analysing collision data. Following the traditional model of collaborative high-energy physics research, technical and analytical tasks, as well as the credit for new results, are shared among all collaboration members. This emphasis on collective rather than individual achievements has been challenged by a tenfold increase in membership, compared to earlier experiments, and by publication-oriented academic career and reward systems.
In this talk, I will focus on the largest collaboration at the LHC, the ATLAS collaboration, and describe how the challenges presented by the “sheer size” of the collaboration were met, over time, with subtle procedural transformations. Based on an analysis of interviews and internal documents, I argue that these transformations reflect efforts to sustain the traditional model of high-energy physics. In the absence of informal, face-to-face relationships among researchers, formal rules and documentation have been introduced to increase mutual accountability between the collaboration and its members, and to ensure continuous contributions to the experimental infrastructure. While often experienced by researchers as a constraint on individual creativity and career prospects, and thus as hindering the advance of research, the procedural norms and practices within ATLAS can be shown to sustain an ideal of communal epistemic success.
In the last decade, we find a growing need in the philosophy of science to reflect on data science (e.g. Leonelli 2018, McAllister 2011), which brings substantial changes to the nature of scientific research (cf. Leonelli 2020). Currently, there is an effort among data scientists to subject the very process of scientific research, across various scientific fields, to data analysis (Wu et al. 2019, Wang et al. 2013). This effort is most comprehensively presented in The Science of Science project, whose ambition is to use scientific methods (of data science, network science, and artificial intelligence) to study science itself (Wang, Barabási 2021). The Science of Science is supposed to represent the continuation of the efforts of traditional reflections on science – philosophy, historiography, but also the sociology of science (e.g. Giere 2006, Collins, Evans 2017).
The general goal of the paper is to evaluate the representation of large-scale scientific research that can be found in the conclusions of The Science of Science project. The specific goals of the paper are:
(1) Evaluation of the applicability of the Q-factor (“(…) a scientist's sustained ability to systematically turn her projects into high (or low) impact publications”, Wang, Barabási 2021, 65) in the case of scientists involved in large-scale research, and clarification of whether creativity is mainly influenced by internal scientific team collaboration or is substantially dependent on the type of collaboration between different scientific teams.
(2) Verification of the conjecture that the current growth of large teams (showing a power-law distribution) has the potential to threaten the emergence of new scientific breakthroughs (Wang, Barabási 2021, 125–133), due to the disruption of the “large teams develop and small teams disrupt science” (Wu et al. 2019) balance.
(3) Assessment of whether the metric of ultimate impact, which is supposed to serve to predict the future impact of a scientific idea (Wang, Barabási 2021, 218), can correlate with the empirical adequacy of the presented scientific idea (in the case of large-scale scientific research).
Collins, Harry, Evans, Robert (2017). Why Democracies Need Science. Cambridge: Polity Press.
Giere, Ronald (2006). Scientific Perspectivism. Chicago: University of Chicago Press.
Leonelli, Sabina, “Scientific Research and Big Data”, The Stanford Encyclopedia of Philosophy (Summer 2020 Edition), Edward N. Zalta (ed.), URL = https://plato.stanford.edu/archives/sum2020/entries/science-big-data/.
Leonelli, Sabina (2018). The Time of Data: Timescales of Data Use in the Life Sciences. In: Philosophy of Science, 85, 5: 741–754.
McAllister, James (2011). What Do Patterns in Empirical Data Tell Us about the Structure of the World? In: Synthese, 182, 1: 73–87.
Wang, Dashun, Barabási, Albert-László (2021). The Science of Science. Cambridge: Cambridge University Press.
Wang, Dashun, Song, Chaoming, Barabási, Albert-László (2013). Quantifying Long-term Scientific Impact. In: Science, 342: 127–132.
Wu, Lingfei, Wang, Dashun, Evans, James (2019). Large Teams Develop and Small Teams Disrupt Science and Technology. In: Nature, 566: 378–382.
I develop, by means of an idiosyncratic case study in contemporary physics, a paradigmatic characterization of uncreativity in scientific research: wherein a genuine scientific possibility is not regarded as an epistemic possibility, because it happens to be inconceivable in the course of ongoing surrounding research. I then consider how uncreativity intersects with general concerns about the endogenous social dynamics of research communities. To avoid uncreativity, I argue for the importance of sub-disciplinary ‘research strongholds’: tightly clustered sub-communities, each associated with interestingly distinct conceptions of common, overarching community aims. This would seem to present a special case of peer disagreement, which invites norms of discourse and conduct within the community that are similar to ordinary political norms adopted in pluralistic societies. The analogy is especially apt in the context of large-scale collaborations, where success demands that the community fragment without fracture. By contrast, other research environments could allow for fracture: where research strongholds silo as a mechanism to avoid conflict in the wake of anticipated disagreements, with creative flourishing on the whole, at the cost of proliferating new disciplines where previously there was a united one. But creative flourishing within a large-scale collaboration necessitates reconciliatory strategies: ways of remaining united, despite registering fundamental differences regarding conceptions of the community's common, overarching aims.
The talk focuses on the practices of technical review for the particle detector hardware in the context of the ATLAS detector upgrade. Technical review is a variant of peer review applied to technology construction and a feature of internal governance of a scientific collaboration such as the ATLAS Collaboration at CERN’s LHC during the detector upgrade.
While peer review in journals or grant boards is sometimes understood as hindering creativity (Stanford 2019, Alvarez 1987), I show, in contrast, how technical review emerges as an arena for channeling creativity by collaboratively identifying, managing, and negotiating constraints. Since the upgrade design solutions are both path dependent (i.e. constrained by the initial design of the detector) and clearly goal oriented (towards the planned physics performance and objectives), the range of the upgrade design solutions is rather narrow, yet requires creativity in navigating the multiple requirements while finding the best solutions.
The assessment of hardware specifications, construction methods and schedules, test results, and prototypes falls into the domain of a four-stage internal technical review: specifications review, preliminary design review, final design review, and production readiness review. These are meetings where the construction team meets a group of internal ATLAS reviewers to present and discuss the status and plans for the development of a particular hardware component and to receive feedback and approval (or disapproval). The choice of reviewers reflects the need to identify constraints such as interfaces to other components or data quality concerns, while the presence of management representatives serves to monitor and approve the review results on the basis of a bird's-eye view of alignment with the other ongoing upgrade activities and plans. The exchange that happens in these meetings portrays a picture of creativity facilitated by constraints rather than hindered by them, akin to what Pickering calls a process of resistance and accommodation (1995). The talk is based on semi-structured interviews with physicists and engineers involved in the ATLAS technical reviews.
Alvarez, L. W. (1987), Alvarez: Adventures of a Physicist. New York: Basic Books.
Pickering, A. (1995), The Mangle of Practice. Time, Agency, and Science. University of Chicago Press.
Stanford, K. (2019), “Unconceived alternatives and conservativism in science: the impact of professionalisation, peer-review, and Big Science”, Synthese 196: 3915-32.
From Hardwig (1985) onwards, large-scale experiments have elicited philosophical and sociological interest in the collective authorship of large teams, particularly in high-energy physics. This has included pondering the nature and limits of the “knowing subject” in these enterprises (e.g. Knorr-Cetina 1999, Giere 2006, Boyer-Kassem et al. 2017). But what could astronomers as members of a large collaboration possibly mean when they say (as some do) that “a catalogue” – a table of the measured properties of celestial objects – “encodes the collective knowledge of its makers”? What could we learn from a detailed ethnographic account of astronomers’ practical reasoning as they achieve an agreement on such a form of data? And how could this possibly contribute to philosophical discussions of large-scale experiments?
These are questions that puzzle me as I reflect on my two-year ethnographic study of the MUWAGS (Multi-Wavelength Galaxy Survey; pseudo-acronym) collaboration, a team of ca. 30 astronomers investigating galaxy evolution and weak cosmological lensing observationally (but incidentally describing their work as an “experiment”). When eventually published, their catalogue was central to their data release: a large table (90,000 rows and 200 columns) of measurements of objects in a certain part of the sky. It was intended both for the team’s own future work and for uses by other researchers.
Recent discussions of collective knowledge by Margaret Gilbert, Brad Wray, and others have been largely concerned with propositional accounts of knowledge. But this captures only in part what scientists – who are more committed to shared practices than to shared beliefs (Rouse 2003, Chang 2017) – are after.
My paper develops three distinct moves from my ethnographic findings. The first is to take the materiality and mediality of writing into account and point out how it matters for the fixation of data sets that are irreducible to the work of individual team members. The second is to follow Gilbert Ryle (1949) and move from propositional notions of knowledge to conceive of knowing as a ‘capacity’. The third is an invitation to engage the kinship of Ryle’s work with praxeological approaches like ethnomethodology, with the aim of making them fruitful for philosophical studies of large projects in experimental and observational sciences.
Boyer-Kassem, Th., C. Mayo-Wilson and M. Weisberg (eds.) (2017). Scientific Collaboration and Collective Knowledge: New Essays. Oxford: Oxford University Press.
Chang, Hasok (2017). Operational Coherence as the Source of Truth. Proceedings of the Aristotelian Society, vol. 67, part 2, pp. 103–122.
Giere, Ronald (2006). Scientific Perspectivism. Chicago: University of Chicago Press.
Hardwig, John (1985). Epistemic Dependence. Journal of Philosophy 82 (7), pp. 335–349.
Hoeppe, Götz (2021). Encoding Collective Knowledge, Instructing Data Reusers: The Collaborative Fixation of a Digital Scientific Data Set. Computer Supported Collaborative Work 30 (4): 463–505.
Knorr-Cetina, Karin (1999). Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, Mass.: Harvard University Press.
Rouse, Joseph (2003). Kuhn’s Philosophy of Scientific Practice. In: Nickles, Thomas (ed.) Thomas Kuhn. Cambridge: Cambridge University Press, pp. 101–121.
Ryle, Gilbert (1949). The Concept of Mind. London: Hutchinson.
Understanding the inferential structure and justification of scientific measurements has been a crucial problem for scientists and philosophers of science alike. Much recent and classical work on the epistemology of measurement has focussed on an epistemic circularity affecting indirect, theory-mediated measurements (Mach 1896; Chang 2004; van Fraassen 2008; Tal 2017). Conducting such measurements requires theoretical knowledge about their target, while our theoretical models of that target can lack evidence that is independent of these very measurements. As a consequence, philosophers have argued that justification takes the form of bi-directional 'problems of coordination'. Given the circularity of such problems, scientists need to modify measurements and theoretical models iteratively to account for prediction-measurement discrepancies. If they are successfully coordinated, measurements converge within the possible outcomes permitted by our theoretical model of their target.
However, virtually all canonical case studies of physical measurement are based on measurements of small and highly controlled target systems. In my talk, I challenge existing epistemological accounts of justification in measurement through a case study on a central measurement problem in physical geodesy: the determination of the earth’s figure. Geodesists’ target system – the physical earth – proved much more heterogeneous than its theoretically derivable models and was partially inaccessible to instrumentation, while the scale of geodesists’ measurements made it virtually impossible to effectively shield them from the resulting perturbations. As a consequence, the problem of measuring the earth’s figure was incredibly challenging, remaining unresolved for more than 200 years after its first discussion in book 3 of Newton’s Principia (Ohnesorge 2021).
In my talk, I defend two claims about the ways in which this and similar cases from physical geoscience force us to rethink existing accounts of justification in indirect measurement. First, I argue that problems of coordination are not merely a general predicament of indirect measurement but admit of variations in difficulty according to the degrees of theoretical predictability and experimental control of the target system. This has implications for the methodologies by which large-scale measurements in geoscience may be successfully pursued, which I turn to in my second claim. Drawing on novel historical research, I analyse the methodology by which geodesists eventually solved their (difficult) coordination problem and discuss the lessons it holds for similar cases. I characterise that methodology as 'operational pluralism', which aims at isolating and anticipating overlapping perturbations by repeatedly (i) introducing physically distinct measurement operations, (ii) comparing outcomes, and (iii) explaining discrepancies based on perturbations uniquely affecting specific indicators.
While my first claim draws on existing secondary literature on the development of theories and measures of planetary figures throughout the eighteenth and nineteenth centuries (Smith 2014; Torge 2017; Ohnesorge 2021; forthcoming), the second claim is based on an entirely novel historiographical study of geodetic, astronomical, and geophysical research between 1880 and 1924.
This research takes up the Landsat Image Mosaic of Antarctica (LIMA) as an example of international cooperation – and interference – in the composition of 'global' images. LIMA was completed during the International Polar Year 2007-8, though attempts at composing Antarctic photomosaics had been made since the 1970s. Yet the southern regions continued to serve as a site of multiple kinds of resistance to the construction of coherent, continuous images from satellite data: environmental resistance by way of snow and cloud that frequently obscured landmasses; technical resistance in the form of difficult storage and analysis of data from this region; and (indirect) political resistance, via the deprioritization of the Antarctic relative to other user bases and perceived global problems. In the process, the edges of individual satellite images became the sites of decisions about how and what a satellite image should represent, as the edges of the continent itself likewise became the focus of attention about how maps should represent shifting realities. What was intended as a project of observation became an experiment in representation. This intensified after the collapse of the Larsen B ice shelf, and the corresponding rise to prominence of questions about global warming, centering around the question of the future behavior of the ice sheets.
The siting of large-scale precise instrumentation is gaining importance as astrophysical instrumentation becomes increasingly sensitive. Yet historians and philosophers of science have yet to explore the siting of such instruments, the implications of location for the design and function of the experiment, and the impact of siting on data analysis. This talk fills that gap by focusing on LIGO's more than decade-long endeavor, from 1981 to 1994, to find locations for its twin laser interferometers that would provide the ability to detect faint gravitational waves produced by cataclysmic events in our universe, such as the collision of two black holes. I will focus on the approaches LIGO physicists used to locate and investigate candidate locations, and on how they advocated for stillness in the face of existing, conflicting land uses in order to ensure that the baseline characteristics of the experiment could be achieved in locations of overlapping land uses.