Research

I specialize in the philosophy of science and technology. My work focuses on the interplay between instruments and knowledge claims in the sciences, on the roles epistemic and social values play in scientific inquiry, and on the relationships between error, uncertainty and risk. My recent studies focus on the philosophy of measurement, an area within philosophy of science that deals with the concepts and problems involved in designing, operating and interpreting measurement procedures in the natural and social sciences.

I am currently developing a model-based epistemology of measurement that highlights the roles of idealization, abstraction and prediction in establishing measurement outcomes. This account is informed by the history and current practice of the relevant scientific disciplines, especially metrology (the science of measurement) and psychometrics.

I am also interested in the ethical and social implications of big data and machine learning, and in the possibility of addressing challenges in data ethics from a measurement-theoretic perspective.

Current research projects

Data Ethics and Responsible Measurement

Data ethics is dedicated to studying moral problems raised by (i) the availability of large amounts of data in areas ranging from biomedicine to psychology to criminal justice, and (ii) the increasing reliance on algorithms to analyze such data and inform high-stakes decision-making. The impact of such algorithms on a growing number of aspects of human life makes data ethics a central concern for legal scholars, policy makers, data scientists, technology companies, and the general public. I am at the early stages of developing a novel approach to problems in data ethics centered on a notion of ‘responsible measurement’. The novelty of this approach consists in the recognition that some of the most perplexing problems of data ethics belong neither to ethics nor to data science proper, but to measurement design. Big-data algorithms are measuring instruments of a special sort: they measure, for example, an individual’s risk of recidivism, their personality traits, or their risk of suffering from a specific disease. Failures of fairness, accountability and transparency in big data and AI are not usually the result of faulty software, nor do they arise from a deficient understanding of ethical concepts. Rather, these problems often stem from a mismatch between the values that guide the collection and processing of data and the ethical concerns that arise when such data are used to score, rank and classify humans.

Responsible measurement is an approach to the design of measurement systems that combines the virtues of epistemic responsibility (commitment to epistemic values, e.g., accuracy) and moral responsibility (commitment to ethical and social values, e.g., well-being and justice). On this approach, the reliability of systems of scoring, ranking and classification is inseparable from their ability to bring about positive social change. The theory of responsible measurement will articulate the requirements of fit between designer and stakeholder values in the processing and use of big data. A responsible algorithm needs to measure the intended attribute, rather than the attribute most readily suggested by the data: for example, measure the risk of recidivism an individual would have in an ideal society devoid of racial bias, rather than in actual society. Additionally, a responsible algorithm must be embedded in a network of institutions that safeguard against measuring unintended attributes that may be used for purposes that are not in the best interests of stakeholders: e.g., using social media ‘likes’ to measure personality traits, which may be exploited for targeted political advertising.
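
The gap between the intended attribute and the attribute most readily suggested by the data can be made concrete with a small simulation. The following Python sketch is purely illustrative: the population, the group labels and the detection rates are invented, and no real dataset or deployed algorithm is being described. It shows how a score built on a biased proxy (re-arrest records) can diverge from the intended attribute (reoffence propensity) even when that attribute is identically distributed across groups.

    # Illustrative sketch with hypothetical data: a score trained on a biased
    # proxy (re-arrest) diverges from the intended attribute (reoffence
    # propensity), even though that propensity is the same in both groups.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    group = rng.integers(0, 2, n)               # two demographic groups, 0 and 1

    # Intended attribute: underlying reoffence propensity, identically
    # distributed in both groups by construction.
    propensity = rng.uniform(0.0, 0.4, n)
    reoffends = rng.random(n) < propensity

    # Observed proxy: re-arrest, recorded only when the offence is detected.
    # Hypothetically, group 1 is policed twice as heavily as group 0.
    detection_rate = np.where(group == 1, 0.8, 0.4)
    rearrested = reoffends & (rng.random(n) < detection_rate)

    for g in (0, 1):
        mask = group == g
        print(f"group {g}: intended attribute (reoffence rate) = "
              f"{reoffends[mask].mean():.3f}, proxy score (re-arrest rate) = "
              f"{rearrested[mask].mean():.3f}")

    # The proxy-based score makes group 1 look roughly twice as 'risky' even
    # though the intended attribute does not differ between the groups.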

The Conceptual Foundations of Psychometrics

Psychometric tests and questionnaires provide evidence for decision making on a variety of human characteristics such as visual function, reading comprehension, anxiety, self-esteem, well-being, and pain. An important methodological requirement when developing a new questionnaire or psychological test is to check whether the instrument measures the property it is intended to measure. Psychometricians call this sort of checking 'validation'. There is little agreement in the psychometric literature about what validity means or about the appropriate methods for determining whether an instrument is valid. This disagreement results in a staggering variety of validity concepts: content, construct and criterion validity; substantive, structural, generalizable, external, and consequential validity; and dozens of others. At the root of the disagreement is the fact that the properties being measured are not directly observable and must be inferred from test scores and other available data.

Despite the ubiquity of psychometric testing and its impact on society, the conceptual foundations of psychometrics have been largely ignored in the philosophy of science. This project aims to develop a theoretical framework that sheds light on fundamental concepts in psychometrics by drawing on insights from recent work in the philosophy of measurement. Specifically, model-based conceptions of measurement suggest that physical and behavioural measurement have much more in common in terms of their inferential structure and validation methodology than previously supposed. Much like model-based measurement in physics, psychometrics involves the idealized representation of multiple empirical procedures (e.g. questionnaires) in terms of a single abstract construct (e.g. reading comprehension), and the establishment of coherence among the consequences of such representations. The exploration of this hitherto unnoticed similarity in inferential structure has the potential to produce exciting new cross-disciplinary insights into the epistemological foundations of both physical and behavioural measurement, and to shed new light on methodological problems associated with the design and validation of psychometric instruments.
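
The inferential structure just described can be illustrated with a toy simulation. The sketch below uses made-up data and a deliberately simple linear model rather than any particular psychometric package: several questionnaire items are represented as noisy indicators of a single latent construct, and coherence among them is then checked through inter-item correlations and Cronbach's alpha, the sort of internal-consistency evidence that figures in validation.

    # Illustrative sketch with simulated data: questionnaire items as noisy
    # indicators of one abstract construct, plus two simple coherence checks.
    import numpy as np

    rng = np.random.default_rng(1)
    n_persons, n_items = 500, 6

    # Idealized model: item score = loading * latent construct + noise.
    latent = rng.normal(0, 1, n_persons)           # e.g. 'reading comprehension'
    loadings = rng.uniform(0.6, 0.9, n_items)
    noise = rng.normal(0, 0.5, (n_persons, n_items))
    items = latent[:, None] * loadings + noise     # simulated item scores

    # Coherence check 1: inter-item correlations should be uniformly positive.
    corr = np.corrcoef(items.T)
    mean_r = corr[np.triu_indices(n_items, k=1)].mean()
    print("mean inter-item correlation:", round(mean_r, 2))

    # Coherence check 2: Cronbach's alpha for the summed total score.
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = (n_items / (n_items - 1)) * (1 - item_var / total_var)
    print("Cronbach's alpha:", round(alpha, 2))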

Economies of Uncertainty and the 2018 Metric Reform

In 2018 the General Conference on Weights and Measures redefined four of the base units of the International System of Units (SI) – the kilogram, ampere, mole and kelvin – by fixing the numerical values of four fundamental constants: the Planck constant, the elementary charge, the Avogadro constant and the Boltzmann constant, respectively. This change was meant to release the uncertainties of metric measurements from their dependence on the idiosyncrasies of particular material artefacts. The reform of the metric system provides a valuable and timely opportunity to study the methods, concepts and controversies of metrology in real time, and in so doing to shed new light on the epistemology of standardization. In particular, the reform raises the following questions:

  • Does standardization produce new empirical knowledge, and if so how? Specifically, how can fixing values through stipulation constitute scientific progress?
  • What roles does theory play in standardization? How can a fundamental constant – a parameter in an equation representing a law of nature – replace a material measurement standard?
  • Is the metric reform implicitly changing the meaning of measurement? Does it still make sense to view measurement as the observation of empirical relations among objects, or should measurement be reconceptualised as a more abstract activity?

The key to answering these questions, I argue, is to recognize that the redefinition of SI units implicitly promotes a new economy of uncertainty in the physical sciences, i.e. a new set of principles for the management of scientific uncertainty that treats measurement as the approximation of ideal theoretical relations. This shift does not directly generate new knowledge, but instead affords scientists new kinds of ignorance. In particular, it allows experimenters to ignore some of the instabilities involved in maintaining and comparing material instruments in exchange for ‘cheaper’ background knowledge involved in modelling those instruments theoretically. I explore some of the counterintuitive epistemological consequences of this shift.
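
To make this shift concrete, the sketch below lists the four exactly fixed numerical values adopted by the reform and works through a toy example (with an invented energy measurement) of what the redefinition of the kelvin implies: once the Boltzmann constant is exact, the uncertainty of a thermodynamic temperature is simply the uncertainty of an energy determination, and the unit's definition contributes none.

    # The four defining constants whose numerical values the reform fixed
    # exactly, followed by a toy (invented) temperature determination.
    h   = 6.626_070_15e-34    # Planck constant, J s       (exact by definition)
    e   = 1.602_176_634e-19   # elementary charge, C       (exact by definition)
    N_A = 6.022_140_76e23     # Avogadro constant, 1/mol   (exact by definition)
    k   = 1.380_649e-23       # Boltzmann constant, J/K    (exact by definition)

    # Under the revised SI the kelvin is defined by fixing k, so determining a
    # thermodynamic temperature amounts to determining a mean thermal energy
    # and dividing by an exact constant: all of the uncertainty sits in the
    # energy determination, none in the definition of the unit.
    energy = 4.141_947e-21      # hypothetical measured thermal energy, J
    u_energy = 0.000_004e-21    # hypothetical standard uncertainty, J

    T = energy / k
    u_T = u_energy / k          # the exact k contributes no uncertainty
    print(f"T = {T:.4f} K  +/-  {u_T:.4f} K")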

The Epistemology of Measurement

Metrology, the science of measurement and its application, became a topic of philosophical interest only in the last two decades. Philosophical accounts of measurement have traditionally focused on the mathematical properties of units and scales, and have neglected the complex procedures involved in standardizing and calibrating measuring instruments. Metrologists, i.e. experts who work at national and international institutes of standards, are responsible for pushing the limits of available technology towards more accurate and precise measurement. This is achieved by constructing a host of theoretical and statistical models that are used to assess and minimize uncertainties in measurement.

My recent work offers a novel epistemology of measurement in the physical sciences, engaging extensively with the historical and material dimensions of scientific practice. In particular, I address the following three questions:

  • How is it possible to tell whether an instrument measures the quantity it is intended to?
  • What do claims to measurement accuracy amount to, and how might such claims be justified?
  • When is disagreement among instruments a sign of error, and when does it imply that instruments measure different quantities?

Borrowing insights from metrology as well as the philosophy of modelling and experimentation, I analyze the conditions under which claims about measurement, accuracy and error are justified. I argue that it is only against the background of an idealized model of the measuring apparatus that measurement outcomes can be inferred from instrument readings. Specifically, I show that measurement claims are grounded in model-based predictions concerning the behaviour of instruments, and that measurement uncertainty amounts to the uncertainty of these predictions. My conclusion challenges the widespread supposition that measurement and prediction are distinct epistemic activities.
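
A minimal numerical sketch may help fix ideas. The Python code below uses an invented calibration model of a resistance thermometer (all numbers are hypothetical) to show the sense in which a measurement outcome is a model-based prediction inferred from an instrument indication, with the reported uncertainty arising from uncertainty about the model's parameters, propagated here by simple Monte Carlo sampling in the spirit of Supplement 1 to the Guide to the Expression of Uncertainty in Measurement (GUM).

    # Illustrative sketch with invented numbers: inferring a measurement
    # outcome from an instrument indication via an idealized model of the
    # instrument, and propagating parameter uncertainty by Monte Carlo.
    import numpy as np

    rng = np.random.default_rng(2)
    n_draws = 100_000

    # Indication: raw resistance shown by a hypothetical platinum thermometer.
    indication_ohm = 109.73

    # Idealized linear model of the instrument: T = a + b * (R - R0), with
    # calibration parameters known only up to some uncertainty.
    a = rng.normal(0.0, 0.02, n_draws)       # offset at R0, in deg C
    b = rng.normal(2.5694, 0.0012, n_draws)  # sensitivity, deg C per ohm
    R0 = 100.0                               # reference resistance, ohm

    # The measurement outcome is a model-based prediction of the temperature
    # behind the indication, reported together with its uncertainty.
    T_draws = a + b * (indication_ohm - R0)
    print(f"outcome: {T_draws.mean():.3f} deg C, "
          f"standard uncertainty: {T_draws.std(ddof=1):.3f} deg C")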