Matthew Sample is a philosopher. He works at the intersection of tech governance, (bio/AI/neuro)ethics, philosophy of science, and STS. (CV)
For past research, see below. For industry-focused updates, visit LinkedIn.
|| selected publications
- (draft) “Three Challenges for the Cosmopolitan Governance of Technoscience.” (Preprint)
Abstract↓
Promising new solutions or risking unprecedented harms, science and its technological affordances are increasingly portrayed as matters of global concern, requiring in-kind responses. In a wide range of recent discourses and global initiatives, from the International Summits on Human Gene Editing to the Intergovernmental Panel on Climate Change, experts and policymakers routinely invoke cosmopolitan aims. The common rhetoric of a shared human future or of one humanity, however, does not always correspond to practice. Global inequality and a lack of accountability within most institutional contexts of international governance render these cosmopolitan proclamations of ‘one human community’ incoherent and even harmful. More generally, there exists no shared normative standard for the cosmopolitan governance of science, with which such global initiatives could be evaluated. Taking a broadly philosophical perspective, the present paper aims to better understand this problem situation, identifying three high-level challenges for the global governance of technoscience: problematic ideals of technology and science, the unjust formation of “global” concerns, and the limitations of cosmopolitan theory. By holistically engaging these jointly empirical and normative sites of inquiry, scholars can better support humanity’s re-imagination of technoscientific practices within and beyond the nation-state.
- (2023) “Critical Contextual Empiricism and the Politics of Knowledge.” Theory of Science. (PDF)
Abstract↓
What are philosophers doing when they prescribe a particular epistemology for science? According to science and technology studies, the answer to this question implicates both knowledge and politics, even when the latter is hidden. Exploring this dynamic via a specific case, I argue that Longino’s “critical contextual empiricism” ultimately relies on a form of political liberalism. Her choice to nevertheless foreground epistemological concerns can be clarified by considering historical relationships between science and society, as well as the culture of academic philosophy. This example, I conclude, challenges philosophers of science to consider the political ideals and accountability entailed by their prescribed knowledge practices.
- (2023) “Science, Responsibility, and the Philosophical Imagination.” Synthese. (PDF)
Abstract↓
If we cannot define science using only analysis or description, then we must rely on imagination to provide us with suitable objects of philosophical inquiry. This process ties our intellectual findings to the particular ways in which we philosophers think about scientific practice and carve out a cognitive space between real world practice and conceptual abstraction. As an example, I consider Heather Douglas’s work on the responsibilities of scientists and document her implicit ideal of science, defined primarily as an epistemic practice. I then contrast her idealization of science with an alternative: “technoscience,” a heuristic concept used to describe nanotechnology, synthetic biology, and similar “Mode 2” forms of research. This comparison reveals that one’s preferred imaginary of science, even when inspired by real practices, has significant implications for the distribution of responsibility. Douglas’s account attributes moral obligations to scientists, while the imaginaries associated with “technoscience” and “Mode 2 science” spread responsibility across the network of practice. This dynamic between mind and social order, I argue, demands an ethics of imagination in which philosophers of science hold themselves accountable for their imaginaries. Extending analogous challenges from feminist philosophy and Mills’s “‘Ideal Theory’ as Ideology,” I conclude that we ought to reflect on the idiosyncrasy of the philosophical imagination and consider how our idealizations of science, if widely held, would affect our communities and broader society.
- (2022) with Wren Boehlen et al. “Brain-Computer Interfaces, Inclusive Innovation, and the Promise of Restoration: A Mixed-Methods Study with Rehabilitation Professionals.” Engaging Science, Technology, and Society. (PDF)
Abstract↓
Over the last two decades, researchers have promised “neuroprosthetics” for use in physical rehabilitation and to treat patients with paralysis. Fulfilling this promise is not merely a technical challenge but is accompanied by consequential practical, ethical, and social implications that warrant sociological investigation and careful deliberation. In response, this paper explores how rehabilitation professionals evaluate the development and application of BCIs. It thereby also asks how BCIs come to be seen as desirable or not, and implicitly, what types of persons, rights, and responsibilities are assumed in this discourse. To this end, we conducted a web-based survey (N=135) and follow-up interviews (N=15) with Canadian professionals in physical therapy, occupational therapy, and speech-language pathology. We find that rehabilitation professionals, like other publics, express hope and enthusiasm regarding the use of BCIs for assistive purposes. They envision BCI devices as powerful means to reintegrate patients and disabled people into social life but also express practical and ethical reservations about the technology, positioning themselves as uniquely qualified to inform responsible BCI design and implementation. These results further illustrate the nascent “co-production” of neural technologies and social order. More immediately, they also pose a serious challenge for implementing frameworks of responsible innovation; merely prescribing more inclusive technology development may not counteract technocratic processes and widely held ableist views about the need to augment certain bodies using technology.
- (2021) with Eric Racine. “Pragmatism for a Digital Society: The (In)Significance of Artificial Intelligence and Neural Technology.” Clinical Neurotechnology meets Artificial Intelligence. DOI: 10.1007/978-3-030-64590-8_7. (Preprint)
Abstract↓
Headlines in 2019 are inundated with claims about the “digital society,” making sweeping assertions of societal benefits and dangers caused by a range of technologies. This situation would seem an ideal motivation for ethics research, and indeed much research on this topic is published, with more every day. However, ethics researchers may feel a sense of déjà vu, as they recall decades of other heavily promoted technological platforms, from genomics and nanotechnology to machine learning. How should ethics researchers respond to the waves of rhetoric and accompanying ethics research? What makes the digital society significant for ethics research? In this paper, we consider two examples of digital technologies (artificial intelligence and neural technologies), showing the pattern of societal and academic resources dedicated to them. This pattern, we argue, reveals the jointly sociological and ethical character of significance attributed to emerging technologies. By attending to insights from pragmatism and science and technology studies, ethics researchers can better understand how these features of significance affect their work and adjust their methods accordingly. In short, we argue that the significance driving ethics research should be grounded in public engagement, critical analysis of technology’s “vanguard visions,” and a personal attitude of reflexivity.
- (2019) with Sattler et al. “Do Publics Share Experts’ Concerns about Brain–Computer Interfaces? A Trinational Survey on the Ethics of Neural Technology.” Science, Technology, & Human Values. (Preprint)
Abstract↓
Since the 1960s, scientists, engineers, and healthcare professionals have developed brain–computer interface (BCI) technologies, connecting the user’s brain activity to communication or motor devices. This new technology has also captured the imagination of publics, industry, and ethicists. Academic ethics has highlighted the ethical challenges of BCIs, although these conclusions often rely on speculative or conceptual methods rather than empirical evidence or public engagement. From a social science or empirical ethics perspective, this tendency could be considered problematic and even technocratic because of its disconnect from publics. In response, our trinational survey (Germany, Canada, and Spain) reports public attitudes toward BCIs (N = 1,403) on ethical issues that were carefully derived from academic ethics literature. The results show moderately high levels of concern toward agent-related issues (e.g., changing the user’s self) and consequence-related issues (e.g., new forms of hacking). Both facets of concern were higher among respondents who reported as female or as religious, while education, age, own and peer disability, and country of residence were associated with either agent-related or consequence-related concerns. These findings provide a first look at BCI attitudes across three national contexts, suggesting that the language and content of academic BCI ethics may resonate with some publics and their values.
- (2019) “Brain-Computer Interfaces and Personhood: Interdisciplinary Deliberations on Neural Technology.” Journal of Neural Engineering. (Preprint)
Abstract↓
Scientists, engineers, and healthcare professionals are currently developing a variety of new devices under the category of brain-computer interfaces (BCIs). Current and future applications are both medical/assistive (e.g., for communication) and non-medical (e.g., for gaming). This array of possibilities comes with ethical challenges for all stakeholders. As a result, BCIs have been an object of both hope and concern in various media. We argue that these conflicting sentiments can be productively understood in terms of personhood, specifically the impact of BCIs on what it means to be a person and to be recognized as such by others. To understand the dynamics of personhood in the context of BCI use and investigate whether ethical guidance is required, a meeting entitled BCIs and Personhood: A Deliberative Workshop was held in May 2018. In this article, we describe how BCIs raise important questions about personhood and propose recommendations for BCI development and governance.
- (2019) with Boulicault et al. “Multi-Cellular Engineered Living Systems: Building a Community around Responsible Research on Emergence.” Biofabrication, 11(4), 043001. (Preprint)
Abstract↓
Ranging from miniaturized biological robots to organoids, Multi-Cellular Engineered Living Systems (M-CELS) pose complex ethical and societal challenges. Some of these challenges, such as how to best distribute risks and benefits, are likely to arise in the development of any new technology. Other challenges arise specifically because of the particular characteristics of M-CELS. For example, as an engineered living system becomes increasingly complex, it may provoke societal debate about its moral considerability, perhaps necessitating protection from harm or recognition of positive moral and legal rights, particularly if derived from cells of human origin. The use of emergence-based principles in M-CELS development may also create unique challenges, making the technology difficult to fully control or predict in the laboratory as well as in applied medical or environmental settings. In response to these challenges, we argue that the M-CELS community has an obligation to systematically address the ethical and societal aspects of research and to seek input from and accountability to a broad range of stakeholders and publics. As a newly developing field, M-CELS has a significant opportunity to integrate ethically responsible norms and standards into its research and development practices from the start. With the aim of seizing this opportunity, we identify two general kinds of salient ethical issues arising from M-CELS research, and then present a set of commitments to and strategies for addressing these issues. If adopted, these commitments and strategies would help define M-CELS as not only an innovative field, but also as a model for responsible research and engineering.
- (2019) with Wren Boehlen. “Rehabilitation Culture and Its Impact on Technology: Unpacking Practical Conditions for Ultrabilitation.” Disability and Rehabilitation. (Preprint)
Abstract↓
Purpose: It has been proposed that rehabilitation practice should expand its aims beyond recovery to “ultrabilitation”, but only if certain biological, technological, and psychosocial conditions are met. There is thus an opportunity to connect ultrabilitation, as a concept, to adjacent literature on assistive technology and sociotechnical systems.
Method: We draw on insights from sociology of technology and responsible innovation, as well as concrete examples of neural devices and the culture of rehabilitation practice, to further refine our understanding of the conditions of possibility for ultrabilitation.
Results: “Assistive” technologies can indeed be re-imagined as “ultrabilitative”, but this shift is both psychosocial and technological in nature, such that rehabilitation professionals will likely play a key role in this shift. There is not, however, sufficient evidence to suggest whether they will support or hinder ultrabilitative uses of technology.
Conclusion: Advancing the idea and project of ultrabilitation must be grounded in a nuanced understanding of actual rehabilitation practice and the norms of broader society, which can be gained from engaging with adjacent literatures and by conducting further research on technology use in rehabilitation contexts.
- (2018) with Eric Racine. “Two problematic foundations of neuroethics and pragmatist reconstructions.” Cambridge Quarterly of Healthcare Ethics, 27(4), 566-577. (Preprint)
Abstract↓
Common understandings of neuroethics, i.e., of its distinctive nature, are premised on two distinct sets of claims: (1) neuroscience can change views about the nature of ethics itself and neuroethics is dedicated to reaping such an understanding of ethics; (2) neuroscience poses challenges distinct from other areas of medicine and science and neuroethics tackles those issues. Critiques have rightfully challenged both claims, stressing how the first may lead to problematic forms of reductionism while the second relies on debatable assumptions about the nature of bioethics specialization and development. Informed by philosophical pragmatism and our experience in neuroethics, we argue that these claims are ill-founded and should give way to pragmatist reconstructions. Namely, neuroscience, much like other areas of empirical research on morality, can provide useful information about the nature of morally problematic situations but it does not need to promise radical and sweeping changes to ethics based on neuroscientism. Furthermore, the rationale for the development of neuroethics as a specialized field need not be premised on the distinctive nature of the issues it tackles or of neurotechnologies. Rather, it can espouse an understanding of neuroethics as both a scholarly and a practical endeavor dedicated to resolving a series of problematic situations raised by neurological and psychiatric conditions.
- (2017) “Silent Performances: Are Repertoires Really Post-Kuhnian?” Studies in History and Philosophy of Science Part A. (Preprint)
Abstract↓
Ankeny and Leonelli (2016) propose “repertoires” as a new way to understand the stability of certain research programs as well as scientific change in general. By bringing a more complete range of social, material, and epistemic elements into one framework, they position their work as a correction for the Kuhnian impulse in philosophy of science and other areas of science studies. I argue that this “post-Kuhnian” move is not complete, and that repertoires maintain an internalist perspective. Comparison with an alternative framework, the “sociotechnical imaginaries” of Jasanoff and Kim (2015), illustrates precisely which elements of practice are externalized by Ankeny and Leonelli. Specifically, repertoires discount the role of audience, without whom the repertoires of science are unintelligible, and lack an explicit place for ethical and political imagination, which provide meaning for otherwise mechanical promotion of particular research programs. This comparison reveals, I suggest, two distinct modes of scholarship, one internalist and the other critical. While repertoires can be modified to meet the needs of critical STS scholars and to completely reject Kuhn's internalism, whether or not we do so depends on what we want our scholarship to achieve.
- (2015) “Stanford’s Unconceived Alternatives from the Perspective of Epistemic Obligations.” Philosophy of Science. (Preprint) (Link)