Publications
- Discourse on Measurement
with Arthur Paul Pedersen, David Kellen, and others.
Proceedings of the National Academy of Sciences. Vol. 122, No. 5. 2025.
Measurement literacy is required for strong scientific reasoning, effective experimental design, conceptual and empirical validation of measurement quantities, and the intelligible interpretation of error in theory construction. This discourse examines how issues in measurement are posed and resolved and addresses potential misunderstandings. Examples drawn from across the sciences are used to show that measurement literacy promotes the goals of scientific discourse and provides the necessary foundation for carving out perspectives and carrying out interventions in science.
@article{PedersenEtAl2025,
title = {Discourse on Measurement},
author = {Pedersen, Arthur Paul and Kellen, David and Mayo-Wilson, Conor and Davis-Stober, Clintin P. and Dunn, John C. and Khan, M. Ali and Stinchcombe, Maxwell B. and Kalish, Michael L. and Tentori, Katya and Haaf, Julia},
journal = {Proceedings of the National Academy of Sciences},
volume = {122},
number = {5},
year = {2025},
doi = {10.1073/pnas.2401229121} }
- Bamboozled by Bonferroni
Philosophy of Science. Vol. 91, No. 5. December 2024. pp. 1498-1508.
When many statistical hypotheses are evaluated simultaneously, statisticians often recommend adjusting (or "correcting") standard hypothesis tests. In this article, I (1) distinguish two senses of adjustment, (2) investigate the prudential and epistemic goals that adjustment might achieve, and (3) identify conditions under which a researcher should not adjust for multiplicity in the two senses I identify. I tentatively conclude that the goals of scientists and the public may be misaligned with the decision criteria used to evaluate multiple-testing regimes.
@article{MayoWilson2024Bonferroni,
title = {Bamboozled by Bonferroni},
author = {Conor Mayo-Wilson},
journal = {Philosophy of Science},
volume = {91},
number = {5},
year = {2024},
pages = {1498--1508},
doi = {10.1017/psa.2024.13}
}
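For readers unfamiliar with the adjustment at issue, here is a minimal sketch of the classical Bonferroni correction (standard textbook material, not code from the article; the function name `bonferroni_reject` is mine):

```python
# Bonferroni adjustment: with m simultaneous tests, reject hypothesis i
# only if p_i <= alpha / m.  This guarantees that the chance of *any*
# false rejection (the familywise error rate) is at most alpha.

def bonferroni_reject(p_values, alpha=0.05):
    """Return True for each hypothesis the adjusted test rejects."""
    m = len(p_values)
    return [p <= alpha / m for p in p_values]

p_values = [0.001, 0.02, 0.04, 0.3]
print(bonferroni_reject(p_values))    # adjusted: [True, False, False, False]
print([p <= 0.05 for p in p_values])  # unadjusted: [True, True, True, False]
```

The contrast between the two printed lists is the phenomenon the article examines: adjustment trades power (fewer rejections) for control of the familywise error rate.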
- Success Concepts in Causal Discovery
with Konstantin Genin
Behaviormetrika. 2024. Vol. 51, pp. 515-538.
Existing causal discovery algorithms are often evaluated using two success criteria, one of which is typically unachievable and the other of which is too weak for practical purposes. The unachievable criterion, uniform consistency, requires that a discovery algorithm identify the correct causal structure at a known sample size. The weak but achievable criterion, pointwise consistency, requires only that one identify the correct causal structure in the limit. We investigate two intermediate success criteria, decidability and progressive solvability, that are stricter than mere consistency but weaker than uniform consistency. To do so, we review several topological theorems characterizing which discovery problems are decidable and/or progressively solvable. These theorems apply to any problem of statistical model selection, but in this paper, we apply the theorems only to selection of causal models. We show, under several common modeling assumptions, that there is no uniformly consistent procedure for identifying the direction of a causal edge, but there are statistical decision procedures and progressive solutions. We focus on linear models in which the error terms are either non-Gaussian or contain no Gaussian components; the latter modeling assumption is novel to this paper. We focus especially on which success criteria remain feasible when confounders are present.
@article{GeninMayoWilson2024,
title = {Success Concepts in Causal Discovery},
author = {Konstantin Genin and Conor Mayo-Wilson},
journal = {Behaviormetrika},
year = {2024},
volume = {51},
pages = {515--538} }
- Evidence in Classical Statistics
with Samuel Fletcher
The Routledge Handbook for the Philosophy of Evidence. 2023. Eds. Maria Lasonen-Aarnio and Clayton Littlejohn. pp. 515-527.
The dominance of classical statistics raises a puzzle for epistemologists. On one hand, science is a paradigmatic source of good evidence, with quantitative experimental science often described in classical statistical terms. On the other, the hybrid of Fisherian and Neyman-Pearsonian techniques is generally rejected by philosophers, statisticians, and scientists who study the foundations of statistics. So why is the use of classical statistics in empirical science so epistemically successful? Do classical "measures" of evidence actually measure anything epistemically important? This chapter provides some positive answers to these questions.
@incollection{FletcherMayoWilson2023,
title = {Evidence in Classical Statistics},
author = {Samuel Fletcher and Conor Mayo-Wilson},
editor = {Maria Lasonen-Aarnio and Clayton Littlejohn},
booktitle = {The Routledge Handbook for the Philosophy of Evidence},
publisher = {Routledge},
year = {2023},
pages = {515--527} }
- The Computational Philosophy
with Kevin Zollman
Synthese. Vol. 199. 2021. pp. 3647--3673.
Modeling and computer simulations, we claim, should be considered core philosophical methods. More precisely, we will defend two theses. First, philosophers should use simulations for many of the same reasons we currently use thought experiments. In fact, simulations are superior to thought experiments in achieving some philosophical goals. Second, devising and coding computational models instill good philosophical habits of mind. Throughout the paper, we respond to the often implicit objection that computer modeling is "not philosophical."
@article{MayoWilsonZollman2021,
title = {The Computational Philosophy},
author = {Conor Mayo-Wilson and Kevin Zollman},
journal = {Synthese},
volume = {199},
year = {2021},
pages = {3647--3673},
doi = {10.1007/s11229-020-02950-3} }
- Statistical decidability in linear, non-Gaussian models
with Konstantin Genin
NeurIPS 2020 Workshop on Causal Discovery and Causality-Inspired Machine Learning. 2020.
The main result of this paper is to show that, without any further assumptions, the direction of any causal edge in a LiNGAM is what we call statistically decidable (Genin, 2018). Statistical decidability is a reliability concept that is, in a sense, intermediate between the familiar notions of consistency and uniform consistency. A set of models is statistically decidable if, for any r>0, there is a consistent procedure that, at every sample size, hypothesizes a false model with chance less than r. Such procedures may exist even when uniformly consistent ones do not. Uniform consistency requires that one be able to determine the sample size a priori at which one's chances of identifying the true model are at least 1-r; statistical decidability requires no such pre-experimental guarantees. It is trivial to show that there is no uniformly consistent algorithm for determining the direction of a causal edge in LiNGAMs. Thus, our main result illuminates how the notions of uniform consistency and statistical decidability come apart. Our main result also illustrates how discovery of LiNGAMs differs from their Gaussian counterparts. As sample size increases, consistent discovery algorithms for (the Markov equivalence class of) Gaussian models can be forced to "flip" their judgments about whether X causes Y or vice versa, no matter how strong the effect of X on Y (Kelly and Mayo-Wilson, 2010). Our main result, in contrast, shows that consistent discovery algorithms for LiNGAMs can avoid such flipping; whether existing algorithms do avoid flipping is a matter for further investigation.
@inproceedings{GeninMayoWilson2020,
title = {Statistical decidability in linear, non-Gaussian models},
author = {Konstantin Genin and Conor Mayo-Wilson},
booktitle = {NeurIPS 2020 Workshop on Causal Discovery and Causality-Inspired Machine Learning},
year = {2020} }
- Causal Identifiability and Piecemeal Experimentation
Synthese. 2019. Vol. 196. pp. 3029-3065.
In medicine and the social sciences, researchers often measure only a handful of variables simultaneously. The underlying assumption behind this methodology is that combining the results of dozens of smaller studies can, in principle, yield as much information as one large study, in which dozens of variables are measured simultaneously. Mayo-Wilson (2012, 2013) shows that assumption is false when causal theories are inferred from observational data. This paper extends Mayo-Wilson's results to cases in which experimental data is available. I prove several new theorems that show that, as the number of variables under investigation grows, experiments do not improve, in the worst-case, one's ability to identify the true causal model if one can measure only a few variables at a time. However, stronger statistical assumptions (e.g., Gaussianity) significantly aid causal discovery in piecemeal inquiry, even if such assumptions are unhelpful when all variables can be measured simultaneously.
@article{MayoWilson2019PiecemealExperimentation,
title = {Causal Identifiability and Piecemeal Experimentation},
author = {Conor Mayo-Wilson},
journal = {Synthese},
volume = {196},
year = {2019},
pages = {3029--3065},
doi = {10.1007/s11229-018-1826-4} }
- Epistemic Closure in Science
Philosophical Review. Volume 127. Issue 1. January 2018. pp. 73-114.
Epistemic closure (EC) is the thesis that knowledge is closed under known entailment. Although several theories of knowledge violate EC, failures of EC seem rare in science. I argue that, surprisingly, there are genuine violations of EC according to theories of knowledge widely endorsed in the sciences.
@article{MayoWilson2018EpistemicClosure,
title = {Epistemic Closure in Science},
author = {Conor Mayo-Wilson},
journal = {Philosophical Review},
year = {2018},
volume = {127},
number = {1},
pages = {73--114},
doi = {10.1215/00318108-4230067} }
- Scoring Imprecise Credences: A Mildly Immodest Proposal
with Gregory Wheeler
Philosophy and Phenomenological Research. Volume 93, Issue 1, July 2016. pp. 55-78.
Jim Joyce argues for two amendments to probabilism. The first is the doctrine that credences are rational, or not, in virtue of their accuracy or "closeness to the truth." The second is a shift from a numerically precise model of belief to an imprecise model represented by a set of probability functions (2010). We argue that both amendments cannot be satisfied simultaneously. To do so, we employ a (slightly-generalized) impossibility theorem of Seidenfeld, Schervish, and Kadane (2012), who show that there is no strictly proper scoring rule for imprecise probabilities. The question then is what should give way. Joyce, who is well aware of this no-go result, thinks that a quantifiability constraint on epistemic accuracy should be relaxed to accommodate imprecision. We argue instead that another Joycean assumption - called strict immodesty - should be rejected, and we prove a representation theorem that characterizes all "mildly" immodest measures of inaccuracy.
@article{MayoWilsonWheeler2016,
title = {Scoring Imprecise Credences: A Mildly Immodest Proposal},
author = {Conor Mayo-Wilson and Gregory Wheeler},
journal = {Philosophy and Phenomenological Research},
year = {2016},
volume = {93},
number = {1},
pages = {55--78},
doi = {10.1111/phpr.12256}
}
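As background for the no-go result the paper employs, here is a minimal illustration (standard material, not drawn from the paper) of what strict propriety means in the precise case, using the Brier score; the Seidenfeld, Schervish, and Kadane theorem is that no analogously strictly proper rule exists for imprecise, set-valued credences:

```python
# Strict propriety, precise case: under the Brier score, an agent whose
# credence in a proposition is q uniquely minimizes *expected* inaccuracy
# by reporting p = q -- honesty is strictly optimal.

def brier(p, outcome):
    """Inaccuracy of credence p given the proposition's truth value (0 or 1)."""
    return (outcome - p) ** 2

def expected_brier(p, q):
    """Expected inaccuracy of reporting p when the agent's credence is q."""
    return q * brier(p, 1) + (1 - q) * brier(p, 0)

q = 0.7
scores = {p / 10: expected_brier(p / 10, q) for p in range(11)}
best = min(scores, key=scores.get)
print(best)  # 0.7 -- the honest report is the unique minimizer on this grid
```

The paper's question is what happens when the report is a *set* of probability functions rather than a single p; by the impossibility theorem, no scoring rule retains this strict-honesty property there.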
- Structural Chaos
Philosophy of Science. Vol. 82, No. 5. December 2015. pp. 1236-1247.
Philosophers often distinguish between parameter error and model error. Frigg et al. [2014] argue that the distinction is important because although there are methods for making predictions given parameter error and chaos, there are no methods for dealing with model error and "structural chaos." However, Frigg et al. [2014] neither define "structural chaos" nor explain the relationship between it and chaos (simpliciter). I propose a definition of "structural chaos", and I explain two new theorems that show that if a set of models contains a chaotic function, then the set is structurally chaotic. Finally, I discuss the relationship between my results and structural stability.
@article{MayoWilson2015StructuralChaos,
title = {Structural Chaos},
author = {Conor Mayo-Wilson},
journal = {Philosophy of Science},
volume = {82},
year = {2015},
number = {5},
pages = {1236--1247},
doi = {10.1086/684086}
}
- The Reliability of Testimonial Norms in Scientific Communities
Synthese. Vol. 191, Issue 1. January 2014. pp. 55-78.
Several contemporary debates in the epistemology of testimony are (at least implicitly) motivated by concerns about the reliability of various rules for changing one's beliefs in light of others' claims. Call such rules testimonial norms. To date, epistemologists have neither (i) characterized those features of epistemic communities that influence the reliability of different testimonial norms, nor (ii) evaluated the reliability of different testimonial norms as those features vary. This is the aim of this paper. Employing a formal model of communal scientific inquiry, I argue that miscommunication and the "social structure" of scientific communities can strongly influence reliability of different testimonial norms, where reliability can be made precise in at least four different ways.
@article{MayoWilson2014Testimonial,
title = {The Reliability of Testimonial Norms in Scientific Communities},
author = {Conor Mayo-Wilson},
journal = {Synthese},
year = {2014},
volume = {191},
number = {1},
pages = {55--78},
doi = {10.1007/s11229-013-0320-2} }
- The Limits of Piecemeal Causal Inference
The British Journal for the Philosophy of Science. Vol. 65. Issue 2. June 2014. pp. 213-249.
In medicine and the social sciences, researchers must frequently integrate the findings of many observational studies, which measure overlapping collections of variables. For instance, learning how to prevent obesity requires combining studies that investigate obesity and diet with others that investigate obesity and exercise. Recently developed causal discovery algorithms provide techniques for integrating many studies, but little is known about what can be learned from such algorithms. This paper argues that there are causal facts that one could learn by conducting a large study but which could not be learned by combining many smaller studies. Moreover, I characterize the frequency with which combining many studies increases underdetermination and exactly how much information is lost.
@article{MayoWilson2014LimitsPiecemeal,
title = {The Limits of Piecemeal Causal Inference},
author = {Conor Mayo-Wilson},
journal = {The British Journal for the Philosophy of Science},
year = {2014},
volume = {65},
number = {2},
pages = {213--249},
doi = {10.1093/bjps/axs030} }
- Wisdom of the Crowds vs. Groupthink: Learning in Groups and Isolation
with Kevin Zollman and David Danks
International Journal of Game Theory. Vol. 42. Issue 3. August 2013. pp. 695-723.
New York Times Coverage
We evaluate the asymptotic performance of boundedly-rational strategies in multi-armed bandit problems, where performance is measured in terms of the tendency (in the limit) to play optimal actions in either (i) isolation or (ii) networks of other learners. We show that, for many strategies commonly employed in economics, psychology, and machine learning, performance in isolation and performance in networks are essentially unrelated. Our results suggest that the appropriateness of various, common boundedly-rational strategies depends crucially upon the social context (if any) in which such strategies are to be employed.
@article{MayoWilsonZollmanDanks2013,
title = {Wisdom of the Crowds vs. Groupthink: Learning in Groups and Isolation},
author = {Conor Mayo-Wilson and Kevin Zollman and David Danks},
journal = {International Journal of Game Theory},
year = {2013},
volume = {42},
number = {3},
pages = {695--723},
doi = {10.1007/s00182-012-0329-7} }
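As a concrete, hypothetical example of the kind of boundedly-rational strategy the paper evaluates, here is a sketch of epsilon-greedy play on a two-armed Bernoulli bandit in isolation; the payoff probabilities and parameters are illustrative, not the authors' model:

```python
import random

def epsilon_greedy(payoffs=(0.2, 0.8), epsilon=0.1, steps=5000, seed=0):
    """Play a two-armed Bernoulli bandit: explore with chance epsilon,
    otherwise pull the arm with the best empirical mean so far."""
    rng = random.Random(seed)
    counts = [0, 0]   # pulls per arm
    totals = [0.0, 0.0]  # summed rewards per arm
    for _ in range(steps):
        if rng.random() < epsilon or 0 in counts:
            arm = rng.randrange(2)  # explore (or ensure each arm is sampled)
        else:
            arm = max((0, 1), key=lambda a: totals[a] / counts[a])  # exploit
        reward = 1.0 if rng.random() < payoffs[arm] else 0.0
        counts[arm] += 1
        totals[arm] += reward
    return counts

counts = epsilon_greedy()
print(counts)  # the better arm (index 1) ends up played far more often
```

The paper's point is that rankings of such strategies by asymptotic performance in isolation can come apart from their rankings when each learner also observes the pulls and payoffs of network neighbors.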
- The Problem of Piecemeal Induction
Philosophy of Science. Vol. 78. Issue 5. December, 2011. pp. 864-874.
It is common to assume that the problem of induction arises only because of small sample sizes or unreliable data. In this paper, I argue that the piecemeal collection of data can also lead to underdetermination of theories by evidence, even if arbitrarily large amounts of completely reliable experimental and observational data are collected. Specifically, I focus on the construction of causal theories from the results of many studies (perhaps hundreds), including randomized controlled trials and observational studies, where the studies focus on overlapping, but not identical, sets of variables. Two theorems reveal that, for any collection of variables V, there exist fundamentally different causal theories over V that cannot be distinguished unless all variables are simultaneously measured. Underdetermination can result from piecemeal measurement, regardless of the quantity and quality of the data. Moreover, I generalize these results to show that, a priori, it is impossible to choose a series of small (in terms of number of variables) observational studies that will be most informative with respect to the causal theory describing the variables under investigation. This final result suggests that scientific institutions may need to play a larger role in coordinating differing research programs during inquiry.
@article{MayoWilson2011PiecemealInduction,
title = {The Problem of Piecemeal Induction},
author = {Conor Mayo-Wilson},
journal = {Philosophy of Science},
volume = {78},
number = {5},
year = {2011},
pages = {864--874},
doi = {10.1086/662564}}
- The Independence Thesis: When Epistemic Norms for Individuals and Groups Diverge
with Kevin Zollman and David Danks
Philosophy of Science. Vol. 78, No. 4. October, 2011. pp. 653-677.
In the latter half of the twentieth century, philosophers of science have argued (implicitly and explicitly) that epistemically rational individuals might compose epistemically irrational groups and that, conversely, epistemically rational groups might be composed of epistemically irrational individuals. We call the conjunction of these two claims the Independence Thesis, as they together imply that methodological prescriptions for scientific communities and those for individual scientists might be logically independent of one another. We develop a formal model of scientific inquiry, define four criteria for individual and group epistemic rationality, and then prove that the four definitions diverge, in the sense that individuals will be judged rational when groups are not and vice versa. We conclude by explaining implications of the Independence Thesis for (i) descriptive history and sociology of science and (ii) normative prescriptions for scientific communities.
@article{MayoWilsonZollmanDanks2011,
title = {The Independence Thesis: When Epistemic Norms for Individuals and Groups Diverge},
author = {Conor Mayo-Wilson and Kevin Zollman and David Danks},
journal = {Philosophy of Science},
volume = {78},
number = {4},
year = {2011},
pages = {653--677},
doi = {10.1086/661777}}
- Russell's Logicism and Theory of Coherence
Russell: The Journal of Bertrand Russell Studies. Vol. 31. Summer, 2011. pp. 89-106.
According to Quine, Charles Parsons, Mark Steiner, and others, Russell's logicist project is important because, if successful, it would show that mathematical theorems possess desirable epistemic properties often attributed to logical theorems, such as a prioricity, necessity, and certainty. Unfortunately, Russell never attributed such importance to logicism, and such a thesis contradicts Russell's explicitly stated views on the relationship between logic and mathematics. This raises the question: what did Russell understand to be the philosophical importance of logicism? Building on recent work by Andrew Irvine and Martin Godwyn, I argue that Russell thought a systematic reduction of mathematics increases the certainty of known mathematical theorems (even basic arithmetical facts) by showing mathematical knowledge to be coherently organized. The paper outlines Russell's theory of coherence, and discusses its relevance to logicism and the certainty attributed to mathematics.
@article{MayoWilson2011Russell,
title = {Russell's Logicism and Theory of Coherence},
author = {Conor Mayo-Wilson},
journal = {Russell: The Journal of Bertrand Russell Studies},
volume = {31},
year = {2011},
pages = {89--106} }
- Ockham Efficiency Theorem for Random Empirical Methods
with Kevin Kelly
Journal of Philosophical Logic. Vol. 39, No. 6. 2010. pp. 679-712.
Ockham's razor is the principle that, all other things being equal, scientists ought to prefer simpler theories. In recent years, philosophers have argued that simpler theories make better predictions, possess theoretical virtues like explanatory power, and have other pragmatic virtues like computational tractability. However, such arguments fail to explain how and why a preference for simplicity can help one find true theories in scientific inquiry, unless one already assumes that the truth is simple. One new solution to that problem is the Ockham efficiency theorem, which states that scientists who heed Ockham's razor retract their opinions less often and sooner than do their non-Ockham competitors. The theorem neglects, however, to consider competitors following random ("mixed") strategies and in many applications random strategies are known to achieve better worst-case loss than deterministic strategies. In this paper, we describe two ways to extend the result to a very general class of random, empirical strategies. The first extension concerns expected retractions, retraction times, and errors and the second extension concerns retractions in chance, times of retractions in chance, and chances of errors.
@article{KellyMayoWilson2010b,
title = {Ockham Efficiency Theorem for Random Empirical Methods},
author = {Kevin Kelly and Conor Mayo-Wilson},
journal = {Journal of Philosophical Logic},
volume = {39},
number = {6},
year = {2010},
pages = {679--712} }
- Causal Conclusions That Flip Repeatedly and Their Justification
with Kevin Kelly
Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence. 2010. Eds. Peter Grünwald and Peter Spirtes. pp. 277-286.
Over the past two decades, several consistent procedures have been designed to infer causal conclusions from observational data. We prove that if the true causal network might be an arbitrary, linear Gaussian network or a discrete Bayes network, then every unambiguous causal conclusion produced by a consistent method from non-experimental data is subject to reversal as the sample size increases any finite number of times. That result, called the causal flipping theorem, extends prior results to the effect that causal discovery cannot be reliable on a given sample size. We argue that since repeated flipping of causal conclusions is unavoidable in principle for consistent methods, the best possible discovery methods are consistent methods that retract their earlier conclusions no more than necessary. A series of simulations of various methods across a wide range of sample sizes illustrates concretely both the theorem and the principle of comparing methods in terms of retractions.
@inproceedings{KellyMayoWilson2010a,
title = {Causal Conclusions That Flip Repeatedly and Their Justification},
author = {Kevin Kelly and Conor Mayo-Wilson},
booktitle = {Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence},
editor = {Peter Gr{\"u}nwald and Peter Spirtes},
year = {2010},
pages = {277--286} }
Books and Edited Volumes
- Scientific Collaboration and Collective Knowledge
Editors: Thomas Boyer-Kassem, Conor Mayo-Wilson, and Michael Weisberg.
Despite its growing prevalence and importance, there is relatively little philosophical work analyzing collaborative research in the sciences. What are the benefits and costs of such collaborations, and are current practices for encouraging collaborations optimal? How should credit for discovery and responsibility for error be attributed to large collaborative groups of scientists? How ought collaborating scientists summarize their findings if they disagree about the interpretation of their results? The volume contains contributions from an internationally recognized group of philosophers dedicated to such questions about scientific collaboration.
@book{boyer-kassem_scientific_2017,
address = {Oxford, New York},
title = {Scientific {Collaboration} and {Collective} {Knowledge}: {New} {Essays}},
isbn = {978-0-19-068053-4},
shorttitle = {Scientific {Collaboration} and {Collective} {Knowledge}},
publisher = {Oxford University Press},
editor = {Boyer-Kassem, Thomas and Mayo-Wilson, Conor and Weisberg, Michael},
year = {2017} }
- Functions and Computations
with Wilfried Sieg, Michael Warren, and Dawn McLaughlin. Available via the Open Learning Initiative
An online, interactive textbook for introductory set theory and computability theory. Set theoretic topics include: (1) an axiomatic, logically rigorous presentation of ZF, (2) set theoretic development of the natural numbers and Dedekind's recursion theorem, (3) Cantor's theorem, (4) the Cantor-Bernstein theorem, and (5) the cumulative hierarchy. Recursion theoretic topics include: (1) Gödel's Equational Calculus, (2) recursive functions, (3) the Church-Turing thesis, (4) the halting problem, and (5) the decision problem for first-order predicate logic.
@book{Sieg:Functions,
title = {Functions and Computations},
author = {Wilfried Sieg and Conor Mayo-Wilson and Michael Warren and Dawn McLaughlin},
publisher = {Open Learning Initiative},
year = {2010} }
Book Reviews
- Review of Deborah Mayo's Statistical Inference as Severe Testing
@article{Mayo-Wilson2020:ReviewMayoStatisticalInference,
title = {Review of Deborah Mayo's Statistical Inference as Severe Testing},
author = {Conor Mayo-Wilson},
journal = {Philosophical Review},
note = {Forthcoming},
year = {2020} }
- Review of Gilbert Harman's and Sanjeev Kulkarni's Reliable Reasoning
with Kevin Kelly. Notre Dame Philosophical Reviews. March 19th, 2008.
@article{Mayo-Wilson_Kelly:ReliableReasoningReview,
title = {Review of Gilbert Harman's and Sanjeev Kulkarni's Reliable Reasoning},
author = {Kevin Kelly and Conor Mayo-Wilson},
journal = {Notre Dame Philosophical Reviews},
year = {2008} }
Working Papers
- Robust Bayesianism and the Likelihood Principle
We argue that the likelihood principle (LP) and weak law of likelihood (LL) generalize naturally to settings in which experimenters are justified only in making comparative, non-numerical judgments of the form ``A given B is more likely than C given D.'' To do so, we first formulate qualitative analogs of those theses. Then, using a framework for qualitative conditional probability, we show that, just as the LP characterizes when all Bayesians (regardless of prior) agree that two pieces of evidence are equivalent, so a qualitative/non-numerical version of LP provides sufficient conditions for agreement among experimenters whose degrees of belief satisfy only very weak ``coherence'' constraints. We prove a similar result for LL. We conclude by discussing the relevance of these results to stopping rules.
@unpublished{Mayo-Wilson_Saraf_2020,
title = {Qualitative Robust Bayesianism and the Likelihood Principle},
author = {Mayo-Wilson, Conor and Saraf, Aditya},
note = {Unpublished manuscript},
year = {2020} }
- A Qualitative Generalization of Birnbaum's Theorem
We prove a generalization of Birnbaum's theorem, which states that the sufficiency and conditionality principles together entail the likelihood principle. Birnbaum's theorem poses a dilemma for frequentists, who typically accept versions of the former two principles but reject the third. Our generalization of Birnbaum's theorem relies only on axioms for qualitative/comparative, conditional probability.
@unpublished{Mayo-Wilson_2020,
title = {A Qualitative Generalization of Birnbaum's Theorem},
author = {Mayo-Wilson, Conor},
note = {Unpublished manuscript},
year = {2020} }
Old Papers (Not currently under revision)
- Peirce and Brouwer
Comments are welcome. Please do not cite without my permission.
Although C.S. Peirce's logic has been studied extensively, few have noticed the remarkable resemblance between his ideas on continuity and those of L.E.J. Brouwer. This oversight is especially surprising because Peirce explicitly denies that the law of excluded middle holds for propositions concerning real numbers. This paper provides a detailed comparison of C.S. Peirce and L.E.J. Brouwer's concepts of continuity and the logic of real numbers. I will trace three major themes in their respective work, which highlight the striking similarities in their views about the creation, composition, and logic of the continuum.
@unpublished{Mayo-Wilson:PeirceBrouwer,
title = {Peirce and Brouwer},
author = {Conor Mayo-Wilson},
note = {Unpublished manuscript; comments welcome} }