# Research

## Publications

(2023). An Algorithmic Impossible-Worlds Model of Belief and Knowledge. The Review of Symbolic Logic. DOI: 10.1017/S1755020323000059.

In this paper, I develop an algorithmic impossible-worlds model of belief and knowledge that provides a middle ground between models that entail that everyone is logically omniscient and those that are compatible with even the most egregious kinds of logical incompetence. In outline, the model entails that an agent believes (knows) 𝜙 just in case she can easily (and correctly) compute that 𝜙 is true and thus has the capacity to make her actions depend on whether 𝜙. The model thereby captures the standard view that belief and knowledge ground, or are constitutively connected to, dispositions to act. As I explain, the model improves upon standard algorithmic models developed by Parikh, Halpern et al., and Duc by, among other things, integrating them into an impossible-worlds framework. The model also avoids some important disadvantages of recent candidate middle-ground models based on dynamic epistemic logic or step logic, and it can subsume their most important advantages.

(forthcoming). A Carnapian Solution to the Puzzle of Extrinsic Justifications. In Sophia Arbeiter & Juliette Kennedy (Eds.), Outstanding Contributions to Logic: Penelope Maddy. Springer.

Set theorists commonly give extrinsic justifications for axioms, i.e., they present the fruitfulness of axioms with respect to certain mathematical goals as evidence for their truth. The puzzle of extrinsic justifications is the question of how the fruitfulness of an axiom can be evidence for its truth. In this paper, I first discuss some problems for Penelope Maddy’s solution to the puzzle of extrinsic justifications. I then propose an alternative, Carnapian solution according to which the fruitfulness of an axiom can be evidence for its truth because there are analytic truths to the effect that sets are fruitful with respect to certain mathematical goals. I argue that the Carnapian solution is independently plausible, that it avoids the problems faced by Maddy’s solution, and that it neatly aligns with Maddy’s epistemological and metaphysical views concerning set theory, as well as with her methodological commitments.

(2023). A Metalinguistic and Computational Approach to the Problem of Mathematical Omniscience. Philosophy and Phenomenological Research 106, 455–474.

In this paper, I defend the metalinguistic solution to the problem of mathematical omniscience for the possible-worlds account of propositions by combining it with a computational model of knowledge and belief. The metalinguistic solution states that the objects of belief and ignorance in mathematics are relations between mathematical sentences and what they express. The most pressing problem for the metalinguistic strategy is that it still ascribes too much mathematical knowledge under the standard possible-worlds model of knowledge and belief on which these are closed under entailment. I first argue that Stalnaker’s fragmentation strategy is insufficient to solve this problem. I then develop an alternative, computational strategy: I propose a model of mathematical knowledge and belief adapted from the algorithmic model of Halpern et al. which, when combined with the metalinguistic strategy, entails that mathematical knowledge and belief require computational abilities to access metalinguistic information, and thus aren’t closed under entailment. As I explain, the computational model generalizes beyond mathematics to a version of the functionalist theory of knowledge and belief that motivates the possible-worlds account in the first place. I conclude that the metalinguistic and computational strategies yield an attractive functionalist, possible-worlds account of mathematical content, knowledge, and inquiry.

(2022). The Role of Questions, Circumstances, and Algorithms in Belief (with Jens Kipper and Alexander W. Kocurek). In Marco Degano, et al. (Eds.), Proceedings of the 23rd Amsterdam Colloquium, 181–187.

A recent approach to the problem of logical omniscience holds that belief is question-sensitive: what an agent believes depends on what question they try to answer (Pérez Carballo, 2016; Yalcin, 2018; Hoek, 2022). While the question-sensitive approach can avoid some logical omniscience problems, we argue that it suffers from nearby problems. First, these accounts all validate closure principles that are just as implausible as the ones they were designed to avoid. Second, question-sensitivity by itself isn’t suitable for explaining many kinds of failures of logical omniscience. Recognizing the flaws of this approach, however, naturally leads to a more promising solution. Our account generalizes the question-sensitive approach by appealing to (1) the defeasible nature of dispositions toward action associated with belief and (2) the algorithms an agent uses to make decisions. On our view, then, believing that 𝜙 means being disposed to employ an algorithm which outputs acting on the information that 𝜙 in normal circumstances associated with 𝜙. We argue that this account naturally generalizes the question-sensitive accounts while avoiding their faults.

In this paper, we offer a novel defense of descriptivism about reference. Our argument is based on principles about the relevance of speaker intentions to reference that are shared by many opponents of descriptivism, including Saul Kripke. We first show that two such principles that are plausibly endorsed by Kripke and other prominent externalists in fact entail descriptivism. The first principle states that when certain kinds of speaker intentions are present, they suffice to determine and explain reference. According to the second principle, certain speaker intentions must be present whenever something determines or explains reference. We then go on to make these principles more precise and argue that it would be costly to deny either of them. Since on the more precise understanding we suggest, the conjunction of these principles still entails descriptivism, we conclude that opponents of descriptivism have to give up some highly plausible assumption about the relation between speaker intentions and reference.

(2021). Are Scrutability Conditionals Rationally Deniable? (with Jens Kipper). Analysis 81(3), 452–461.

Chalmers has used Bayesian considerations to argue that some sentences—in particular, scrutability conditionals—aren’t rationally revisable without meaning change. He believes that Bayesianism thus provides support for the existence of a priori truths. However, as we argue, Chalmers’s arguments leave open that every sentence is rationally deniable without meaning change. If this were the case, this would not only undermine Chalmers’s case for the a priori, but it would be devastating for large parts of his philosophical program, including his scrutability theses and his epistemic two-dimensionalism. We suggest that Chalmers’s best option is to hold that well-known convergence theorems apply to his framework, which would mean that ideally rational subjects converge on the truth of scrutability conditionals. However, our discussion reveals that showing that these theorems apply in effect requires assuming scrutability. Consequently, Bayesianism doesn’t conflict with Chalmers’s scrutability framework, but it doesn’t support it, either.

In this paper, I argue from a metasemantic principle to the existence of analytic sentences. According to the metasemantic principle, an external feature is relevant to determining which concept one expresses with an expression only if one is disposed to treat this feature as relevant. This entails that if one isn’t disposed to treat external features as relevant to determining which concept one expresses, and one still expresses a given concept, then something other than external features must determine that one does. I argue that, in such cases, what determines that one expresses the concept also puts one in a position to know that certain sentences are true—these sentences are thus analytic relative to this determination basis. Finally, I argue that there are such cases: some sentences are analytic relative to what determines that we express certain key concepts, and these sentences include ones that have always been thought to be the best candidates for being analytic, namely, stipulative truths, and first principles of mathematics.

(2020). Descriptivism about the Reference of Set-Theoretic Expressions: Revisiting Putnam’s Model-Theoretic Arguments. The Monist 103(4), 442–454.

Putnam’s model-theoretic arguments for the indeterminacy of reference have been taken to pose a special problem for mathematical languages. In this paper, I argue that if one accepts that there are theory-external constraints on the reference of at least some expressions of ordinary language, then Putnam’s model-theoretic arguments for mathematical languages don’t go through. In particular, I argue for a kind of descriptivism about mathematical expressions according to which their reference is “anchored” in the reference of expressions of ordinary language. These anchors add enough to the content of mathematical expressions to forestall the radical kind of indeterminacy that model-theoretic arguments are purported to show, while still leaving room for a plausible, moderate kind of indeterminacy.

According to the iterative conception of sets, standardly formalized by ZFC, there is no set of all sets. But why is there no set of all sets? A simple-minded, though unpopular, “minimal” explanation for why there is no set of all sets is that the supposition that there is one contradicts some axioms of ZFC. In this paper, I first explain the core complaint against the minimal explanation, and then argue against the two main alternative answers to the guiding question. I conclude the paper by outlining a close alternative to the minimal explanation, the conception-based explanation, that avoids the core complaint against the minimal explanation.

(2019). Truth in Journalism. In James E. Katz & Kate Mays (Eds.), Journalism and Truth in an Age of Social Media, 103–116. Oxford University Press.

In order to fulfill their role in society, professional journalists must deliver truths. But truth-telling is not the only requirement of the goal of journalism. What is more, some of the other requirements of journalism can make it difficult for journalists to deliver truths, and may even force them to depart from truth in certain ways. In this paper, I make the requirements of the goal of journalism explicit, and I explain how conflicts between them can arise. I then make some suggestions for balancing these requirements that could help journalists regain the trust of the public.

In this paper, I introduce and defend a notion of analyticity for formal languages. I first uncover a crucial flaw in Timothy Williamson’s famous argument template against analyticity, when it is applied to sentences of formal mathematical languages. Williamson’s argument targets the popular idea that a necessary condition for analyticity is that whoever understands an analytic sentence assents to it. Williamson argues that for any given candidate analytic sentence, there can be people who understand that sentence and yet who fail to assent to it. I argue that, on the most natural understanding of the notion of assent when it is applied to sentences of formal mathematical languages, Williamson’s argument fails. Formal analyticity is the notion of analyticity that is based on this natural understanding of assent. I go on to develop the notion of formal analyticity and defend the claim that there are formally analytic sentences and rules of inference. I conclude by showing the potential payoffs of recognizing formal analyticity.

(2018). Leibniz’s Formal Theory of Contingency (with Jeffrey McDonough). History of Philosophy & Logical Analysis 21(1), 17–43.

This essay argues that, with his much-maligned “infinite analysis” theory of contingency, Leibniz is onto something deep and important—a tangle of issues that wouldn’t be sorted out properly for centuries to come, and then only by some of the greatest minds of the twentieth century. The first section places Leibniz’s theory in its proper historical context and draws a distinction between Leibniz’s logical and meta-logical discoveries. The second section argues that Leibniz’s logical insights initially make his “infinite analysis” theory of contingency more rather than less perplexing. The last two sections argue that Leibniz’s meta-logical insights, however, point the way towards a better appreciation of (what we should regard as) his formal theory of contingency, and its correlative, his formal theory of necessity.

(2016). Leibniz’s Formal Theory of Contingency Extended (with Jeffrey McDonough). In Ute Beckmann, et al. (Eds.), “Für unser Glück oder das Glück anderer”: Vorträge des X. Internationalen Leibniz-Kongresses, 451–466. Georg Olms Verlag.

This essay develops our meta-logical interpretation of Leibniz’s formal theory of contingency by taking up two additional issues not fully addressed in our earlier efforts. The first issue concerns the relationship between Leibniz’s formal theory of contingency and his views on species and essentialism. The second issue concerns the relationship between Leibniz’s formal theory of contingency and the modal status of the actual world.