Published / Forthcoming

Journal Articles

Conciliatory views, higher-order disagreements, and defeasible logic (2022), Synthese 200(2), 173: 1–23.

Abstract: Conciliatory views of disagreement say, roughly, that it’s rational for you to become less confident in your take on an issue if you find out that an epistemic peer’s take on it is the opposite. Their intuitive appeal notwithstanding, there are well-known worries about the behavior of conciliatory views in scenarios involving higher-order disagreements, which include disagreements over these views themselves and disagreements over the peer status of alleged epistemic peers. This paper does two things. First, it explains how the core idea behind conciliatory views can be expressed in a defeasible logic framework. The result is a formal model that’s particularly useful for thinking about the behavior of conciliatory views in cases involving higher-order disagreements. Second, the paper uses this model to resolve three paradoxes associated with disagreements over epistemic peerhood.
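
Illustration (a minimal sketch, not the paper’s actual model; the rule names, priorities, and engine are all assumptions made for exposition): the core conciliatory idea can be approximated with prioritized defeasible rules, where an applicable conflicting rule of strictly higher priority defeats a weaker one.

    # Toy prioritized-defaults engine, loosely in the spirit of a
    # defeasible-logic treatment of conciliationism (illustrative only).
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        name: str
        premises: frozenset   # literals that must already be derived
        conclusion: str       # a literal, e.g. "p" or "~p"
        priority: int         # higher number = stronger rule

    def negate(lit: str) -> str:
        return lit[1:] if lit.startswith("~") else "~" + lit

    def derive(facts: set, rules: list) -> set:
        """Forward-chain: a rule fires if its premises hold and no
        applicable rule of strictly higher priority supports the
        opposite conclusion."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for r in rules:
                if not r.premises <= derived or r.conclusion in derived:
                    continue
                defeated = any(
                    s.priority > r.priority
                    and s.premises <= derived
                    and s.conclusion == negate(r.conclusion)
                    for s in rules
                )
                if not defeated:
                    derived.add(r.conclusion)
                    changed = True
        return derived

    # Your own assessment supports p; news of a peer's disagreement
    # activates a stronger conciliatory rule that defeats it.
    rules = [
        Rule("own_take", frozenset(), "p", priority=1),
        Rule("conciliate", frozenset({"peer_disagrees"}), "~p", priority=2),
    ]
    print(derive(set(), rules))               # {'p'}
    print(derive({"peer_disagrees"}, rules))  # {'peer_disagrees', '~p'}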

Conciliatory reasoning, self-defeat, and abstract argumentation (2021), The Review of Symbolic Logic, First View, 1–48.

Abstract: According to conciliatory views on the significance of disagreement, it’s rational for you to become less confident in your take on an issue if your epistemic peer’s take on it is different. These views are intuitively appealing, but they also face a powerful objection: in scenarios that involve disagreements over their own correctness, conciliatory views appear to self-defeat and, thereby, issue inconsistent recommendations. This paper provides a response to this objection. Drawing on work from the defeasible logics paradigm and abstract argumentation, it develops a formal model of conciliatory reasoning and explores its behavior in the troubling scenarios. The model suggests that the recommendations conciliatory views issue in such scenarios are perfectly reasonable, even if outwardly they may look odd.
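
Illustration (a minimal sketch in the standard Dung framework on which abstract argumentation builds; the two-argument scenario is an assumption made for exposition, not the paper’s example): the grounded extension is the least fixed point of the characteristic function, computable by iterating from the empty set.

    # Grounded extension of an abstract argumentation framework.
    def grounded_extension(args, attacks):
        """attacks: set of (attacker, target) pairs. Returns the least
        fixed point of the characteristic function F, where F(S) is the
        set of arguments whose every attacker is attacked by S."""
        attackers = {a: {x for (x, y) in attacks if y == a} for a in args}

        def defended(a, s):
            return all(any((d, b) in attacks for d in s) for b in attackers[a])

        s = set()
        while True:
            nxt = {a for a in args if defended(a, s)}
            if nxt == s:
                return s
            s = nxt

    # Schematic self-defeat scenario: D is a higher-order doubt that
    # attacks the conciliatory verdict C and is itself unattacked.
    args = {"C", "D"}
    attacks = {("D", "C")}
    print(grounded_extension(args, attacks))  # {'D'}: C is out, D is in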

Misleading higher-order evidence, conflicting ideals, and defeasible logic (2021), Ergo 8(6): 141–74.

Abstract: Thinking about misleading higher-order evidence naturally leads to a puzzle about epistemic rationality: If one’s total evidence can be radically misleading regarding itself, then two widely accepted requirements of rationality come into conflict, suggesting that there are rational dilemmas. This paper focuses on an often misunderstood and underexplored response to this (and similar) puzzles, the so-called conflicting-ideals view. Drawing on work from defeasible logic, I propose understanding this view as a move away from the default metaepistemological position, according to which rationality requirements are strict and governed by a strong but never explicitly stated logic, toward the more unconventional view, according to which requirements are defeasible and governed by a comparatively weak logic. When understood this way, the response is not committed to dilemmas.

A curious dialogical logic and its composition problem (2014), with Sara Uckelman and Jesse Alama, Journal of Philosophical Logic 43(6): 1065–100.

Abstract: Dialogue semantics for logic are two-player logic games between a Proponent who puts forward a logical formula φ as valid or true and an Opponent who disputes this. An advantage of the dialogical approach is that it is a uniform framework from which different logics can be obtained through only small variations of the basic rules. We introduce the composition problem for dialogue games as the problem of resolving, for a set S of rules for dialogue games, whether the set of S-dialogically valid formulas is closed under modus ponens. Solving the composition problem is fundamental for the dialogical approach to logic; despite its simplicity, it often requires an indirect solution with the help of significant logical machinery such as cut-elimination. Direct solutions to the composition problem can, however, sometimes be had. As an example, we give a set N of dialogue rules which is well-justified from the dialogical point of view, but whose set of N-dialogically valid formulas is both non-trivial and non-standard. We prove that the composition problem for N can be solved directly, and introduce a tableaux system for N.
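
Illustration (a toy check, not the paper’s method; real composition arguments concern infinite sets of validities, and the encoding here is an assumption for exposition): the property at issue is closure under modus ponens, which is easy to state concretely for a finite set of formulas.

    # Witnesses that a finite set of formulas fails to be MP-closed.
    # Atoms are strings; implications are triples ("->", phi, psi).
    def mp_counterexamples(valid):
        """Yield (phi, psi) such that phi and ("->", phi, psi) are in
        `valid` but psi is not."""
        for f in valid:
            if isinstance(f, tuple) and f[0] == "->":
                _, phi, psi = f
                if phi in valid and psi not in valid:
                    yield (phi, psi)

    valid = {"p", ("->", "p", "q")}          # q missing: not MP-closed
    print(list(mp_counterexamples(valid)))   # [('p', 'q')]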

Chapters, Proceedings, and Technical Reports

XAI and philosophical work on explanation: A roadmap (2022), with Thomas Raleigh, in Boella G. et al. (eds.) Proceedings of the 1st Workshop on Bias, Ethical AI, Explainability, and the Role of Logic and Logic Programming (BEWARE), co-located with AIxIA 2022. CEUR-WS.org/Vol-3319: 101–6.

Abstract: What Deep Neural Networks (DNNs) can do is highly impressive, yet they are notoriously opaque. Responding to the worries associated with this opaqueness, the flourishing field of XAI has produced a plethora of methods purporting to explain the workings of DNNs. Unsurprisingly, a whole host of questions revolves around the notion of explanation central to this field. This note surveys recent work in which these questions are tackled from the perspective of philosophical ideas on explanations and models in science.

Moral principles: Contributory, hedged, mixed (2021), in Liu F. et al. (eds.) Deontic Logic and Normative Systems: 15th International Conference (DEON2020/21, Munich). London: College Publications: 272–90.

Abstract: It’s natural to think that the principles expressed by the statements “Promises ought to be kept” and “We ought to help those in need” are defeasible. But how are we to make sense of this defeasibility? On one proposal, moral principles have hedges, or built-in “unless” clauses specifying the conditions under which the principle doesn’t apply. On another, such principles are contributory and, thus, do not specify which actions ought to be carried out, but only what counts in favor of or against them. Drawing on a defeasible logic framework, this paper sets up three models: one model for each proposal, as well as a third model capturing a mixed view on principles that combines them. It then explores the structural connections between the three models and establishes some equivalence results, suggesting that the seemingly different views captured by the models are closer than standardly thought.
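
Illustration (a minimal sketch; the weighing scheme and the exception set are assumptions made for exposition, not the paper’s models): the two readings of a principle can be caricatured as follows, with the contributory reading weighing pro and con reasons and the hedged reading applying outright unless an exception is triggered.

    # Two toy readings of "Promises ought to be kept" (illustrative only).

    # Contributory: principles supply pro/con reasons; the verdict
    # weighs them rather than following from any single principle.
    def contributory_verdict(reasons):
        # reasons: list of (weight, polarity), polarity +1 for, -1 against
        score = sum(w * p for w, p in reasons)
        return "ought" if score > 0 else "not-ought"

    # Hedged: the principle applies outright unless one of its built-in
    # "unless" clauses is triggered.
    def hedged_verdict(promise_made, exceptions):
        return "ought" if promise_made and not exceptions else "not-ought"

    print(contributory_verdict([(3, +1), (1, -1)]))         # 'ought'
    print(hedged_verdict(True, {"promisee released you"}))  # 'not-ought'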

Deliberating between backward and forward induction reasoning: First steps (2015), with Eric Pacuit, in Ramanujam R. (ed.) Proceedings of the 15th Conference on the Theoretical Aspects of Rationality and Knowledge (TARK XV): 153–61.

Abstract: Backward and forward induction can be viewed as two styles of reasoning in dynamic games. Since each prescribes taking a different attitude towards the past moves of the other player(s), the strategies they identify as rational are sometimes incompatible. Our goal is to study players who are able to deliberate between backward and forward induction, as well as conditions under which one is superior to the other. This extended abstract is our first step towards this goal. We present an extension of Stalnaker’s game models, in which the players can make “trembling hand” mistakes. This means that when a player observes an unexpected move, she has to figure out whether it is a result of a deliberate choice or a mistake, thereby committing herself to one of the two styles of reasoning.
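
Illustration (the standard textbook backward-induction algorithm, not the paper’s extended Stalnaker models; node labels and payoffs are assumptions made for exposition): backward induction solves a finite perfect-information game tree from the leaves up, and it is the baseline on which belief revision about “trembling hand” mistakes is layered.

    # Backward induction on a finite perfect-information game tree.
    # Nodes: ("leaf", (u0, u1)) or ("move", name, player, {action: subtree}).
    def backward_induction(node, plan):
        if node[0] == "leaf":
            return node[1]
        _, name, player, children = node
        best_action, best = None, None
        for action, subtree in children.items():
            payoffs = backward_induction(subtree, plan)
            if best is None or payoffs[player] > best[player]:
                best_action, best = action, payoffs
        plan[name] = best_action   # record the rational choice at this node
        return best

    game = ("move", "root", 0, {
        "out": ("leaf", (2, 2)),
        "in": ("move", "n1", 1,
               {"left": ("leaf", (3, 1)), "right": ("leaf", (0, 0))}),
    })
    plan = {}
    print(backward_induction(game, plan))  # (3, 1)
    print(plan)                            # {'n1': 'left', 'root': 'in'}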

Logic in Latvia (2013), with Jurģis Šķilters, in Schumann A. (ed.) Logic in Central and Eastern Europe: History, Science and Discourse, University Press of America.

Dialogue games in classical logic (2011), with Jesse Alama and Sara Uckelman, in Giese M. and Kuznets R. (eds.) TABLEAUX 2011: Workshops, Tutorials, and Short Papers, Technical Report IAM-11-002, Universität Bern: 82–6.