
Jonathan Vandenburgh

My research focuses on expressions of uncertainty and modality in language and on the factors that influence belief formation. My dissertation develops a model of how relevant alternatives figure into reasoning, applying this model to the meaning and epistemology of conditionals, the theory of knowledge, and the epistemology of stereotyping. You can read more about my dissertation here. I am currently developing projects on utility theory as a model of reasoning and on the epistemology of classification.

Below, you can find a list of papers. For papers not available online, please contact me if you would like to see a draft.

Conditional Learning Through Causal Models

Available in Synthese: https://rdcu.be/b7VNz

Conditional learning, where agents learn a conditional sentence 'If A, then B,' is difficult to incorporate into existing Bayesian models of learning. This is because conditional learning is not uniform: in some cases, learning a conditional requires decreasing the probability of the antecedent, while in other cases, the antecedent probability stays constant or increases. I argue that how one learns a conditional depends on the causal structure relating the antecedent and the consequent, leading to a causal model of conditional learning. This model extends traditional Bayesian learning by incorporating causal models into agents' epistemic states. On this theory, conditional learning proceeds in two steps. First, an agent learns a new causal model with the appropriate relationship between the antecedent and the consequent. Then, the agent narrows down the set of possible worlds to include only those which make the conditional proposition true. This model of learning can incorporate both standard cases of Bayesian learning and the non-uniform learning required to learn conditional information.
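
To make the two-step picture concrete, here is a minimal Python sketch of the second step, with a toy joint distribution over worlds chosen purely for illustration (the numbers and variable names are my assumptions, not part of the paper). Step one, adopting a causal model with the appropriate antecedent-consequent relationship, is taken as already done:

```python
# Minimal sketch: step two of the learning model, on a toy distribution.
# The joint probabilities below are illustrative assumptions.

worlds = {  # joint probability over truth values of (A, B)
    (True, True): 0.3,
    (True, False): 0.3,
    (False, True): 0.2,
    (False, False): 0.2,
}

def learn_conditional(worlds):
    """Keep only the worlds compatible with 'If A, then B' (i.e., drop the
    A-and-not-B worlds) and renormalize the remaining probabilities."""
    kept = {w: p for w, p in worlds.items() if not (w[0] and not w[1])}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

posterior = learn_conditional(worlds)
p_antecedent = sum(p for (a, _), p in posterior.items() if a)
print(round(p_antecedent, 3))  # 0.429: the antecedent probability drops here
```

Whether the antecedent's probability drops, as in this toy case, or instead stays fixed depends on the causal model adopted in the first step.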

Causal Models and the Logic of Counterfactuals

Available on PhilPapers: https://philpapers.org/archive/VANCMA-6.pdf

Causal models provide a framework for making counterfactual predictions, which makes them useful for evaluating the truth conditions of counterfactual sentences. However, current causal models for counterfactual semantics face logical limitations compared to the alternative similarity-based approaches: they apply only to a limited subset of counterfactuals, and the connection to counterfactual logic is not straightforward. This paper offers a causal framework for the semantics of counterfactuals which addresses these logical issues. It extends the causal approach to handle more complex counterfactuals, including backtracking counterfactuals and those with logically complex antecedents. It also uses the notion of causal worlds to define a selection function and shows that this selection function satisfies familiar logical properties. While some limitations remain, especially for counterfactuals that require breaking the laws of the causal model, the framework resolves many of the logical problems facing existing causal models.
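
To illustrate the intervention-style evaluation that causal models support, here is a small Python sketch; the toy structural model and variable names are illustrative assumptions rather than the paper's formalism:

```python
# Sketch of counterfactual evaluation by intervention on a structural model.
# Assumes an acyclic model whose equations are listed in causal order.

def evaluate(equations, exogenous, intervention=None):
    """Solve the structural equations, holding any intervened variables
    fixed at their stipulated values (severing their incoming causal links)."""
    values = dict(exogenous)
    if intervention:
        values.update(intervention)
    for var, fn in equations.items():
        if intervention and var in intervention:
            continue  # intervened variables keep their stipulated values
        values[var] = fn(values)
    return values

# Toy model: the ground is wet if it rains or the sprinkler runs.
equations = {"wet": lambda v: v["rain"] or v["sprinkler"]}
actual = {"rain": False, "sprinkler": False}

# 'If it had rained, the ground would have been wet': intervene on 'rain'.
print(evaluate(equations, actual, intervention={"rain": True})["wet"])  # True
```

Backtracking counterfactuals and logically complex antecedents, which the paper extends the framework to handle, require more than this simple intervention operation.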

Causal Models and the Relevant Alternatives Theory of Knowledge

Available on PhilPapers: https://philpapers.org/rec/VANCMA-7.pdf

One approach to knowledge, termed the relevant alternatives theory, holds that a belief amounts to knowledge if one can eliminate all relevant alternatives to the belief in one's epistemic situation. This paper uses causal graphical models to formalize the relevant alternatives approach to knowledge. On this theory, an epistemic situation is encoded through the causal relationships between propositions, which determine which alternatives are relevant and which are irrelevant. This formalization entails that statistical evidence is not sufficient for knowledge, provides a simple way to incorporate epistemic contextualism, and can exclude many Gettier cases from counting as knowledge. The interpretation in terms of causal models yields more precise predictions for the relevant alternatives theory, strengthening the case for it as a theory of knowledge.
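
The core test is simple to state in code. The sketch below takes the set of relevant alternatives as given (how the causal structure fixes that set is the paper's contribution) and runs Dretske's painted-mule case as an example; all names here are illustrative assumptions:

```python
# Hedged sketch of the relevant-alternatives test: a belief counts as
# knowledge only if the evidence eliminates every relevant alternative.

def knows(belief, evidence, relevant_worlds):
    """belief, evidence: predicates over worlds; relevant_worlds: the worlds
    the causal model marks as relevant in the epistemic situation."""
    alternatives = [w for w in relevant_worlds if not belief(w)]
    return all(not evidence(w) for w in alternatives)

relevant_worlds = [
    {"animal": "zebra", "looks_striped": True},
    {"animal": "painted_mule", "looks_striped": True},  # relevant alternative
]
belief = lambda w: w["animal"] == "zebra"
evidence = lambda w: w["looks_striped"]  # visual evidence only

# The painted-mule world also looks striped, so the evidence cannot
# eliminate it, and the belief falls short of knowledge.
print(knows(belief, evidence, relevant_worlds))  # False
```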

Triviality Results, Conditional Probability, and Restrictor Conditionals

Available on PhilPapers: https://philpapers.org/rec/VANTRC-4.pdf

Conditional probability is often used to represent the probability of the conditional. However, triviality results suggest that the thesis that the probability of the conditional always equals conditional probability leads to untenable conclusions. In this paper, I offer an interpretation of this thesis in a possible worlds framework, arguing that the triviality results make assumptions at odds with the use of conditional probability. I argue that these assumptions come from a theory called the operator theory and that the rival restrictor theory can avoid these problematic assumptions. In doing so, I argue that recent extensions of the triviality arguments to restrictor conditionals fail, making assumptions which are only justified on the operator theory.
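
For reference, the standard form of the triviality argument (due to Lewis) can be reconstructed in a few lines, given the assumption, which the paper questions, that the thesis survives conditioning within a closed class of probability functions:

```latex
% Suppose P(A \to B) = P(B \mid A) for every P in a class closed under
% conditioning. By the law of total probability,
P(A \to B) = P(A \to B \mid B)\,P(B) + P(A \to B \mid \neg B)\,P(\neg B).
% Applying the thesis within the conditioned functions P(\cdot \mid B)
% and P(\cdot \mid \neg B),
P(A \to B \mid B) = P(B \mid A \wedge B) = 1, \qquad
P(A \to B \mid \neg B) = P(B \mid A \wedge \neg B) = 0,
% so P(A \to B) = P(B), and hence P(B \mid A) = P(B): any two propositions
% would be probabilistically independent, which is untenable.
```

On the restrictor view, 'if' restricts the probability operator rather than contributing a standalone conditional proposition, which is one way of seeing how the restrictor theory can escape the conditioned quantities the argument relies on.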

Learning as Hypothesis Testing: Learning Conditional and Probabilistic Information

Available on PhilPapers: https://philpapers.org/rec/VANLAH-5.pdf

The history of science is often conceptualized through 'paradigm shifts,' where the accumulation of evidence leads to abrupt changes in scientific theories. Experimental evidence suggests that this kind of hypothesis revision also occurs in more mundane circumstances, such as when children learn concepts and when adults engage in strategic behavior. In this paper, I argue that the model of hypothesis testing can explain how people learn certain complex, theory-laden propositions such as conditional sentences ('If A, then B') and probabilistic constraints ('The probability that A is p'). Theories are formalized as probability distributions over a set of possible outcomes, and theory change is triggered by a constraint which is incompatible with the initial theory. This leads agents to consult a higher-order probability function, or a 'prior over priors,' to choose the most likely alternative theory which satisfies the constraint. The hypothesis testing model is applied to three examples: a simple probabilistic constraint involving coin bias, the Sundowners problem for conditional learning, and the Judy Benjamin problem for learning conditional probability constraints. The model of hypothesis testing is contrasted with the more conservative learning theory of relative information minimization, which dominates current approaches to learning conditional and probabilistic information.
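
A toy version of the coin-bias example makes the contrast with relative information minimization concrete. In this Python sketch, theories are coin biases, and the grid of candidates, the prior over priors, and the constraint are all illustrative assumptions:

```python
# Sketch: hypothesis testing vs. relative information minimization for a
# probabilistic constraint on a coin's bias. All numbers are assumptions.

import math

theories = [0.3, 0.5, 0.7, 0.9]          # candidate biases P(heads)
prior_over_priors = {0.3: 0.4, 0.5: 0.3, 0.7: 0.1, 0.9: 0.2}

initial = 0.5                            # the agent's current theory
satisfies = lambda p: p >= 0.6           # learned: biased toward heads
candidates = [p for p in theories if satisfies(p)]

# Hypothesis testing: the constraint is incompatible with the initial theory,
# so consult the prior over priors and adopt the most likely theory that
# satisfies the constraint.
print(max(candidates, key=lambda p: prior_over_priors[p]))   # 0.9

# Relative information minimization: adopt the constraint-satisfying theory
# closest in KL divergence to the initial one -- the most conservative move.
def kl(p, q):  # KL divergence between Bernoulli(p) and Bernoulli(q)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

print(min(candidates, key=lambda p: kl(p, initial)))         # 0.7
```

The two rules recommend different revisions here: hypothesis testing jumps to the most plausible theory meeting the constraint, while information minimization stays as close as possible to the initial theory.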