About Me / CV


Dr Stephen Dewitt

I am currently a Senior Teaching Fellow at University College London. I teach both qualitative and quantitative research methods, incorporating a Bayesian and pragmatic perspective.

I conduct research on how people reason in the real, complex, and uncertain world we live in, using a range of mixed-methods approaches as appropriate to the primary aim of uncovering my participants' cognitive processes and reasoning. I also previously conducted research in behaviour change, examining the most effective methods for people to introduce healthier habits into their working day.

I enjoy giving public talks on my work and recently spoke at 'Skeptics in the Pub' events in Brighton and Worthing.

Current research questions

  • How do humans update their estimates of propensities (e.g. for a medical patient to relapse) in light of (uncertain) new information?

  • How do humans make inferences based on a single observation? What is the role of prior knowledge?

  • Is confirmation bias an inevitable feature of reasoning in an uncertain world?

  • Do classic judgement and decision-making findings apply under 'second-order' uncertainty?

Key previous work

Propensities and second order uncertainty: A modified taxi cab problem

The study of people's ability to engage in causal probabilistic reasoning has typically used fixed-point estimates for key figures. For example, in the classic taxi-cab problem, where a witness provides evidence on which of two cab companies (the more common 'green' or the less common 'blue') was responsible for a hit-and-run incident, solvers are told the witness's ability to judge cab colour is 80%. In reality, there is likely to be some uncertainty around this estimate (perhaps we tested the witness and they were correct 4/5 times), known as second-order uncertainty, producing a distribution rather than a fixed probability. As well as more closely matching real-world reasoning, an important ramification of this is that our best estimate of the witness's accuracy can and should change when the witness claims that the cab was blue. We present a Bayesian Network model of this problem and show that, while the witness's report does increase the probability of the cab being blue, it simultaneously decreases our estimate of their future accuracy (because blue cabs are less common).

We presented this version of the problem to 131 participants, requiring them to update their estimates of both the probability that the cab involved was blue and the witness's accuracy after the witness claimed it was blue. We also required participants to explain their reasoning process and asked follow-up questions to probe various aspects of their reasoning. While some participants responded normatively, the majority self-reported 'assuming' one of the probabilities was a certainty. Around a quarter assumed the cab was green, and thus that the witness was wrong, decreasing their estimate of the witness's accuracy. Another quarter assumed the witness was correct and actually increased their estimate of their accuracy, showing a circular logic similar to that seen in the confirmation bias / belief polarisation literature. Around half of participants refused to make any change, with convergent evidence suggesting that these participants do not see the relevance of the witness's report to their accuracy before we know for certain whether the witness is correct or incorrect.
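The normative reasoning described above can be sketched numerically. The following is a minimal illustration, not the paper's actual model: it assumes the classic base rates (85% green, 15% blue), represents the tested witness's accuracy (correct 4/5 times) as a Beta(5, 2) belief on a discrete grid, and computes both quantities participants were asked to update.

```python
import numpy as np

# Illustrative sketch (assumed numbers, not the authors' exact model):
# second-order uncertainty in the taxi-cab problem. The witness was
# correct in 4 of 5 tests, so accuracy a ~ Beta(5, 2) under a uniform
# prior, approximated on a grid.
a = np.linspace(0.001, 0.999, 9999)
prior_a = a**4 * (1 - a)          # Beta(5, 2) density, unnormalised
prior_a /= prior_a.sum()

p_blue = 0.15                     # base rate of blue cabs

# Likelihood of the witness reporting "blue" at each accuracy value:
# either the cab was blue and the witness was right, or it was green
# and the witness was wrong.
lik_report_blue = p_blue * a + (1 - p_blue) * (1 - a)

evidence = (prior_a * lik_report_blue).sum()

# P(cab was blue | witness says blue), marginalising over accuracy
post_blue = (prior_a * p_blue * a).sum() / evidence

# Expected accuracy after the report: it falls, because blue cabs are
# rare, so "blue" reports are disproportionately likely to be errors.
prior_acc = (prior_a * a).sum()
post_acc = (prior_a * lik_report_blue * a).sum() / evidence

print(f"P(blue): {p_blue:.3f} -> {post_blue:.3f}")
print(f"E[accuracy]: {prior_acc:.3f} -> {post_acc:.3f}")
```

With these assumed numbers, the probability of the cab being blue roughly doubles (to about 0.31) while the expected accuracy falls from about 0.71 to about 0.66, illustrating how both updates move at once.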

Updating Prior Beliefs Based on Ambiguous Evidence

In this paper, presented as a talk at CogSci 2018, I examined how we update our estimates of people's propensities (e.g. for a child to be naughty) when we receive uncertain evidence that they have misbehaved again (for example, you come home and a vase is broken, but it could have been your child or the cat). Whether they broke the vase this time and whether you should now consider them naughtier than you did before are separate questions, and I am interested in the latter. Further work building on and extending this is currently under review.


This paper investigates a problem where the solver must first determine which of two possible causes is the source of an effect, where one cause has a historically higher propensity to produce that effect. Second, they must update the propensities of the two causes to produce the effect in light of the observation. First, we find an error commensurate with the 'double updating' error observed in the polarisation literature: individuals appear to first use their prior beliefs to interpret the evidence, then use the interpreted form of the evidence, rather than the raw form, when updating. Second, we find an error where individuals convert from a probabilistic representation of the evidence to a categorical one and use this representation when updating. Both errors have the effect of exaggerating the evidence in favour of the solver's prior belief and could lead to confirmation bias and polarisation.
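The categorical error can be illustrated with a toy Beta-Bernoulli sketch (hypothetical numbers, not the paper's task): a normative reasoner weights the "child did it" and "cat did it" updates by how likely each is, whereas rounding the ambiguous evidence to a certainty before updating exaggerates the shift towards the prior suspect.

```python
# Hypothetical numbers for illustration: the child has caused 3 of the
# last 4 breakages, so belief about their "breakage share" theta is
# Beta(4, 2) under a uniform prior (mean 2/3).
alpha, beta_ = 4, 2
prior_mean = alpha / (alpha + beta_)

# A new vase is broken, unobserved. Suppose the evidence overall makes
# it 70% likely the child did it.
w = 0.7

# Normative update: marginalise over the two possible culprits,
# mixing the "child did it" and "cat did it" posterior means by w.
post_mean_normative = (w * (alpha + 1) / (alpha + beta_ + 1)
                       + (1 - w) * alpha / (alpha + beta_ + 1))

# Categorical error: round the ambiguous evidence to a certainty
# ("it must have been the child") and update as if it were observed.
post_mean_categorical = (alpha + 1) / (alpha + beta_ + 1)

print(f"prior {prior_mean:.3f}, normative {post_mean_normative:.3f}, "
      f"categorical {post_mean_categorical:.3f}")
```

Both updates raise the estimate, but the categorical shortcut overshoots the normative answer, which is the direction of error that feeds confirmation bias.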

Nested Sets and Natural Frequencies

In the paper below, presented as a poster at CogSci 2019, I used mixed methods research to contribute to decades-old questions about the cognitive processes of participants undertaking Bayesian word problems.


Is the nested sets approach to improving accuracy on Bayesian word problems simply a way of prompting a natural frequencies solution, as its critics claim? Conversely, is it in fact, as its advocates claim, a more fundamental explanation of why the natural frequency approach itself works? Following recent calls, we use a process-focused approach to contribute to answering these long-debated questions. We also argue for a third, pragmatic way of looking at these two approaches and argue that they reveal different truths about human Bayesian reasoning. Using a think aloud methodology we show that while the nested sets approach does appear in part to work via the mechanisms theorised by advocates (by encouraging a nested sets representation), it also encourages conversion of the problem to frequencies, as its critics claim. The ramifications of these findings, as well as ways to further enhance the nested sets approach and train individuals to deal with standard probability problems are discussed.
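For readers unfamiliar with these problem formats, a standard example from this literature (the mammography problem, with commonly used illustrative numbers, not figures from the paper above) shows what the natural frequency conversion does: the same Bayesian inference, stated as counts in a reference population, makes the nested sets visible.

```python
# Probability format: abstract conditional probabilities.
base_rate = 0.01      # P(disease)
sensitivity = 0.8     # P(positive | disease)
false_pos = 0.096     # P(positive | no disease)
p_disease_given_pos = (sensitivity * base_rate) / (
    sensitivity * base_rate + false_pos * (1 - base_rate))

# Natural frequency format: the same numbers as counts in a population
# of 1000, which exposes the nested-sets structure (the 8 true
# positives sit inside the 103 total positives).
true_pos = 8           # 80% of the 10 women with the disease
false_pos_count = 95   # ~9.6% of the 990 women without it
freq_answer = true_pos / (true_pos + false_pos_count)

print(f"probability format: {p_disease_given_pos:.3f}, "
      f"frequency format: {freq_answer:.3f}")
```

Both formats give roughly an 8% chance of disease given a positive test; the debate the paper addresses is about which representation drives the accuracy gains people show.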


Poster version

Office workers' experiences of attempts to reduce sitting-time: an exploratory, mixed-methods uncontrolled intervention pilot study

In this 2019 behaviour change work, I conducted interviews over one year with individuals trying to implement better physical health regimes at work, primarily through the use of a sit-stand workstation. We used quantitative and qualitative methods to develop a rich and nuanced understanding of the cognitive barriers and facilitators involved in changing unhealthy habitual behaviour.