Confidence reflects a noisy decision estimate

When making decisions, we have a sense of which choices are better or worse. This is our sense of confidence. But while confidence generally tracks the quality of our decisions, the correlation is not perfect. How do we assess the quality of our decisions? And what makes our sense of confidence track that quality better or worse?

These questions have intrigued psychologists, who have long sought ways to quantify the quality of confidence judgments (termed "metacognitive ability"). They have invited people into the laboratory to perform cognitive or perceptual decision-making tasks and to rate their confidence in each decision. These decisions are often binary and always have a correct answer. Confidence reports are typically mapped onto a numerical scale. A popular approach to analyzing these choice-confidence data has been to develop descriptive statistics of metacognitive ability that quantify the quality of confidence reports. However, this approach does not provide a mechanistic understanding of how confidence is computed and why it sometimes fails to track decision quality. For that, we need a mathematical process model. Such models instantiate a specific hypothesis about how sensory inputs are used to form decisions and the associated confidence judgments.
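To make the descriptive-statistics approach concrete, here is a minimal sketch of one such summary measure: the type-2 area under the ROC curve, which asks how well confidence ratings separate correct from incorrect decisions. This is only an illustration of the general approach; the function name and toy data are hypothetical, and this is not necessarily the specific measure used in the studies discussed here.

```python
import numpy as np

def type2_auroc(correct, confidence):
    """One simple descriptive statistic of metacognitive ability:
    the probability that a randomly chosen correct trial received a
    higher confidence rating than a randomly chosen error trial
    (1.0 = perfect separation, 0.5 = confidence carries no information)."""
    correct = np.asarray(correct, dtype=bool)
    confidence = np.asarray(confidence, dtype=float)
    pos = confidence[correct]    # confidence on correct trials
    neg = confidence[~correct]   # confidence on error trials
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties  # ties count as 0.5

# Toy data: 1 = correct choice, confidence rated on a 1-4 scale.
correct = [1, 1, 0, 1, 0, 1, 0, 1]
confidence = [4, 3, 2, 4, 1, 2, 3, 4]
print(type2_auroc(correct, confidence))
```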

Process models are powerful tools. They can be leveraged to interrogate hypotheses about the computations underlying our mental operations, to assess the effectiveness of candidate experimental designs before running them, and to generate new predictions that can then be tested in new experiments.

In our recent work (Boundy-Singer et al., 2022), we built a process model of decision confidence which implements the hypothesis of "Confidence AS A Noisy Decision Reliability Estimate" (CASANDRE). This model builds on and extends signal detection theory, a classic and extremely successful process model of decision-making. Consider a task in which you must judge whether there are more red or green marbles in a jar. When the number of marbles of each color is nearly equal, your choices will vary across repeated trials (that is, repeated presentations of the same jar). The same sensory evidence gives rise to variable choices. Signal detection theory provides an explanation: the same sensory evidence leads to a variable internal representation of that evidence. We call this internal representation a "decision variable". This estimate is the information an observer can access when making a decision. The decision variable on a particular trial is well described as a random draw from a probability distribution. In signal detection theory, this distribution is typically assumed to be a Gaussian whose mean corresponds to the average evidence estimate and whose width corresponds to the uncertainty of that estimate: the wider the distribution, the more variable the evidence across trials. A decision is formed by comparing the decision variable to a static decision boundary.
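To make this first stage concrete, here is a minimal simulation sketch of a signal detection theory observer for the marble task. All parameter values are illustrative assumptions, not estimates from data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed, illustrative parameters of the observer.
mu = 0.2        # average internal evidence for "more red than green"
sigma = 1.0     # uncertainty: trial-to-trial variability of that evidence
boundary = 0.0  # static decision boundary
n_trials = 10_000

# On each trial, the decision variable is a noisy draw around the mean evidence.
decision_variable = rng.normal(mu, sigma, n_trials)

# The choice is formed by comparing the decision variable to the boundary.
choice_red = decision_variable > boundary

# Because mu sits close to the boundary relative to sigma, the same jar
# produces different choices on different trials.
print(f"Proportion of 'more red' choices: {choice_red.mean():.2f}")
```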

The CASANDRE model adds a second stage of processing which explains how confidence in this primary decision is computed. This second stage specifies the transformation of the decision variable into a confidence variable. Analogous to the first stage, the confidence variable is compared to a fixed confidence boundary, leading to a confidence rating. But how is the confidence variable constructed?

Others have recognized that confidence is not an assessment of decision accuracy but an assessment of decision reliability. While this distinction is subtle, this framing allows one to ask which factors make decisions more reliable ("I believe that I would make the same choice again") versus less reliable ("I am not sure that I would make the same choice again"). Decision reliability is determined by two factors: the stimulus interpretation and the uncertainty of that interpretation. We propose that confidence arises from a computation that takes both into account by normalizing the stimulus interpretation by an estimate of that interpretation's uncertainty. This computation of decision reliability naturally explains why similar sensory measurements, for example the speed of an oncoming car when you are deciding whether to cross the road, can lead to different levels of confidence in daylight versus at dusk, when signals are less certain.

But why does confidence not perfectly track decision reliability? Whenever we try to estimate anything, our measurements are imperfect. This is why the same stimulus gives rise to variable interpretations. We reasoned that because the proposed confidence computation requires an estimate of uncertainty, this estimate will also be imperfect. Thus, the confidence computation is the normalization of a noisy stimulus estimate by a noisy uncertainty estimate. The factor that limits decision-making ability is variability in stimulus estimation. It follows that the factor that limits confidence judgments is variability in uncertainty estimation. We term this variability in the uncertainty estimate "meta-uncertainty". Someone who is skillful at assessing the quality of incoming information has low meta-uncertainty and thus a good ability to distinguish reliable from unreliable decisions.
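Putting the two stages together, here is a minimal simulation sketch of this idea: the evidence, measured relative to the decision boundary, is divided by a noisy estimate of its own uncertainty, and meta-uncertainty sets how noisy that estimate is. The parameter values and the exact parameterization are illustrative assumptions and may differ from the published model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stage 1: the decision (same sketch as above, assumed parameter values).
mu, sigma, boundary = 0.2, 1.0, 0.0
n_trials = 10_000
dv = rng.normal(mu, sigma, n_trials)
choice_red = dv > boundary
correct = choice_red == (mu > boundary)

# Stage 2: the observer divides the evidence (relative to the boundary) by a
# *noisy* estimate of its own uncertainty. Meta-uncertainty is the spread of
# that estimate, modeled here as multiplicative log-normal noise on sigma.
meta_uncertainty = 0.3
sigma_hat = sigma * rng.lognormal(0.0, meta_uncertainty, n_trials)
confidence_variable = np.abs(dv - boundary) / sigma_hat

# A fixed confidence boundary turns the graded variable into a rating.
high_confidence = confidence_variable > 1.0

print(f"Accuracy on high-confidence trials: {correct[high_confidence].mean():.2f}")
print(f"Accuracy on low-confidence trials:  {correct[~high_confidence].mean():.2f}")
# With low meta-uncertainty, confidence cleanly separates reliable from
# unreliable decisions; as meta-uncertainty grows, that separation degrades.
```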

We tested whether the CASANDRE process model can explain perceptual and cognitive choices and the associated confidence reports across a variety of laboratory tasks. Data from these previously published behavioral studies came from the Confidence Database, an open data repository, enabling us to test the model on many different tasks and a large number of participants. Even though these tasks differed in many details (for example, whether the stimulus was visual or vestibular, or whether the task was perceptual or cognitive in nature), we found that the CASANDRE model provides a good description of all the observed choice-confidence data.

Finally, we took a first step in leveraging the power of a process model to study confidence and metacognitive ability. In the CASANDRE model, metacognitive ability is determined by the level of meta-uncertainty. We first ensured that meta-uncertainty could be stably estimated from choice-confidence data and that its value was not overly influenced by other quantities, such as a subject's accuracy in the decision-making task. We were then able to make a new prediction about the nature of metacognitive ability. Specifically, it is more difficult to estimate uncertainty when that uncertainty varies a lot from one trial to the next. The CASANDRE model therefore predicts that metacognitive ability will decrease with the number of randomly varying uncertainty levels. For example, one task may require subjects to judge a visual stimulus whose visibility is randomly varied across six different contrasts, each leading to a different level of uncertainty, whereas another task may hold the visibility of the stimulus constant throughout the experiment. Using data from a number of tasks with large numbers of subjects, we were able to test this new prediction. Indeed, we found that meta-uncertainty estimates increased lawfully with the number of uncertainty levels.
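As a toy illustration of why interleaving uncertainty levels could make uncertainty harder to estimate, the sketch below assumes that the observer's per-trial uncertainty estimate is partly pulled toward the session-average uncertainty. That regression-toward-the-average mechanism and all numerical values are assumptions made purely for illustration, not the analysis from the paper; the paper's test involved fitting meta-uncertainty to behavioral data from tasks with different numbers of uncertainty levels.

```python
import numpy as np

rng = np.random.default_rng(2)

def uncertainty_estimation_error(sigma_levels, weight=0.7, noise=0.1, n_trials=20_000):
    """RMS error of a per-trial uncertainty estimate that is partly regressed
    toward the session-average uncertainty (a toy assumption)."""
    sigma_true = rng.choice(sigma_levels, size=n_trials)     # each trial's true uncertainty
    pooled = np.mean(sigma_levels)                           # session-average uncertainty
    sigma_hat = weight * sigma_true + (1 - weight) * pooled  # partially pooled estimate...
    sigma_hat = sigma_hat * rng.lognormal(0.0, noise, n_trials)  # ...with multiplicative noise
    return np.sqrt(np.mean((sigma_hat - sigma_true) ** 2))

# One fixed uncertainty level versus six randomly interleaved levels
# (hypothetical values): the estimation error is larger when levels are mixed.
print(uncertainty_estimation_error([1.0]))
print(uncertainty_estimation_error([0.5, 0.8, 1.0, 1.5, 2.0, 3.0]))
```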

Our finding provides further support for the key insight of the CASANDRE model: our sense of confidence comes partly from estimating our own uncertainty, and that estimate is imperfect. What limits metacognitive ability, therefore, is our ability to estimate our own uncertainty. This idea raises many new questions. We showed that meta-uncertainty can be manipulated in one way, but this finding needs further investigation and extension. Can meta-uncertainty be manipulated in other ways as well? For example, over thousands of trials, we can learn to make better decisions. Does meta-uncertainty also change with learning? Finally, the CASANDRE model describes a static decision-making process. What is the relationship between confidence, uncertainty estimation, and time? How long does it take to make a good uncertainty estimate? Extensions of the CASANDRE model can be employed to tackle all of these questions, and many more.
