A Toy Belief Model
Or, Don't Let Me Eat Those Red Berries
Beliefs encode human experience – the assessed consequences of choices. Consider a solitary Homo sapiens sapiens interacting with nature absent sociality. Hungry, he comes across attractive red berries and eats them. They make him slightly ill. His wise brain – a remarkable pattern-matching engine – associates the displeasing consequence (nausea) with his preceding action (eat red berries). In the future, put in the same context (hungry with available red berries), he will be less likely to make the same choice (eat red berries) and more likely to make the dichotomously opposed one (don’t eat red berries). Given repeated interactions with nature in this context, our imagined ancestor would stochastically adapt, learning not to eat that which poisoned him. If the subsequent experience was very unpleasant or very pleasant (magnitude of reward), he learns more quickly; otherwise, he learns relatively slowly. See Thorndike’s Law of Effect.
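The adaptation just described can be sketched as a tiny reinforcement rule. This is a minimal illustration, not a committed implementation – the representation (a single probability), the function name, and the learning rate are all assumptions:

```python
def update_belief(p_eat, ate, reward, lr=0.1):
    """Shift confidence in 'eat' after one episode (Law of Effect).

    p_eat  -- current belief: probability of choosing to eat
    ate    -- whether the forager ate this episode
    reward -- signed outcome; its magnitude scales learning speed
    """
    # Move toward whichever pole the outcome recommends: pleasure
    # reinforces the taken action, displeasure its opposite.
    target = 1.0 if (ate and reward > 0) or (not ate and reward < 0) else 0.0
    return p_eat + lr * abs(reward) * (target - p_eat)

# Repeatedly eating berries that induce nausea (reward = -1)
# drives the belief away from "eat".
p = 0.5
for _ in range(20):
    p = update_belief(p, ate=True, reward=-1.0)
print(round(p, 3))  # → 0.061
```

Note how the magnitude of the reward scales the step size, matching the claim that very pleasant or very unpleasant experiences speed up learning.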
Elaborated slightly, beliefs encode human experience as a measure of confidence in which of two inherently opposed actions produces a more pleasing outcome. In an abstract sense, this encoding takes the shape of a continuum. The position in such a space represents the degree of belief that action \(a = 1\) (i.e., \(a_1\)) is better than \(a = 0\) (i.e., \(a_0\)). This representation raises a critical question: how does the belief translate into an action, assuming the agent finds itself in the relevant context?
Consider a belief that implicitly encodes an 80% confidence that action \(a = 1\) is the correct action.
Naively, one might expect constant selection of \(a_1\). That is, if it were possible to observe the actions associated with this particular belief, a naive expected trace would look like the following.
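In code, the naive reading treats the belief as a deterministic policy. A hypothetical sketch (the function and its threshold are illustrative):

```python
def naive_action(p_a1):
    """Naive policy: deterministically pick the higher-confidence action."""
    return 1 if p_a1 >= 0.5 else 0

# An 80%-confident belief would then produce an unbroken run of a = 1.
trace = [naive_action(0.8) for _ in range(5)]
print(trace)  # → [1, 1, 1, 1, 1]
```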
But, this expectation is wrong. Beliefs are latent, generative structures.
To researchers wishing to draw conclusions from human beliefs, this is patently obvious. Opinions and actions are observables; beliefs are not. For a canonical text in public opinion, see Zaller’s The Nature and Origins of Mass Opinion. But, it’s more than that. Beliefs are largely beyond even introspection because they don’t merely record experience – they affect perception. To lend structure to a stunningly large world beyond human comprehension, beliefs stochastically induce “controlled hallucinations” of regularity.
The validity of this remarkable claim takes a mere moment to comprehend. The Necker Cube drawn below is not a three-dimensional object. Yet, absent specific and atypical cognitive differences, the reader immediately perceives a three-dimensional cube. But, the perception is unstable – or rather, multistable. While there is a slight bias toward perceiving the view-from-above orientation, it is easy to imagine – and truly recognize! – alternative ones (e.g., a cube seen from below).
Thus, observing the actions associated with the example belief – and, momentarily assuming that there was no intervening integration of information (i.e., learning or adaptation) – the portrayal below shows another possible realization. Most of the time, the action taken corresponds to the one associated with higher confidence (\(t_1, t_2, t_3, t_5\)); occasionally, there is a deviation (\(t_4\)). Generally, the hypothetical forager matches its actions to some affine transformation of the ratio between the rewards associated with each action; see the various matching laws. Critically, when this deviation occurs – when the imagined holder of this belief takes \(a=0\) in spite of its disagreement with the latent best action – there is no perceived discrepancy. Whether by means of perception as controlled hallucination or post-hoc yet automatic rationalization, during that episode, the \(a=0\) action was the correct choice and there was no other correct choice. In a broader sense, the research program of Lodge and Taber follows this principle for more complex arguments. That is, people are not rational in decision-making and behavior; they are post-hoc rationalizers.
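The generative reading can be sketched in the same toy terms – assuming, purely for illustration, that the belief is a single probability from which actions are sampled:

```python
import random

rng = random.Random(0)

def sample_action(p_a1):
    """Generate an action from the latent belief: a = 1 with
    probability p_a1, otherwise a = 0."""
    return 1 if rng.random() < p_a1 else 0

# Over many episodes the action frequencies roughly match the
# confidence, but any short trace may contain deviations
# (an occasional a = 0, as at t_4 above).
trace = [sample_action(0.8) for _ in range(10_000)]
print(sum(trace) / len(trace))  # close to 0.8
```

Any five-step window drawn from such a trace will usually, but not always, contain a deviation – which is the point: the deviation is produced by the same belief, not by a different one.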
Now, revisit the introductory example of a human being having repeatedly experienced the consequences of eating poisonous red berries. To make reading easier, call that person Bob. Endowing Bob with a proper, human repertoire requires sociality. That is, Bob can express what he believes and integrate the expressions of other human beings into his beliefs. Imagine Bob rejoins his band and wishes to convey his beliefs about red berries to his kin, Alice. How would he do so? His latent belief is inaccessible, even to himself. So he could not simply share it as a mathematical object, as a contemporary statistician would. Instead, the previous trace hints at a simple, viable mechanism. Given a conveyable and recognizable context – one that specifies two particular actions as a dichotomy – Bob emits an opinion. That is, he expresses a sampled but unexecuted action as a message receivable and decodable by Alice.
This brings up a new issue: how does Alice integrate the information conveyed by Bob? Assuming all expressions are sincere, Alice could choose to blindly integrate the information Bob offers through his opinion. Sincerity implies the absence of deception as a strategy. However, even absent such behavior, Bob may be unreliable. For example, Bob’s latent belief – which captured his experiences – may encode no confidence (\(p=0.5\)) that one action is better than the other. But, the opinion – which collapses the continuous space into a discrete, binary one – does not convey that uncertainty. In this particular case, Bob’s utterance would be maximally unreliable, something Alice wants to guard against. See Von Neumann’s lectures on Probabilistic Logics and the Synthesis of Reliable Organisms from Unreliable Components if you are a person who enjoys fun.
Restated, the question of integration implicitly demands an answer to another: does Alice trust Bob? Absent deception, trust means something simple: does Alice believe Bob is reliable? Said differently: do Bob’s beliefs generally and accurately reflect the correct actions to take when interacting with Nature? (And, if social games dominate in terms of the rewards, is it the “correct” opinion…)
If Alice trusts Bob, then she expects that the experience encoded in her belief benefits from integrating his (implicitly sincere) expression. Otherwise – reusing the same structure of anticipated dichotomy associated with the direct interrogation of nature – she assumes she would regret taking his prescribed action and expects that her belief moves towards a better representation of reality by integrating the converse of his expression.
Sketching the general interaction pattern, upon recognizing the belief’s context, Alice prepares to receive Bob’s opinion by synthesizing her own expression, sampling the action she would have taken but without actually taking it. If she trusts Bob, then when her simulated experience matches Bob’s expression, it is pleasing; otherwise, it is displeasing. In most interactive contexts, Alice also relays her sampled action as an opinion back to Bob, so that he may integrate it using the same procedure.
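The interaction pattern above can be sketched in the same toy terms. Everything here – the update rule, the learning rate, and treating trust as a given flag for now – is an illustrative assumption:

```python
import random

rng = random.Random(7)

def sample(p):
    """Sample an expression (an unexecuted action) from a belief."""
    return 1 if rng.random() < p else 0

def social_update(p_self, own_sample, other_opinion, trusts, lr=0.1):
    """One social episode. Under trust, a match between the listener's
    sampled action and the speaker's opinion is pleasing (reinforce the
    sampled action); a mismatch is displeasing (reinforce its converse).
    Under distrust, the valences flip."""
    agree = own_sample == other_opinion
    target = own_sample if trusts == agree else 1 - own_sample
    return p_self + lr * (target - p_self)

# Alice (no confidence, p = 0.5) repeatedly hears confident Bob (p = 0.9).
alice, bob = 0.5, 0.9
for _ in range(100):
    alice = social_update(alice, sample(alice), sample(bob), trusts=True)
print(round(alice, 2))  # drifts toward Bob's confidence
```

Note the structural consequence: under trust, "reinforce the match, punish the mismatch" reduces to moving toward Bob's opinion regardless of what Alice sampled – the sampled-but-unexecuted expression is the vehicle, not the destination.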
Left unspecified thus far is the mechanism for determining whether or not Alice trusts Bob – the social context of trust or distrust. Abstractly, it works as another belief, with one sampled action representing trust and the other representing distrust. At the conclusion of a social interaction, Alice would increase the likelihood of future trust if her expression matched Bob’s – a pleasing outcome. She would increase the likelihood of future distrust if she had anticipated distrust and then observed a mismatch between her expression and Bob’s – an experience pleasing in its confirmation. Otherwise, she experiences unmet expectations and reinforces the complement of the stance she sampled. Bob uses the same process to update his beliefs given Alice’s response, but uses the already-sampled opinion as his guide. Thus, trust is a belief construct for capturing the experiential reliability of a counterpart.
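Trust as another belief can be sketched in the same illustrative terms as before (the update rule and rates are assumptions, not prescriptions):

```python
import random

rng = random.Random(3)

def sample(p):
    """Sample a stance (trust = 1, distrust = 0) from the trust belief."""
    return 1 if rng.random() < p else 0

def update_trust(p_trust, sampled_trust, matched, lr=0.1):
    """Confirmation of the sampled stance (trust & match, or
    distrust & mismatch) is pleasing and reinforces it; unmet
    expectations reinforce the complement."""
    confirmed = sampled_trust == matched
    target = sampled_trust if confirmed else 1 - sampled_trust
    return p_trust + lr * (target - p_trust)

# A counterpart whose expressions match Alice's 90% of the time
# earns trust over repeated interactions.
p_trust = 0.5
for _ in range(100):
    matched = sample(0.9)
    p_trust = update_trust(p_trust, sample(p_trust), matched)
print(round(p_trust, 2))  # rises toward the match rate
```

As with the social update, the case analysis collapses: matches pull the trust belief up and mismatches pull it down, whatever stance was sampled – trust ends up tracking the counterpart's observed reliability.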
For social learning under tame constraints, this is adaptive. Reliable and observable traits often do betray different environments and exposures, social and otherwise. When constructing our beliefs, we are more interested in what will work for us subjectively than in what works for people in dissimilar circumstances. Yet, almost by means of optical illusion, belief systems strip context. Agents lose track of where information came from: direct interrogation of nature, or social exchange. As a consequence, sociality and stochasticity conspire to induce noise in a way that – depending on context scheduling – may be socially patterned and durable. And, by means of trust, these deviations escape the confines of any one particular context.