Jason Lillis Ph.D.

The Surprising Effects of Wishful Thinking

If someone asked you what you weigh, would you tell the truth?

Posted August 29, 2014


Let’s start with a socially inappropriate question: How much do you weigh?

Now that you have a number in your head, let me ask you another question. How much would you say you weighed if an acquaintance rudely asked you? How about if a family member asked you politely? How about a researcher? Would you perhaps give yourself the benefit of, say, 5 or 10 pounds? Who knows—maybe you haven’t weighed yourself in a while, and maybe you do weigh less than you think. But the tendency to present ourselves in a more favorable light is instinctual. It can also be a problem, but perhaps for different reasons than you might assume.

Research has shown, unsurprisingly, that we tend to report weighing less than we actually do. So if I weigh 175 pounds, I might say that I weigh 168 when asked. This tendency holds particularly true for people who are overweight and tend to restrict food intake (diet) often.

One obvious interpretation of such studies is that many people simply don’t know what they weigh and give themselves the benefit of the doubt. But while there is a grain of truth to this, more people than not seem to have a very good idea of their current weight—indeed, studies show that when people know their weight will be verified by a scale after being questioned, they are much more accurate in reporting their weight.

So what is really going on here?

A recent paper examined this question, reviewing a substantial number of studies related to the topic, and found some interesting trends: You might think that overweight individuals are more likely to under-report their weight because they are concerned with how others see them. But this does not appear to be the case. The evidence supports the idea that under-reporting weight instead serves a self-protective function. By thinking of yourself as thinner, it may be possible to avoid some of the self-judgment and shame we might feel related to our appearance.

This makes sense when considered in the context of our stigmatizing society. Body shape is, inexplicably, still fair game for many people to comment on and make fun of. Just tune into the late-night talk show monologues. Or peruse the internet, full of fat-shaming memes and tweets. This pervasive negativity towards larger body shapes, and the resulting discrimination and shame that people experience, becomes internalized—to varying degrees for different people, of course. Body weight can come to elicit a variety of negative thoughts and feelings. Trying to convince yourself that you weigh less can protect you from that, in the short term. In other words, if asked about my weight right now, if I shoot for lower, even knowing somewhere deep inside that it’s not true, I can stave off some momentary internal shame and judgment.

It’s not lying; it seems to work on a more subconscious level. And it makes sense. Why wouldn’t you protect yourself in any subtle way possible? The issue is that the act of protecting yourself might in fact be part of the problem to begin with.

Under-reporting weight seems to provide temporary relief or escape from a threat of negative thoughts and feelings (the result of having to report your weight). Emotional and binge eating operate in a similar manner. Eating tasty, high-calorie foods can provide comfort or relief in the short-term, at the expense of long-term health and well-being. It’s this core relationship between eating and comfort that partially contributes to our expanding waistlines.

We need to mount an agenda to massively reduce weight stigmatization. In this day and age, it is incredible that so much hate is met with so little outrage. Technological advances have provided new ways to stigmatize more effectively and often with anonymity, and so in many respects we are worse off now than we were 10 years ago.

However, given that change is slow, it is also on us to examine how we relate to the effects of a history of body-shape stigmatization. If you imagine or report weighing less than you do, there seems little harm in that. But if that is part of a general tendency to seek comfort and relief from unwanted thoughts and feelings, it could be indicative of a larger pattern of behavior contributing to worsening health and well-being.

In short, when it comes to our own thoughts and feelings, we could all benefit from becoming a little more comfortable being uncomfortable. That doesn’t mean we buy into negative self-judgment and resign ourselves to feeling down, but rather that we allow ourselves to have an ebb and flow of positive and negative thoughts and emotions without having to intervene via unhealthy behavior.

Doing so would go a long way towards creating and sustaining a healthier lifestyle.

Source Article: http://www.ncbi.nlm.nih.gov/pubmed/25053217

Dr. Jason Lillis is the author of The Diet Trap: Feed Your Psychological Needs and End the Weight Loss Struggle, available on Amazon and wherever books are sold.


Jason Lillis, Ph.D., is assistant professor of research at the Alpert Medical School of Brown University.


Self-Deception

Virtually every aspect of self-deception, including its definition and paradigmatic cases, is a matter of controversy among philosophers. Minimally, self-deception involves a person who (a) as a consequence of some motivation or emotion, seems to acquire and maintain some false belief despite evidence to the contrary and (b) may display behavior suggesting some awareness of the truth. Beyond this, philosophers divide over whether self-deception is intentional, whether it involves belief or some other sub- or non-doxastic attitude, whether self-deceivers are morally responsible for their self-deception, whether self-deception is morally problematic (and if it is, in what ways and under what circumstances), whether self-deception is beneficial or harmful, whether and in what sense collectives can be self-deceived (and how, if they can be self-deceived, this might affect individuals within such collectives), and whether our penchant for self-deception might be socially, psychologically, or biologically adaptive or merely an accidental byproduct of our evolutionary history.

The discussion of self-deception and its associated puzzles sheds light on the ways motivation affects the acquisition and retention of beliefs and other belief-like cognitive attitudes; it also prompts us to scrutinize the notions of belief and intention and the limits of such folk psychological concepts to adequately explain phenomena of this sort. Self-deception also requires careful consideration of the cognitive architecture that might accommodate this apparent irrationality regarding our beliefs.

Self-deception isn’t merely a philosophically interesting puzzle but a problem of existential concern. It raises the distinct possibility that we live with distorted views that may make us strangers to ourselves and blind to the nature of our morally significant engagements.

Entry Contents:

1. Paradoxes, Puzzles, and Problems of Self-Deception
2. Intentionalist Approaches
  2.1 Temporal Partitioning
  2.2 Psychological Partitioning
3. Revisionist Approaches
  3.1 Revision of Intention: Non-Intentionalist and Deflationary Approaches
  3.2 Revision of Belief: Adjustment of Attitude or Content
4. Twisted or Negative Self-Deception
5. Morality and Self-Deception
  5.1 Moral Responsibility for Self-Deception
  5.2 The Morality of Self-Deception
6. Origin of Self-Deception: Adaptation or Spandrel
7. Collective Self-Deception
  7.1 Summative Collective Self-Deception: Self-Deception across a Collective
  7.2 Non-Summative Collective Self-Deception: Self-Deception of a Collective Entity
Other Internet Resources
Related Entries

1. Paradoxes, Puzzles, and Problems of Self-Deception

“What is self-deception?” sounds like a straightforward question, but the more philosophers have sought to answer it, the more puzzling it has become. Traditionally, self-deception has been modeled on interpersonal deception, where A intentionally gets B to believe some proposition p, all the while knowing or believing truly that ~p. Such deception is intentional and requires the deceiver to know or believe that ~p and the deceived to believe that p. One reason for thinking self-deception is analogous to interpersonal deception of this sort is that it helps us to distinguish self-deception from mere error since the acquisition and maintenance of the false belief are intentional, not accidental. It also helps to explain why we think self-deceivers are responsible for and open to the evaluation of their self-deception. If self-deception is properly modeled on interpersonal deception, self-deceivers intentionally get themselves to believe that p, all the while knowing or believing truly that ~p. On this traditional model, then, self-deceivers apparently must (1) hold contradictory beliefs—the dual-belief requirement—and (2) intentionally get themselves to hold a belief they know or believe truly to be false.

The traditional model of self-deception, however, has been thought to raise two paradoxes: One concerns the self-deceiver’s state of mind—the so-called static paradox. How can a person simultaneously hold contradictory beliefs? The other concerns the process or dynamics of self-deception—the so-called dynamic or strategic paradox. How can a person intend to deceive herself without rendering her intentions ineffective? (Mele 1987a, 2001)

The dual-belief requirement raises the static paradox since it seems to pose an impossible state of mind, namely, consciously and simultaneously believing that p and ~p. As deceiver, she believes that ~p, and, as deceived, she believes that p. Accordingly, the self-deceiver consciously believes that p and ~p. But if believing both a proposition and its negation in full awareness is an impossible state of mind, self-deception, as it has traditionally been understood, seems impossible as well. Static paradoxes also arise regarding motivation, intention, emotion, and the like insofar as self-deceivers seem to harbor psychological states of these kinds that are deeply incompatible (Funkhouser 2019).

The requirement that the self-deceiver intentionally gets herself to hold a belief she knows to be false raises the dynamic or strategic paradox since it seems to involve the self-deceiver in an impossible project, namely, both deploying and being duped by some deceitful strategy. As deceiver, she must be aware that she’s deploying a deceitful strategy; but, as deceived, she must be unaware of this strategy for it to be effective. And yet it’s difficult to see how the self-deceiver could fail to be aware of her intention to deceive. A strategy known to be deceitful seems bound to fail. How could I be taken in by your efforts to get me to believe something false if I know what you’re up to? But if it’s impossible to be taken in by a strategy one knows is deceitful, then, again, self-deception, as it has traditionally been understood, seems to be impossible as well.

These paradoxes have led a minority of philosophers to be skeptical that self-deception is conceptually possible or even coherent (Paluch 1967; Haight 1980; Kipp 1980). Borge (2003) contends that accounts of self-deception inevitably give up central elements of our folk-psychological notions of “self” or “deception” to avoid paradox, leaving us to wonder whether this framework itself is what gets in the way of explaining the phenomenon. Such skepticism toward the concept may seem warranted, given the obvious paradoxes involved. Most philosophers, however, have sought some resolution to these paradoxes instead of giving up on the notion itself, both because empirical evidence suggests that self-deception is not only possible but pervasive (Sahdra and Thagard 2003) and because the concept does seem to pick out a distinct kind of motivated irrationality.

Philosophical accounts of self-deception roughly fall into two main groups: those that maintain that the paradigmatic cases of self-deception are intentional and those that deny this. Call these approaches intentionalist and revisionist, respectively. Intentionalists find the model of intentional interpersonal deception apt since it helps to explain the selectivity of self-deception and the apparent responsibility of self-deceivers, as well as provide a clear way of distinguishing self-deception from other sorts of motivated belief, such as wishful thinking. To avoid paradox, these approaches introduce a variety of divisions that shield the deceiving from the deceived mind. Revisionists are skeptical of these divisions and the ‘psychological exotica’ (Mele 2001) apparently needed to avoid the static and dynamic paradoxes. Instead, they argue that revision of the intention requirement, the belief requirement, or both offers a simpler account of self-deception that avoids the paradoxes raised by modeling it on intentional interpersonal deception.

2. Intentionalist Approaches

The chief problem facing intentional models of self-deception is the dynamic paradox, namely, that it seems impossible to form an intention to get oneself to believe what one currently disbelieves or believes is false. For one to carry out an intention to deceive oneself, one must know what one is doing; to succeed, one must be ignorant of this same fact. Intentionalists agree that self-deception is intentional or, at least, robustly purposive, but divide over whether it requires holding contradictory beliefs and, thus, over the specific content of the alleged intention involved (see §3.2 Revision of Belief). Insofar as even the bare intention to acquire the belief that p for reasons having nothing to do with one’s evidence for p seems unlikely to succeed if directly known, most intentionalists introduce some sort of temporal or psychological partition to insulate self-deceivers from their deceptive stratagems. When self-deceivers are not consciously aware of what they truly believe or intend, it’s easier to see how they can play the role of the deceiver and the deceived. By dividing the mind into parts, temporally or psychologically, these approaches seek to show that self-deception does not involve paradox.

2.1 Temporal Partitioning

Some intentionalists argue that self-deception is a complex, temporally extended process during which a self-deceiver can consciously set out to deceive herself that p, knowing or believing that ~p, and along the way lose her belief that ~p, either forgetting her original deceptive intention entirely or regarding it as having, albeit accidentally, brought about the true belief she would have arrived at anyway (Sorensen 1985; Bermúdez 2000). So, for instance, an official involved in some illegal behavior might destroy any records of this behavior and create evidence that would cover it up (diary entries, emails, and the like), knowing that she will likely forget having done these things over the next few months. When her activities are investigated a year later, she has forgotten her tampering efforts and, based upon her falsified evidence, comes to believe falsely that she was not involved in the illegal activities of which she is accused. Here, the self-deceiver need never simultaneously hold contradictory beliefs even though she intends to bring it about that she believes that p, which she regards as false at the outset of the process of deceiving herself and true at its completion.

The self-deceiver need not even forget her original intention to deceive. Take an atheist who sets out to get herself to believe in God because it seems the best bet if God turns out to exist. She might well remember such an intention at the end of the process and deem that by God’s grace even this misguided path led her to the truth. What enables the intention to succeed in such cases is the operation of what Johnston (1988) terms ‘autonomous means’ (e.g., the normal degradation of memory, the tendency to believe what one practices, etc.), not the continued awareness of the intention, hinting that the process may be subintentional (see §3.1 Revision of Intention).

While such temporal partitioning accounts appear to avoid the static and dynamic paradoxes, many, if not most, cases of self-deception aren’t of this temporally extended type. Regularly, self-deception seems to occur instantaneously (Jordan 2022), as when a philosopher self-deceives that her article is high quality even while reading the substantive and accurate criticisms in the rejection letter from the prestigious peer-reviewed journal she submitted it to (example due to Mele 2001). Additionally, many of these temporally extended cases lack the distinctive opacity, indirection, and tension associated with garden-variety cases of self-deception (Levy 2004).

2.2 Psychological Partitioning

Another strategy employed by intentionalists is the division of the self into psychological parts that play the role of the deceiver and deceived, respectively. These strategies range from positing strong division in the self, where the deceiving part is a relatively autonomous subagency capable of belief, desire, and intention (Rorty 1988), to more moderate division, where the deceiving part still constitutes a separate center of agency (Pears 1984, 1986, 1991), to the relatively modest division of Davidson (1982, 1986), where there need only be a boundary between conflicting attitudes and intentions.

Such divisions are prompted in large part by the acceptance of the dual-belief requirement. It isn’t simply that self-deceivers hold contradictory beliefs, which though strange, isn’t impossible since one can believe that p and believe that ~p without believing that p & ~p. The problem such theorists face stems from the appearance that the belief that ~p motivates and thus forms a part of the intention to bring it about that one acquires and maintains the false belief that p (Davidson 1986). So, for example, a Nazi official’s recognition that his actions implicate him in serious evil motivates him to implement a strategy to deceive himself into believing he is not so involved; he can’t intend to bring it about that he holds such a false belief if he doesn’t recognize it is false, and he wouldn’t want to bring such a belief about if he didn’t recognize the evidence to the contrary. So long as this is the case, the deceiving self, whether it constitutes a separate center of agency or something less robust, must be hidden from the conscious self being deceived if the self-deceptive intention is to succeed.

While these psychological partitioning approaches seem to resolve the static and dynamic puzzles, they do so by introducing a picture of the mind that raises puzzles of its own. On this point, there appears to be consensus even among intentionalists that self-deception can and should be accounted for without invoking speculative or stipulative divisions not already used to explain non-self-deceptive behavior, what Talbott (1995) calls ‘innocent’ divisions. That said, recent, if controversial, research (e.g., Bargh and Morsella 2008; Hassin, Bargh, and Zimmerman 2009; Huang and Bargh 2014) seems to support the possibility of the sort of robust unconscious but flexible goal pursuit that could explain the way self-deceivers are able to pursue their deceptive goal while retaining the beliefs necessary for navigating the shifting evidential terrain (see Funkhouser and Barrett 2016, 2017 and Doody 2016 for skepticism about the applicability of this research to self-deception). If this kind of research shows that no ‘psychological exotica’ are necessary to explain self-deception, there is less pressure to deflate the phenomenon in ways that minimize the active, strategic role self-deceivers play in the process.

3. Revisionist Approaches

A number of philosophers have moved away from modeling self-deception directly on intentional interpersonal deception, opting instead to revise either the intention or the belief requirement traditional intentionalist models assume. Those revising the intention requirement typically treat self-deception as a species of motivationally biased belief, thus avoiding the problems involved with intentionally deceiving oneself. Call these non-intentionalist and deflationary approaches.

Those revising the belief requirement do so in a variety of ways. Some posit other, non-doxastic or quasi-doxastic attitudes toward the proposition involved (‘misrepresentation’ Jordan 2020, 2022; ‘hope,’ ‘suspicion,’ ‘doubt,’ ‘anxiety’ Archer 2013; ‘besires’ Egan 2009; ‘pretense’ Gendler 2007; ‘imagination’ Lazar 1999). Others alter the content of the proposition believed (Holton 2001; Funkhouser 2005; Fernández 2013), while others suggest the doxastic attitudes involved are indeterminate, somehow ‘in-between believing’ (Schwitzgebel 2001; Funkhouser 2009) or subject to shifting degrees of credulity throughout the process of self-deception (Chan and Rowbottom 2019). Call these revision of belief approaches.

Deflationary approaches focus primarily on the process of self-deception, while the revision of belief approaches focus on the product. A revision of either of these aspects, of course, has ramifications for the other. For example, if self-deception doesn’t involve belief, but some other non-doxastic attitude (product), then one may well be able to intentionally enter that state without paradox (process). This section considers non-intentional and deflationary approaches and the worries such approaches raise (§3.1). It also considers revision of belief approaches (§3.2).

3.1 Revision of Intention: Non-Intentionalist and Deflationary Approaches

Non-intentionalists argue that most ‘garden-variety’ cases of self-deception can be explained without adverting to subagents, or unconscious beliefs and intentions, which, even if they resolve the static and dynamic puzzles of self-deception, raise puzzles of their own. If such non-exotic explanations are available, intentionalist explanations seem unwarranted and unnecessary.

Since the central paradoxes of self-deception arise from modeling self-deception on intentional interpersonal deception, non-intentionalists suggest this model be jettisoned in favor of one that takes ‘to be deceived’ to be nothing more than believing falsely or being mistaken in believing something (Johnston 1988; Mele 2001). For instance, Sam mishears that it will be a sunny day and relays this misinformation to Joan with the result that she believes it will be a sunny day. Joan is deceived into believing it will be sunny, and Sam has deceived her, albeit unintentionally. Initially, such a model may not appear promising for self-deception since simply being mistaken about p or accidentally causing oneself to be mistaken about p doesn’t seem to be self-deception at all but some sort of innocent error. Non-intentionalists, however, argue that in cases of self-deception, the false belief is not accidental but, rather, motivated by desire (Mele 2001), emotion (Lazar 1999), anxiety (Johnston 1988; Barnes 1997), or some other attitude regarding p or related to p. So, for instance, when Allison believes, against the preponderance of evidence available to her, that her daughter is not having learning difficulties, non-intentionalists will explain the various ways she misreads the evidence by pointing to such things as her desire that her daughter not have learning difficulties, her fear that she has such difficulties, or anxiety over this possibility. In such cases, Allison’s self-deceptive belief that her daughter is not having learning difficulties fulfills her desire, quells her fear, or reduces her anxiety, and it’s this function—not an intention—that explains why her belief formation process is biased. Allison’s false belief is not an innocent mistake but a consequence of her motivational states.

Non-intentionalists divide over the dual-belief requirement. Some accept the requirement, seeing the persistent efforts to resist the conscious recognition of the unwelcome truth or to reduce the anxiety generated by this recognition as characteristic of self-deception (Bach 1981; Johnston 1988). So, in Allison’s case, her belief that her daughter is having learning difficulties, along with her desire that this not be the case, motivates her to employ means to avoid this thought and to believe the opposite.

Others, however, argue the needed motivation can as easily be supplied by uncertainty or ignorance whether p, or suspicion that ~p (Mele 2001; Barnes 1997). Thus, Allison need not hold any opinion regarding her daughter’s having learning difficulties for her false belief to count as self-deception since it’s her regarding evidence in a motivationally biased way in the face of evidence to the contrary, not her recognition of this evidence, that makes her belief self-deceptive. Accordingly, Allison needn’t intend to deceive herself nor believe at any point that her daughter, in fact, has learning difficulties. If we think someone like Allison is self-deceived, then self-deception requires neither contradictory beliefs nor intentions regarding the acquisition or retention of the self-deceptive belief. Such approaches are ‘deflationary’ in the sense that they take self-deception to be explicable without reaching for exotic cognitive architecture since neither intentions nor dual-beliefs are required to account for self-deception. (For more on the dual-belief requirement, see §3.2 Revision of Belief.)

Mele (2001, 2012) has offered the most fully articulated deflationary account, and his view has been the target of the most scrutiny, so it is worth stating what he takes to be the jointly sufficient conditions for entering self-deception:

  • The belief that p, which S acquires, is false;
  • S treats data relevant, or at least seemingly relevant, to the truth value of p in a motivationally biased way;
  • This biased treatment is a nondeviant cause of S’s acquiring the belief that p;
  • The body of data possessed by S at the time provides greater warrant for ~p than for p;
  • S consciously believes at the time that there is a significant chance that ~p;
  • S’s acquiring the belief that p is a product of “reflective, critical reasoning,” and S is wrong in regarding that reasoning as properly directed.

Mele (2012) added the last two conditions to clarify his account in view of some of the criticisms of this kind of approach addressed in the next section.
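Read as a checklist, Mele’s conditions lend themselves to an almost mechanical restatement. The sketch below (in Python) merely transcribes the list above into a conjunction of independently checkable conditions; the encoding and every name in it are our illustrative assumptions, not a formal model Mele himself offers.

    from dataclasses import dataclass

    # A bare transcription of Mele's jointly sufficient conditions for
    # entering self-deception, as listed above. Illustrative only.

    @dataclass
    class Case:
        p_is_false: bool                         # 1. the acquired belief that p is false
        biased_treatment_of_data: bool           # 2. S treats p-relevant data in a motivationally biased way
        bias_nondeviantly_causes_belief: bool    # 3. the biased treatment nondeviantly causes the belief that p
        data_warrant_not_p: bool                 # 4. S's data provide greater warrant for ~p than for p
        believes_significant_chance_not_p: bool  # 5. S consciously believes there is a significant chance that ~p
        misjudges_reasoning_as_proper: bool      # 6. S wrongly regards the "reflective, critical
                                                 #    reasoning" that produced the belief as properly directed

    def enters_self_deception(case: Case) -> bool:
        """Jointly sufficient (not claimed to be necessary) conditions."""
        return all(vars(case).values())

Nothing hangs on the code itself; it simply makes vivid that the account is a conjunction of conditions, each of which can fail independently.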

To support non-intentionalism, some have looked to the purposive mechanisms of deception operating in non-human organisms as a model (Smith 2014), while others have focused on neurobiological mechanisms triggered by affect to explain the peculiarly purposive responses to evidence involved in self-deception (Lauria et al. 2016).

Deflationary Worries and Modifications

Critics contend these deflationary accounts do not adequately distinguish self-deception from other sorts of motivated believing (such as wishful thinking), nor can they explain the peculiar selectivity associated with self-deception, its characteristic ‘tension,’ the way it involves a failure of self-knowledge, or the agency of the self-deceiver.

Self-Deception and Wishful Thinking: What distinguishes wishful thinking from self-deception, according to intentionalists, just is that the latter is intentional while the former is not (Bermúdez 2000). Specifically, wishful thinking does not seem ‘deceptive’ in the requisite sense. Non-intentionalists respond that what distinguishes wishful thinking from self-deception is that self-deceivers recognize evidence against their self-deceptive belief whereas wishful thinkers do not (Bach 1981; Johnston 1988) or that they merely possess, without recognizing it, greater counterevidence than wishful thinkers (Mele 2001). In either case, self-deceivers exert more agency than wishful thinkers over their belief formation. In wishful thinking, motivation triggers a belief formation process in which the person does not play an active, conscious role, “while in self-deception the subject is a willing participant in directing cognition towards the doxastic embrace of the favored proposition” (Scott-Kakures 2002; see also Szabados 1973). While the precise relationship between wishful thinking and self-deception is clearly a matter of debate, non-intentionalists offer plausible ways of distinguishing the two that do not invoke the intention to deceive.

Self-Deception and Selectivity: Another worry—termed the ‘selectivity problem’—originally raised by Bermúdez (1997, 2000) is that deflationary accounts don’t seem to be able to explain the selective nature of self-deception (i.e., why motivation seems only selectively to produce bias). Why is it, such intentionalists ask, that we are not rendered biased in favor of the belief that p in many cases where we have a very strong desire that p (or anxiety or some other motivation related to p)? Intentionalists argue that an intention to get oneself to acquire the belief that p offers the most straightforward answer to this question.

Others, following Mele (2001, 2012, 2020), contend that selectivity may be explained in terms of the agent’s assessment of the relative costs of erroneously believing that p or ~p (see Friedrich (1993) and Trope and Lieberman (1996) on lay hypothesis testing). Essentially, this approach suggests that the minimization of costly errors is the central principle guiding hypothesis testing. So, for example, Josh would be happier believing falsely that the gourmet chocolate he finds so delicious isn’t produced by exploited farmers than falsely believing that it is since he desires that it not be so produced. Because Josh considers the cost of erroneously believing his favorite chocolate is tainted by exploitation to be very high—no other chocolate gives him the same pleasure—it takes a great deal more evidence to convince him that his chocolate is so tainted than it does to convince him otherwise. It’s the low subjective cost of falsely believing the chocolate is not tainted that facilitates Josh’s self-deception. But we can imagine Josh having the same strong desire that his chocolate not be tainted by exploitation and yet having a different assessment of the cost of falsely believing it’s not tainted. Say, for example, he works for an organization promoting fair trade and non-exploitive labor practices among chocolate producers and believes he has an obligation to accurately represent the labor practices of the producer of his favorite chocolate and would, furthermore, lose credibility if the chocolate he himself consumes is tainted by exploitation. In these circumstances, Josh is more sensitive to evidence that his favorite chocolate is tainted—despite his desire that it not be—since the subjective cost of being wrong is higher for him than it was before. It is the relative, subjective costs of falsely believing p and ~p that explain why desire or other motivation biases belief in some circumstances and not others.
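The decision-theoretic core of this error-cost story admits of a toy formulation. In the sketch below, the evidence threshold for accepting a hypothesis rises with the subjective cost of accepting it falsely; the function and all numbers are hypothetical illustrations in the spirit of lay hypothesis-testing models, not a published formula.

    # Toy model of cost-sensitive hypothesis testing, loosely in the spirit of
    # Friedrich (1993) and Trope and Lieberman (1996): accept a hypothesis H
    # once P(H) exceeds a threshold fixed by the relative costs of error.

    def acceptance_threshold(cost_false_accept: float, cost_false_reject: float) -> float:
        """Evidence level (0-1) above which accepting H minimizes expected error cost."""
        return cost_false_accept / (cost_false_accept + cost_false_reject)

    # Josh the chocolate lover: falsely accepting "my chocolate is tainted" costs
    # him his favorite pleasure; falsely rejecting it costs him (subjectively) little.
    print(acceptance_threshold(cost_false_accept=9, cost_false_reject=1))  # 0.9

    # Josh the fair-trade advocate: falsely rejecting the taint hypothesis now
    # costs his credibility, so far weaker evidence of taint suffices.
    print(acceptance_threshold(cost_false_accept=2, cost_false_reject=8))  # 0.2

The desire is equally strong in both scenarios; only the cost assessment, and with it the threshold, changes, which is just the non-intentionalist’s explanation of why motivation biases belief selectively.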

While error-cost accounts offer some explanation of selectivity, one might still complain that even when these conditions are met, self-deception needn’t follow (Bermúdez 2000, 2017). Some non-intentionalists respond that these conditions are as complete an explanation of self-deception as is possible. Given the complexity of the factors affecting belief formation and the lack of a complete account of its etiology, we shouldn’t expect a complete explanation in each case (Funkhouser 2019; Mele 2020). Others argue that attention to the role of emotion in assessing and filtering evidence sheds further light on the process. According to such approaches, affect plays a key role in triggering the conditions under which motivation leads to self-deception (Galeotti 2016a; Lauria et al. 2016; Lauria and Preissmann 2018). Our emotionally loaded appraisal of evidence, in combination with evidential ambiguity and our potential to cope with the threatening reality, helps to explain why motivation tips us toward self-deception. Research on the role of dopamine regulation and negative somatic markers provides some empirical support for this sort of affective model (Lauria et al. 2016; Lauria and Preissmann 2018).

Non-intentionalists also point out that intentionalists have selectivity problems of their own since it isn’t clear why intentions are formed in some cases rather than others (Jurjako 2013) or why some intentions to acquire a self-deceptive belief succeed while others do not (Mele 2001, 2020). See Bermúdez (2017) for a response to these worries.

Self-Deception and Tension: A number of philosophers have complained that deflationary accounts fail to explain certain ‘tensions’ or conflicts supposed to be present in cases of genuine self-deception (Audi 1997; Bach 1997; Nelkin 2002; Funkhouser 2005; Fernández 2013; Jordan 2022). Take Ellen, who says she is doing well in her biology class but systematically avoids looking at the results on her quizzes and tests. She says she doesn’t need to look; she knows she didn’t miss anything. When her teacher tries to catch her after class to discuss her poor grades, she rushes off. Similarly, when she sees an email from her teacher with the subject line “your class performance,” she ignores it. The prospect of looking at the test results, talking with her teacher, or reading her email sends a flash of dread through her and puts a pit in her stomach, even as she projects calm and confidence. Ellen’s behavior and affective responses suggest to these critics that she knows she isn’t doing well in her biology class, despite her avowals to the contrary. Ellen’s case highlights a variety of tensions that may arise in the self-deceived. Philosophers have focused on tensions that arise with respect to evidence (since there is a mismatch between what the evidence warrants and what is believed or avowed); to unconscious doxastic attitudes (since they are at variance with those consciously held); to self-knowledge (since one has a false second-order belief about what one believes); to the various components of belief—behavioral, verbal, emotional, physical—(since what one does, says, feels, or experiences can come apart in a number of ways) (Funkhouser 2019); and to authorship (since one may be aware of authoring her self-deception and presenting her self-deception to herself as not having been authored, i.e., as the truth) (Jordan 2022). These tensions seem to be rooted in the self-deceiver’s awareness of the truth. But, since deflationary accounts deny the dual-belief requirement, specifically, that self-deceivers must hold the true belief that ~p, it’s not clear why self-deceivers would experience tension or display behaviors in conflict with their self-deceptive belief p.

Deflationary theorists counter that suspicion that ~p, thinking there’s a significant possibility that ~p, may suffice to explain these kinds of tensions (Mele 2001, 2009, 2010, 2012). Clearly, a person who self-deceptively believes that p and suspects that ~p may experience tension; moreover, such attitudes combined with a desire that p might account for the behaviors highlighted by critics.

While these attitudes may explain some of the tension in self-deception, a number of critics think they are inadequate to explain deep-conflict cases in which what self-deceivers say about p is seriously at odds with non-verbal behavior that justifies the attribution of the belief that ~p to them (Audi 1997; Patten 2003; Funkhouser 2005; Gendler 2007; Fernández 2013). While some propose these cases are just a type of self-deception that deflationary approaches cannot explain (Funkhouser 2009, 2019; Fernández 2013), others go further, suggesting these cases show that deflationary approaches aren’t accounts of self-deception at all but of self-delusion since deflationary self-deceivers seem single-mindedly to hold the false belief (Audi 2007; Funkhouser 2005; Gendler 2007).

Some defenders of deflation acknowledge that the significant difference between what deflationary accounts have in view (namely, people who do not believe the unwelcome truth that ~p, having a motivation-driven, unwarranted skepticism toward it) and what deep-conflict theorists do (namely, people who know the unwelcome truth that ~p and avoid reflecting on it or encountering evidence for it) warrants questioning whether these phenomena belong to the same psychological kind, but argue that it’s the deep-conflict cases that represent something other than self-deception. Those who unconsciously hold the warranted belief and merely say or pretend they hold the unwarranted one (Audi 1997; Gendler 2007) hardly seem deceived (Lynch 2012). They seem more to resemble what Longeway (1990) calls escapism, the avoidance of thinking about what we believe in order to escape reality (see Lynch 2012).

Funkhouser (2019) suggests that this dispute over the depth of conflict and intensity of tension involved in self-deception is, in part, a dispute over which cases are central, typical, and most interesting. Whether deep-conflict cases constitute a distinct psychological kind or whether they reflect people’s pre-theoretical understanding of self-deception remains unclear, but deflationary approaches seem to be capable of explaining at least some of the behavior such theorists insist justifies attributing an unconscious belief that ~p. Deep-conflict theorists need to explain why we should think when one avows that p, one does not also believe it to some degree, and why the behavior in question cannot be explained by nearby proxies like suspicion or doubt that p (Mele 2010, 2012). Some deflationary theorists contend that a degree of belief model might render deep-conflict cases more sensible (see Shifting Degrees of Belief).

Self-Deception and Self-Knowledge: Several theorists have argued that deflationary approaches miss certain failures of self-knowledge involved in cases of self-deception. Self-deceivers, these critics argue, must hold false beliefs about their own belief formation process (Holton 2001; Scott-Kakures 2002), about what beliefs they actually hold (Funkhouser 2005; Fernández 2013), or both. Holton (2001), for instance, argues that Mele’s conditions for being self-deceived are not sufficient because they do not require self-deceivers to hold false beliefs about themselves. It seems possible for a person to acquire a false belief that p as a consequence of treating data relevant to p in a motivationally biased way when the data available to her provides greater warrant for ~p and still retain accurate self-knowledge. Such a person would readily admit to ignoring certain data because it would undermine a belief she cherishes. She makes no mistakes about herself, her beliefs, or her belief formation process. Such a person, Holton argues, would be willfully ignorant but not self-deceived. If, however, her strategy was sufficiently opaque to her, she would be apt to deny she was ignoring relevant evidence and even affirm her belief was the result of what Scott-Kakures (2002) calls “reflective, critical reasoning.” These erroneous beliefs represent a failure of self-knowledge that seems, according to these critics, essential to self-deception, and they distinguish it from wishful thinking (see above), willful blindness, and other nearby phenomena.

In response to such criticisms, Mele (2009, 2012) has offered the following sufficient condition: S’s acquiring the belief that p is a product of “reflective, critical reasoning,” and S is wrong in regarding that reasoning as properly directed. Some worry that meeting this condition requires a degree of awareness about one’s reasons for believing that would rule out those who do not engage in reflection on their reasons for belief (Fernández 2013) and fails to capture errors about what one believes that seem essential for dealing with deep-conflict cases (Fernández 2013; Funkhouser 2005). Whether Mele’s (2009) proposed condition requires too much sophistication from self-deceivers is debatable, but it suggests a way of accounting for the intuition that self-deceivers fail to know themselves without requiring them to harbor hidden beliefs or intentions.

Self-Deception and Agency: Some worry that deflationary explanations render self-deceivers victims of their own motivations; they don’t seem to be agents with respect to their self-deception but unwitting patients. But, in denying self-deceivers engage in intentional activities for the purpose of deceiving themselves, non-intentionalists needn’t deny self-deceivers engage in any intentional actions. It’s open to them to accept what Lynch terms agentism: “Self-deceivers end up with their unwarranted belief as a result of their own actions motivated by the desire that p” (Lynch 2017). According to agentism, motivation affects belief by means of intentional actions, not simply by triggering biasing mechanisms. Self-deceivers can act with intentions like “find any problem with unwelcome evidence” or “find any p-supporting evidence” with a view to determining whether p is true. These kinds of intentions may explain the agency of self-deceivers and how they could be responsible for self-deception and not merely victims of their own motivations. It isn’t perfectly clear whether Mele-style deflationists are committed to agentism, but even if they are, questions remain about whether such unwittingly deceptive intentional actions are enough to render self-deceivers true agents of their deception since they think that they are engaging in actions to determine the truth, not to deceive themselves.

3.2 Revision of Belief: Adjustment of Attitude or Content

Approaches that focus on revising the notion that self-deception requires holding that p and ~p, the dual-belief requirement implied by traditional intentionalism, either introduce some “doxastic proxy” (Baghramian and Nicholson 2013) to replace one or both beliefs or alter the content of the self-deceiver’s belief in a way that preserves tension without involving outright conflict. These approaches resolve the doxastic paradox either by denying that self-deceivers hold the unwelcome but warranted belief ~p (Talbott 1995; Barnes 1997; Bermúdez 2000; Mele 2001), denying they hold the welcome but unwarranted belief p (Audi 1982, 1988; Funkhouser 2005; Gendler 2007; Fernández 2013; Jordan 2020, 2022), denying they hold either belief p or ~p (Archer 2013; Porcher 2012), or contending they have shifting degrees of belief regarding p (Chan and Rowbottom 2019). Lauria et al. (2016) argue for an integrative approach that accommodates all these products of self-deception on the basis of empirical research on the role affect plays in assessments of evidence.

Denying the Unwelcome Belief: Both intentionalists and non-intentionalists may question whether self-deceivers must hold the unwelcome but warranted belief. For intentionalists, what’s necessary is some intention to form the target belief p, and this is compatible with having no views at all regarding p (lacking any evidence for or against p) or believing p is merely possible (possessing evidence too weak to warrant belief that p or ~p) (Bermúdez 2000). Moreover, rejecting this requirement relieves the pressure to introduce deep divisions (Talbott 1995). For non-intentionalists, the focus is on how the false belief is acquired, not on whether a person believes its contradictory. For them, it suffices that self-deceivers acquire the unwarranted false belief that p in a motivated way (Mele 2001). The selectivity and tension typical of self-deception can be explained without attributing ~p since nearby proxies like suspicion that ~p can do the same work. Citing Rorty’s (1988) case of Dr. Androvna, a cancer specialist who believes she does not have cancer but who draws up a detailed will and writes uncharacteristically effusive letters suggesting her impending departure, Mele (2009) points out that Androvna’s behavior might easily be explained by her holding that there is a significant chance she has cancer. And this belief is compatible with Androvna’s self-deceptive belief that she does not, in fact, have cancer.

Denying the Welcome Belief: Another strand of revision of belief approaches focuses on the welcome belief that p, proposing alternatives to this belief that function in ways that explain what self-deceivers typically say and do. Self-deceivers display ambiguous behavior that not only falls short of what one would expect from a person who believes that p but seems to justify the attribution of the belief that ~p. For instance, Androvna’s letter-writing and will-preparation might be taken as reasons for attributing to her the belief that she won’t recover, despite her verbal assertions to the contrary. To explain the special pattern of behavior displayed by self-deceivers, some of these theorists propose proxies for full belief, such as sincere avowal (Audi 1982, 1988); pretense (Gendler 2007); an intermediate state between belief and desire, or ‘besire’ (Egan 2009); some other less-than-full belief state akin to imaginations or fantasies (Lazar 1999); or simply ‘misrepresentation’ (Jordan 2020, 2022). Such states may guide and motivate action in many, though not all, circumstances while being relatively less sensitive to evidence than beliefs.

Others substitute a higher-order belief to explain the behavior of self-deceivers as another kind of proxy for the belief that p (Funkhouser 2005; Fernández 2013). On such approaches, self-deceivers don’t believe p; they believe that they believe that p, and this false second-order belief—“I think that I believe that p”—underlies and underwrites their sincere avowal that p as well as their ability to entertain p as true. Self-deception, then, is a kind of failure of self-knowledge, a misapprehension or misattribution of one’s own beliefs. By shifting the content of the self-deceptive belief to the second-order, this approach avoids the doxastic paradox and explains the characteristic ‘tension’ or ‘conflict’ attributed to self-deceivers in terms of the disharmony between the first-order and second-order beliefs, the latter explaining their avowed belief and the former their behavior that goes against that avowed belief (Funkhouser 2005; Fernández 2013).

Denying both the Welcome Belief and the Unwelcome Belief: Given the variety of proxies that have been offered for both the welcome and the unwelcome belief, it should not be surprising that some argue that self-deception can be explained without attributing either belief to self-deceivers, a position Archer (2013) refers to as ‘nondoxasticism.’ Porcher (2012) recommends against attributing beliefs to self-deceivers on the grounds that what they believe is indeterminate since they are, as Schwitzgebel (2001, 2010) contends, “in-between-believing,” neither fully believing that p nor fully not believing that p. For Porcher (2012), self-deceivers show the limits of the folk psychological concepts of belief and suggest the need to develop a dispositional account of self-deception that focuses on the ways that self-deceivers’ dispositions deviate from those of stereotypical full belief. Funkhouser (2009) also points to the limits of folk psychological concepts and suggests that in cases involving deep conflict between behavior and avowal, “the self-deceived produce a confused belief-like condition so that it is genuinely indeterminate what they believe regarding p.” Archer (2013), however, rejects the claim that the belief is indeterminate or that folk psychological concepts are inadequate, arguing that folk psychology offers a wide variety of non-doxastic attitudes such as ‘hope,’ ‘suspicion,’ ‘anxiety,’ and the like that are more than sufficient to explain paradigm cases of self-deception without adverting to belief.

Shifting Degrees of Belief: Some contend that attention to shifting degrees of belief offers a better explanation of paradigm cases of self-deception—especially the behavioral tensions—and avoids the static paradox (Chan and Rowbottom 2019). In their view, many so-called non-doxastic attitudes entail some degree of belief regarding p. Shifts in these beliefs are triggered by and track shifts in information and non-doxastic propositional attitudes such as desire, fear, anxiety, and anger. For instance, a husband might initially have a high degree of belief in his spouse’s fidelity that plummets when he encounters threatening evidence. His low confidence reveals afresh how much he wants her fidelity and prompts him to despair. These non-doxastic attitudes trigger another shift by focusing his attention on evidence of his spouse’s love and fidelity, leaving him with a higher degree of confidence than his available evidence warrants. On this shifting belief account, the self-deceiver holds both p and ~p at varying levels of confidence that are always greater than zero (example due to Chan and Rowbottom 2019).
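A toy rendering of this trajectory may help. In the sketch below, credence is repeatedly pulled toward whatever evidence the agent currently attends to; the update rule and every number are our illustrative assumptions, not Chan and Rowbottom’s own formalism.

    # Toy illustration of shifting degrees of belief in the jealous-husband case.
    # Credence in "my spouse is faithful" is pulled toward the level supported by
    # the evidence currently attended to; desire and despair steer that attention.

    def attend_and_update(credence: float, attended_evidence: float, pull: float = 0.8) -> float:
        """Move credence toward the evidence currently in view (all values in 0-1)."""
        return (1 - pull) * credence + pull * attended_evidence

    credence = 0.90                               # initial high confidence in fidelity
    credence = attend_and_update(credence, 0.10)  # threatening evidence grabs attention: ~0.26
    credence = attend_and_update(credence, 0.95)  # desire refocuses him on reassuring evidence
    print(round(credence, 2))                     # ~0.81, above what the total evidence warrants

Throughout, the credence assigned to fidelity stays strictly between 0 and 1, matching the account’s claim that both p and ~p retain some nonzero degree of belief at every stage.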

While revision of belief approaches suggest a number of non-paradoxical ways of thinking about self-deception, some worry that those approaches denying that self-deceivers hold the welcome but unwarranted belief that p eliminate what is central to the notion of self-deception, namely, deception (see, e.g., Lynch 2012; Mele 2010). Whatever the verdict, these revision of belief approaches suggest that our way of characterizing belief may not be fine-grained enough to account for the subtle attitudes or meta-attitudes that self-deceivers bear on the proposition in question. Taken together, these approaches make it clear that the question regarding what self-deceivers believe is by no means resolved.

4. Twisted or Negative Self-Deception

‘Twisted’ or negative self-deception differs from ‘straight’ or positive self-deception because it involves the acquisition of an unwelcome as opposed to a welcome belief (Mele 1999, 2001; Funkhouser 2019). Roughly, the negatively self-deceived have a belief that is neither warranted nor wanted in consequence of some desire, emotion, or combination of both. For instance, a jealous husband, uncertain about his wife’s fidelity, comes to believe she’s having an affair on scant and ambiguous evidence, something he certainly doesn’t want to be the case. Intentionalists may see little problem here, at least in terms of offering a unified account, since both positive and negative self-deceivers intend to produce the belief in question, and nothing about intentional deception precludes getting the victim to believe something unpleasant or undesirable. That said, intentionalists typically see the intention to believe p as serving a desire to believe p (Davidson 1985; Talbott 1995; Bermúdez 2000, 2017), so they still face the difficult task of explaining why negative self-deceivers intend to acquire a belief they don’t want (Lazar 1999; Echano 2017). Non-intentionalists have a steeper hill to climb since it’s difficult to see how someone like the anxious husband could be motivated to form a belief that he doesn’t at all desire. The challenge for the non-intentionalist, then, is to supply a motivational story that explains the acquisition of such an unwelcome belief. Ideally, the aim is to provide a unified explanation for both the positive and negative varieties with a view to theoretical simplicity. Attempts to provide such an explanation now constitute a significant and growing body of literature that centers on the nature of the motivations involved and the precise role affect plays in the process.

Since the desire for the welcome belief cannot serve as the motive for acquiring the unwelcome one, non-intentionalists have sought some ulterior motive (Pears 1984), such as the reduction of anxiety (Barnes 1997), the avoidance of costly errors (Mele 1999, 2001), or denial that the motivation is oriented toward the state of the world at all (Nelkin 2002).

The jealous husband might be motivated to believe his wife is unfaithful because it supports the vigilance needed to eliminate all rival lovers and preserve the relationship—both of which he desires (Pears 1984). Similarly, I might be anxious about my house burning down and come to hold the unwelcome belief that I’ve left the burner on. Ultimately, acquiring the unwelcome belief reduces my anxiety because it prompts me to double-check the stove (Barnes 1997). Some are skeptical that identifying such ulterior desires or anxieties is always possible or necessary (Lazar 1999; Mele 2001). Many, following Mele (2001, 2003), see a simpler explanation of negative cases in terms of the way motivation, broadly speaking, affects the agent’s assessment of the relative costs of error. The jealous husband—not wanting to be made a fool—sees the cost of falsely believing his spouse is faithful as high, while the person anxious about their house burning down sees the cost of falsely believing the burner is on as low. Factors such as what the agent cares about and what she can do about the situation affect these error cost assessments and may explain, in part, the conditions under which negative self-deception occurs.

Since negative self-deception often involves emotions—fear, anxiety, jealousy, rage—a good deal of attention has been given to how this component is connected to the motivation driving negative self-deception. Some, like Mele (2001, 2003), acknowledge the possibility that emotion alone or in combination with desire is fundamental to what motivates bias in these cases but remain reluctant to say such affective motives are essential or entirely distinguishable from the desires involved. Others worry that leaving motivation so ambiguous threatens the claim to provide a unified explanation of self-deception (Galeotti 2016a). Consequently, some have sought a more central role for affect, seeing emotion as triggering or priming motivationally biased cognition (Scott-Kakures 2000, 2002; Echano 2017) or as operating as a kind of evidential filter in a pre-attentive—non-epistemic—appraisal of threatening evidence (Galeotti 2016a). On this latter affective-filter view, our emotions may lead us to see evidence regarding a situation we consider significant to our well-being as ambiguous and therefore potentially distressing, especially when we deem our ability to deal with the unwelcome situation as limited. Depending on how strong our affective response to the distressing evidence is, we may end up discounting evidence for the situation we want, listening instead to our negative emotions (anxiety, fear, sorrow, etc.), with the result that we become negatively self-deceived (see Lauria and Preissmann 2018). Research on the role of dopamine regulation and negative somatic markers provides some neurobiological evidence in support of this sort of affective-filter model and its potential to offer a unified account of positive and negative self-deception (Lauria et al. 2016; Lauria and Preissmann 2018).

While the philosophers considered so far take the relevant motives to be about the state of the world, some hold that the relevant motives have to do with self-deceivers’ states of mind. If this latter desire-to-believe approach is taken, then there may be just one general motive for both kinds of self-deception. Nelkin (2002, 2012), for instance, argues that the motivation for self-deceptive belief formation should be restricted to a desire to believe that p and that this is compatible with not wanting p to be true. I might want to hold the belief that I have left the stove burner on but not want it to be the case that I have actually left it on. The belief is desirable in this instance because holding it ensures it won’t be true. What unifies cases of self-deception—both twisted and straight—is that the self-deceptive belief is motivated by a desire to believe that p; what distinguishes them is that twisted self-deceivers do not want p to be the case, while straight self-deceivers do. Some, like Mele (2009), argue that such an approach is unnecessarily restrictive since a variety of other motives oriented toward the state of the world might lead one to acquire the unwelcome belief; for example, even just wanting not to be wrong about the welcome belief (see Nelkin 2012 for a response). Others, like Galeotti (2016a), worry that this desire-to-believe account turns self-deceivers’ epistemic confusion into something bordering on incoherence since it seems to imply self-deceivers want to believe p regardless of the state of the world, and such a desire seems absurd even at an unconscious level.

Whether the motive for self-deception aims at the state of the world or the state of the self-deceiver’s mind, the role of affect in the process remains a significant question that further research in neurobiology may shed light upon. Though long underappreciated, attention to the role of affect is growing and will no doubt guide future theorizing, especially on negative self-deception.

5. Morality and Self-Deception

Even though much of the contemporary philosophical discussion of self-deception has focused on epistemology, philosophical psychology, and philosophy of mind, historically, the morality of self-deception has been the central focus of discussion. Self-deception has been thought to be morally wrong or, at least, morally dangerous insofar as it represents a threat to moral self-knowledge, a cover for immoral activity, or a violation of authenticity. Some thinkers, however, belonging to what Martin (1986) calls ‘the vital lie tradition,’ have held that self-deception can, in some instances, be salutary, protecting us from truths that would make life unlivable (e.g., Rorty 1972, 1994). There are two major questions regarding the morality of self-deception: First, can a person be held morally responsible for self-deception, and if so, under what conditions? Second, is there anything morally problematic with self-deception, and if so, what and under what circumstances? The answers to these questions are clearly intertwined. If self-deceivers cannot be held responsible for self-deception, then their responsibility for whatever morally objectionable consequences self-deception might have will be mitigated if not eliminated. Nevertheless, self-deception might be morally significant even if one cannot be taxed for entering it. To be ignorant of one’s moral self, as Socrates saw, may represent a great obstacle to a life well lived, whether or not one is at fault for such ignorance.

Whether self-deceivers can be held responsible for their self-deception is largely a question of whether they have the requisite control over the acquisition and maintenance of their self-deceptive belief. In general, intentionalists hold that self-deceivers are responsible since they intend to acquire the self-deceptive belief, usually recognizing the evidence to the contrary. Even when the intention is indirect, such as when one intentionally seeks evidence in favor of p or avoids collecting or examining evidence to the contrary, self-deceivers seem to intentionally flout their own normal standards for gathering and evaluating evidence. So, minimally, they are responsible for such actions and omissions.

Initially, non-intentionalist approaches may seem to remove the agent from responsibility by rendering the process by which she is self-deceived subintentional. If my anxiety, fear, or desire triggers a process that ineluctably leads me to hold the self-deceptive belief, it seems I cannot be held responsible for holding that belief. How could I be held responsible for processes that operate without my knowledge and which are set in motion without my intention? Most non-intentionalist accounts, however, do allow for the possibility that self-deceivers are responsible for individual episodes of self-deception or for the vices of cowardice and lack of self-control from which they spring. To be morally responsible in the sense of being an appropriate target for praise or blame requires, at least, that agents have control over the actions in question. Mele (2001), for example, argues that many sources of bias are controllable and that self-deceivers can recognize and resist the influence of emotion and desire on their belief acquisition and retention, particularly in matters they deem to be important—morally or otherwise. The extent of this control, however, is an empirical question. Nelkin (2012) argues that since Mele’s account leaves the content of motivation driving the bias unrestricted, the mechanism in question is so complex that “it seems unreasonable to expect the self-deceiver to guard against” its operation.

Other non-intentionalists take self-deceivers to be responsible for certain epistemic vices, such as cowardice in the face of fear or anxiety and lack of self-control with respect to the biasing influences of desire and emotion. Thus, Barnes (1997) argues that self-deceivers “can, with effort, in some circumstances, resist their biases” (83) and “can be criticized for failing to take steps to prevent themselves from being biased; they can be criticized for lacking courage in situations where having courage is neither superhumanly difficult nor costly” (175). Whether self-deception is due to a character defect or not, ascriptions of responsibility depend upon whether the self-deceiver has control over the biasing effects of her desires and emotions.

Some question whether self-deceivers do have such control. For instance, Levy (2004) has argued that deflationary accounts of self-deception that deny the contradictory belief requirement should not suppose that self-deceivers are typically responsible since it is rarely the case that self-deceivers possess the requisite awareness of the biasing mechanisms operating to produce their self-deceptive belief. Lacking such awareness, self-deceivers do not appear to know when or on which beliefs such mechanisms operate, rendering them unable to curb the effects of these mechanisms, even when they operate to form false beliefs about morally significant matters. Lacking the control necessary for moral responsibility in individual episodes of self-deception, self-deceivers seem also to lack control over being the sort of person disposed to self-deception.

Non-intentionalists may respond by claiming that self-deceivers often are aware of the potentially biasing effects their desires and emotions might have and can exercise control over them (DeWeese-Boyd 2007). They might also challenge the idea that self-deceivers must be aware in the ways Levy suggests. One well-known account of control, employed by Levy, holds that a person is responsible just in case she acts on a mechanism that is moderately responsive to reasons (including moral reasons), such that were she to possess such reasons, this same mechanism would act upon those reasons in at least one possible world (Fischer and Ravizza 1998). Guidance control, in this sense, requires that the mechanism in question be capable of recognizing and responding to moral and non-moral reasons sufficient for acting otherwise. Nelkin (2011, 2012), however, argues that reasons-responsiveness should be seen as applying primarily to the agent, not the mechanism, requiring only that the agent have the capacity to exercise reason in the situation under scrutiny. The question isn’t whether the biasing mechanism itself is reasons-responsive but whether the agent governing its operation is—that is, whether self-deceivers typically could recognize and respond to moral and non-moral reasons to resist the influence of their desires and emotions and instead exercise special scrutiny of the belief in question. According to Nelkin (2012), expecting self-deceivers to have such a capacity is more reasonable if we understand the desire driving their bias as a desire to believe that p, since awareness of this sort of desire would make it easier to guard against its influence on the process of determining whether p. Van Loon (2018) points out that discussions of moral responsibility and reasons-responsiveness have focused on actions and omissions that indirectly affect belief formation, when it is more appropriate to focus on epistemic reasons-responsiveness: attitudinal control, not action control, is what is at issue in self-deception. Drawing on McHugh’s (2013, 2014, 2017) account of attitudinal control, Van Loon argues that self-deceivers on Mele-style deflationary accounts are responsible for their self-deceptive beliefs because they recognize and react to evidence against their self-deceptive belief across a wide range of counterfactual scenarios, even though their recognition of this evidence does not alter their belief and their reaction to such evidence leads them viciously to hold the self-deceptive belief.

Galeotti (2016b) rejects the idea that control is the best way to think about responsibility in cases of self-deception since self-deceivers on deflationary approaches seem both confused and relatively powerless over the process. Instead, following a modified version of Sher’s (2009) account of responsibility, she contends that self-deceivers typically have, but fail to recognize, evidence that their acts related to belief formation are wrong or foolish and so fall below some applicable standard (epistemic, moral, etc.). The self-deceiver is under motivational pressure, not incapacitated, and on exiting self-deception, recognizes “the faultiness of her condition and feels regret and shame at having fooled herself” (Galeotti 2016b). This ex-post reasons-responsiveness suggests self-deceivers are responsible in Sher’s sense even if their self-deception is not intentional.

In view of these various ways of cashing out responsibility, it is plausible that self-deceivers can be morally responsible for their self-deception even on deflationary approaches, and it is certainly not obvious that they could not be.

Insofar as it seems plausible that, in some cases, self-deceivers are apt targets for censure, what prompts this attitude? Take the case of a mother who deceives herself into believing her husband is not abusing their daughter because she can’t bear the thought that he is a moral monster (Barnes 1997). Why do we blame her? Here we confront the nexus between moral responsibility for self-deception and the morality of self-deception. Understanding what obligations may be involved in cases of this sort will help to clarify the circumstances in which ascriptions of moral responsibility are appropriate. While some instances of self-deception seem morally innocuous, and others may even be thought salutary in various ways (Rorty 1994), most theorists have thought there to be something morally objectionable about self-deception or its consequences in many cases. Self-deception has been considered objectionable because it facilitates harm to others (Linehan 1982; Gibson 2020; Clifford 1877) and to oneself, undermines autonomy (Darwall 1988; Baron 1988), corrupts conscience (Butler 1726), violates authenticity (Sartre 1943), manifests a vicious lack of courage and self-control that undermines the capacity for compassionate action (Jenni 2003), violates an epistemic duty to properly ground self-ascriptions (Fernández 2013), violates a general duty to form beliefs that “conform to the available evidence” (Nelkin 2012), or violates a general duty to respect our own values (MacKenzie 2022).

Linehan (1982) argues that we have an obligation to scrutinize the beliefs that guide our actions, an obligation proportionate to the harm to others such actions might involve. When self-deceivers use their self-deceptive beliefs to induce ignorance of their moral obligations, of the particular circumstances, of the likely consequences of their actions, or of the nature of their own engagements, they may be culpably negligent with respect to their obligation to know these things (Jenni 2003; Nelkin 2012). Self-deception, accordingly, undermines or erodes agency by reducing our capacity for self-scrutiny and change (Baron 1988). If I am self-deceived about actions or practices that harm others or myself, my abilities to take responsibility and change are also severely restricted.

Joseph Butler (1726), in his well-known sermon “On Self-Deceit,” emphasizes the ways in which self-deception about one’s moral character and conduct—‘self-ignorance’ driven by inordinate ‘self-love’—not only facilitates vicious actions but hinders the agent’s ability to change. Such ignorance, claims Butler, “undermines the whole principle of good … and corrupts conscience, which is the guide of life” (1726). Holton (2022) explores the way our motivation to see ourselves as morally good may play a role in this lack of moral self-knowledge. Existentialist philosophers such as Kierkegaard and Sartre, in very different ways, view self-deception as a threat to ‘authenticity’ insofar as self-deceivers fail to take responsibility for themselves and their engagements in the past, present, and future. By alienating us from our own principles, self-deception may also threaten moral integrity (Jenni 2003). MacKenzie (2022) might be seen as capturing precisely what’s wrong with this sort of inauthenticity when she contends that we have a duty to properly respect our values, even non-moral ones. Since self-deception is always about something we value in some way, it represents a failure to properly respect ourselves as valuers. Others note that self-deception also manifests a certain weakness of character that disposes us to react to fear, anxiety, or the desire for pleasure by biasing our belief acquisition and retention so that they serve these emotions and desires rather than accuracy (Butler 1726; Clifford 1877). Such epistemic cowardice (Barnes 1997; Jenni 2003) and lack of self-control may inhibit the ability of self-deceivers to stand by or apply moral principles they hold, by biasing their beliefs regarding particular circumstances, consequences, or engagements or by obscuring the principles themselves. Gibson (2020), following Clifford (1877), contends that self-deception increases the risk of harm to the self and others and cultivates epistemic vices like credulity that may have devastating social ramifications. In all these ways and a myriad of others, philosophers have found some self-deception objectionable in itself or for its consequences, not only for our ability to shape our lives but also for the potential harm it may cause to ourselves and others.

Evaluating self-deception and its consequences for ourselves and others is a difficult task. It requires, among other things: determining the degree of control self-deceivers have; what the self-deception is about (Is it important morally or otherwise?); what ends the self-deception serves (Does it serve mental health or act as a cover for moral wrongdoing?); how entrenched it is (Is it episodic or habitual?); and whether it is escapable (What means of correction are available to the self-deceiver?). As Nelkin (2012) contends, whether and to what degree self-deceivers are culpably negligent will ultimately need to be determined on a case-by-case basis in light of answers to such questions about the stakes at play and the difficulty involved. Others, like MacKenzie (2022), hold that every case of self-deception is a violation of a general duty to respect our own values, though some cases are more egregious than others.

If self-deception is morally objectionable for any of these reasons, we ought to avoid it. But, one might reasonably ask how that is possible given the subterranean ways self-deception seems to operate. Answering this question is tricky since strategies will vary with the analyses of self-deception and our responsibility for it. Nevertheless, two broad approaches seem to apply to most accounts, namely, the cultivation of one’s epistemic virtues and the cultivation of one’s epistemic community (Galeotti 2016b). One might avoid the self-deceptive effects of motivation by cultivating virtues like impartiality, vigilance, conscientiousness, and resistance to the influence of emotion, desire, and the like. Additionally, one might cultivate an epistemic community that holds one accountable and guides one away from self-deception (Rorty 1994; Galeotti 2016b, 2018). By binding ourselves to communities we have authorized to referee our belief formation in this way, we protect ourselves from potential lapses in epistemic virtue. These kinds of strategies might indirectly affect our susceptibility to self-deception and offer some hope of avoiding it.

6. Origins of Self-Deception

Quite aside from the doxastic, strategic, and moral puzzles self-deception raises, there is the evolutionary puzzle of its origin. Why do human beings have this capacity in the first place? Why would natural selection allow a capacity to survive that undermines the accurate representation of reality, especially when inaccuracies about individual ability or likely risk can lead to catastrophic errors?

Many argue that self-deceptively inflated views of ourselves, our abilities, our prospects, or our control—so-called ‘positive illusions’—confer direct benefits in terms of psychological well-being, physical health, and social advancement that serve fitness (Taylor and Brown 1994; Taylor 1989; McKay and Dennett 2009). Just because ‘positive illusions’ make us ‘feel good,’ of course, it does not follow that they are adaptive. From an evolutionary perspective, whether an organism ‘feels good’ or is ‘happy’ is not significant unless it enhances survival and reproduction. McKay and Dennett (2009) argue that positive illusions are not only tolerable; evolutionarily speaking, they contribute to fitness directly. Overly positive beliefs about our abilities or chances for success appear to make us more apt to exceed our abilities and achieve success than more accurate beliefs would (Taylor and Brown 1994; Bandura 1989). According to Johnson and Fowler (2011), overconfidence is “advantageous, because it encourages individuals to claim resources they could not otherwise win if it came to a conflict (stronger but cautious rivals will sometimes fail to make a claim), and it keeps them from walking away from conflicts they would surely win.” Inflated attitudes regarding the personal qualities and capacities of one’s partners and children would also seem to enhance fitness by facilitating the thriving of offspring (McKay and Dennett 2009).

Alternatively, some argue that self-deception evolved to facilitate interpersonal deception by eliminating the cues and cognitive load that consciously lying produces and by mitigating retaliation should the deceit become evident (von Hippel and Trivers 2011; Trivers 2011, 2000, 1991). On this view, the real gains associated with ‘positive illusions’ and other self-deceptions are byproducts that serve this greater evolutionary end by enhancing self-deceivers’ ability to deceive. Von Hippel and Trivers (2011) contend that “by deceiving themselves about their own positive qualities and the negative qualities of others, people are able to display greater confidence than they might otherwise feel, thereby enabling them to advance socially and materially.” Critics have pointed to data suggesting high self-deceivers are deemed less trustworthy than low self-deceivers (McKay and Dennett 2011). Others have complained that there is little data to support this hypothesis (Dunning 2011; Van Leeuwen 2007a, 2013a), and what data there is shows us to be poor lie-detectors (Funkhouser 2019; Vrij 2011). Some challenge this theory by noting that a simple disregard for the truth would serve as well as self-deception and would have the advantage of retaining true representations (McKay and Prelec 2011), or that self-deceivers are often the only ones deceived (Van Leeuwen 2007a; Khalil 2011). Van Leeuwen (2013a) raises the concern that the wide variety of phenomena identified by this theory as self-deception renders the category so broad that it is difficult to tell whether it is a unified phenomenon traceable to particular mechanisms that could plausibly be sensitive to selection pressures. Funkhouser (2019) worries that the unconscious retention of the truth that von Hippel and Trivers (2011) propose would generate tells of its own and that the psychological complexity of this explanation is unnecessary if the goal is to deceive others (which is itself contentious), since that goal would be easier to achieve through self-delusion. So, von Hippel and Trivers’ (2011) theory may explain self-delusion but not cases of self-deception marked by deep conflict (Funkhouser 2017b).

In view of these shortcomings, Van Leeuwen (2007a) argues that the capacity for self-deception is a spandrel—a byproduct of other aspects of our cognitive architecture—not an adaptation in the strong sense of being positively selected. While Funkhouser (2017b) agrees that the basic cognitive architecture that allows motivation to influence belief formation—as well as the specific tools used to form or maintain biased belief—was not selected for the sake of self-deception, he holds that it nevertheless makes sense to say, for at least some contents, that self-deception is adaptive.

Whether it is an adaptation or a spandrel, it is possible this capacity has nevertheless been retained as a consequence of its fitness value. Lopez and Fuxjager (2012) argue that the broad research on the so-called “winner effect”—the increased probability of achieving victory in social or physical conflicts following prior victories—lends support to the idea that self-deception is at least weakly adaptive, since self-deception in the form of positive illusions, like past wins, confers a fitness advantage. Lamba and Nityananda (2014) test the theory that the self-deceived are better at deceiving others—specifically, whether overconfident individuals are overrated by others and underconfident individuals underrated. In their study, students in tutorials were asked to predict their own performance on the next assignment as well as that of each of their peers in the tutorial, in terms of both absolute grade and relative rank. Comparing these predictions with the actual grades given on the assignment revealed a strong positive relationship between self-deception and deception: those who self-deceptively rated themselves higher were rated higher by their peers as well. These findings lend suggestive support to the claim that self-deception facilitates the deception of others. While these studies certainly do not supply all the data necessary to support the theory that the propensity to self-deception is an adaptation, they do suggest ways to test these evolutionary hypotheses by focusing on specific phenomena.

Whether or not the psychological and social benefits identified by these theories explain the evolutionary origins of the capacity for self-deceit, they may well shed light on its prevalence and persistence, as well as point to ways to identify contexts in which this tendency presents high collective risk (Lamba and Nityananda 2014).

7. Collective Self-Deception

Collective self-deception has received scant direct philosophical attention as compared with its individual counterpart. Collective self-deception might refer simply to a group of similarly self-deceived individuals or to a group-entity (such as a corporation, committee, jury, or the like) that is self-deceived. These alternatives reflect two basic perspectives that social epistemologists have taken on ascriptions of propositional attitudes to collectives. On the one hand, such attributions might be taken summatively as simply an indirect way of attributing those states to members of the collective (Quinton 1975/1976). This summative understanding, then, considers attitudes attributed to groups to be nothing more than metaphors expressing the sum of the attitudes held by their members. To say that students think tuition is too high is just a way of saying that most students think so. On the other hand, such attributions might be understood non-summatively as applying to collective entities, themselves ontologically distinct from the members upon which they depend. These so-called ‘plural subjects’ (Gilbert 1989, 1994, 2005) or ‘social integrates’ (Pettit 2003), while supervening upon the individuals comprising them, may well express attitudes that diverge from those of their individual members. For instance, saying NASA believed the O-rings on the space shuttle’s booster rockets to be safe need not imply that most or all the members of this organization personally held this belief, only that the institution itself did. The non-summative understanding, then, considers collectives to be, like persons, apt targets for attributions of propositional attitudes and potentially of moral and epistemic censure as well. Following this distinction, collective self-deception may be understood in either a summative or non-summative sense.

In the summative sense, collective self-deception refers to a self-deceptive belief shared by a group of individuals, each of whom comes to hold the self-deceptive belief for similar reasons and by similar means, varying according to the account of self-deception followed. We might call this self-deception across a collective. In the non-summative sense, the subject of collective self-deception is the collective itself, not simply the individuals comprising it. The following sections offer an overview of these forms of collective self-deception, noting the significant challenges posed by each.

Understood summatively, we might define collective self-deception as the holding of a false belief in the face of evidence to the contrary by a group of people as a result of shared desires, emotions, or intentions (depending upon the account of self-deception) favoring that belief. Collective self-deception is distinct from other forms of collective false belief—such as might result from deception or lack of evidence—insofar as the false belief issues from the agents’ own self-deceptive mechanisms (however these are construed), not from a lack of evidence or the presence of misinformation. Accordingly, the individuals constituting the group would not hold the false belief if their vision weren’t distorted by their attitudes (desire, anxiety, fear, or the like) toward the belief. What distinguishes collective self-deception from solitary self-deception is just its social context; namely, that it occurs within a group that shares both the attitudes bringing about the false belief and the false belief itself.

Merely sharing desires, emotions, or intentions favoring a belief with a group does not entail that the self-deception is properly social, since these individuals may well self-deceive regardless of the fact that their motivations are shared with others (Dings 2017; Funkhouser 2019); they are just individually self-deceiving in parallel. What makes collective self-deception social, according to Dings (2017), is that others are a means used in each individual’s self-deception. So, when a person situates herself in a group of like-minded people in response to an encounter with new and threatening evidence, her self-deception becomes social. Self-deception also becomes social in Dings’ (2017) view when a person influences others to make them like-minded with regard to her preferred belief, using their behavior to reinforce her self-deception. Within highly homogeneous social groups, however, it may be difficult to tell who is using the group instrumentally in these ways, especially when that use is unwitting. Moreover, one may not need to seek out such a group of like-minded people if they already comprise one’s community. In this case, those people may become instrumental to one’s self-deception simply by dint of being there to provide insulation from threatening evidence and support for one’s preferred belief. In any case, this sort of self-deception is both easier to foster and more difficult to escape, being abetted by the self-deceptive efforts of others within the group.

Virtually all self-deception has a social component, being wittingly or unwittingly supported by one’s associates (see Ruddick 1988). In the case of collective self-deception, however, the social dimension comes to the fore since each member of the collective unwittingly helps to sustain the self-deceptive belief of the others in the group. For example, my cancer-stricken friend might self-deceptively believe her prognosis to be quite good. Faced with the fearful prospect of death, she does not form accurate beliefs regarding the probability of her full recovery, attending only to evidence supporting full recovery and discounting or ignoring altogether the ample evidence to the contrary. Caring for her as I do, I share many of the anxieties, fears, and desires that sustain my friend’s self-deceptive belief, and as a consequence, I form the same self-deceptive belief via the same mechanisms. In such a case, I unwittingly support my friend’s self-deceptive belief, and she mine—our self-deceptions are mutually reinforcing. We are collectively or mutually self-deceived, albeit on a very small scale. Ruddick (1988) calls this ‘joint self-deception,’ and it is properly social just in case each person is instrumental in the formation of the self-deceptive belief in the other (Dings 2017).

On a larger scale, sharing common attitudes, large segments of a society might deceive themselves together. For example, we share a number of self-deceptive beliefs regarding our consumption patterns. Many of the goods we consume are produced by people enduring labor conditions we do not find acceptable and in ways that we recognize are environmentally destructive and likely unsustainable. Despite our being at least generally aware of these social and environmental ramifications of our consumptive practices, we hold the overly optimistic beliefs that the world will be fine, that its peril is overstated, that the suffering caused by the exploitive and ecologically degrading practices is overblown, that our own consumption habits are unconnected to these sufferings, and even that our minimal efforts at conscientious consumption are an adequate remedy (see Goleman 1989). When self-deceptive beliefs such as these are held collectively, they become entrenched, and their consequences, good or bad, are magnified (Surbey 2004).

The collective entrenches self-deceptive beliefs by providing positive reinforcement through others sharing the same false belief as well as by protecting its members from evidence that would destabilize the target belief. There are, however, limits to how entrenched such beliefs can become and remain. Social support cannot be the sole or primary cause of the self-deceptive belief, for then the belief would simply be the result of unwitting interpersonal deception and not the deviant belief formation process that characterizes self-deception. If the environment becomes so epistemically contaminated as to make counter-evidence inaccessible to the agent, then we have a case of simple false belief, not self-deception. Thus, even within a collective, a person is self-deceived just in case her own motivations skew the belief formation process that results in her holding the false belief. But, to bar this from being a simple case of solitary self-deception, others must be instrumental to her belief formation process such that if they were not part of that process, she would not be self-deceived (Dings 2017). For instance, I might be motivated to believe that climate change is not a serious problem and form that false belief as a consequence. In such a case, I’m not socially self-deceived, even if virtually everyone I know shares a similar motivation and belief. But, say I encounter distressing evidence in my environmental science class that I can’t shake on my own. I may seek to surround myself with like-minded people, thereby protecting myself from further distressing evidence and providing myself with reassuring evidence. Now, my self-deception is social, and this social component drives and reinforces my own motivations to self-deceive.

Relative to solitary self-deception, the collective variety presents greater external obstacles to avoiding or escaping self-deception and is, for this reason, more entrenched. If the various proposed psychological mechanisms of self-deception pose an internal challenge to the self-deceiver’s power to control her belief formation, then these social factors pose an external challenge to the self-deceiver’s control. Determining how superable this challenge is will affect our assessment of individual responsibility for self-deception as well as the prospects of unassisted escape from it.

Collective self-deception can also be understood from the perspective of the collective itself in a non-summative sense. Though there are varying accounts of group belief, generally speaking, a group can be said to believe, desire, value, or the like just in case its members “jointly commit” to these things as a body (Gilbert 2005). A corporate board, for instance, might be jointly committed as a body to believe, value, and strive for whatever the CEO recommends. Such commitment need not entail that each individual board member personally endorses such beliefs, values, or goals, only that they do so as members of the board (Gilbert 2005). While philosophically precise accounts of non-summative self-deception remain largely unarticulated—an exception is Galeotti’s (2018) detailed analysis of how collective self-deception occurs in the context of politics—the possibilities mirror those of individual self-deception. When collectively held attitudes motivate a group to espouse a false belief despite the group’s possession of evidence to the contrary, we can say that the group is collectively self-deceived in a non-summative sense.

For example, Robert Trivers (2000) suggests that ‘organizational self-deception’ led to NASA’s failure to represent accurately the risks posed by the space shuttle’s O-ring design, a failure that eventually led to the Challenger disaster. The organization as a whole, he argues, had strong incentives to represent such risks as small. As a consequence, NASA’s Safety Unit mishandled and misrepresented data it possessed that suggested that under certain temperature conditions, the shuttle’s O-rings were not safe. NASA, as an organization, then, self-deceptively believed the risks posed by O-ring damage were minimal. Within the institution, however, there were a number of individuals who did not share this belief, but both they and the evidence supporting their belief were treated in a biased manner by the decision-makers within the organization. As Trivers (2000) puts it, this information was relegated “to portions of … the organization that [were] inaccessible to consciousness (we can think of the people running NASA as the conscious part of the organization).” In this case, collectively held values created a climate within NASA that clouded its vision of the data and led to its endorsement of a fatally false belief.

Collective self-deceit may also play a significant role in facilitating unethical practices by corporate entities. For example, a collective commitment by members of a corporation to maximizing profits might lead members to form false beliefs about the ethical propriety of the corporation’s practices. Gilbert (2005) suggests that such a commitment might lead executives and other members to “simply lose sight of moral constraints and values they previously held.” Similarly, Tenbrunsel and Messick (2004) argue that self-deceptive mechanisms play a pervasive role in what they call ‘ethical fading,’ acting as a kind of ‘bleach’ that renders organizations blind to the ethical dimensions of their decisions. They argue that such self-deceptive mechanisms must be recognized and actively resisted at the organizational level if unethical behavior is to be avoided. More specifically, Gilbert (2005) contends that collectively accepting that “certain moral constraints must rein in the pursuit of corporate profits” might shift corporate culture in such a way that efforts to respect these constraints are recognized as part of being a good corporate citizen. In view of the ramifications this sort of collective self-deception has for the way we understand corporate misconduct and responsibility, understanding its specific nature in greater detail remains an important task.

Collective self-deception, understood in either the summative or non-summative sense, raises significant questions, such as whether individuals within collectives bear responsibility for their self-deception or the part they play in the collective’s self-deception and whether collective entities can be held responsible for their epistemic failures (see Galeotti 2016b, 2018 on these questions). Finally, collective self-deception prompts us to ask what means are available to collectives and their members to resist, avoid, and escape self-deception. Galeotti (2016b, 2018) argues for a variety of institutional constraints and precommitments to keep groups from falling prey to self-deception.

Given the capacity of collective self-deception to entrench false beliefs and to magnify their consequences—sometimes with disastrous results—collective self-deception is not just a philosophical puzzle; it is a problem that demands attention.

Bibliography

  • Ames, R.T., and W. Dissanayake (eds.), 1996, Self and Deception, New York: State University of New York Press.
  • Archer, Sophie, 2013, “Nondoxasticism about Self‐Deception,” Dialectica , 67(3): 265–282.
  • –––, 2018, “Why ‘believes’ is not a vague predicate,” Philosophical Studies, 175(12): 3029–3048.
  • Audi, R., 2007, “Belief, Intention, and Reasons for Action,” in Rationality and the Good , J. Greco, A. Mele, and M. Timmons (eds.), New York: Oxford University Press.
  • –––, 1989, “Self-Deception and Practical Reasoning,” Canadian Journal of Philosophy , 19: 247–266.
  • –––, 1982, “Self-Deception, Action, and Will,” Erkenntnis , 18: 133–158.
  • –––, 1976, “Epistemic Disavowals and Self-Deception,” The Personalist , 57: 378–385.
  • Bach, K., 1997, “Thinking and Believing in Self-Deception,” Behavioral and Brain Sciences , 20: 105.
  • –––, 1981, “An Analysis of Self-Deception,” Philosophy and Phenomenological Research , 41: 351–370.
  • Baghramian, M., and A. Nicholson, 2013, “The Puzzle of Self-Deception,” Philosophy Compass , 8(11): 1018–1029.
  • Bagnoli, C., 2012, “Self-deception and agential authority,” Humana Mente, 20: 99–116.
  • Bargh, John and Morsella, Ezequiel, 2008, “The Unconscious Mind,” Perspectives on Psychological Science , 3: 73–79.
  • Barnes, A., 1997, Seeing through Self-Deception , New York: Cambridge University Press.
  • Baron, M., 1988, “What is Wrong with Self-Deception,” in Perspectives on Self-Deception , B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
  • Bayne, T. and J. Fernández (eds.), 2009, Delusion and Self-Deception: Affective and Motivational Influences on Belief Formation, New York: Psychology Press.
  • Bermúdez, José Luis, 2017, “Self-deception and Selectivity: Reply to Jurjako,” Croatian Journal of Philosophy, 17(1): 91–95.
  • –––, 2000, “Self-Deception, Intentions, and Contradictory Beliefs,” Analysis, 60(4): 309–319.
  • –––, 1997, “Defending Intentionalist Accounts of Self-Deception,” Behavioral and Brain Sciences, 20: 107–108.
  • Bird, A., 1994, “Rationality and the Structure of Self-Deception,” in S. Gianfranco (ed.), European Review of Philosophy (Volume 1: Philosophy of Mind), Stanford: CSLI Publications.
  • Bok, S., 1989, “Secrecy and Self-Deception,” in Secrets: On the Ethics of Concealment and Revelation, New York: Vintage.
  • –––, 1980, “The Self Deceived,” Social Science Information, 19: 923–935.
  • Borge, S., 2003, “The Myth of Self-Deception,” The Southern Journal of Philosophy, 41: 1–28.
  • Borman, David, 2022, “Self-Deception and Moral Interests,” European Journal of Philosophy , first online 17 January 2022. doi:10.1111/ejop.12756
  • Brown, R., 2003, “The Emplotted Self: Self-Deception and Self-Knowledge,” Philosophical Papers, 32: 279–300.
  • Butler, J., 1726, “Upon Self-Deceit,” in D.E. White (ed.), 2006, The Works of Bishop Butler, Rochester: Rochester University Press. [Available online]
  • Cerovac, I., 2015, “Intentionalism as a theory of self-deception,” Balkan Journal of Philosophy , 7: 145–150.
  • Chisholm, R. M., and Feehan, T., 1977, “The Intent to Deceive,” Journal of Philosophy , 74: 143–159.
  • Clifford, W. K., 1877, “The Ethics of Belief,” in The Ethics of Belief and Other Essays , Amherst, NY: Prometheus Books, 70–97.
  • Cook, J. T., 1987, “Deciding to Believe without Self-deception,” Journal of Philosophy, 84: 441–446.
  • Correia, Vasco, 2014, “From Self-deception to self-control: Emotional biases and the virtues of precommitment,” Croatian Journal of Philosophy , 14(3): 309–323.
  • Dalton, P., 2002, “Three Levels of Self-Deception (Critical Commentary on Alfred Mele’s Self-Deception Unmasked),” Florida Philosophical Review , 2(1): 72–76.
  • Darwall, S., 1988, “Self-Deception, Autonomy, and Moral Constitution,” in Perspectives on Self-Deception , B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
  • Davidson, D., 1986, “Deception and Division,” in J. Elster (ed.) 1986, 79–92; reprinted in D. Davidson, Problems of Rationality , with introduction by Marcia Cavell and interview with Ernest Lepore, Oxford: Clarendon Press, 2004, 199–212.
  • –––, 1982, “Paradoxes of Irrationality,” in Philosophical Essays on Freud , R. Wollheim and J. Hopkins (eds.), Cambridge: Cambridge University Press.
  • Demos, R., 1960, “Lying to Oneself,” Journal of Philosophy , 57: 588–95.
  • Dennett, D., 1992, “The Self as a Center of Narrative Gravity,” in Consciousness and Self: Multiple Perspectives , F. Kessel, P. Cole, and D. Johnson (eds.), Hillsdale, NJ: L. Erlbaum.
  • de Sousa, R., 1978, “Self-Deceptive Emotions,” Journal of Philosophy, 75: 684–697.
  • –––, 1970, “Self-Deception,” Inquiry , 13: 308–321.
  • DeWeese-Boyd, I., 2014, “Self-Deceptive Religion and the Prophetic Voice,” Journal for Religionsphilosophie , 3: 26–37.
  • –––, 2007, “Taking Care: Self-Deception, Culpability and Control,” teorema , 26(3): 161–176.
  • Dings, Roy, 2017, “Social Strategies in Self-deception,” New Ideas in Psychology , 47: 16–23.
  • Doody, P., 2017, “Is There Evidence of Robust, Unconscious Self-Deception? A Reply to Funkhouser and Barrett,” Philosophical Psychology, 30(5): 657–676.
  • Dunn, R., 1995, “Motivated Irrationality and Divided Attention,” Australasian Journal of Philosophy, 73: 325–336.
  • –––, 1995, “Attitudes, Agency and First-Personality,” Philosophia, 24: 295–319.
  • –––, 1994, “Two Theories of Mental Division,” Australasian Journal of Philosophy, 72: 302–316.
  • Dunning, D., 2011, “Get Thee to a Laboratory,” Commentary on target article, “The Evolution and Psychology of Self-Deception,” by W. von Hippel and R. Trivers, Behavioral and Brain Sciences, 34(1): 18–19.
  • Dupuy, J-P. (ed.), 1998, Self-Deception and Paradoxes of Rationality (Lecture Notes 69), Stanford: CSLI Publications.
  • Dyke, D., 1633, The Mystery of Selfe-Deceiving, London: William Stansby.
  • Echano, M., 2017, “The Motivating Influence of Emotion on Twisted Self-Deception,” Kritike , 11(2): 104–120.
  • Edwards, S., 2013, “Nondoxasticism about Self-Deception,” Dialectica , 67(3): 265–282.
  • Egan, Andy, 2009, “Imagination, Delusion, and Self-Deception,” in Delusions, Self-Deception, and Affective Influences on Belief-formation , T. Bayne and J. Fernandez (eds.), New York: Psychology Press.
  • Elster, J. (ed.), 1986, The Multiple Self , Cambridge: Cambridge University Press.
  • Fairbanks, R., 1995, “Knowing More Than We Can Tell,” The Southern Journal of Philosophy , 33: 431–459.
  • Fernández, Jordi, 2013, “Self-deception and self-knowledge,” Philosophical Studies, 162(2): 379–400.
  • Fingarette, H., 1998, “Self-Deception Needs No Explaining,” The Philosophical Quarterly , 48: 289–301.
  • –––, 1969, Self-Deception , Berkeley: University of California Press; reprinted, 2000.
  • Fischer, J. and Ravizza, M., 1998, Responsibility and Control, Cambridge: Cambridge University Press.
  • Friedrich, James, 1993, “Primary error detection and minimization (PEDMIN) strategies in social cognition: A reinterpretation of confirmation bias phenomena,” Psychological Review, 100: 298–319.
  • Funkhouser, Eric, 2019, Self-Deception , New York: Routledge.
  • –––, 2017a, “Beliefs as Signals: A New Function for Belief,” Philosophical Psychology , 30(6): 809–831.
  • –––, 2017b, “Is Self-Deception an Effective Non-Cooperative Strategy?,” Biology and Philosophy , 32: 221–242.
  • –––, 2009, “Self-Deception and the Limits of Folk Psychology,” Social Theory and Practice , 35(1): 1–13.
  • –––, 2005, “Do the Self-Deceived Get What They Want?,” Pacific Philosophical Quarterly , 86(3): 295–312.
  • Funkhouser, Eric, and David Barrett, 2017, “Reply to Doody,” Philosophical Psychology , 30(5): 677–681.
  • –––, 2016, “Robust, unconscious self-deception: Strategic and flexible,” Philosophical Psychology , 29(5): 1–15.
  • Funkhouser, Eric, and Kyle Hallam, 2022, “Self-Handicapping and Self-Deception: A Two-Way Street,” Philosophical Psychology.
  • Galeotti, Anna Elisabetta, 2018, Political Self-Deception , Cambridge: Cambridge University Press.
  • –––, 2016a, “Straight and Twisted Self-Deception,” Phenomenology and Mind, 11: 90–99.
  • –––, 2016b, “The Attribution of Responsibility to Self-Deceivers,” Journal of Social Philosophy , 47(4): 420–438.
  • –––, 2012, “Self-Deception: Intentional Plan or Mental Event?” Humana Mente , 20: 41–66.
  • Gendler, T. S., 2007, “Self-Deception as Pretense,” Philosophical Perspectives , 21: 231–258.
  • Gibson, Quinn Hiroshi, 2020, “Self-Deception as Omission,” Philosophical Psychology , 33(5): 657–678.
  • Gilbert, Margaret, 2005, “Corporate Misbehavior and Collective Values,” Brooklyn Law Review , 70(4): 1369–80.
  • –––, 1994, “Remarks on Collective Belief,” in Socializing Epistemology , F. Schmitt (ed.), Lanham, MD: Rowman and Littlefield.
  • –––, 1989, On Social Facts , London: Routledge.
  • Goleman, Daniel, 1989, “What is negative about positive illusions?: When benefits for the individual harm the collective,” Journal of Social and Clinical Psychology , 8: 190–197.
  • Graham, G., 1986. “Russell’s Deceptive Desires,” The Philosophical Quarterly , 36: 223–229.
  • Haight, R. M., 1980, A Study of Self-Deception , Sussex: Harvester Wheatsheaf.
  • Hales, S. D., 1994, “Self-Deception and Belief Attribution,” Synthese , 101: 273–289.
  • Hassin, Ran, John Bargh, & Shira Cohen-Zimerman, 2009, “Automatic and Flexible: The Case of Nonconscious Goal Pursuit,” Social Cognition , 27: 20–36.
  • Hernes, C., 2007, “Cognitive Peers and Self-Deception,” teorema , 26(3): 123–130.
  • Hauerwas, S. and Burrell, D., 1977, “Self-Deception and Autobiography: Reflections on Speer’s Inside the Third Reich,” in Truthfulness and Tragedy , S. Hauerwas with R. Bondi and D. Burrell, Notre Dame: University of Notre Dame Press.
  • Holton, Richard, 2022, “Self-Deception and the Moral Self,” in Manuel Vargas, and John M. Doris (eds.), The Oxford Handbook of Moral Psychology , Oxford University Press, 262–284.
  • –––, 2001, “What is the Role of the Self in Self-Deception?,” Proceedings of the Aristotelian Society , 101(1): 53–69.
  • Jenni, K., 2003, “Vices of Inattention,” Journal of Applied Philosophy , 20(3): 279–95.
  • Johnson, D., and Fowler, J., 2011, “The Evolution of Overconfidence,” Nature, 477: 317–320.
  • Johnston, M., 1988, “Self-Deception and the Nature of Mind,” in Perspectives on Self-Deception , B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
  • Jordan, Maiya, 2022, “Instantaneous Self-Deception,” Inquiry , 65(2): 176–201.
  • –––, 2020, “Literal Self-Deception,” Analysis , 80(2): 248–256.
  • –––, 2019, “Secondary Self-Deception,” Ratio , 32(2): 122–130.
  • Jurjako, Mark, 2013, “Self-deception and the selectivity problem,” Balkan Journal of Philosophy , 5: 151–162.
  • Khalil, E., 2011, “The Weightless Hat,” Commentary on target article, “The Evolution and Psychology of Self-Deception,” by W. von Hippel and R. Trivers, Behavioral and Brain Sciences, 34(1): 30–31.
  • Kirsch, J., 2005, “What’s So Great about Reality?,” Canadian Journal of Philosophy , 35(3): 407–428.
  • Lauria, Federico and Delphine Preissmann, 2018, “What Does Emotion Teach Us about Self-Deception? Affective Neuroscience in Support of Non-Intentionalism,” Les Ateliers de l’Éthique / The Ethics Forum , 13(2): 70–94.
  • Lauria, F., Preissmann, D., and Clément, F., 2016, “Self-deception as Affective Coping: An Empirical Perspective on Philosophical Issues,” Consciousness and Cognition, 41: 119–134.
  • Lamba, S., and Nityananda, V., 2014, “Self-Deceived Individuals are Better at Deceiving Others,” PLoS ONE, 9(8): 1–6.
  • Lazar, A., 1999, “Deceiving Oneself Or Self-Deceived?,” Mind, 108: 263–290.
  • –––, 1997, “Self-Deception and the Desire to Believe,” Behavioral and Brain Sciences, 20: 119–120.
  • Levy, N., 2004, “Self-Deception and Moral Responsibility,” Ratio (new series) , 17: 294–311.
  • Linehan, E. A. 1982, “Ignorance, Self-deception, and Moral Accountability,” Journal of Value Inquiry , 16: 101–115.
  • Lockard, J. and Paulhus, D. (eds.), 1988, Self-Deception: An Adaptive Mechanism?, Englewood Cliffs: Prentice-Hall.
  • Longeway, J., 1990, “The Rationality of Self-deception and Escapism,” Behavior and Philosophy , 18: 1–19.
  • Lopez, J., and M. Fuxjager, 2012, “Self-deception’s adaptive value: Effects of positive thinking and the winner effect,” Consciousness and Cognition , 21(1): 315–324.
  • Lynch, Kevin, 2022, “Being Self-Deceived about One’s Own Mental State,” Philosophical Quarterly , 72(3): 652–672.
  • –––, 2017, “An Agentive Non-Intentionalist Theory of Self-Deception,” Canadian Journal of Philosophy , 47(6): 779–798.
  • –––, 2014, “Self-deception and shifts of attention,” Philosophical Explorations , 17(1): 63–75.
  • –––, 2013, “Self-Deception and Stubborn Belief,” Erkenntnis , 78(6): 1337–1345.
  • –––, 2012, “On the ‘tension’ inherent in Self-Deception,” Philosophical Psychology , 25(3): 433–450.
  • –––, 2010, “Self-deception, religious belief, and the false belief condition,” Heythrop Journal , 51(6): 1073–1074.
  • –––, 2009, “Prospects for an Intentionalist Theory of Self-Deception,” Abstracta , 5(2): 126–138.
  • MacKenzie, J., 2022, “Self-Deception as a Moral Failure,” The Philosophical Quarterly, 72(2): 402–421.
  • Martin, M., 1986, Self-Deception and Morality , Lawrence: University Press of Kansas.
  • ––– (ed.), 1985, Self-Deception and Self-Understanding , Lawrence: University Press of Kansas.
  • Martínez Manrique, F., 2007, “Attributions of Self-Deception,” teorema , 26(3): 131–143.
  • McHugh, Conor, 2017, “Attitudinal Control,” Synthese , 194(8): 2745–2762.
  • –––, 2014, “Exercising Doxastic Freedom,” Philosophy and Phenomenological Research , 88(1): 1–37.
  • –––, 2013, “Epistemic Responsibility and Doxastic Agency,” Philosophical Issues , 23(1): 132–157.
  • McLaughlin, B. and Rorty, A. O. (eds.), 1988, Perspectives on Self-Deception , Berkeley: University of California Press.
  • McKay, R. and Dennett, D., 2009, “The Evolution of Misbelief,” Behavioral and Brain Sciences , 32(6): 493–561.
  • McKay, R., and Prelec, D., 2011, “Protesting Too Much: Self-Deception and Self-Signaling,” Commentary on target article, “The Evolution and Psychology of Self-Deception,” by W. von Hippel and R. Trivers, Behavioral and Brain Sciences, 34(1): 34–35.
  • Mele, Alfred, 2020, “Self-Deception and Selectivity,” Philosophical Studies , 177: 2697–2711.
  • –––, 2012, “When Are We Self-Deceived?” Humana Mente Journal of Philosophical Studies , 20: 1–15.
  • –––, 2010, “Approaching Self-Deception: How Robert Audi and I Part Company,” Consciousness and Cognition , 19: 745–750.
  • –––, 2009, “Delusional Confabulations and Self-Deception,” in W. Hirstein (ed.), Confabulation: Views from Neuroscience, Psychiatry, Psychology, and Philosophy , Oxford: Oxford University Press, pp. 139–157.
  • –––, 2009, “Have I Unmasked Self-Deception or Am I Self-Deceived?” in C. Martin (ed.), The Philosophy of Deception , Oxford: Oxford University Press, pp. 260–276.
  • –––, 2007, “Self-Deception and Three Psychiatric Delusions: On Robert Audi’s Transition from Self-Deception to Delusion,” in M. Timmons, J. Greco, and A. Mele (eds.), Rationality and the Good , Oxford: Oxford University Press, pp. 163–175.
  • –––, 2007, “Self-Deception and Hypothesis Testing,” in M. Marraffa, M. De Caro, and F. Feretti (eds.), Cartographies of the Mind , Dordrecht: Kluwer, pp. 159–167.
  • –––, 2006, “Self-Deception and Delusions,” European Journal of Analytic Philosophy , 2: 109–124.
  • –––, 2003, “Emotion and Desire in Self-Deception,” in A. Hatzimoysis (ed.), Philosophy and the Emotions , Cambridge: Cambridge University Press, pp. 163–179.
  • –––, 2001, Self-Deception Unmasked , Princeton: Princeton University Press.
  • –––, 2000, “Self-Deception and Emotion,” Consciousness and Emotion, 1: 115–137.
  • –––, 1999, “Twisted Self-Deception,” Philosophical Psychology , 12: 117–137.
  • –––, 1997, “Real Self-Deception,” Behavioral and Brain Sciences , 20: 91–102.
  • –––, 1987a, Irrationality: An Essay on Akrasia, Self-Deception, and Self-Control, Oxford: Oxford University Press.
  • –––, 1987b, “Recent Work on Self-deception,” American Philosophical Quarterly , 24: 1–17.
  • –––, 1983, “Self-Deception,” Philosophical Quarterly , 33: 365–377.
  • Mijović-Prelec, D., and Prelec, D., 2010, “Self-deception as Self-Signaling: A Model and Experimental Evidence,” Philosophical Transactions of the Royal Society B , 365: 227–240.
  • Moran, R., 1988, “Making Up Your Mind: Self-Interpretation and Self-constitution,” Ratio (new series) , 1: 135–151.
  • Nelkin, D., 2012, “Responsibility and Self-Deception: A Framework,” Humana Mente Journal of Philosophical Studies , 20: 117–139.
  • –––, 2002, “Self-Deception, Motivation, and the Desire to Believe,” Pacific Philosophical Quarterly , 83: 384–406.
  • Nicholson, A., 2007, “Cognitive Bias, Intentionality and Self-Deception,” teorema, 26(3): 45–58.
  • Noordhof, P., 2009, “The Essential Instability of Self-Deception,” Social Theory and Practice , 35(1): 45–71.
  • –––, 2003, “Self-Deception, Interpretation and Consciousness,” Philosophy and Phenomenological Research , 67: 75–100.
  • Paluch, S., 1967, “Self-Deception,” Inquiry , 10: 268–78.
  • Patten, D., 2003, “How do we deceive ourselves?,” Philosophical Psychology , 16(2): 229–46.
  • Pears, D., 1991, “Self-Deceptive Belief Formation,” Synthese , 89: 393–405.
  • –––, 1984, Motivated Irrationality , New York: Oxford University Press.
  • Pettit, Philip, 2006, “When to Defer to Majority Testimony — and When Not,” Analysis , 66(3): 179–187.
  • –––, 2003, “Groups with Minds of Their Own,” in Socializing Metaphysics , F. Schmitt (ed.), Lanham, MD: Rowman and Littlefield.
  • Pihlström, S., 2007, “Transcendental Self-Deception,” teorema, 26(3): 177–189.
  • Porcher, J., 2012, “Against the Deflationary Account of Self-Deception,” Humana Mente , 20: 67–84.
  • Quinton, Anthony, 1975/1976, “Social Objects,” Proceedings of the Aristotelian Society , 75: 1–27.
  • Räikkä, J. 2007, “Self-Deception and Religious Beliefs,” Heythrop Journal , 48: 513–526.
  • Rorty, A. O., 1994, “User-Friendly Self-Deception,” Philosophy , 69: 211–228.
  • –––, 1983, “Akratic Believers,” American Philosophical Quarterly , 20: 175–183.
  • –––, 1980, “Self-Deception, Akrasia and Irrationality,” Social Science Information , 19: 905–922.
  • –––, 1972, “Belief and Self-Deception,” Inquiry , 15: 387–410.
  • Rowbottom, D. & Chan, C., 2019, “Self-Deception and Shifting Degrees of Belief,” Philosophical Psychology , 32: 1204–1220.
  • Sahdra, B. and Thagard, P., 2003, “Self-Deception and Emotional Coherence,” Minds and Machines , 13: 213–231.
  • Sanford, D., 1988, “Self-Deception as Rationalization,” in Perspectives on Self-Deception, B. McLaughlin and A. O. Rorty (eds.), Berkeley: University of California Press.
  • Sartre, J-P., 1943, L’être et le néant, Paris: Gallimard; trans. H. E. Barnes, 1956, Being and Nothingness, New York: Washington Square Press.
  • Schwitzgebel, E., 2001, “In-Between Believing,” Philosophical Quarterly, 51: 76–82.
  • Scott-Kakures, Dion, 2021, “Self-Deceptive Inquiry: Disorientation, Doubt, Dissonance,” Midwest Studies in Philosophy , 45: 457–482.
  • –––, 2002, “At Permanent Risk: Reasoning and Self-Knowledge in Self-Deception,” Philosophy and Phenomenological Research , 65: 576–603.
  • –––, 2001, “High anxiety: Barnes on What Moves the Unwelcome Believer,” Philosophical Psychology , 14: 348–375.
  • –––, 2000, “Motivated Believing: Wishful and Unwelcome,” Noûs , 34: 348–375.
  • Sher, George, 2009, Who Knew? Responsibility without Awareness , Oxford: Oxford University Press.
  • Smith, D. L., 2014, “Self-Deception: A Teleofunctional Approach,” Philosophia, 42: 181–199.
  • –––, 2013, “The Form and Function of Self-Deception: A Biological Model,” Sistemi intelligenti, 3: 565–580.
  • Sorensen, R., 1985, “Self-Deception and Scattered Events,” Mind , 94: 64–69.
  • Surbey, M., 2004, “Self-deception: Helping and hindering personal and public decision making,” in Evolutionary Psychology, Public Policy and Personal Decisions , C. Crawford and C. Salmon (eds.), Mahwah, NJ: Lawrence Earlbaum Associates.
  • Schwitzgebel, E., 2001, “In-Between Believing,” Philosophical Quarterly , 51: 76–82.
  • Szabados, B., 1973, “Wishful Thinking and Self-Deception,” Analysis , 33(6): 201–205.
  • Talbott, W. J., 1997, “Does Self-Deception Involve Intentional Biasing,” Behavior and Brain Sciences , 20: 127.
  • –––, 1995, “Intentional Self-Deception in a Single Coherent Self,” Philosophy and Phenomenological Research , 55: 27–74.
  • Taylor, S. and Brown, J., 1994, “Positive Illusion and Well-Being Revisited: Separating Fact from Fiction,” Psychological Bulletin , 116: 21–27.
  • Taylor, S. and Brown, J., 1988, “Illusion and Well-Being: A Social Psychological Perspective on Mental Health,” Psychological Bulletin , 103(2): 193–210.
  • Tenbrusel, A.E. and D. M Messick, 2004, “Ethical Fading: The Role of Self-Deception in Unethical Behavior,” Social Justice Research , 7(2): 223–236.
  • Trivers, R., 2011, The Folly of Fools: The Logic of Deceit and Self-Deception in Human life , New York: Basic Books.
  • Trivers, R., 2000, “The Elements of a Scientific Theory of Self-Deception,” in Evolutionary Perspectives on Human Reproductive Behavior , Dori LeCroy and Peter Moller (eds.), Annals of the New York Academy of Sciences , 907: 114–131.
  • Trivers, R., 1991, “Deceit and Self-Deception: The relationship between Communication and Consciousness,” Man and Beast Revisited , 907: 175–191.
  • Van Fraassen, B., 1995, “Belief and the Problem of Ulysses and the Sirens,” Philosophical Studies , 77: 7–37.
  • –––, 1984, “Belief and Will,” Journal of Philosophy , 81: 235–256.
  • Van Leeuwen, N., 2013a, “Review of Robert Trivers’ The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life,” Cognitive Neuropsychiatry , 18(1-2): 146–151.
  • –––, 2013b, “Self-Deception,” in International Encyclopedia of Ethics , H. LaFollette (ed.), New York: Wiley-Blackwell.
  • –––, 2009, “Self-Deception Won’t Make You Happy,” Social Theory and Practice , 35(1): 107–132.
  • –––, 2007a, “The Spandrels of Self-deception: Prospects for a biological theory of a mental phenomenon,” Philosophical Psychology , 20(3): 329–348.
  • –––, 2007b, “The Product of Self-Deception,” Erkenntnis , 67(3): 419–437.
  • Van Loon, Marie, 2018, “Responsibility for self-deception,” Les Ateliers de l’Éthique / the Ethics Forum , 13(2): 119–134.
  • Von Hippel, W. & Trivers, R., 2011, “The Evolution and Psychology of Self-Deception,” Behavioral and Brain Sciences , 34(1): 1–56.
  • Vrij, A., 2011, “Self-deception, lying, and the ability to deceive,” Behavioral and Brain Sciences , 34(1): 40–41.
  • Wei, Xintong, 2020, “The role of pretense in the process of self-deception,” Philosophical Explorations , 23(1): 1–14.
  • Whisner, W., 1993, “Self-Deception and Other-Person Deception,” Philosophia , 22: 223–240.
  • –––, 1989, “Self-Deception, Human Emotion, and Moral Responsibility: Toward a Pluralistic Conceptual Scheme,” Journal for the Theory of Social Behaviour , 19: 389–410.

The complex dynamics of wishful thinking: the critical positivity ratio

Affiliations: Independent Practice; Department of Physics, New York University; Graduate College of Psychology and Humanistic Studies, Saybrook University.

PMID: 23855896 · DOI: 10.1037/a0032850

We examine critically the claims made by Fredrickson and Losada (2005) concerning the construct known as the "positivity ratio." We find no theoretical or empirical justification for the use of differential equations drawn from fluid dynamics, a subfield of physics, to describe changes in human emotions over time; furthermore, we demonstrate that the purported application of these equations contains numerous fundamental conceptual and mathematical errors. The lack of relevance of these equations and their incorrect application lead us to conclude that Fredrickson and Losada's claim to have demonstrated the existence of a critical minimum positivity ratio of 2.9013 is entirely unfounded. More generally, we urge future researchers to exercise caution in the use of advanced mathematical tools, such as nonlinear dynamics, and in particular to verify that the elementary conditions for their valid application have been met.
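For context, the fluid-dynamics equations in question are the Lorenz system. A minimal sketch of that system (using textbook parameter values, not anything from the disputed paper) shows how qualitatively sensitive its behavior is to the control parameter, which is the heart of the objection: importing these equations into another domain obliges you to justify the parameter mapping.

```python
# A minimal sketch of the Lorenz system, the fluid-dynamics model at issue.
# Parameter values are the textbook defaults, for illustration only.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, state, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# The system's qualitative behavior (stable fixed point vs. chaos) flips
# with rho, so applying it elsewhere requires justifying how the new
# domain's quantities map onto these parameters.
sol = solve_ivp(lorenz, (0.0, 40.0), [1.0, 1.0, 1.0])
print(sol.y[:, -1])  # final (x, y, z) of the trajectory
```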



Critical Thinking vs. Wishful Thinking


Dr. Hurd: I absolutely loved what you said in your article about “confidence, but no competence.”

I’d like to add, though, that the reason parents don’t teach their kids about the tools to gain that competence is simply because … they don’t really understand them, either.

I see many previous generations having “done” something … because they believed they had to. They couldn’t explain why (and believe me, at a much younger age, I asked older folks many times…and I got nothing but a run-around that would make President Obama blush; no joke), they just did. It was supposedly just, “What you were supposed to do, because you were.”

If previous generations can’t explain to succeeding generations why what’s being done is necessary, then those future generations end up living by the ideology the previous generations believed but didn’t want to admit.

Perfect example: Why did the Baby Boomer generation turn on to the Hippie Movement and revolt against the “man” when their parents seemed so incredibly patriotic? Maybe because deep down their parents weren’t very patriotic, they just claimed they were–to their kids and to themselves, and it wasn’t questioned.

Again, fantastic piece; I agree with you all the way.

Dr. Hurd replies:

In my newly released book, BAD THERAPY GOOD THERAPY, I made the distinction between “do as I say” dogmatism and “do as you feel” subjectivism. You’re talking about the first, and you’re so right that this leads to one of two things: either mindless rebellion (as in the 1960s student movements), or subservience, with one generation after the next not thinking about how wrong many unchallenged assumptions are.

The antidote to all of this is critical thinking. Good teachers can — at least theoretically — be hired to teach the skills of critical thinking to children, but parents must above all foster and provide the leadership for it. I don’t care whether a particular parent has a Ph.D. in chemistry or an M.D., or barely has a high school degree. Everyone has the capacity for critical thinking within his or her context of knowledge. Critical thinking means objective analysis, logic and reason. It means NOT blindly accepting vaguely held, unidentified assumptions, and it means NOT refusing to accept something as true just because everyone else seems to accept it as true.

Critical thinkers are self-aware about the content of their own minds, and always willing to identify and challenge their underlying or “hidden” assumptions.

Critical thinking is independent, by definition. It’s not subjective and emotion-driven, but it’s not “by-the-book” either. If you conclude something is true that happens to be conventional and a majority happens to agree, then fine. You still concluded it with your own independent thinking and reasoning mind. You understand the reasons and arguments in favor of the conclusion that others might also accept. By that same method, you can determine when a majority is off course, or guilty of sloppy or erroneous thinking.

Parents have to be the leaders on this issue. One way to lead is to live your life as a critical thinker, something you should be doing even without children. The other is to coach and guide your children through whatever critical thinking they’re capable of. You take the time to find out what they’re capable of by having intelligent discussions with them about anything at all of concern to them. It could be events in a book, on a television show, in the neighborhood, at the school, or within the family. Every conversation about anything is a potential opportunity to demonstrate critical thinking. Critical thinking is not lecturing. It’s logical, factual, and sometimes even commonsensical analysis. “Here’s my answer to the question, and here are my reasons. What do you think?”

A good therapist, as I write in my book, thinks along with the therapy client. This is different from thinking FOR the client. The same applies to your relationship with your child. If your child, at a given stage of mental development, is honestly unable to grasp the essentials of a certain subject, then of course you do this for him. You decide, for example, where you’re going to live and whether you can afford a particular purchase. But always be open to the possibility that your child is capable of at least a certain amount of thinking on any subject, including a subject about which he still won’t be making any final decisions. When possible, let him make his own decisions even when — gasp! — he’s plainly wrong. (Most American parents cannot stand to let their children be wrong. This is why so many children grow up into these helpless and entitled adults.) When it’s physically safe, let your child make mistakes and learn from them. Critically think with him along the way. Reason with him, and think it all out along with him. Don’t talk down to your child; think along with him, as much as he’s able.

I understand what you’re saying in your comments. Not all parents do this. It’s truly sad. When I look at what passes for intellectual, moral, social and political “leadership” in this country, I recognize that these are the choices of people — of grown adults — who truly are lost and clueless when it comes to critical thinking. There would be no President Obama nor a President Bush, nor a President Romney or President Gingrich, in a culture where people engaged in even a little bit of critical thinking about political matters. In politics, most people are engaging in wishful thinking, not critical thinking, which is why most of us keep hiring idiots and liars to lead us. The same applies in the realm of what passes for moral and spiritual leadership, whether in the “do as I say” dogmatic context of traditional church, or the “do as you feel” subjectivism of the so-called self-help movement, including but not limited to all things Oprah.

The real battle in society, and within any individual soul, is the battle between wishful thinking and critical thinking. Critical thinking does not always lead to the truth, but it provides the means for correction while getting to the truth. And it helps you face and identify the truth, once there. Wishful thinking gives up on the concept of objective truth altogether, and replaces it with the nonsense that permeates our entire culture, from individual households and minds, to the President, to most of our celebrities and academic leaders, and everything in between.

America may go down as a civilization that got so much done, but that ultimately lacked a regard for the rational mind — and paid a terrible price for it. It’s the same for an individual, regardless of the civilization he happens to live in. If he thinks critically, he’ll flourish to the greatest degree possible in that society. If he doesn’t think critically, the greatest civilization in the world, while doing him some material good, will not do a thing for his mind, his intellect, emotions or soul.


Social Cognitive and Affective Neuroscience, 7(8), November 2012

Neural correlates of wishful thinking

Tatjana Aue

1 Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland, and 2 Center for Cognitive and Social Neuroscience, University of Chicago, Chicago, IL, USA

Howard C. Nusbaum

John T. Cacioppo

Wishful thinking (WT) implies the overestimation of the likelihood of desirable events. It occurs for outcomes of personal interest, but also for events of interest to others we like. We investigated whether WT is grounded in low-level selective attention or in higher level cognitive processes such as differential weighting of evidence or response formation. Participants in our MRI study predicted the likelihood that their favorite or least favorite team would win a football game. Consistent with expectations, favorite team trials were characterized by higher winning odds. Our data showed that activity in a cluster comprising parts of the left inferior occipital and fusiform gyri distinguished between favorite and least favorite team trials. More importantly, functional connectivities of this cluster with the human reward system were specifically involved in the type of WT investigated in our study, thus supporting the idea of an attention bias generating WT. Prefrontal cortex activity also distinguished between the two teams. However, activity in this region and its functional connectivities with the human reward system were altogether unrelated to the degree of WT reflected in the participants’ behavior and may instead be related to social identification, ensuring the affective context necessary for WT to arise.

For what a man more likes to be true, he more readily believes (Francis Bacon, 1561–1626).

INTRODUCTION

Human reasoning and decision-making are subject to distortions, which may arise for a variety of reasons, ranging from the application of simple heuristics ( Goldstein and Gigerenzer, 2002 ) to more complex groupthink ( Kerschreiter et al., 2008 ). These distortions are not necessarily irrational. Under some circumstances, simple heuristics can lead to more accurate inferences than strategies that draw on considerable knowledge about a situation ( Goldstein and Gigerenzer, 2002 ). Also, Zábojník (2004) developed a model that explains why most individuals legitimately believe that they are above average in various skills and abilities. According to this model, individuals test and improve their abilities until they are reasonably favorable, and thus above average. Moreover, skills and abilities are not always normally distributed, occasionally leading to the phenomenon that more than 50% of the individuals in a population do indeed score above average. These considerations show that some biases are quite rational—but others are not.

Francis Bacon’s observation presaged work on a particularly pernicious systematic irrational bias: the tendency to overestimate the likelihood of desirable events and underestimate the likelihood of undesirable events. This cognitive bias has been called wishful thinking (WT; McGuire, 1960 ). 1 In contrast to heuristics and the kind of rational self-assessment described above, WT originates in the application of individual preferences that are linked to affective experiences.

WT-biased beliefs and decision-making have been reliably observed in many different contexts, especially when events are of personal relevance and people thus feel personally invested in the outcomes of their predictions ( Babad, 1987 ; Granberg and Holmberg, 1988 ). Biased predictions may work to maximize the anticipated reward or the hedonic valence of an expected outcome. WT has been documented whether the outcomes apply directly to the decision maker or to someone with whom the decision maker identifies ( McGuire, 1960 ; Babad and Katz, 1991 ; Price, 2000 ). The particular case of WT for others is in line with research on social identity theory, which has shown that individuals generally prefer ingroup members over outgroup members ( Tajfel et al., 1971 ) and that they display a need for a positive social identity ( Turner and Onorato, 1999 ). Overestimating the likelihood of positive events for other people or groups with whom we identify may therefore increase the experienced positivity of our own social identity and, consequently, the hedonic valence of a predicted outcome, thus functioning as an affective reward. Evidently, WT is not a rare but rather a common phenomenon.

The concrete mechanisms underlying WT are still unknown. Krizan and Windschitl (2007) put forth three different stages at which WT could arise. First, at an early processing stage, individuals could selectively attend to existing situational cues. Attentional deployment in WT may work in the sense of a confirmation bias, making us attend to supporting evidence for what we desire and neglect contradictory evidence. Such a mechanism would consequently serve the WT effect. Alternatively, WT could be generated by selective interpretation of available evidence, changing the sense of, but not the access to, the existing information in a situation. Attributing greater importance to desirable than to undesirable evidence would serve this function. Finally, a third alternative assumes WT to arise at a comparably late stage of information processing, namely at response formation. Consequently, the mechanism underlying WT could be an attention bias, an interpretation bias or a response bias.

Investigations of the neural correlates of wishful thinking are absent but could help identify the mechanisms generating WT by supporting or contradicting any of these alternatives. For example, if early selective attention were at the basis of the WT effect, individuals should display WT-related activation in low-level attention-associated brain regions. Since, in the current study, the background information was presented visually, selective attention should be particularly visible in the visual cortex (e.g. Sabatinelli et al., 2007 ). If, on the contrary, an interpretation bias or a response bias rather than an attention bias were responsible for the genesis of WT, we would expect the implication of higher level cognitive processing areas within the prefrontal cortex (PFC), because these mechanisms would imply either evidence reevaluation (reappraisal) or attempts at cognitive regulation (e.g. Ochsner et al., 2002 ; Sharot et al., 2007 ; see also Miller and Cohen, 2001 , for an extensive discussion of the importance of the prefrontal cortex for cognitive control). Additional justification for such a hypothesis comes from research on pathological gambling, which has been associated with biased cognitions leading to over-optimism and, importantly, with dysfunctions of the ventromedial PFC ( Cavendini et al., 2002 ). Consequently, fMRI may help to reveal whether low-level attentive or higher level cognitive processes are at the origin of WT.

Moreover, because WT has been postulated to be a hedonic bias, activity in the areas supporting whichever of these mechanisms underlies WT should be functionally connected to activity in the human reward system. This is because during WT people adopt expectations that they believe will yield affective rewards. Therefore, WT should recruit structures of the human reward system (including the striatum, the posterior cingulate and the amygdala; Heekeren et al., 2007 ; Delgado et al., 2008 ; Xue et al., 2009 ). For instance, anticipated reward, such as the pleasure evoked by favorite-team soccer goals ( McLean et al., 2009 ), has been reported to go along with activation in the dorsal striatum.

We investigated the neural correlates of WT in a social identification context based on National Football League (NFL) teams, with participants who were recruited based on their reports that they cared about and closely followed a particular NFL team. Objective background information in this task was given in the same manner for all experimental scenarios, ruling out the possibility that observed differences between scenarios resulted from differential access to available information. Participants estimated the winning odds for different NFL teams [favorite, neutral (filler items) and least favorite] on the basis of this background information. According to identification theory, outcomes bearing on the participant’s favorite team are higher in self-relevance than outcomes bearing on one’s least favorite team. Brain activity was recorded in an MRI scanner while the participants gave their estimates. In line with earlier research and behavioral pilot testing, we expected that participants would overestimate the success of the favorite team relative to the least favorite team, that is, that they would exhibit WT.

The neural correlates of WT were investigated by (i) comparing favorite to least favorite team trials, because, during WT, individuals should clearly distinguish between the two teams, (ii) identifying functional connectivities of these areas with other brain regions (e.g. the human reward system) and (iii) correlating these activation and connectivity data with the extent of WT demonstrated in the participants’ behavior. The latter point is particularly important because activation and connectivity patterns differing for favorite vs least favorite team trials could otherwise simply be a sign of differential preference for or identification with the two teams without being related to WT itself.

Participants

The study was approved by a local ethics committee (institutional review board for biological sciences at the University of Chicago). Written informed consent was obtained in accordance with the Helsinki Declaration of Human Rights (1999). Thirteen (two female) healthy University of Chicago students were recruited via ads posted on the Internet and in university buildings. They were aged between 19 and 34 years ( M  = 21.8, s.d. = 3.82) and indicated high interest in the NFL [ M  = 5.8, s.d. = 1.63, on a scale ranging from 1 (no interest) to 7 (intense interest)]. Five participants actively practiced American football ( M  = 8 h/week, s.d. = 6.08 h/week) at the time of the study. All but one watched football on a weekly basis (TV or local; M  = 4.5 h/week, s.d. = 3.40 h/week). Participants had normal or corrected-to-normal vision, and were not currently seeking treatment for affective disorders. Handedness was assessed with the Edinburgh Handedness Inventory ( Oldfield, 1971 ). Only individuals indicating ‘right’ for all items were included in the study. Participants were paid $15/h.

Setting and apparatus

MRI data were acquired on a 3T GE Signa scanner (GE Medical Systems, Milwaukee, WI, USA) with a standard quadrature GE head coil. Stimulus presentation and collection of participants’ button presses were controlled by e-prime 1.1 (Psychology Software Tools, Inc., Pittsburgh, PA, USA) running on a PC. Participants viewed the stimuli by the use of binocular goggles mounted on the head coil ∼2 in above the participants’ eyes. Button press responses were made on two MRI-compatible response boxes. Every button corresponded to a specific number (e.g. leftmost button left hand: 1; rightmost button right hand: 0).

High-resolution volumetric anatomical images [T1-weighted spoiled gradient-recalled (SPGR) images] were collected for every participant in 124 1.5-mm sagittal slices with 6° flip angle and 24 cm field of view (FOV). Functional data were acquired using a forward/reverse spiral acquisition with 40 contiguous 4.2-mm coronal slices with 0.5-mm gaps, for an effective slice thickness of 4.7 mm, in an interleaved order spanning the whole brain. Volumes were collected continuously with a repetition time (TR) of 3 s, an echo time (TE) of 28 ms and a FOV of 24 cm (flip angle = 84°, 64 × 64 matrix size, fat suppressed).

Upon participants’ arrival at the laboratory the nature of the experiment was explained and informed consent was obtained. Then, participants specified their favorite, neutral (team they neither liked nor disliked) and least favorite NFL teams.

Brain activity was recorded in an MRI scanner while the participants made 60 estimates (20 for each team). In each trial, participants were given background information about a single team (favorite, neutral or least favorite). Trials for the neutral team acted as irrelevant filler trials in the current study. The background information comprised three pieces: (i) the probability of winning for the team in question if a given player played, (ii) the probability of winning if this specific player did not play and (iii) the probability that this player would actually play. Bayesian statistics permit the combination of these probabilities to determine the objective probability that the team will win the game ( Petty and Cacioppo, 1981 ), as illustrated below. However, in the current study, participants were not given enough time to perform exact statistics.
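As a concrete illustration, the objective winning probability follows from the law of total probability; the numbers below are hypothetical stand-ins, since the study’s actual probability triplets are not reproduced here.

```python
# Law of total probability applied to the three pieces of background
# information; the values are hypothetical, for illustration only.
p_win_if_plays = 0.70    # P(win | star player plays)
p_win_if_absent = 0.40   # P(win | star player does not play)
p_plays = 0.80           # P(star player plays)

p_win = p_win_if_plays * p_plays + p_win_if_absent * (1 - p_plays)
print(f"Objective winning probability: {p_win:.2f}")  # 0.70*0.80 + 0.40*0.20 = 0.64
```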

The particular probabilities used for the favorite, neutral and least favorite teams were counterbalanced across participants. In order to make the teams more salient, in every trial, the team logos appeared in each of the four corners of the computer screen with the background information presented in between ( Figure 1 ). The participants’ task then was to specify what they believed to be the probability that the presented team would win its first playoff game given this information.

Figure 1. Trial sequence. In every trial, the three statements were first presented for a random duration of 3, 6 or 9 s, without any indication of the corresponding probabilities or the team concerned. Next, the club banner of the team concerned appeared in all four corners of the screen and the statement-related probabilities were projected to the right of the statements. From that moment, participants had 9 s to estimate the supposed winning odds. After this, a question prompted them to indicate their probability estimate using the two button boxes. Participants were given 6 s to do so. Immediately afterwards, the next trial began with the presentation of the three statements.

Before beginning the study, participants were trained on the use of the two button boxes that allowed them to indicate their probability estimates, and completed 10 practice trials to become familiar with the task. They were informed that they could ask questions about anything that was not perfectly clear to them. After the study, participants indicated their sympathy with the different teams on three Likert scales. These scales asked how much they liked the teams [ranging from 1 (extremely dislike) to 7 (extremely like)], as well as how pleasant [ranging from 1 (extremely unpleasant) to 7 (extremely pleasant)] and appealing [ranging from 1 (extremely unappealing) to 7 (extremely appealing)] the teams were to them.

Data analysis

Behavioral data.

Participants’ sympathy ratings for the favorite and least favorite team were contrasted with a paired-samples t-test (α level of 0.05, one-tailed). To determine the extent of WT, for each trial the objectively correct winning odds were subtracted from the participant’s estimate, yielding 60 difference scores per participant. Missing responses and outliers (estimates whose difference scores deviated more than 3 s.d. from a participant’s mean difference score) constituted ∼5% of the data and were eliminated. A paired-samples t-test (α level of 0.05) then compared participants’ estimates for the favorite team with those for the least favorite team. Given the directional nature of the hypothesis, a one-tailed test was performed; a sketch of this analysis appears below.
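A minimal sketch of the behavioral analysis just described, with hypothetical per-participant deviation scores standing in for the real data:

```python
import numpy as np
from scipy import stats

def trim_outliers(diff_scores):
    """Drop difference scores deviating more than 3 s.d. from their mean.

    In the full analysis this would be applied to each participant's 60
    trial-level difference scores before averaging.
    """
    d = np.asarray(diff_scores, dtype=float)
    return d[np.abs(d - d.mean()) <= 3 * d.std()]

# Hypothetical per-participant mean deviations (estimate minus objective
# odds, in percentage points) for favorite and least favorite team trials.
fav = np.array([2.1, 0.5, 1.8, -0.3, 1.2, 0.9, 2.4, 0.1, 1.5, 0.7, 1.1, 0.4, 0.6])
lst = np.array([-1.9, -0.2, -2.5, 0.4, -1.1, -0.8, -2.0, -0.5, -1.6, -0.9, -1.4, -0.7, -1.3])

# Paired t-test, one-tailed because the hypothesis is directional
# (favorite-team estimates inflated relative to least-favorite estimates).
t, p_two = stats.ttest_rel(fav, lst)
p_one = p_two / 2 if t > 0 else 1 - p_two / 2
print(f"t({len(fav) - 1}) = {t:.2f}, one-tailed p = {p_one:.4f}")
```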

Functional scans were realigned, normalized, time and motion corrected, temporally smoothed by a low-pass filter consisting of a 3-point Hamming window, and spatially smoothed by a 5-mm full-width at half-maximum (FWHM) Gaussian filter, using AFNI software. Next, a percent-signal-change data transformation was applied to the functional scans. Individual deconvolution analyses incorporated the generation of impulse response functions (IRFs) of the BOLD signal on a voxel-wise basis ( Ward, 2001 ), permitting the estimation of the hemodynamic response for each condition relative to a baseline state (baseline, in our case, referred to the mean individual signal across the whole experimental session) without a priori assumptions about the specific form of an IRF. Deconvolution analyses comprised separate regressors for each time point (six included TRs, starting with the appearance of the club banners and the statement-related probabilities, i.e. the calculation process) of each experimental condition, and fitted these regressors by the use of a linear least squares model. Estimated individual percent-signal change for TRs two to six was averaged for each voxel in each condition for use in later statistical analyses. Output from the individual deconvolution analyses was converted to Talairach stereotaxic coordinate space ( Talairach and Tournoux, 1988 ) and interpolated to volumes with 3 mm3 voxels.
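The finite-impulse-response flavor of deconvolution can be illustrated on synthetic data: one regressor per post-onset TR, fitted by least squares, recovers the response shape without assuming one. This toy single-voxel, single-condition version is our own simplification of the AFNI analysis described above:

```python
# Toy FIR deconvolution at one voxel, on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
n_trs, n_lags = 300, 6
onsets = np.arange(10, n_trs - n_lags, 30)  # hypothetical trial onsets

# Design matrix: column j models the response j TRs after each onset.
X = np.zeros((n_trs, n_lags))
for onset in onsets:
    for lag in range(n_lags):
        X[onset + lag, lag] = 1.0

true_irf = np.array([0.0, 0.4, 1.0, 0.8, 0.3, 0.1])  # shape assumed for the demo
y = X @ true_irf + 0.5 * rng.standard_normal(n_trs)  # synthetic voxel signal

irf_hat, *_ = np.linalg.lstsq(X, y, rcond=None)      # estimated IRF per lag
print(np.round(irf_hat, 2))
```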

Since WT should rely on a distinction between favorite and least favorite team trials, for each participant three contrasts of interest were computed for use in group-level whole-brain analyses: percent-signal change of each condition with respect to baseline, as well as the difference in percent-signal change for favorite vs least favorite team. Results were then entered into a cluster analysis with an individual voxel threshold of P < 0.01, a minimum cluster connection radius of 5.2 and a cluster volume of 702 µl (corresponding to 26 active, contiguous voxels). The minimum cluster volume was calculated by a Monte Carlo simulation with 10 000 iterations, assuming some interdependence between voxels (5 mm FWHM), resulting in a corrected whole-brain P-value of 0.05. Consequently, this method corrects for whole-brain volume and thus for multiple testing at the voxel level.
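The logic of the Monte Carlo cluster-size threshold can be sketched as follows; this toy version uses a small grid and generic Gaussian smoothing rather than AFNI's machinery, so its output is illustrative only:

```python
# Simplified Monte Carlo estimate of a cluster-extent threshold: simulate
# smoothed noise volumes, record the largest supra-threshold cluster in
# each, and take the 95th percentile as the familywise-error cutoff.
import numpy as np
from scipy.ndimage import gaussian_filter, label

rng = np.random.default_rng(0)

def max_cluster_size(shape=(30, 30, 30), smooth_sigma=1.5):
    noise = gaussian_filter(rng.standard_normal(shape), smooth_sigma)
    noise = (noise - noise.mean()) / noise.std()  # re-standardize after smoothing
    clusters, n = label(noise > 2.326)            # one-sided z for p < .01
    if n == 0:
        return 0
    return int(np.bincount(clusters.ravel())[1:].max())

sizes = [max_cluster_size() for _ in range(1000)]
print(np.percentile(sizes, 95))  # cluster size controlling FWE at .05
```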

To get insight into the coactivations of the brain areas differing between the favorite and least favorite teams with other brain areas (including the human reward system), we entered the regions found to significantly differ between the two teams as input seeds for context-dependent functional connectivity (also called psychophysiological interaction, PPI) analyses. Analyses for each cluster were conducted separately for each team as well as for the differential activity between the two teams. These analyses tested, within a whole brain analysis, for synchronized activities with the input seed regions. The exact method adopted can be found at: http://afni.nimh.nih.gov/sscc/gangc/CD-CorrAna.html . Results were entered into a cluster analysis with an individual voxel threshold of P  < 0.01, minimum cluster connection radius of 5.2 and a cluster volume of 702 µl (corresponding to 26 active, contiguous voxels).
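Conceptually, a PPI analysis asks whether the coupling between a seed region and a target voxel changes with the psychological context, which can be expressed as a regression with a seed-by-task interaction term. A minimal sketch on synthetic data (real analyses also deconvolve to the neural level and convolve regressors with a hemodynamic response function):

```python
# PPI-style regression at a single target voxel, on synthetic data:
# the interaction regressor (seed x task) tests context-dependent coupling.
import numpy as np

rng = np.random.default_rng(1)
n_trs = 200
seed = rng.standard_normal(n_trs)          # seed-region time course
task = np.repeat([1.0, -1.0], n_trs // 2)  # favorite vs least favorite context
ppi = seed * task                          # psychophysiological interaction
target = 0.5 * ppi + 0.3 * seed + rng.standard_normal(n_trs)

X = np.column_stack([np.ones(n_trs), seed, task, ppi])
betas, *_ = np.linalg.lstsq(X, target, rcond=None)
print(f"PPI beta: {betas[3]:.2f}")  # nonzero -> task-dependent connectivity
```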

Finally, and most importantly, we tested whether clusters displaying differential activation for the favorite and the least favorite team, as revealed in the whole-brain analysis, were indeed related to WT and not just displaying affective preferences or differential identification with the two teams. We calculated between-subjects Spearman correlations between the differential activations in these clusters and the difference in behavioral estimates for the two teams. We extended this analysis to the functional connectivities: the differences (favorite vs least favorite trials) in Fisher’s z correlations between seed and identified functionally connected clusters were likewise correlated with the difference in behavioral estimates for the two teams.
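A sketch of this correlation step, with hypothetical values for the 13 participants (in practice the connectivity differences would first be Fisher-z transformed, e.g. via np.arctanh):

```python
import numpy as np
from scipy import stats

# Hypothetical differential connectivity (favorite minus least favorite,
# Fisher-z) and differential behavioral estimates, one value per subject.
delta_conn = np.array([0.42, 0.10, 0.55, -0.05, 0.31, 0.22, 0.61,
                       0.02, 0.37, 0.15, 0.28, 0.08, 0.19])
delta_est = np.array([3.1, 0.8, 4.2, -0.4, 2.5, 1.6, 4.9,
                      0.3, 2.9, 1.0, 2.2, 0.5, 1.4])

rho, p = stats.spearmanr(delta_conn, delta_est)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```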

The favorite team was more liked than the least favorite team, t(12) = 10.31, P < 0.000001 (Ms = 6.5 and 2.0), and it was judged as more pleasant, t(12) = 7.66, P < 0.00001 (Ms = 6.2 and 1.8), and more appealing, t(12) = 7.96, P < 0.000005 (Ms = 6.2 and 2.2). More importantly, participants systematically overestimated the winning prospects for their favorite team compared with their least favorite team, t(12) = 2.13, P < 0.05 (deviations from objectively correct estimates: Ms = 1.0 and −1.3, respectively), thus displaying WT. The 13 participants specified 9 different favorite teams and 8 different least favorite teams. Therefore, our results are not restricted to specific NFL teams.

Functional MRI data

Changes from baseline for both the favorite and the least favorite team included a decrease of default network activity (medial frontal and various temporal areas), but an increase in activity of the dorsolateral prefrontal cortex (dlPFC) and occipital areas ( Table 1 ). Activations and deactivations for the two teams were largely overlapping, with the slight difference of larger cluster sizes and, consequently, fewer clusters for the favorite team trials.

[Table 1. Whole-brain analyses. Table not reproduced here; all Ps are based on two-tailed testing. B = bilateral; L = left; R = right.]

The BOLD contrast for favorite minus least favorite team revealed greater activity for the favorite team in (i) left medial and superior frontal gyri within the dorsomedial PFC [dmPFC; x, y, z = −2, 54, 22; t(12) = 6.29, P < 0.0001; cluster volume: 702 mm3; Figure 2a], (ii) right precuneus and right superior parietal lobule [x, y, z = 26, −60, 43; t(12) = 4.87, P < 0.0005; cluster volume: 1404 mm3; Figure 2b] and (iii) left inferior occipital and left fusiform gyri [x, y, z = −34, −62, −6; t(12) = 6.73, P < 0.0001; cluster volume: 783 mm3; Figure 2b].

Figure 2. Areas showing differential activation for the two teams. Significantly greater activation for the favorite team as compared with the least favorite team was observed in bilateral medial and superior frontal gyri within the medial prefrontal cortex [a; Talairach coordinates (x, y, z): −2, 54, 22], right precuneus and right superior parietal lobule (b; 26, −60, 43) and left inferior occipital and left fusiform gyri (b; −34, −62, −6).

Context-dependent functional connectivity analyses

During the favorite team trials, but not during the least favorite team trials, the parietal seed cluster was functionally connected with an area in the left dlPFC ( Table 2 and Figure 3 ). However, the parietal cluster was not functionally connected to any other brain region in our study when the differential activity between the favorite and least favorite teams was examined, thus showing that the supposed difference in coactivations of the parietal cortex and the dlPFC between the favorite and the least favorite team did not reach significance.

Figure 3. Results of the connectivity analyses. Favorite: connectivity during favorite team trials; Least favorite: connectivity during least favorite team trials; Favorite − least favorite: differential connectivity between favorite and least favorite team trials.

The occipital seed cluster displayed, for the favorite team, positive coupling with parts of right and left claustrum, insula and putamen. It further revealed positive connectivity with an area in the dmPFC. No functional connections with the occipital seed were observed for the least favorite team trials. The favorite vs least favorite PPI demonstrated differential connectivity for two clusters comprising left and right claustrum, insula and putamen, and a cluster in the posterior cingulate cortex. In all cases stronger positive connectivity was observed for the favorite as compared with the least favorite team.

Finally, activity in the dmPFC seed, during the favorite team trials, was positively coupled with activity in left and right caudate and posterior cingulate gyri, in two other areas in the dmPFC, as well as in insula and claustrum. Interestingly, negative coupling with right caudate and cingulate gyrus, as well as with two areas in the dmPFC, was observed for the least favorite team; one of these dmPFC clusters extended into the anterior cingulate cortex (ACC). The favorite vs least favorite PPI revealed differential connectivity of the dmPFC seed with five clusters, including left and right caudate and cingulate gyri, bilateral ACC and two clusters in the dmPFC.

Between-subjects Spearman correlations

Table 3 displays the between-subjects Spearman correlations between the areas displaying differential activity or connectivities for the favorite vs least favorite team trials, on the one hand, and the difference in behavioral estimates for favorite vs least favorite team trials, on the other hand. Participants who displayed a greater extent of differential connectivities with the occipital cluster were characterized by stronger WT in the behavioral estimates, thus supporting the idea of an attention bias being the central mechanism underlying WT in our study.

[Table 3. Correlations with extent of WT displayed in behavior. Table not reproduced here; all Ps are based on two-tailed testing. B = bilateral; L = left; R = right; N = 13.]

Our participants displayed WT and treated their preferred sports team advantageously. In accordance with expectations, we observed systematic differences in participants’ winning estimates for the favorite as compared with the least favorite team. Additional statistical analyses revealed no difference between the neutral and the least favorite team but significantly higher estimates for the favorite than for the neutral team. Thus, the bias observed in our participants’ behavior is largely attributable to a preferential treatment of the favorite team.

On the neural level, we found differential activity between the favorite and the least favorite team in three key areas: The dmPFC, the precuneus/superior parietal lobule and the fusiform gyrus/occipital gyrus. However, activity in none of these areas was per se associated with the degree of WT displayed in our participants’ behavior. Earlier research has related activity in a network comprising the dmPFC and the right precuneus to self-relevance (here defined by social identity), self-reflection and social cognition ( Ochsner et al ., 2004 ; Amodio and Frith, 2006 ). Therefore, activity in these areas may have been related to processes accompanying the WT phenomenon while not being part of the definitive mechanism generating it. For instance, identification with the teams may have created a context of affective preferences without which WT would not have arisen. That we found differential activation in occipital (inferior occipital and fusiform gyri) and parietal areas (superior parietal lobule and precuneus) could further be a sign of selective attention to the information presented (cf Gerlach et al ., 1999 ; Sturm et al ., 2006 ) in the service of these affective preferences (e.g. McDermott et al ., 1999 ).

Limbic (i.e. areas in the anterior and posterior cingulate cortex) and dorsal striatal regions—both structures of the human reward system ( Heekeren et al ., 2007 ; Delgado et al ., 2008 ; Xue et al ., 2009 )—as well as closely related structures typically implicated in emotion processes such as the insula ( Craig, 2003 ; Critchley et al ., 2004 )—were not per se differentially active for the two teams. However, they were found to be differentially coupled with the occipital and the dmPFC clusters identified to differ between the favorite and the least favorite team trials in the activation analyses. Importantly, only the degree of differential connectivities with the occipital cluster was related to the extent of WT revealed in the participants’ behavior. Therefore, our results are supportive of an attention bias rather than interpretation or response bias being responsible for WT.

Specifically, the occipital cluster differing between the favorite team and the least favorite team was positively coupled with dorsal striatum (putamen and caudate), claustrum and insula during the favorite team trials. Furthermore, the posterior cingulate cortex displayed differential (favorite vs least favorite team) connectivity with this same cluster. Differential connectivity of these areas with the fusiform cortex is in line with the idea that the anticipated reward from seeing the favorite team winning guided the participants’ visual perception and attention in the favorite team trials, and thus supportive of the hypothesis that early selective attention in the service of hedonic needs is an important mechanism in the generation of WT. Interestingly, no coupling of any brain area with the occipital cluster was observed for the least favorite team. This result suggests that the synchronization of activities in the occipital cortex and the reward system we observed for the favorite team was related to positive rather than negative distinction processes in this study (up-regulation of expectancies for the favorite team by the application of selective visual attention). 2

Also during favorite team trials, positive coupling of activity in the dorsal striatum (caudate) was observed with activity in the area in the dmPFC that distinguished favorite and least favorite team trials. Conversely, negative coupling between these areas was demonstrated for the least favorite team. Thus, this connectivity pattern is consistent with the idea of a close link between identification and reward processing in our study. That differential activity and differential connectivities for the dmPFC cluster were altogether unrelated to the extent of WT displayed by our participants, on the contrary, is clearly inconsistent with the hypothesis that WT is generated by higher level biased cognitive evidence evaluation or response formation.

In contrast to this latter finding, research on optimism bias suggests that higher level cognitive processes are of particular importance. According to results reported by Sharot et al. (2007), a mechanism of neural activity within limbic and control regions plays a pivotal role. The authors demonstrated reduced activity in the amygdala and the ACC when participants imagined negative future outcomes compared with positive future outcomes as well as positive and negative past outcomes. They suggested that the ACC applies less emotional salience to anticipated negative events (in the sense of an interpretation bias), which then leads to reduced amygdala activity.

The results of Sharot et al. (2007) may appear to diverge importantly from our own data. However, it must also be taken into consideration that, in our experiment, in contrast to Sharot’s work on optimism, participants faced both a favorite and a least favorite team. Thus, they did not anticipate positive and negative events only for themselves. In some cases in our study, negative outcomes may even have been specifically desired (i.e. anticipating low chances for the least favorite team to win the game), and the emotional salience of these outcomes did not need to be reduced. Thus, we would not expect generally increased activity in the ACC for low vs high chances of winning (irrespective of team). For the same reason, we would not expect differential ACC activity for the two teams, irrespective of anticipated chances of winning. Moreover, the concrete aims of the study may have been more transparent and consciously perceivable in the Sharot et al. (2007) study than in our own, thus leading to the implication of higher level cognitive processes.

In sum, our data suggest that the occipital cluster and its functional connectivities are specifically involved in the type of WT investigated in our study, whereas activity in the dmPFC—possibly related to social identification—may merely ensure the affective context necessary for WT to arise. We outlined that the occipital cluster has been related to visual attention, and its functional connections (insula, posterior cingulate, caudate, putamen) to reward processing, in earlier research. In our specific case, self-relevance may have activated structures of the human reward system, which in turn synchronized with the occipital cortex to guide visual attention in the service of individual hedonic needs.

It is important to note, however, that our interpretations concerning the direction of information transmission in the brain during WT are premature since the PPI does not allow for causal inferences. Future research should investigate the flow of information, for instance, whether supposedly attention-related areas are recruited by the limbic and striatal areas, whether the flow of information goes the other way around, or whether influences are bi-directional.

Besides, the neural correlates we identified in our study need not be specific to WT. One can well imagine highly similar neural response patterns in studies that confront individuals with other affect-laden situations without any need to make a decision or a future prediction (e.g. in an affective priming context). Here, selective attention may take place nonetheless, and activity in the visual cortex may display similar patterns of connectivities as in the current study.

Finally, our behavioral results demonstrate that WT appears even in tasks that call for the application of rational Bayesian statistics. Greater WT effects might be expected when no concrete and explicit objective background information is available, which is the case in most everyday situations. Such real-life situations should give even greater scope for selective attention, for instance through the activation of memory episodes that serve the goal of a positive identity. In such cases, the reward and regulation systems may additionally be functionally connected to memory-related neural networks.

Conflict of Interest

None declared.

Acknowledgments

This research was supported by National Institute of Mental Health Grant No. P50 MH72850 and the John Templeton Foundation.

1. Weinstein (1980) defined this same phenomenon as optimism. We use the term wishful thinking because (i) this is the literature upon which we draw in the present study, (ii) McGuire’s concept of wishful thinking predated Weinstein’s use of the term optimism to refer to this phenomenon and (iii) contemporary researchers often use the term optimism not in Weinstein’s sense but in the dictionary sense, which implies that individuals display hopefulness and confidence about the future or the successful outcome of something, without explicitly referring to individual preferences or affective experiences.

2. Remember that, consistent with this observation, our behavioral data did not reveal any systematic difference between the neutral and the least favorite team either, and thus suggested a preferential treatment of the favorite team without specific punishment of the least favorite team.

REFERENCES

  • Amodio DM, Frith CD. Meeting of minds: the medial frontal cortex and social cognition. Nature Reviews Neuroscience. 2006;7:268–77.
  • Babad E. Wishful thinking and objectivity among sports fans. Social Behaviour. 1987;2:231–40.
  • Babad E, Katz Y. Wishful thinking – against all odds. Journal of Applied Social Psychology. 1991;21:1921–38.
  • Cavendini P, Riboldi G, Keller R, D’Annucci A, Bellodi L. Frontal lobe dysfunction in pathological gambling patients. Biological Psychiatry. 2002;51:334–41.
  • Craig AD. A new view of pain as a homeostatic emotion. Trends in Neuroscience. 2003;26:303–7.
  • Critchley HD, Wiens S, Rotshtein T, Öhman A, Dolan RJ. Neural systems supporting interoceptive awareness. Nature Neuroscience. 2004;7:189–95.
  • Delgado MR, Gillis MM, Phelps EA. Regulating the expectation of reward via cognitive strategies. Nature Neuroscience. 2008;11:880–1.
  • Gerlach C, Law I, Gade A, Paulson OB. Perceptual differentiation and category effects in normal object recognition. Brain. 1999;122:2159–70.
  • Goldstein DG, Gigerenzer G. Models of ecological rationality: the recognition heuristic. Psychological Review. 2002;109:75–90.
  • Granberg D, Holmberg S, editors. The Political System Matters: Social Psychology and Voting Behavior in Sweden and the United States. Cambridge: Cambridge University Press; 1988.
  • Heekeren HR, Wartenburger I, Marschner A, Mell T, Villringer A, Reischies FM. Role of the ventral striatum in reward-based decision making. Neuroreport. 2007;18:951–5.
  • Kerschreiter R, Schulz-Hardt S, Mojzisch A, Frey D. Biased information search in homogeneous groups: confidence as a moderator for the effect of anticipated task requirements. Personality and Social Psychology Bulletin. 2008;34:679–91.
  • Krizan Z, Windschitl PD. The influence of outcome desirability on optimism. Psychological Bulletin. 2007;133:95–121.
  • McDermott KB, Ojemann JG, Petersen SE, Ollinger JM, Snyder AZ, Akbudak E, et al. Direct comparison of episodic encoding and retrieval of words: an event-related fMRI study. Memory. 1999;7:661–78.
  • McGuire WJ. A syllogistic analysis of cognitive relationships. In: Rosenberg MJ, Hovland CI, McGuire WJ, Abelson RP, Brehm JW, editors. Attitude Organization and Change. New Haven, CT: Yale University Press; 1960. pp. 65–111.
  • McLean J, Brennan D, Wyper D, Condon B, Hadley D, Cavanagh J. Localization of regions of intense pleasure response evoked by soccer goals. Psychiatry Research – Neuroimaging. 2009;171:33–43.
  • Miller EK, Cohen JD. An integrative theory of prefrontal cortex function. Annual Review of Neuroscience. 2001;24:167–202.
  • Ochsner KN, Bunge SA, Gross JJ, Gabrieli JDE. Rethinking feelings: an fMRI study of the cognitive regulation of emotion. Journal of Cognitive Neuroscience. 2002;14:1215–29.
  • Ochsner KN, Knierim K, Ludlow DH, Hanelin J, Ramachandran T, Glover G, et al. Reflecting upon feelings: an fMRI study of neural systems supporting the attribution of emotion to self and other. Journal of Cognitive Neuroscience. 2004;16:1746–72.
  • Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9:97–113.
  • Petty RE, Cacioppo JT, editors. Attitudes and Persuasion: Classic and Contemporary Approaches. Dubuque, IA: Wm. C. Brown; 1981.
  • Price PC. Wishful thinking in the prediction of competitive outcomes. Thinking and Reasoning. 2000;6:161–72.
  • Sabatinelli D, Lang PJ, Keil A, Bradley MM. Emotional perception: correlation of functional MRI and event-related potentials. Cerebral Cortex. 2007;17:1085–91.
  • Sharot T, Riccardi AM, Raio CM, Phelps EA. Neural mechanisms mediating optimism bias. Nature. 2007;450:102–5.
  • Sturm W, Schmenk B, Fimm B, Specht K, Weis S, Thron A, et al. Spatial attention: more than intrinsic alerting? Experimental Brain Research. 2006;171:16–25.
  • Tajfel H, Flament C, Billig MG, Bundy RF. Social categorization and intergroup behaviour. European Journal of Social Psychology. 1971;1:149–77.
  • Talairach J, Tournoux P, editors. Co-Planar Stereotaxic Atlas of the Human Brain: 3D Proportional System: An Approach to Cerebral Imaging. New York, NY: Georg Thieme Verlag; 1988.
  • Turner JC, Onorato RS. Social identity, personality, and the self concept: a self categorization perspective. In: Tyler TR, Kramer RM, John OP, editors. The Psychology of the Social Self. Mahwah, NJ: Erlbaum; 1999. pp. 11–46.
  • Ward BD, editor. Deconvolution Analysis of fMRI Time Series Data (Technical Report). Milwaukee, WI: Biophysics Research Institute, Medical College of Wisconsin; 2001.
  • Weinstein ND. Unrealistic optimism about future life events. Journal of Personality and Social Psychology. 1980;39:806–820.
  • World Medical Association. Proposed revision of the Declaration of Helsinki. Bulletin of Medical Ethics. 1999;147:18–22.
  • Xue G, Lu ZL, Levin IP, Weller JA, Li XR, Bechara A. Functional dissociations of risk and reward processing in the medial prefrontal cortex. Cerebral Cortex. 2009;19:1019–27.
  • Zábojník J. A model of rational bias in self-assessments. Economic Theory. 2004;23:259–82.


So Good It Has to Be True: Wishful Thinking in Theory of Mind

Competing Interests: The authors have no significant competing financial, professional, or personal interests that might have influenced the execution or presentation of the work described in this manuscript.


Daniel Hawthorne-Madell and Noah D. Goodman, “So Good It Has to Be True: Wishful Thinking in Theory of Mind,” Open Mind 2017; 1(2): 101–110. doi: https://doi.org/10.1162/OPMI_a_00011


In standard decision theory, rational agents are objective, keeping their beliefs independent from their desires. Such agents are the basis for current computational models of Theory of Mind (ToM), but the accuracy of these models is unknown. Do people really think that others do not let their desires color their beliefs? In two experiments we test whether people think that others engage in wishful thinking. We find that participants do think others believe that desirable events are more likely to happen, and that undesirable ones are less likely to happen. However, these attributions are not well calibrated, as people do not let their desires influence their beliefs in the task itself. Whether accurate or not, thinking that others wishfully think has consequences for reasoning about them. We find one such consequence: people learn more from an informant who thinks an event will happen despite wishing it were otherwise. People’s ToM therefore appears to be more nuanced than current rational accounts in that it allows others’ desires to directly affect their subjective probability of an event.

Whether thinking “I can change him/her” about a rocky relationship or the more benign “those clouds will blow over” at a picnic, people’s desires seem to color their beliefs. However, such an explanation presupposes a direct link between a person’s desires and beliefs, a link that is currently absent from normative behavioral models and current Theory of Mind (ToM) models.

Does a causal link between desires and beliefs actually exist? 1 The evidence is mixed. There are a number of compelling studies that find “wishful thinking,” or a “desirability bias,” in both carefully controlled laboratory studies (Mayraz, 2011) and real-world settings, such as the behavior of sports fans (Babad, 1987; Babad & Katz, 1991), expert investors (Olsen, 1997), and voters (Redlawsk, 2002). However, other researchers have failed to observe the effect—notably Bar-Hillel and Budescu in “The Elusive Wishful Thinking Effect” (1995)—and subsequent work has offered alternative accounts of earlier experiments (Hahn & Harris, 2014) and argued that there is insufficient evidence for a systematic wishful thinking bias (Hahn & Harris, 2014; Krizan & Windschitl, 2007).

Whether or not there actually is a direct effect of desires on beliefs, people might think that there is and use that assumption when reasoning about other people. That is to say, people’s ToM might incorporate the wishful thinking link seen in Figure 1b. The direct influence of desires on beliefs is a departure from classic belief–desire “folk” psychology, in which beliefs and desires are independent and jointly cause action (Figure 1a). Previous models of ToM formalize belief–desire psychology into probabilistic models of action and belief formation. They show that inferring others’ beliefs (Baker, Saxe, & Tenenbaum, 2011), preferences (Jern, Lucas, & Kemp, 2011), and desires (Baker, Saxe, & Tenenbaum, 2009) can be understood as Bayesian reasoning over these generative models. A fundamental assumption of these models is that beliefs are formed on the basis of evidence, a priori independent of desire. We will call models that make this assumption rational theories of mind (rToM). We can contrast this rationally motivated theory with one that incorporates the rose-colored lenses of a desire–belief link: an optimistic ToM (oToM). 2 We use the models’ qualitative predictions to motivate two experiments into the presence (and calibration) of wishful thinking in ToM and its impact on social reasoning.

Figure 1. Competing models of Theory of Mind (ToM). Causal models of (a) rational ToM based upon classic belief–desire psychology and (b) optimistic ToM that includes a direct “wishful thinking” link between desires and beliefs.
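The contrast between the two models can be made concrete with a small sketch. The following Python snippet is our illustration, not the authors’ implementation; the exponential-tilting form and the strength parameter alpha are assumptions made only to show the qualitative difference. Under rToM, the belief attributed to an agent depends only on the evidence; under oToM, it is additionally tilted toward outcomes the agent desires.

```python
import math

def rtom_belief(p_evidence: float) -> float:
    """Belief attributed to the agent under a rational ToM: evidence only."""
    return p_evidence

def otom_belief(p_evidence: float, utility: float, alpha: float = 0.5) -> float:
    """Belief attributed under an optimistic ToM: evidence tilted by desire.

    The probability of the outcome is re-weighted by exp(alpha * U(outcome))
    and renormalized, so positive utilities inflate the attributed belief and
    negative utilities deflate it. This functional form is an assumption.
    """
    w_occurs = p_evidence * math.exp(alpha * utility)
    w_not = 1.0 - p_evidence  # no utility at stake if the outcome doesn't occur
    return w_occurs / (w_occurs + w_not)

if __name__ == "__main__":
    for utility in (-1.0, 0.0, 1.0):  # the agent stands to lose $1, nothing, or win $1
        print(utility, rtom_belief(0.4), round(otom_belief(0.4, utility), 3))
```

With zero utility the two models agree; as the outcome becomes desirable, the oToM-attributed belief rises above the evidence-based value, which is exactly the pattern the experiments probe.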

EXPERIMENT 1: WISHFUL THINKING IN ToM (3-PoV) AND ONLINE BEHAVIOR (1-PoV)

In Experiment 1 we explore wishful thinking in both ToM and behavior. In the third-person point-of-view (3-PoV) condition, we test whether people use an rToM or an oToM when reasoning about how others play a simple game—will manipulating an agent’s desire for an outcome affect people’s judgments about the agent’s belief in that outcome? In the first-person point-of-view (1-PoV) condition we test whether people actually exhibit wishful thinking when playing the game themselves. We carefully match the 3-PoV and 1-PoV conditions and run them concurrently to provide a clear test of whether people’s ToM assumptions lead them to make appropriate inferences about people’s behavior in the game. 3 Regardless of its accuracy, people’s ToM should have consequences for both how they reason about others’ actions and how they learn from them. If people do attribute wishful thinking to others, it would have a dramatic impact on their interpretation of others’ behavior. In Experiment 2 we therefore test for a social learning pattern that only reasoners using an oToM would exhibit, highlighting the impact ToM assumptions have on social reasoning.

3-PoV Condition

To test for the presence of wishful thinking in people’s mental models of others we introduced Josh, a person playing a game with a transparent causal structure. The causal structure of the game was conveyed via the physical intuitions of the Galton board pictured in Figure 2b (in which a simulated ball bounces off pegs to land in one of two bins). The outcome of the game is binary (there are two bins) with different values associated with each outcome (money won or lost). We call the value of an outcome (i.e., the amount that Josh stands to win or lose) the utility of that outcome, U(outcome). Participants were asked what they think about Josh’s belief in the likelihood of the outcome, p_j(outcome). By manipulating outcome values we are able to test for wishful thinking. If people incorporate wishful thinking into their ToM, we should find that increasing an outcome’s utility results in higher estimates of Josh’s belief in the outcome’s occurrence, p_j(outcome).

Figure 2. Stimuli used in Experiment 1. (a) The wheel used to determine the payout for the next outcome and (b) the Galton board used to decide the outcome. The blue arrow at the top indicates where the marble will be dropped. The numbers indicate the four drop positions used in the experiment.

We first measured p_j(outcome | evidence) without manipulating the desirability of the outcome in the “baseline” block of trials. Then, in the “utility” block of trials, we assigned values to outcomes, manipulating Josh’s U(outcome). 4 In the utility block of trials we used a spinning wheel (Figure 2a) to determine what Josh stood to win or lose based on the outcome of the marble drop. By comparing these two blocks of trials we test for the presence of wishful thinking in people’s ToM.

1-PoV Condition

To test whether people’s desires directly influence their beliefs in the Galton board game, we simply had the participant directly play the game (replacing Josh) and asked them about their belief in the likelihood of the outcome [their “self” belief, p_s(outcome)].

Participants

Eighty participants (24 female; μ_age = 32.93, σ_age = 9.68) were randomly assigned to either the 3-PoV or the 1-PoV condition, with 40 in each.

Design and Procedure

Participants were first introduced to Josh, who was playing a marble-drop game with a Galton board (as seen in Figure 2b). Josh was personified as a stick figure and appeared on every screen. We then presented the causal structure (i.e., physics) of the game by dropping a marble from the center of the board two times, with one landing in the orange bin (Figure 2b, left bin) and one landing in the purple bin (Figure 2b, right bin). After observing the two marble drops, participants began the baseline block of trials. In the four baseline trials, the marble’s drop position varied and participants were asked, “What do you think Josh thinks is the chance that the marble lands in the bin with the purple/orange box?” Participants’ responses were recorded on a continuous slider with endpoints labeled “Certainly Will” and “Certainly Won’t.” Color placement was randomized on each trial, and the color of the box in question varied between participants. The marble drop position was indicated with a blue arrow at the top of the Galton board, and the four drop positions used (marble_x; top of Figure 2b) varied in how likely they were to deliver the marble into the bin in question. In the baseline and subsequent trials, participants did not observe the marble drop and outcome; they only observed the position the marble would be dropped from.

In the 1-PoV condition the procedure mirrored the 3-PoV condition, with the participant taking the place of Josh. All questions were therefore reframed to ask about the participant’s own beliefs about the outcome. The participants were given a $1 bonus initially and instructed that one trial would be selected at random to augment their current bonus; that is, they could gain or lose $1.

In a rational theory of mind, beliefs and desires are a priori independent. Manipulating Josh’s desires therefore shouldn’t have an effect on his beliefs, and we would predict that the utility trials look like the baseline trials. However, as seen in Figure 3a, the utility trials varied systematically from the baseline trials and, therefore, from the predictions of an rToM. To quantify this deviation we fit a logistic mixed-effects model to participants’ p_j(outcome) responses. The model used marble_x and the categorically coded value of the outcome (negative, baseline, and positive) as fixed effects, and included a random slope for marble_x and a random intercept for each participant. The resulting model indicated that if an outcome was associated with a utility for Josh, participants thought that it would impact his beliefs about the probability of that outcome.
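In symbols, with i indexing participants and t trials, the fitted model can be written roughly as follows. This is our reconstruction from the description above; the exact parameterization in the authors’ analysis may differ:

$$\operatorname{logit} p_{j,it} = \beta_0 + \beta_1\,\mathrm{marble}_{x,it} + \beta_2\,\mathbb{1}[\mathrm{negative}_{it}] + \beta_3\,\mathbb{1}[\mathrm{positive}_{it}] + u_{0i} + u_{1i}\,\mathrm{marble}_{x,it}, \qquad (u_{0i}, u_{1i}) \sim \mathcal{N}(0, \Sigma)$$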

Figure 3. Experiment 1 data. The effect of an agent’s desire for an outcome on the mean subjective p_j(outcome) attributed to the agent (with 95% CIs). For each participant, the mean effect of the positive utility ($1) and the negative utility (−$1) was determined by taking the difference between the p_j(outcome) in each utility trial and the corresponding baseline trial. The effect is shown for the (a) 3-PoV (point-of-view) and (b) 1-PoV conditions [where p_s(outcome) is displayed]. These data are compared with the posterior predictives of the (c) optimistic and (d) rational Theory of Mind (ToM) models (see Supplemental Materials [Hawthorne-Madell & Goodman, 2017]).

Participants thought that Josh would believe that an outcome that lost him money was less likely than in the corresponding baseline trial (β = −0.70, z = −2.10, p = .036). 6 They also thought that Josh would believe that an outcome that would net him money was more likely than in the corresponding baseline trial (β = 0.96, z = 2.87, p = .004). 7 Finally, marble_x, the direct evidence, had a significant influence (β = 10.37, z = 11.78, p < .001). There was no evidence that the effect of the outcome value was modulated by marble_x (the interactive model did not provide a superior fit; χ²(2) = 0.68, p = .736).

Unlike in the 3-PoV condition, as seen in Figure 3b, there was no effect of utility on participants’ p_s(outcome) responses compared with their baseline responses. Using the same logistic mixed model employed in the 3-PoV condition, neither outcomes that would lose the participant money (β = 0.09, z = 0.30, p = .760) nor outcomes that would win them money (β = −0.09, z = 0.30, p = .760) influenced participants’ p_s(outcome) responses. Similar to the 3-PoV condition, a strong effect of the marble’s position was observed (β = 8.88, z = 11.95, p < .001).

Comparing Conditions

To formalize the discrepancy in the effect of utility across conditions, we analyzed them together with a logistic mixed model. We used the same model described previously, except that utility was coded continuously and we added an interaction between utility and condition. The resulting model had a significant interaction between PoV (condition) and the effect of utility on participants’ p_j/s(outcome) responses (β = 0.43, z = 3.83, p < .001). This interactive model provided a better fit than the additive model [χ²(1) = 15.11, p < .001].

The results from the 3-PoV condition indicate that people’s ToM includes a direct “wishful thinking” link. This is consistent with the qualitative predictions of the oToM model (see the Supplemental Materials [Hawthorne-Madell & Goodman, 2017]; Equation 2), unlike rToM models, where beliefs and desires are a priori independent. 8 However, the 1-PoV condition found no evidence that people are biased by their desires in the Galton board game. This disconnect suggests that people’s attribution of wishful thinking in this situation is miscalibrated. That is to say, Experiment 1 represents a situation where wishful thinking is present in ToM reasoning but absent in actual behavior—people think others will behave wishfully when, in fact, they do not.

This miscalibration is consistent with an over-attribution of wishful thinking. However, the present study does not provide insight into why this miscalibration arises. Any number of incorrect assumptions could lead to these results. Perhaps people think that everyone wishfully thinks, but that only they are clever enough to correct for it. Alternatively, they could think that $1 or $5 is much more desirable to others than it is to themselves. There are a number of actor–observer asymmetries and self-enhancement biases that could plausibly underpin the observed inconsistency (Jones & Nisbett, 1971; Kunda, 1999). Further study is necessary to determine the cause of the over-attribution.

Regardless of whether people actually engage in wishful thinking, if people assume others do, then it should affect how they interpret others’ actions and learn from them. In Experiment 2 we therefore expand our sights to social learning situations where oToM (but, crucially, not rToM) predicts that desires affect a social source’s influence.

EXPERIMENT 2: LEARNING FROM OTHERS WITH AN oToM

Do people consider a social source’s desires when learning from them? It would be important to do so if they think that the source’s desires have a direct influence on his beliefs. Consider a learner using an oToM to reason about her uncle, a Chicago Cubs fan, who proudly proclaims that this is the year the Cubs will win it all. Though her uncle knows a lot about baseball, the oToM learner is unmoved from her (understandably) skeptical stance. However, if her aunt, a lifelong Chicago White Sox fan (hometown rival to the Cubs), agrees that the Cubs do look better than the Sox this year, then an oToM learner considers this a much stronger teaching signal. In fact, a learner with an oToM would consider her aunt’s testimony more persuasive than that of an impartial source (see Figure 4b). A learner reasoning with an rToM wouldn’t distinguish between these three social sources, 9 as seen in Figure 4c.

Figure 4. Experiment 2 data. Effect of a social source’s desire on how others learn from them: (a) data with 95% CIs, which we compare to the posterior predictives of (b) an optimistic Theory of Mind (ToM) and (c) a rational ToM. Points represent the mean p(team_x) response after hearing equally knowledgeable sources place a bet on team_x that is either consistent, unrelated, or inconsistent with their desires.

We investigated which ToM best describes learning from social sources in a controlled version of this biased-opinion scenario. Participants were asked how likely a team (x) was to win an upcoming match, p(team_x), in a fictional college soccer tournament after seeing a knowledgeable student bet on the team. The student was either a fan of one of the teams facing off or indifferent to the outcome. There were therefore three trial types—the consistent trial, where the student bet on the team he wanted to win; the inconsistent trial, where he bet on the team he wished would lose; and the impartial trial, where he didn’t care which team won before he bet.

One hundred twenty participants were randomly assigned to the consistent, inconsistent, or impartial condition.

Participants were first introduced to a (fictional) annual British collegiate soccer tournament and told that they would see bets on these matches from a student who “Unbeknownst to his friends makes a £100 bet online on which team he thinks will win this year’s game.” 10 The student was either a fan of one of the teams (attending that college) or of neither team (attending a different college). The students were equally knowledgeable across conditions, each being described as having seen the outcomes of the last 10 matches these teams played against each other.

After the introduction, participants were given a test trial appropriate for their (randomly assigned) condition, in which the student bet consistently with his school, bet against his school, or was impartial (not a fan of either school). After observing the student’s bet and allegiance, participants were asked, “What do you think is the chance that team x wins the match this year?”

As seen in Figure 4a, participants’ responses were sensitive to the student’s a priori desires, consistent with learners who reason with an oToM (but not an rToM). Participants who saw an impartial student bet on team_x thought the team was more likely to win than when they saw a fan of team_x place an identical bet (d = 0.80, 95% CI [0.33, 1.27], z = 3.35, p < .001 11). This is consistent with the learner thinking that the fan’s desire to see his team win made him think the win was objectively more likely. Additionally, participants who saw a fan of the other team bet on team_x were more influenced by the bet than by the same bet from the impartial student (d = 0.67, 95% CI [0.21, 1.14], z = 2.87, p = .004). As predicted by the model of the oToM learner, someone who bets against his desires is more diagnostic of team_x being dominant than an impartial source. The oToM learner thinks that team_x had to be clearly dominant to overcome the wishful thinking of a fan rooting against them.

Assuming that fans engage in wishful thinking allows oToM learners to make stronger inferences about the strength of the fans’ evidence in some cases. For an rToM learner, the fan would have to have seen team_x win a majority of the 10 observed matches in order to bet on them, regardless of his predilections, resulting in the flat predictions seen in Figure 4c. Meanwhile, the oToM learner thinks that a fan of team_x could bet on them even if the fan only observed them win a few times. 12 If, however, the fan bets against his own team, the oToM learner assumes that the fan must have seen his team trounced in the 10 observed matches. Using these insights, an oToM learner using Bayesian inference to learn from the fan will exhibit the qualitative pattern seen in Figure 4b, which is consistent with participants’ behavior (as seen in Figure 4a). The pattern of results is consistent with the predictions of a learner using an oToM (but see the discussion of limitations and additional potential explanations in the Supplemental Materials [Hawthorne-Madell & Goodman, 2017]).
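This inversion logic is simple enough to sketch computationally. The snippet below is our illustration rather than the authors’ model: it assumes a uniform prior over the informant’s evidence (k wins for team_x out of the 10 observed matches), a bet rule that thresholds the informant’s subjective probability at 0.5, and an additive wishful-thinking shift alpha whose size is arbitrary.

```python
# A learner infers team_x's strength from an informant's bet on team_x.
# The informant saw k wins for team_x in the last N_MATCHES games and bets on
# team_x iff his (possibly desire-shifted) subjective probability exceeds 0.5.
N_MATCHES = 10

def informant_bets_on_x(k_wins: int, alpha: float) -> bool:
    # alpha > 0: the informant wants team_x to win; alpha < 0: wants them to lose.
    return k_wins / N_MATCHES + alpha > 0.5

def inferred_strength(alpha: float) -> float:
    """Learner's expected win rate for team_x, given that the informant bet on x.

    Uniform prior over k = 0..N_MATCHES; condition on the bet and average k/N.
    """
    consistent = [k for k in range(N_MATCHES + 1) if informant_bets_on_x(k, alpha)]
    return sum(k / N_MATCHES for k in consistent) / len(consistent)

if __name__ == "__main__":
    for label, alpha in [("fan of team_x (consistent bet)", 0.2),
                         ("impartial student", 0.0),
                         ("fan of the rival (inconsistent bet)", -0.2)]:
        print(f"{label}: inferred strength = {inferred_strength(alpha):.2f}")
```

Running this yields inferred strengths of 0.70, 0.80, and 0.90 for the consistent, impartial, and inconsistent sources, reproducing the ordering in Figure 4b: a bet against one’s own team is the strongest evidence of dominance.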

GENERAL DISCUSSION

Current computational models of theory of mind are built upon the assumption that beliefs are a priori independent of desires. Whether social reasoners use such a rational ToM (rToM) is an empirical question. In two experiments we tested the independence of beliefs and desires in ToM and found that people behave as if they think that others are wishful thinkers whose beliefs are colored by their desires.

In the 3-PoV condition of Experiment 1, we found that people believe that others inflate the probability of desirable outcomes and underestimate the probability of undesirable ones, as they would if they had an optimistic ToM (oToM) with a direct link between desires and beliefs (Figure 3). If people broadly attribute wishful thinking to others (as Experiment 1 suggests), it should be reflected in their social reasoning. For example, social learners using an oToM to make sense of an agent’s beliefs would be sensitive to that agent’s relevant desires. This is exactly what we found in Experiment 2 (Figure 4)—how much people learned from an agent’s beliefs depended on his desires. Agents whose beliefs ran against their desires were more influential than impartial agents, who, in turn, were more influential than agents with consistent beliefs and desires.

The observed presence of wishful thinking in ToM has no necessary relation to its existence in people’s “online” belief formation. Indeed, the 1-PoV condition of Experiment 1 indicates that people’s model of others’ wishful thinking is not perfectly calibrated: they over-attribute wishful thinking to others in situations where others would actually form their beliefs independently of their desires. Charting the situations where wishful thinking is over-applied in this way may be a fruitful avenue for further research. At the extreme, we could imagine finding that everyone thinks one another wishfully thinks, but in fact everyone forms their beliefs independently of their desires! This radical thesis is surely too strong, 13 but oToM may well overestimate the strength of wishful thinking and over-generalize it—amplifying a small online effect into a larger social-cognition effect. Attention to whether a task engages (potentially amplified) oToM representations could provide insight into the considerable heterogeneity of the wishful thinking effect as it has been studied. Specifically, it could help explain why first-person wishful thinking is reliably found in some paradigms and not others.

The paradigms in which wishful thinking is reliably found involve participants reasoning about themselves or others, such as the 3-PoV condition of Experiment 1, where participants reasoned about Josh’s beliefs (for a review of many tasks that may engage social reasoning, see, e.g., Shepperd, Klein, Waters, & Weinstein, 2013, and Weinstein, 1980; but see Harris & Hahn, 2011, and Hahn & Harris, 2014, for an alternative explanation). Asocial paradigms involving direct estimation of probabilities, by contrast, usually do not find the effect, like the 1-PoV condition of Experiment 1, where participants directly estimated the chance that the ball would fall into a particular bin (for other examples of wishful thinking paradigms that do not involve social reasoning, see Study 1 of Bar-Hillel & Budescu, 1995; for a more general review of asocial bias experiments, see the “bookbags” and “pokerchips” paradigms cited in Hahn & Harris, 2014; but see Francis Irwin’s series of experiments, starting with Irwin, 1953, for an example of asocial paradigms that do find a wishful thinking effect).

Where people’s predictions of others’ behavior (3-PoV, Experiment 1) and their actual behavior (1-PoV, Experiment 1) diverge is also important to map, because these disconnects inject a systematic bias into social reasoning. Taking the social learning of Experiment 2 as an example, oToM learners discounted the belief of the agent whose bet was consistent with his desires. However, if this agent actually formed his beliefs without bias, then the learner would be missing a valuable learning opportunity. Asserting that others let their desires cloud their beliefs allows people to “explain away” those beliefs without seriously considering the possible evidence on which they are based. Future work should explore the details of these effects. For example, does a learner attribute bias equally to those who share his desires and those who hold competing ones?

The experiments presented here suggest that people think that others are wishful thinkers; this has broad consequences for social reasoning ranging from our inferences about heated scientific debates to pundit-posturing. Our findings highlight the importance of further research into the true structure of theory of mind. Do people think that others exhibit loss aversion or overweight low probabilities? Is the connection between beliefs and desires bidirectional? Rigorous examination of questions like these may buttress new, empirically motivated computational models of ToM that capture the nuance of human social cognition—an idea so good it has to be true.

AUTHOR CONTRIBUTIONS

All authors developed the study concept and design. Testing, data collection, and analysis were performed by DHM under the supervision of NDG. DHM drafted the manuscript and NDG provided critical revisions. All authors approved the final version of the manuscript for submission.

ACKNOWLEDGMENTS

This work was supported by ONR Grants N000141310788 and N000141310341, and a James S. McDonnell Foundation Scholar Award. We would also like to thank Joshua Hawthorne-Madell, Gregory Scontras, and Andreas Stuhlmüller for their careful reading and thoughtful comments on the manuscript.

While the causal link between desires and beliefs may, in fact, be bidirectional, we will focus on the evidence for the a priori effect of desires on beliefs.

We formally describe Bayesian models of both rToM and oToM in the Supplemental Materials (Hawthorne-Madell & Goodman, 2017 ).

Experiment 1 is a slightly modified replication of the two conditions previously run as separate experiments (see Supplemental Materials [Hawthorne-Madell & Goodman, 2017 ]).

Crucially, Josh’s U(outcome) should not be chosen by him, for example, “I bet $5 that it lands in the right bin,” as such an action would render U(outcome) and p(outcome) conditionally dependent, and both rToM and oToM would predict an influence of desire on belief judgments. To test pure wishful thinking, Josh’s U(outcome) has to be assigned to him by a process independent of p(outcome)—in our case, a spinner.

All p values reported for Experiment 1 are based on the asymptotic Wald test.

There was no evidence of loss aversion in the relative magnitude of the wishful thinking effect for positive and negative utilities. In fact, the magnitude of the wishful thinking effect was slightly stronger for positive utilities.

Interestingly, there was consistency in the magnitude of this effect when Josh stood to gain $1 (as in the present experiment) or $5 in Experiment 1b (see the Supplemental Materials [Hawthorne-Madell & Goodman, 2017 ]). The extent to which people attributed wishful thinking to Josh was therefore not sensitive to the magnitude of Josh’s potential payout for this range (where payout is our operationalization of his desire).

This assumes that the three sources are equally knowledgeable and that their statements have no causal influence on the game. If, for example, the uncle is an umpire, his desires may matter through more objective routes.

See the Supplemental Materials [Hawthorne-Madell & Goodman, 2017 ] for complete experimental materials.

Calculated with the Fisher–Pitman permutation test.

In fact, if the oToM learner thinks that the fan is a completely wishful thinker, then his bet is no longer diagnostic of his evidence (he could have seen anything!).

As seen in well-controlled examples of desires influencing online belief formation (e.g., Mayraz, 2011 ).

Competing Interests

The authors have no significant competing financial, professional, or personal interests that might have influenced the execution or presentation of the work described in this manuscript.


Critical thinking definition


Critical thinking, as described by Oxford Languages, is the objective analysis and evaluation of an issue in order to form a judgement.

The critical thinking process requires the active and skillful evaluation, assessment, and synthesis of information obtained from, or generated by, observation, knowledge, reflection, acumen, or conversation, as a guide to belief and action, which is why it's often emphasized in education and academics.

Some may even view it as a backbone of modern thought.

However, it's a skill, and skills must be trained and exercised to reach their full potential.

People turn to various approaches to improve their critical thinking, such as:

  • Developing technical and problem-solving skills
  • Engaging in more active listening
  • Actively questioning their assumptions and beliefs
  • Seeking out more diversity of thought
  • Cultivating their intellectual curiosity

Is critical thinking useful in writing?

Critical thinking can help in planning your paper and making it more concise, but how it helps isn't obvious at first. We've pinpointed some of the questions you should ask yourself when bringing more critical thinking into your writing:

  • What information should be included?
  • Which information resources should the author look to?
  • What degree of technical knowledge should the report assume its audience has?
  • What is the most effective way to show information?
  • How should the report be organized?
  • How should it be designed?
  • What tone and level of language difficulty should the document have?

Critical thinking applies not only to the outline of your paper; it also raises the question: how can we use critical thinking to solve problems in our writing's topic?

Let's say you have a PowerPoint presentation on how critical thinking can reduce poverty in the United States. You'll first have to define critical thinking for your viewers, and then use critical thinking questions and synonyms to familiarize them with your methods and start the thinking process behind them.

Are there any services that can help me use more critical thinking?

We understand that it's difficult to learn how to use critical thinking more effectively in just one article, but our service is here to help.

We are a team specializing in writing essays and other assignments for college students and anyone else who needs a helping hand. We cover a great range of topics, offer quality work, always deliver on time, and aim to leave our customers completely satisfied with what they ordered.

The ordering process is fully online, and it goes as follows:

  • Select the topic and the deadline of your essay.
  • Provide us with any details, requirements, statements that should be emphasized or particular parts of the essay writing process you struggle with.
  • Leave the email address where your completed order should be sent.
  • Select your preferred payment type, sit back, and relax!

With lots of experience on the market, professionally degreed essay writers, 24/7 online customer support, and incredibly low prices, you won't find a service offering a better deal than ours.


The complex dynamics of wishful thinking: The critical positivity ratio

Research output : Contribution to journal › Article › peer-review

We examine critically the claims made by Fredrickson and Losada (2005) concerning the construct known as the "positivity ratio." We find no theoretical or empirical justification for the use of differential equations drawn from fluid dynamics, a subfield of physics, to describe changes in human emotions over time; furthermore, we demonstrate that the purported application of these equations contains numerous fundamental conceptual and mathematical errors. The lack of relevance of these equations and their incorrect application lead us to conclude that Fredrickson and Losada's claim to have demonstrated the existence of a critical minimum positivity ratio of 2.9013 is entirely unfounded. More generally, we urge future researchers to exercise caution in the use of advanced mathematical tools, such as nonlinear dynamics, and in particular to verify that the elementary conditions for their valid application have been met.
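For readers unfamiliar with the equations at issue: the fluid-dynamics model that Fredrickson and Losada borrowed is the Lorenz system (named in the keywords below), which in standard notation reads

$$\dot{x} = \sigma(y - x), \qquad \dot{y} = x(\rho - z) - y, \qquad \dot{z} = xy - \beta z.$$

The article's point is that nothing in the psychological data or theory licenses treating ratios of positive to negative emotions as variables obeying these equations.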

  • Broaden-and-build theory
  • Lorenz system
  • Nonlinear dynamics
  • Positive psychology
  • Positivity ratio

Brown, N. J. L., Sokal, A. D., & Friedman, H. L. (2013). The complex dynamics of wishful thinking: The critical positivity ratio. American Psychologist. DOI: 10.1037/a0032850



Wishful Thinking: Why It Happens, How to Prevent It, and How to Return to Reality

Wishful thinking is expensive. It causes delays, rework, and even outright failure, on every scale from the project to the enterprise. In this program, we explore tools we can use to detect wishful thinking while we're doing it. We then describe several techniques for preventing it, noticing it, and, finally, repairing the consequences of wishful thinking when it happens.

When things go wrong, and we look back at how we got there, we must sometimes admit to wishful thinking. In this program, we explore why wishful thinking is so common and suggest techniques for limiting its frequency and effects.

Wishful Thinking

A dandelion gone to seed. Some people like to make wishes and then blow on the seeds to set them free.

A phenomenon known as the IKEA effect, for example, is the tendency for people to overvalue objects that they partially assembled themselves, such as furniture from IKEA, independent of the quality of the end result. In organizations, this phenomenon might explain the "not invented here" syndrome that is responsible for so much waste of resources and loss of market position.

In this program we provide practices and procedures to help decision makers, individually and in groups, detect and prevent wishful thinking. The work is based on tools for improving interpersonal communication developed by Virginia Satir, and on recent advances in our understanding of the various psychological phenomena known as cognitive biases.

This program gives attendees the tools and concepts they need to detect and prevent wishful thinking, and once detected, to intervene constructively to limit its effects. It deals with issues such as:

  • How does wishful thinking affect our awareness of situations?
  • How does it influence our ability to remain attentive to tasks?
  • How does it affect the likelihood of seeing patterns that aren't actually present?
  • How does it contribute to the sunk cost effect, thereby interfering with attempts to cancel failed efforts?
  • What is the sunk time effect and how does it affect management?
  • How does wishful thinking prime us to make bad decisions?
  • How does wishful thinking affect negotiation?
  • Can it distort our assessment of the state of mind of superiors, subordinates, colleagues, or rivals? How?

This program is available as a keynote , seminar , or breakout . For the shorter formats, coverage of the outline below is selective.

Learning objectives

This program helps people who make decisions or who want or need to influence others as they make decisions. As it turns out, that's just about everyone in the knowledge-oriented workplace. Participants learn:

  • The concept and importance of cognitive biases
  • The Satir Interaction Model of communication, and how to use it as a framework for detecting wishful thinking
  • How cognitive biases contribute to the incidence of wishful thinking
  • What conditions expose groups to the risk of wishful thinking
  • Indicators of those cognitive biases that most affect individual and group decision making through wishful thinking
  • A basic checklist to use to evaluate the likelihood that wishful thinking has affected group decisions through cognitive biases

Participants learn to appreciate the causes of wishful thinking, both personal and organizational. Most important, they learn strategies and tactics for limiting its effects, or, having discovered that wishful thinking is in action, how to intervene to enhance decision quality.

Program structure and content

We learn through presentation, discussion, exercises, simulations, and post-program activities. We can tailor a program for you that addresses your specific challenges, or we can deliver a tried-and-true format that has worked well for other clients. Participants usually favor a mix of presentation, discussion, and focused exercises.

Whether you're a veteran decision maker, or a relative newcomer to high-stakes decision making as a workplace practice, this program is a real eye-opener.

Learning model

When we learn most new skills, we intend to apply them in situations with low emotional content. But knowledge about how people work together is most needed in highly charged situations. That's why we use a learning model that goes beyond presentation and discussion — it includes in the mix simulation, role-play, metaphorical problems, and group processing. This gives participants the resources they need to make new, more constructive choices even in tense situations. And it's a lot more fun for everybody.

Target audience

Decision makers at all levels, including managers of global operations, sponsors of global projects, managers, business analysts, team leads, project managers, and team members.

Program duration

Available formats range from 50 minutes to one full day. The longer formats allow for coverage of more material, more experiential content, and deeper exploration of issues specific to audience experience.



wishful thinking


Word History

1932, in the meaning defined above


Cite this Entry

“Wishful thinking.” Merriam-Webster.com Dictionary , Merriam-Webster, https://www.merriam-webster.com/dictionary/wishful%20thinking. Accessed 24 Apr. 2024.



Why is critical thinking important?

What do lawyers, accountants, teachers, and doctors all have in common?

Students in the School of Literatures, Languages, Cultures, and Linguistics give a presentation in a classroom in front of a screen

What is critical thinking?

The Oxford English Dictionary defines critical thinking as “The objective, systematic, and rational analysis and evaluation of factual evidence in order to form a judgment on a subject, issue, etc.” Critical thinking involves the use of logic and reasoning to evaluate available facts and/or evidence to come to a conclusion about a certain subject or topic. We use critical thinking every day, from decision-making to problem-solving, in addition to thinking critically in an academic context!

Why is critical thinking important for academic success?

You may be asking “why is critical thinking important for students?” Critical thinking appears in a diverse set of disciplines and impacts students’ learning every day, regardless of major.

Critical thinking skills are often associated with the value of studying the humanities. In majors such as English, students will be presented with a certain text—whether it's a novel, short story, essay, or even film—and will have to use textual evidence to make an argument and then defend their argument about what they've read. However, the importance of critical thinking does not apply only to the humanities. In the social sciences, an economics major, for example, will use what they've learned to figure out solutions to issues ranging from land and other natural-resource use, to how much people should work, to how to develop human capital through education. Problem-solving and critical thinking go hand in hand. Biology is a popular major within LAS, and graduates of the biology program often pursue careers in the medical sciences. Doctors use critical thinking every day, tapping into the knowledge they acquired from studying the biological sciences to diagnose and treat different diseases and ailments.

Students in the College of LAS take many courses that require critical thinking before they graduate. You may be asked in an Economics class to use statistical data analysis to evaluate the impact on home improvement spending when the Fed increases interest rates (read more about real-world experience with Datathon ). If you’ve ever been asked “How often do you think about the Roman Empire?”, you may find yourself thinking about the Roman Empire more than you thought—maybe in an English course, where you’ll use text from Shakespeare’s Antony and Cleopatra to make an argument about Roman imperial desire.  No matter what the context is, critical thinking will be involved in your academic life and can take form in many different ways.

The benefits of critical thinking in everyday life

Building better communication

One of the most important life skills that students learn as early as elementary school is how to give a presentation. Many classes require students to give presentations, because being well-spoken is a key skill in effective communication. This is where critical thinking benefits come into play: using the skills you’ve learned, you’ll be able to gather the information needed for your presentation, narrow down what information is most relevant, and communicate it in an engaging way. 

Typically, the first step in creating a presentation is choosing a topic. For example, your professor might assign a presentation on the Gilded Age and provide a list of figures from the 1870s–1890s to choose from. You'll use your critical thinking skills to narrow down your choices. You may ask yourself:

  • What figure am I most familiar with?
  • Who am I most interested in? 
  • Will I have to do additional research? 

After choosing your topic, your professor will usually ask a guiding question to help you form a thesis: an argument that is backed up with evidence. Critical thinking benefits this process by allowing you to focus on the information that is most relevant in support of your argument. By focusing on the strongest evidence, you will communicate your thesis clearly.

Finally, once you’ve finished gathering information, you will begin putting your presentation together. Creating a presentation requires a balance of text and visuals. Graphs and tables are popular visuals in STEM-based projects, but digital images and graphics are effective as well. Critical thinking benefits this process because the right images and visuals create a more dynamic experience for the audience, giving them the opportunity to engage with the material.

Presentation skills go beyond the classroom. Students at the University of Illinois will often participate in summer internships to get professional experience before graduation. Many summer interns are required to present about their experience and what they learned at the end of the internship. Jobs frequently also require employees to create presentations of some kind—whether it’s an advertising pitch to win an account from a potential client, or quarterly reporting, giving a presentation is a life skill that directly relates to critical thinking. 

Fostering independence and confidence

An important life skill many people start learning as college students and then finessing once they enter the “adult world” is how to budget. There will be many different expenses to keep track of, including rent, bills, car payments, and groceries, just to name a few! After developing your critical thinking skills, you’ll put them to use to consider your salary and budget your expenses accordingly. Here’s an example:

  • You earn a salary of $75,000 a year. Assume all amounts are before taxes.
  • A $1,800 monthly expense (say, rent): 1,800 × 12 = $21,600 a year. 75,000 − 21,600 = 53,400, which leaves you with $53,400.
  • A $320 monthly expense (say, a car payment): 320 × 12 = $3,840 a year. 53,400 − 3,840 = 49,560.
  • A $726 monthly expense (say, bills and groceries): 726 × 12 = $8,712 a year. 49,560 − 8,712 = 40,848.
  • You're left with $40,848 for miscellaneous expenses. You use your critical thinking skills to decide what to do with that $40,848. You think ahead towards your retirement and decide to put $500 a month into a Roth IRA, leaving $34,848. Since you love coffee, you try to figure out if you can afford a daily coffee run. On average, a cup of coffee will cost you $7, and 7 × 365 = $2,555 a year for coffee. 34,848 − 2,555 = 32,293.
  • You have $32,293 left. You will use your critical thinking skills to figure out how much you want to put into savings, how much you want to set aside to treat yourself from time to time, and how much you want to reserve for emergency funds. With the benefits of critical thinking, you will be well-equipped to budget your lifestyle once you enter the working world. (The short script after this list re-runs the same numbers.)
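If you'd like to check the arithmetic or plug in your own numbers, here is a minimal Python sketch of the same budget. The labels for the monthly amounts are illustrative assumptions, since only the dollar figures are given above:

```python
# Minimal budget sketch; the labels for each monthly amount are assumptions.
salary = 75_000  # annual salary, before taxes

monthly_expenses = {
    "rent": 1_800,
    "car payment": 320,
    "bills and groceries": 726,
    "Roth IRA contribution": 500,
}

coffee_per_day = 7  # dollars

remaining = salary
for name, monthly in monthly_expenses.items():
    annual = monthly * 12
    remaining -= annual
    print(f"{name}: ${annual:,}/year -> ${remaining:,} left")

remaining -= coffee_per_day * 365
print(f"daily coffee: ${coffee_per_day * 365:,}/year -> ${remaining:,} left")
```

Running it reproduces the figures above, ending with $32,293 left over.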

Enhancing decision-making skills

Choosing the right university for you

One of the biggest decisions you'll make in your life is what college or university to attend. There are many factors to consider when making this decision, and critical thinking comes into play when weighing them.

Many high school seniors apply to colleges with the hope of being accepted into a certain program, whether it’s biology, psychology, political science, English, or something else entirely. Some students apply with certain schools in mind due to overall rankings. Students also consider the campus a school is set in. While some universities such as the University of Illinois are nestled within college towns, New York University is right in Manhattan, in a big city setting. Some students dream of going to large universities, and other students prefer smaller schools. The diversity of a university’s student body is also a key consideration. For many 17- and 18-year-olds, college is a time to meet peers from diverse racial and socio-economic backgrounds and learn about life experiences different than one’s own.

With all these factors in mind, you’ll use critical thinking to decide which are most important to you—and which school is the right fit for you.

Develop your critical thinking skills at the University of Illinois

At the University of Illinois, not only will you learn how to think critically, but you will put critical thinking into practice. In the College of LAS, you can choose from 70+ majors where you will learn the importance and benefits of critical thinking skills. The College of Liberal Arts & Sciences at U of I offers a wide range of undergraduate and graduate programs in life, physical, and mathematical sciences; humanities; and social and behavioral sciences. No matter which program you choose, you will develop critical thinking skills as you go through your courses in the major of your choice. And in those courses, the first question your professors may ask you is, “What is the goal of critical thinking?” You will be able to respond with confidence that the goal of critical thinking is to help shape people into more informed, more thoughtful members of society.

With such a vast representation of disciplines, an education in the College of LAS will prepare you for a career where you will apply critical thinking skills to real life, both in and outside of the classroom, from your undergraduate experience to your professional career. If you’re interested in becoming a part of a diverse set of students and developing skills for lifelong success, apply to LAS today!

Read more first-hand stories from our amazing students at the LAS Insider blog .



COMMENTS

  1. The Surprising Effects of Wishful Thinking

    Or peruse the internet, full of fat-shaming memes and tweets. This pervasive negativity towards larger body shapes, and the resulting discrimination and shame that people experience, becomes ...

  2. Wishful Thinking

    Wishful Thinking. Description: When the desire for something to be true is used in place of, or as, evidence for the truthfulness of the claim. Wishful thinking, more a cognitive bias than a logical fallacy, can also cause one to evaluate evidence very differently based on the desired outcome. Logical Form:

  3. Bayesianism and wishful thinking are compatible

    Nature Human Behaviour (2024) Bayesian principles show up across many domains of human cognition, but wishful thinking—where beliefs are updated in the direction of desired outcomes rather than ...

  4. Ought Is : Department of Philosophy : Texas State University

    Ought Is. The ought-is fallacy occurs when you assume that the way you want things to be is the way they are. This is also called wishful thinking. Wishful thinking is believing what you want to be true no matter the evidence or without evidence at all, or assuming something is not true, because you do not want it to be so. Examples:

  5. PDF American Psychologist

    The Complex Dynamics of Wishful Thinking The Critical Positivity Ratio Nicholas J. L. Brown Strasbourg, France Alan D. Sokal New York University and University College London Harris L. Friedman Saybrook University and University of Florida We examine critically the claims made by Fredrickson and Losada (2005) concerning the construct known as the

  6. All Thinking is 'Wishful' Thinking: Trends in Cognitive Sciences

    Whereas the term 'wishful thinking' is typically reserved for the motivation for specific certainty, the motivations to avoid it, or to have (or avoid) nonspecific certainties can be no less motivating or 'wished for', supporting our claim that 'all thinking is wishful thinking'; in other words, all thinking is motivated.

  7. Wishful Thinking and Values in Science

    Abstract. This article examines the concept of wishful thinking in philosophical literature on science and values. It suggests that this term tends to be used in an overly broad manner that fails to distinguish between separate types of bias, mechanisms that generate biases, and general theories that might explain those mechanisms.

  8. Self-Deception

    S's acquiring the belief that p is a product of "reflective, critical reasoning," and S is wrong in regarding that reasoning as properly directed; ... Self-Deception and Wishful Thinking: What distinguishes wishful thinking from self-deception, according to intentionalists, just is that the latter is intentional while the former is not ...

  9. The complex dynamics of wishful thinking: the critical positivity ratio

    The lack of relevance of these equations and their incorrect application lead us to conclude that Fredrickson and Losada's claim to have demonstrated the existence of a critical minimum positivity ratio of 2.9013 is entirely unfounded. More generally, we urge future researchers to exercise caution in the use of advanced mathematical tools, such ...

  10. PDF The Complex Dynamics of Wishful Thinking: The Critical Positivity Ratio

    Fredrickson and Kurtz (2011, pp. 41-42), in a recent review, highlighted this work as providing an "evidence-based guideline" for the claim that a specific value of the positivity ratio acts as a "tipping point beyond which the full impact of positive emotions becomes unleashed" (they now round off 2.9013 to 3).

  11. Critical Thinking vs. Wishful Thinking

    The real battle in society, and within any individual soul, is the battle between wishful thinking and critical thinking. Critical thinking does not always lead to the truth, but it provides the means for correction while getting to the truth. And it helps you face and identify the truth, once there. Wishful thinking gives up on the concept of ...

  12. Neural correlates of wishful thinking

    Abstract. Wishful thinking (WT) implies the overestimation of the likelihood of desirable events. It occurs for outcomes of personal interest, but also for events of interest to others we like. We investigated whether WT is grounded on low-level selective attention or on higher level cognitive processes including differential weighting of ...

  13. So Good It Has to Be True: Wishful Thinking in Theory of Mind

    Competing models of Theory of Mind (ToM). Causal models of (a) rational ToM based upon classic belief-desire psychology and (b) optimistic ToM that includes a direct "wishful thinking" link between desires and beliefs. In Experiment 1 we explore wishful thinking in both ToM and behavior. In the third-person point-of-view (3-PoV) condition ...

  14. Wishful Thinking

    Through wishful thinking, individuals prioritize desires over empirical truth in forming beliefs and making decisions. This phenomenon impacts both personal and broader societal decisions, underscoring the importance of critical, evidence-based thinking in navigating life's complexities.

  15. Using Critical Thinking in Essays and other Assignments

    Critical thinking, as described by Oxford Languages, is the objective analysis and evaluation of an issue in order to form a judgement. Active and skillful approach, evaluation, assessment, synthesis, and/or evaluation of information obtained from, or made by, observation, knowledge, reflection, acumen or conversation, as a guide to belief and action, requires the critical thinking process ...

  16. The complex dynamics of wishful thinking: The critical positivity ratio

    The lack of relevance of these equations and their incorrect application lead us to conclude that Fredrickson and Losada's claim to have demonstrated the existence of a critical minimum positivity ratio of 2.9013 is entirely unfounded. More generally, we urge future researchers to exercise caution in the use of advanced mathematical tools, such ...

  17. [1307.7006] The complex dynamics of wishful thinking: The critical

    The complex dynamics of wishful thinking: The critical positivity ratio. We examine critically the claims made by Fredrickson and Losada (2005) concerning the construct known as the "positivity ratio". We find no theoretical or empirical justification for the use of differential equations drawn from fluid dynamics, a subfield of physics, to ...

  18. The complex dynamics of wishful thinking: The critical positivity ratio

    Abstract. We examine critically the claims made by Fredrickson and Losada (2005) concerning the construct known as the "positivity ratio.". We find no theoretical or empirical justification for the use of differential equations drawn from fluid dynamics, a subfield of physics, to describe changes in human emotions over time; furthermore, we ...

  19. (PDF) The Persistence of Wishful Thinking: Response to "Updated

    2. Running head: RESPONSE ON POSITIVITY RATI OS 3. Recently we (Brown, Sok al, & Friedman, 2013) debunked the widely-cited claim made by. F redrickson and Losada (2005) that their use of a ma ...

  20. Anxiety Drives Wishful Thinking to Risky Levels

    The findings suggest that while wishful thinking can help cope with stress, it may also hinder necessary actions in critical situations like health or environmental crises. Key Facts: Bias Towards Optimism: In situations of anxiety, people tend to mistakenly perceive less harmful outcomes, showcasing a preference for wishful thinking.

  21. Wishful Thinking: Why It Happens, How to Prevent It,

    Wishful thinking is a means of reaching pleasing conclusions, maintaining preferred beliefs, or rejecting unfavorable beliefs, even when reality demands otherwise. We think wishfully by cherry-picking evidence, bending the rules of rational thought, or creating substitutes for reality. Wishful thinking is a source of risk in every human ...

  22. Wishful thinking Definition & Meaning

    The meaning of WISHFUL THINKING is the attribution of reality to what one wishes to be true or the tenuous justification of what one wants to believe. How to use wishful thinking in a sentence.

  23. 13 Examples of Wishful Thinking

    13 Examples of Wishful Thinking. John Spacey, February 28, 2021. Wishful thinking is the formation of opinions, decisions and strategy based on motivation and desires as opposed to realities. This is less than logical but may have a useful function in certain situations. The following are illustrative examples of wishful thinking.

  24. Why is critical thinking important?

    The benefits of critical thinking in everyday life Building better communication. One of the most important life skills that students learn as early as elementary school is how to give a presentation. Many classes require students to give presentations, because being well-spoken is a key skill in effective communication. This is where critical ...