A Spontaneous Stereotype Content Model:
Taxonomy, Properties, and Prediction
Gandalf Nicolas^(1,2,a), Xuechunzi Bai^2, & Susan T. Fiske^2
^1 Rutgers University – New Brunswick, NJ, United States
^2 Princeton University, NJ, United States
In press at the Journal of Personality & Social Psychology.
This is a non-final, non-copy-edited version of the paper.

Abstract

The Spontaneous Stereotype Content Model (SSCM) describes a comprehensive taxonomy, with associated properties and predictive value, of social-group beliefs that perceivers report in open-ended responses. Four studies (N = 1,470) show the utility of spontaneous stereotypes, compared to traditional, prompted, scale-based stereotypes. Using natural language processing text analyses, Study 1 shows the most common spontaneous stereotype dimensions for salient social groups. Our results confirm existing stereotype models’ dimensions, while uncovering a significant prevalence of dimensions that these models do not cover, such as Health, Appearance, and Deviance. The SSCM also characterizes the valence, direction, and accessibility of reported dimensions (e.g., Ability stereotypes are mostly positive, but Morality stereotypes are mostly negative; Sociability stereotypes are provided later than Ability stereotypes in a sequence of open-ended responses). Studies 2 and 3 check the robustness of these findings by using a larger sample of social groups, varying time pressure, and diversifying analytical strategies. Study 3 also establishes the value of spontaneous stereotypes: compared to scales alone, open-ended measures improve predictions of attitudes toward social groups. Improvement in attitude prediction results partially from a more comprehensive taxonomy, as well as from a construct we refer to as stereotype representativeness: the prevalence of a stereotype dimension in perceivers’ spontaneous beliefs about a social group. Finally, Study 4 examines how the taxonomy provides additional insight into stereotypes’ influence on decision making in socially relevant scenarios. Overall, spontaneous content broadens our understanding of stereotyping and intergroup relations.

Keywords: Stereotype content, social cognition, intergroup relations, text analysis, natural language processing

moral, trustworthy, friendly), and whether they can act on their intentions to help or harm (competent, skilled, agentic, assertive). Social targets can vary independently on the Warmth and Competence dimensions (Fiske et al., 2002). For example, some social targets appear both warm and competent, such as the middle class or White people in the U.S. Other groups are judged to be both untrustworthy and incompetent, such as homeless people. However, social targets can also seem high on Warmth but low on Competence (e.g., elderly people), as well as low on Warmth but high on Competence (e.g., rich people). Depending on the historical moment, various ethnic and national groups fall into these stereotypic quadrants, e.g., in the U.S., Asian people as competent but not warm; Hispanic people as neither competent nor warm; Canadians as both.

The content matters because individuals’ focus on other people’s character predicts a myriad of outcomes, from emotional responses to interpersonal behaviors (Cuddy et al., 2007). Besides helping and harming, other behaviors associated with Warmth and Competence considerations include impression management (Dupree & Fiske, 2019), interactions across societal and organizational hierarchies (see Fiske et al., 2016), and hiring and performance evaluations (Cuddy et al., 2011). Considerations of Warmth and Competence extend even beyond other humans, to values and beliefs with respect to animals (e.g., Goodwin, 2015; Sevillano & Fiske, 2016) and organizations (e.g., Malone & Fiske, 2013), informing descriptive ethical issues ranging from veganism and ecological conservation to corporate responsibility.

Current Controversies

Despite its proven utility, the development of the SCM proceeded in an entirely theory-driven manner, working from the assumption that Warmth and Competence would be a good fit

for evaluative dimensions, building on the larger previous person perception literature (e.g., Asch, 1946). As a result, it focused on a subset of the possible dimensions that perceivers may use to make sense of others. But perhaps the taxonomy of stereotypes is much more complex. Many lines of research have studied different stereotype contents, in a non-unified manner. For example, some research has examined intersectional group membership stereotypes, that is, beliefs about a group’s members also belonging to other social groups (e.g., Schug et al., 2015). Other research has examined stereotypes about geographic origin (e.g., Lee et al., 2009) or about beauty and physical traits (e.g., Nicolas, Skinner, & Dickter, 2019). And just as the SCM stereotype dimensions have a myriad of associated behaviors and consequences for targets, so can these less-frequently-studied dimensions (e.g., see Nicolas et al., 2017). For example, Geography stereotypes (e.g., related to foreignness) about Asian Americans can result in interracial tension and discrimination (Lee et al., 2009), Deviance stereotypes about ethnic minorities predict social distance in domains of marriage and friendship (Hagendoorn & Hraba, 1989), and Emotions and Health stereotypes can lead to misdiagnoses and discrimination in healthcare settings (Boysen et al., 2006; Neighbors et al., 1989). However, these studies often focus on a single dimension for a limited subset of relevant social groups. This paper aims to unify these prior endeavors by studying multiple dimensions used widely across a society’s most salient social groups. A recent exception to these examinations of very specific social groups, which initially challenged the SCM, is the Agency-Beliefs-Communion (ABC) model of stereotype content (Koch et al., 2016; Koch et al., 2020). 
The ABC model examined stereotypes of a large representation of social groups in a data-driven manner, allowing for the emergence of dimensions other than Warmth and Competence. In fact, focusing on intergroup similarity in a

comprehensive taxonomy, sketching a framework for understanding more complex dynamics of stereotyping.

Spontaneous Content

The importance of understanding stereotype content and the current diversity of proposed stereotype dimensions demands a unified taxonomy of contents for a representative sample of societal groups. We believe that traditional metrics such as scales, although useful for exploring knowledge of stereotypes along predefined dimensions, cannot uncover the whole gamut of contents that perceivers possess about a diverse sample of social groups. More novel approaches, such as the ABC’s spatial arrangement method, capitalize on abstract measures of similarity to derive models of how perceivers organize groups along content dimensions. However, these methods still need to be correlated with a predetermined set of scale-measured dimensions for interpretation and are thus limited to evaluating only these preselected dimensions (see Koch et al., 2016). In addition, spatial arrangement cannot determine the percentage of stereotype content that can be classified into the dimensions the model identifies (i.e., its coverage), a criterion that can be used to evaluate a taxonomy’s comprehensiveness.

In contrast to previous methods, here we propose that free-response, open-ended stereotypes of social groups may best systematically reveal the complex contents that are spontaneously available to perceivers upon encountering a target. Free response tasks have been pivotal in recent attempts to revise and improve upon well-established theories and findings. For example, partially due to reliance on forced-choice tasks, previous research on emotion has posited the existence of universal basic emotions that were closely tied to specific physical representations and were independent of language. However, studies on spontaneous emotion perception reveal a more psychological constructionist perspective, wherein emotion perception

depends on linguistic, cultural, and idiosyncratic factors (Gendron et al., 2014). Additionally, studies have used free-response methods to show how the widespread use of forced-choice tasks in racial categorization research has resulted in biased estimates of categorization rates (Nicolas, Skinner, & Dickter, 2019). Specifically, when not constrained to categories such as “Black”, “White”, or “Multiracial” to categorize Black-White mixed-race faces (as has been the norm in the field; see Nicolas & Skinner, 2017; Skinner & Nicolas, 2015), participants categorize these faces into alternative categories, such as “Hispanic” and “Middle Eastern”. This finding suggests that free response tasks can uncover previously neglected perceptions that may more closely align with real-world perceptions.

In the stereotyping literature, however, free response tasks have rarely been used, and certainly not to the ends and extent examined here. For example, classical studies on stereotype content (e.g., Katz & Braly, 1933) use open-ended responses only as an initial means to obtain more traditional measures (e.g., scales or checklists), rather than as the focus of analysis itself. As a result, they end up focusing on only a subset of the possible dimensions, preselected by the investigator, and obtaining information on recognition (i.e., knowledge) rather than recall (i.e., spontaneous content). Other studies have looked at the open-ended responses themselves (e.g., Niemann et al., 1994), but focused only on a subset of responses and did not fully characterize the taxonomy and its associated properties. Furthermore, none of the existing studies have used a large, representative sample of social groups, thus potentially being applicable to only a specific subset of social categories. Avoidance of free responses may give researchers an incomplete, at times oversimplified, understanding of how people think about others.

Methodology Advances

of words in a multidimensional space, based on their co-occurrences in large text corpora (e.g., large archives of news articles; see Supplement for more information). Notably, word embeddings allow for a quantitative analysis of participants’ text responses. Word embeddings appear in both data- and theory-driven approaches that are elaborated later.

Current Studies

Despite a few studies looking at unprompted stereotypes of specific social groups (e.g., ethnic and racial groups; Katz & Braly, 1933) and content analysis of individual impressions (Fiske & Cox, 1979; Park, 1986), no systematic investigation has examined the content of spontaneous stereotypes across a representative sample of social groups, including gender, racial, and occupational groups, among others. Free responses have the potential to advance psychological theories of perceivers’ perceptions and evaluations of themselves and others. This provides an opportunity for interdisciplinary research that integrates insights and methods from fields such as linguistics and computer science.

Specifically, we use a task asking participants to list their spontaneous thoughts about a series of social groups, presented one at a time. These responses are then quantitatively analyzed for content dimensions, as well as response order and reaction times, among other measures. Thus, the current research uses a free-response task and computer-aided coding to study spontaneous stereotype content. Throughout the paper, we use the term “spontaneous” to indicate that participants arrived at the content of their responses without such content being explicitly elicited (e.g., the participant may evaluate a group as “warm” in the free-response task, but Warmth as a content dimension is never elicited or primed). Compare this to traditional scales, where participants are explicitly provided with the content dimensions along which to evaluate targets (e.g., “How warm is group X?”). On the other hand, because we explicitly ask the

participant to provide characteristics of the target, the term spontaneous here does not mean that the process of evaluating the targets occurs automatically (cf. spontaneous trait inferences; Uleman, 1987).

The current research has several aims. First, it will allow us to revisit and improve existing theories of social cognition by proposing a working taxonomy of stereotype content. Currently, multiple models propose distinct stereotype dimensions, from Warmth and Competence (Fiske et al., 2002) to Status and Beliefs (Koch et al., 2016; 2020). Given the variety of dimensions revealed by different methods, and the lack of basic discovery-driven research using free responses, these studies fill a gap that may help clarify the content of social cognition. As previously discussed, different dimensions predict distinct interpersonal discriminatory behaviors and organizational policy decisions (Fiske & Tablante, 2015). Clarifying which dimensions perceivers use to represent social groups will help us better address some of the presently most relevant social and ethical issues.

Second, the current approach permits exploring critical properties of spontaneous stereotype contents, for example, how representative a dimension is in a perceiver’s mental mapping of a social group. Stereotype representativeness may be differentiated from the direction of scores on the dimensions. For example, farmers and Christians may be rated as similarly highly warm and highly competent (direction) using scale averages, but if a perceiver uses mostly Warmth-related words to describe Christians but mostly Competence-related words to describe farmers, then these groups differ on which dimension is most representative of the group’s stereotypes. More representative dimensions may be more predictive of attitudes, decision making, and behavior (cf. Fazio et al., 1986), a possibility we examine in the current paper.
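To make the distinction concrete, representativeness can be sketched as the share of a perceiver’s open-ended words that fall on each dimension. The vocabularies, group names, and word lists below are invented for illustration; they are not the study’s dictionaries or data:

```python
def representativeness(words, dims):
    """Fraction of a group's spontaneous words falling on each dimension."""
    counts = {d: sum(w in vocab for w in words) for d, vocab in dims.items()}
    total = sum(counts.values()) or 1  # avoid division by zero
    return {d: c / total for d, c in counts.items()}

# Toy vocabularies and hypothetical responses (illustrative only)
dims = {"Warmth": {"kind", "caring", "friendly"},
        "Competence": {"skilled", "smart"}}
christians = ["kind", "caring", "friendly", "smart"]
farmers = ["skilled", "smart", "kind", "hardworking"]

print(representativeness(christians, dims))  # {'Warmth': 0.75, 'Competence': 0.25}
print(representativeness(farmers, dims))     # Competence-dominant
```

Both toy groups could receive similar scale averages on each dimension, yet Warmth dominates one group’s spontaneous content while Competence dominates the other’s, which is the representativeness contrast described above.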

response order. These facets of primacy may differ. For example, while Sociability stereotypes may be more prevalent in stereotypes, and more predictive of attitudes towards targets, they may be provided later in a free response list if the group labels provide more readily available information about Ability or Status (e.g., because the labels themselves contain objective information about these dimensions, such as in “rich”, “poor”, or “homeless” people). In general, we examine dimension primacy through the lenses of prevalence (related to the concept of spontaneous representativeness described above) and response times/order (related to time-based accessibility; see e.g., Fazio et al., 2000).

In a nutshell, we introduce the Spontaneous Stereotype Content Model (SSCM), which proposes an initial comprehensive taxonomy of spontaneous stereotype content. Besides recovering more of the nuance and complexity of social reality, the SSCM also sheds new light on stereotype properties and enhances the prediction of general attitudes and decision making as compared to prior low-dimensional models. In what follows, Study 1 uses both data- and theory-driven codings of open-ended data (cluster analyses and dictionary classifications, respectively) to uncover spontaneous stereotypes’ taxonomic structure, general properties (e.g., valence), and accessibility (through response order). Study 2 uses a speeded version of the previous task to explore the robustness of the previous study’s findings, and to analyze stereotype accessibility through response times. Study 3 tests the robustness of the model using a variety of alternative methods, including dimension embedding coding and participant self-coding. In addition, Study 3 introduces spontaneous representativeness as a property that provides novel insights into perceptions of social groups. Finally, Study 4 examines the predictive value of the extended taxonomy in various decision-making scenarios.

Study 1: Initial Cluster & Dictionary Coding of Spontaneous Stereotypes

Study 1 aimed to provide a first look at the content of spontaneous stereotypes. We obtained a large sample of societal groups salient to Americans and asked American online participants to provide the characteristics of the targets that they spontaneously thought about. Using natural language processing methods, we examine the prevalence, valence, direction, and accessibility of spontaneous stereotypes.

Method

Participants

Participants were 400 workers recruited through Amazon Mechanical Turk. Participants’ mean age was 36; the sample had more men than women (54% vs. 46%) and was mostly White (78%, with 7% Asian, 7% Black, 4% Hispanic, and 3% Multiracial). Excluding participants based on an attention measure did not significantly affect the results.

For our initial study, we identified the sample size required to adequately power a between-subjects t-test to detect a small-to-medium effect, d = 0.4. Despite our design being within-subjects, we used this initial heuristic given the complexity of estimating power for generalized mixed models with crossed random effects and the lack of previous studies using these methods from which to draw an expected effect size. Thus, we chose what we considered to be a conservative and accessible test to estimate a sufficient sample size. Later, we were able to compute more precise power analyses via simulation, which indicated that our sample size achieved over 90% power to detect small (d = 0.2) effects given our model specifications. Subsequent studies used these simulations to estimate sample sizes more accurately before data collection. Power analysis for Study 1 was calculated using G*Power (Faul et al., 2009).
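The initial heuristic can be approximated with the standard normal-approximation formula for a two-sided, two-sample t-test. The text reports only the effect size and the use of G*Power, so the α = .05 and power = .80 targets below are conventional assumptions, not figures from the paper:

```python
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided two-sample t-test
    (normal approximation; exact t-based answers run slightly higher)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    return 2 * ((z_alpha + z_beta) / d) ** 2

# Small-to-medium effect, as in Study 1's heuristic (alpha and power assumed)
print(round(n_per_group(0.4)))  # 98 per group under this approximation
```

Halving the detectable effect roughly quadruples the required sample, which is why the later simulation-based analyses for small (d = 0.2) effects are the more informative check.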

Competence’s facets of Ability (α = .89; items: “competent” and “skilled”) and Assertiveness (α = .81; items: “confident” and “assertive”), as well as Beliefs (α = .82; items: “traditional” and “conservative”) and Status (α = .90; items: “wealthy” and “high-status”). We also asked participants to rate how society views the targets in general attitude/valence (i.e., global evaluations, which are used for subsequent analyses of predictive power), from 1 (very negatively) to 5 (very positively), as an exploratory measure. In a final block, participants completed a series of demographics and a question about ingroup membership in any of the social groups they rated (for exploratory purposes). An attention question was also included.

Analysis Strategy

To code the large number of open-ended responses that participants provided, we made use of two different dimensionality-reduction approaches. First, we borrowed from modern natural language analysis techniques to obtain a data-driven cluster structure of content. Then, we corroborated these findings in a more confirmatory approach, coding responses through recently developed dictionaries of stereotype content covering multiple semantic dimensions (Nicolas et al., 2021). Both of these approaches allowed us to define an initial taxonomy of spontaneous stereotype content as well as measure how comprehensive the taxonomy is (i.e., how many of the responses it accounts for, a measure which is not possible through traditional approaches such as scales).

Cluster analysis. We start by presenting data-driven results, based on a cluster analysis of the word embedding representations of participants’ responses. Word embeddings are numeric vector representations of words, which allow for quantitative analyses.
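The two-item facet reliabilities reported above (e.g., α = .89 for Ability) follow the standard Cronbach’s alpha formula. A minimal sketch, using made-up 1–5 ratings rather than the study’s data:

```python
from statistics import variance

def cronbach_alpha(item_columns):
    """Cronbach's alpha from a list of item columns (one list of scores per item)."""
    k = len(item_columns)
    totals = [sum(scores) for scores in zip(*item_columns)]
    item_var = sum(variance(col) for col in item_columns)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical ratings of one target on "competent" and "skilled" by five raters
competent = [5, 4, 4, 2, 5]
skilled = [5, 4, 3, 2, 4]
print(round(cronbach_alpha([competent, skilled]), 2))  # 0.94: high consistency
```

Because each facet here has only two items, alpha reduces to a function of the inter-item correlation, which is why two well-chosen adjectives can suffice per facet.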

The specific word embeddings used in this paper are from a Universal Sentence Encoder model (USE; Cer et al., 2018; 600 billion words), a Fasttext model (Bojanowski et al., 2017; 600 billion words), and a Glove model (Pennington et al., 2014; 840 billion words), all of which were trained on the Common Crawl (a vast sample of world wide web content), as well as a Word2vec model (Mikolov et al., 2013; 100 billion words) trained on Google News data. For Study 1’s cluster analysis we focused on USE embeddings, the most flexible and most recently developed of the embeddings discussed (see Supplement for more information). In subsequent studies we averaged the results from the multiple word embeddings to diminish the role of distinctive biases from different models (e.g., due to being trained on different data sources; however, these decisions made little difference, see online repository).

The word embeddings encode semantic relatedness from large corpora of text based on word co-occurrences (i.e., how often two words appear close to each other) by comparing the similarity of the contexts in which two words appear. Put differently, words that often co-occur with the same set of words tend to be more semantically related to each other. For example, both “liberal” and “democrat” tend to co-occur with words such as “political” or “government”, and are thus encoded by similar word embeddings, whereas “liberal” and “short” do not necessarily co-occur often with the same context words, and thus their word embeddings are more dissimilar. Using word embeddings, we can get a numeric similarity score (called cosine similarity) between pairs of words (as in Study 1), or between a word and a set of words (as in Study 3’s “dimension embeddings”). To illustrate, for pairs of words, “liberal” and “democrat” would get high word embeddings similarity scores, while “liberal” and “short” would get lower scores.
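As a sketch of how such similarity scores arise, cosine similarity can be computed directly from the vectors. The three-dimensional "embeddings" below are toy values chosen to mimic the liberal/democrat/short example, not real USE or GloVe vectors (which have hundreds of dimensions):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy 3-d vectors (illustrative only)
emb = {"liberal": [0.9, 0.8, 0.1],
       "democrat": [0.8, 0.9, 0.2],
       "short": [0.1, 0.2, 0.9]}

print(round(cosine(emb["liberal"], emb["democrat"]), 2))  # 0.99: similar contexts
print(round(cosine(emb["liberal"], emb["short"]), 2))     # 0.3: dissimilar contexts
```

A word-to-set score of the kind used for Study 3’s dimension embeddings can be obtained the same way, by comparing a word’s vector against the average vector of a dictionary’s words.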
To identify the underlying dimensions in the participants’ responses, we first selected words that were provided at least 5 times across participants. We then computed a cosine

Status—as well as many other potentially relevant dimensions (see Table 2 for the names of all dictionaries/dimensions). We also had Warmth and Competence dictionaries, which were simple combinations of their facets’ dictionaries (Warmth = Sociability + Morality, Competence = Ability + Assertiveness; if a word was in both facets, it was only counted once). A single response could be coded into more than one dictionary. Responses not included in any dictionaries were recoded into a single, separate variable, to quantify coverage.

To simplify the analyses, we summed over each participant’s six responses for each dimension (see Table 1). Thus, the outcome response rate variable could range from 0 to 6. Given that we had a count outcome, we used Poisson or negative binomial (if overdispersed and when convergence allowed) mixed models, with participants and targets as random intercepts (models with random slopes did not converge). For presentation of results in tables and figures, we transform these values to percentages.

In addition to coding whether a response was included in a dictionary or not (prevalence), we had variables indicating whether the word was low (-1), neutral (0), or high (1) on the dictionary (i.e., the direction). For example, “friendly” would be high on the Sociability dictionary, “unfriendly” would be low. For the Beliefs dimension, direction is more arbitrarily defined (following Koch et al., 2016): high words indicate more conservatism/religiousness (e.g., “religious”), while low words indicate more liberalism/secularism (e.g., “democrat”). The traditional scales we included are also a measure of direction. We also obtained a valence measure using a composite of sentiment dictionaries available through R (see Nicolas et al., 2021). The valence scores ranged from -1 (negative) to 1 (positive).
For example, words such as “attractive” (.96) and “righteous” (.94) are scored as more positive, while words such as “unfortunate” (-.97) and “perverted” (-.96) are scored as more

negative. We also computed valence per dimension: for example, if a response was coded as being about Morality, we coded its valence score separately as an indicator of the negativity/positivity of Morality content. Given that the direction and valence indicators were continuous data, we averaged across all 6 responses and analyzed each using linear mixed models.

We note here and throughout that valence and direction correlate highly for most dimensions. For example, being warm (high Warmth) is positive, while being cold (low Warmth) is negative. However, this is not always the case. For example, high Assertiveness could involve more positive traits, such as “confident” or “hard-working”, but also more negative traits, such as “aggressive” or “dominant”. In addition, the coding method for direction is more theory-driven: which words fall into the high or low poles of each dimension were selected based on the person perception and stereotyping literature. For example, which words refer to high Morality and which to low Morality were selected from items measuring these constructs, and then expanded using synonymy and other semantic relations (see Nicolas et al., 2021). On the other hand, valence scores are based on automatic sentiment analyses, which are often trained on different domains (e.g., product reviews), so they may be noisier. For completeness, we present both metrics.

For analyses of response order accessibility, we used multilevel logistic regressions to predict whether each response was in a dictionary or not. We included trial number (participants provided 6 responses per group, so this ranged from 1 to 6) in an interaction with the dimension label, and had random factors for participants and groups. Thus, response order analyses examine change in content across the responses a participant gives for each social group.
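The dictionary-based prevalence and direction coding described above can be sketched as follows. The mini-dictionaries and the participant's six responses are invented for illustration; the actual dictionaries in Nicolas et al. (2021) are far larger:

```python
# Toy stand-ins for two stereotype content dictionaries;
# values give direction: +1 = high pole, -1 = low pole of the dimension
dictionaries = {
    "Sociability": {"friendly": 1, "warm": 1, "cold": -1, "unfriendly": -1},
    "Ability": {"smart": 1, "skilled": 1, "incompetent": -1},
}

def code_participant(responses, dictionaries):
    """Code six open-ended responses into prevalence (0-6 count) and
    mean direction per dimension, plus a count of uncovered responses."""
    coded = {}
    for dim, words in dictionaries.items():
        hits = [words[r] for r in responses if r in words]
        coded[dim] = {"prevalence": len(hits),
                      "direction": sum(hits) / len(hits) if hits else None}
    # Responses in no dictionary quantify (lack of) coverage
    coded["uncovered"] = sum(
        all(r not in words for words in dictionaries.values()) for r in responses)
    return coded

six_responses = ["friendly", "smart", "cold", "tall", "skilled", "warm"]
result = code_participant(six_responses, dictionaries)
print(result["Sociability"])  # prevalence 3; direction (1 - 1 + 1) / 3
```

The per-dimension prevalence counts here are the count outcomes fed to the Poisson or negative binomial mixed models, while the averaged direction codes correspond to the continuous outcomes analyzed with linear mixed models.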
These three variables (dictionary prevalence, dictionary direction, and dictionary valence) are used in most studies, so in Table 1 we illustrate an example coding of a participant’s responses to