R Workshop: Content Analysis Session Details
Session 1: Framing and Elevation of Constructs
This session describes content analysis methods that focus on how issues are framed within different genres of narrative. These methods rely on locating the frequency with which similar words or themes co-occur in a narrative. The co-occurrence frequencies are then used to generate an eigenvector matrix and similarity measures for clustering themes, and thus for distinguishing between distinct ways in which narratives and issues can be framed. Alongside the question of how distinct frames can be identified with content analysis techniques, the session also considers how individual-level constructs can be elevated to higher levels of analysis. These content analysis techniques will be compared with more subjective discourse analysis techniques, such as metaphor analysis, which have the power to unravel the underlying dynamics of particular discourses.
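The co-occurrence-and-eigenvector step can be sketched in a few lines of Python (used here for portability). The "narratives" are invented sets of coded themes; the leading eigenvector of the theme co-occurrence matrix, computed by power iteration, scores how central each theme is to a densely co-occurring cluster, i.e. a candidate frame:

```python
from itertools import combinations

# Hypothetical coded narratives: each is the set of themes a coder identified.
narratives = [
    {"risk", "regulation", "cost"},
    {"risk", "regulation"},
    {"innovation", "growth", "cost"},
    {"innovation", "growth"},
    {"risk", "cost"},
]

themes = sorted({t for n in narratives for t in n})
index = {t: i for i, t in enumerate(themes)}

# Theme-by-theme co-occurrence matrix: how often two themes appear together.
k = len(themes)
C = [[0.0] * k for _ in range(k)]
for n in narratives:
    for a, b in combinations(sorted(n), 2):
        C[index[a]][index[b]] += 1
        C[index[b]][index[a]] += 1

# Leading eigenvector via power iteration: a theme with a high score sits at
# the centre of a cluster of frequently co-occurring themes.
v = [1.0] * k
for _ in range(100):
    w = [sum(C[i][j] * v[j] for j in range(k)) for i in range(k)]
    norm = sum(x * x for x in w) ** 0.5 or 1.0
    v = [x / norm for x in w]

centrality = dict(zip(themes, (round(x, 3) for x in v)))
```

In a real study the similarity measures derived from such a matrix would feed a clustering routine; this sketch only shows the matrix construction and the eigenvector scoring.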
Session 2: Longitudinal Content Analysis
Content analysis can be deployed as a form of longitudinal analysis to discern whether issues and variables change over a period of time: has progress been achieved on the issues of interest? These techniques rely on coding data over a prolonged period, and issues of inter-rater reliability become important for building a valid content analysis of change or constancy. Typically, chi-square tests are used to compare longitudinal data accessed through coding vast repositories of qualitative and textual sources. These content analysis techniques will be compared with subjective techniques of historical analysis of data, and the potential of interpretations based on historical conversations with data to uncover issues of politics, power, resistance, and institutional crumblings and consolidations.
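The chi-square comparison mentioned above can be illustrated with a minimal Python sketch. The coded frequencies (issue counts in two periods) are invented; the Pearson statistic is computed by hand and compared with the conventional critical value for the relevant degrees of freedom:

```python
# Hypothetical coded frequencies: counts of three issue categories in two periods.
observed = {
    "1990s": {"environment": 12, "governance": 30, "diversity": 8},
    "2010s": {"environment": 40, "governance": 28, "diversity": 22},
}

categories = list(observed["1990s"])
rows = list(observed)
row_totals = {r: sum(observed[r].values()) for r in rows}
col_totals = {c: sum(observed[r][c] for r in rows) for c in categories}
grand = sum(row_totals.values())

# Pearson chi-square: sum over cells of (O - E)^2 / E, where E is the
# expected count under independence of period and issue category.
chi2 = 0.0
for r in rows:
    for c in categories:
        expected = row_totals[r] * col_totals[c] / grand
        chi2 += (observed[r][c] - expected) ** 2 / expected

df = (len(rows) - 1) * (len(categories) - 1)  # (2-1) * (3-1) = 2
CRITICAL_05 = 5.991  # chi-square critical value for df = 2, alpha = .05
changed = chi2 > CRITICAL_05  # True: the issue mix differs across periods
```

A statistics package would report an exact p-value; the critical-value comparison is enough to show the logic of testing change versus constancy.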
Session 3: Inductive, Deductive and Linguistic Analysis
Often, qualitative data needs to be seen in the light of the institutional context which informs the emergence of interactions, tensions and outcomes. For instance, styles of leadership or commitment to innovation can be coded using a variety of qualitative data that exists about organisations. The DICTION software allows us to calculate scores for various attributes of content analysis such as Insistence, Variety, Embellishment and Complexity. Insistence is “a measure of code restriction … semantic contentedness … a preference for a limited, ordered world” (Short and Palmer, 2008: 733). Variety allows us to discern the tension between “overstatement and a preference for precise statements” (Short and Palmer, 2008: 733). Embellishment “is a ratio of adjectives to verbs” and indicates how “heavy modification slows down a verbal passage by de-emphasizing human and material action” (Short and Palmer, 2008: 733). Complexity refers to the “average number of characters per word” and indicates “how convoluted phrasings make a text’s ideas abstract and its implications unclear” (Short and Palmer, 2008: 733). In comparison to these techniques of content analysis, the subjective methodology of critical hermeneutics will be discussed in the context of discerning how some discourses come to acquire dominance and normality over others.
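Two of the four attributes can be approximated directly from a word list, without DICTION itself: Variety as a type-token ratio (distinct words over total words) and Complexity as the average number of characters per word, following the definitions quoted from Short and Palmer (2008). Insistence and Embellishment need part-of-speech information and are omitted. A minimal Python sketch with an invented passage:

```python
def tokenize(text: str) -> list[str]:
    """Lowercase words with surrounding punctuation stripped."""
    stripped = (w.strip(".,;:!?\"'()").lower() for w in text.split())
    return [w for w in stripped if w]

def variety(words: list[str]) -> float:
    """Type-token ratio: distinct words / total words."""
    return len(set(words)) / len(words)

def complexity(words: list[str]) -> float:
    """Average number of characters per word."""
    return sum(len(w) for w in words) / len(words)

passage = "The firm restructured. The firm restructured again, decisively."
words = tokenize(passage)
# Low variety here because "the firm restructured" repeats verbatim.
```

Repetition drives Variety down; long Latinate words drive Complexity up, which is exactly the "convoluted phrasings" intuition in the quoted definition.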
Session 4: Latent and Manifest Variables
Issues pertaining to the identification of latent and manifest variables are relevant to content analysis as well. It has been suggested that while manifest variables can be coded using computer-assisted processes, it may still be useful to rely on human coders to code latent variables. With respect to content analysis, the measurement of latent variables cannot merely be seen as an aggregation of manifest variables. Instead, iterative processes and pilot studies need to be conducted to arrive at a schema through which manifest variables can be related to the underlying latent constructs. These methods of content analysis are compared with subjective methods of deconstruction to assess how narrative data may point to the de-stabilisation and contradiction of normalised constructs rather than the coherence that is enforced on sanitised constructs.
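The schema idea can be made concrete with a small Python sketch. The construct name, manifest codes and weights below are all hypothetical; the point is only that the latent score is a weighted mapping arrived at through pilots, not a raw sum of manifest counts:

```python
# Hypothetical coding schema relating manifest codes to one latent construct.
# In practice the weights would be revised over iterative pilot rounds.
schema = {
    "innovation_orientation": {          # latent construct (hypothetical)
        "new_product_mention": 2.0,      # manifest codes with pilot-derived weights
        "r_and_d_mention": 1.5,
        "patent_mention": 1.0,
    }
}

def latent_score(construct: str, manifest_counts: dict[str, int]) -> float:
    """Weighted (not raw) aggregation of manifest counts, per the schema."""
    weights = schema[construct]
    return sum(weights[code] * manifest_counts.get(code, 0) for code in weights)

# Manifest counts for one coded document (invented).
doc_counts = {"new_product_mention": 3, "r_and_d_mention": 2, "patent_mention": 0}
score = latent_score("innovation_orientation", doc_counts)  # 2.0*3 + 1.5*2 = 9.0
```

Changing the weights after a pilot changes the latent score without re-coding the manifest variables, which is what makes the iterative refinement tractable.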
Session 5: Hypothesis Testing
If we want to understand how organisational performance is socially constructed, then it may often be useful to pay attention to organisational communication. By identifying appropriate classification schemes, it may be possible to categorise the espoused goals of organisations. It may thus be possible to quantitatively code qualitative data and engage in hypothesis testing by conducting one-sample t-tests. Using content analysis, contextual and symbolic elements of organisational phenomena can be gleaned, particularly during times of organisational rupture, crisis and responses to significant events. These content analysis techniques will be compared with subjective methodologies of genealogical analysis which examine the travel of concepts through social events and the performance of power and hegemony.
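A one-sample t-test on quantitatively coded data can be sketched with the Python standard library. The scores (proportion of each organisation's espoused goals coded as "social") and the benchmark value are invented:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical coded data: proportion of espoused goals coded "social"
# in each organisation's communication, tested against a benchmark of 0.25.
scores = [0.31, 0.28, 0.35, 0.22, 0.30, 0.27, 0.33, 0.29]
mu0 = 0.25

n = len(scores)
# t = (sample mean - benchmark) / standard error of the mean
t = (mean(scores) - mu0) / (stdev(scores) / sqrt(n))
df = n - 1  # 7

CRITICAL_05 = 2.365  # two-tailed t critical value for df = 7, alpha = .05
significant = abs(t) > CRITICAL_05
```

As with the chi-square sketch, a statistics package would give an exact p-value; the critical-value comparison is enough to show the hypothesis-testing logic.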
Session 6: Scientific Robustness, Relationship Identification and Boundary Establishment
Content analysis techniques are useful for elaborating adolescent theories of organisation such as the attention-based view (ABV) of the firm, which implies that if the top management of a firm pays attention to an issue, the organisation also pays attention to the issue, and consequently, there is organisational action based on the issue. The content analysis protocols which can increase scientific robustness are the identification of variables, codebook creation and reliability. The protocols of relationship identification are frequency counts, keyword in context, context rating and expansion analysis. The protocols for boundary establishment involve identifying trends and differences across levels of analysis, across organisations and across time. These content analysis techniques are compared with the performative tradition in qualitative analysis where data is re-articulated as performance to uncover its aesthetic and political potential.
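The first two relationship-identification protocols, frequency counts and keyword in context (KWIC), can be sketched in a few lines of Python; the sample text is invented:

```python
import re

# Invented fragment of organisational communication.
text = ("The board discussed sustainability targets. Analysts questioned whether "
        "sustainability spending would continue amid cost pressures.")

# Frequency counts over lowercased word tokens.
words = re.findall(r"[a-z']+", text.lower())
freq = {w: words.count(w) for w in set(words)}

def kwic(tokens: list[str], keyword: str, window: int = 3) -> list[str]:
    """Keyword-in-context: each occurrence with `window` words on either side."""
    hits = []
    for i, w in enumerate(tokens):
        if w == keyword:
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            hits.append(" ".join(left + [w.upper()] + right))
    return hits

lines = kwic(words, "sustainability")
```

The frequency count shows how often top management attends to an issue; the KWIC lines preserve enough context for the later context-rating and expansion-analysis steps.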
Session 7: Sampling and Stages of Content Analysis
While content analysis largely draws from secondary data such as annual reports of corporations, whenever primary data is available, content analysis is often used in conjunction with ethnomethodology. Concept sampling is deployed to select the materials and texts which are used for content analysis. While creating the coding scheme, it is useful to identify the entire range of topics that are useful for the study and the major categories within which we can classify these topics. In the second stage of content analysis, there can be a focus on groups within the range of organisations from where data was initially accessed. For each of the units in the subgroup, it may be useful to locate references to the environment and then transcribe and cluster these references.
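Concept sampling can be illustrated with a minimal Python sketch: texts are retained only if they mention at least one concept of interest, and the matching concepts are recorded for later clustering. The document names and the concept lexicon are hypothetical:

```python
# Hypothetical concept lexicon mapping concepts to indicator terms.
concepts = {
    "environment": {"emissions", "climate", "pollution"},
    "regulation": {"compliance", "regulator", "statute"},
}

# Invented corpus of candidate texts (e.g. excerpts from annual reports).
documents = {
    "annual_report_A": "Emissions fell as compliance costs rose.",
    "annual_report_B": "Revenue grew across all segments.",
    "annual_report_C": "The regulator questioned our climate disclosures.",
}

def sample(docs: dict[str, str]) -> dict[str, set[str]]:
    """Keep documents mentioning any concept term; record which concepts hit."""
    selected = {}
    for name, text in docs.items():
        tokens = set(text.lower().replace(".", "").split())
        hits = {c for c, terms in concepts.items() if terms & tokens}
        if hits:
            selected[name] = hits
    return selected

sampled = sample(documents)  # report B drops out: no concept terms
```

The second-stage focus on subgroups described above would then operate only on the sampled documents, extracting and clustering the references each one makes to the environment.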
Session 8: Mapping Algorithms and Linguistic Indicators
Map analysis is a technique that is useful for “extracting, analysing and combining representations of individual’s mental models as cognitive maps” (Carley, 1997: 533). Mapping techniques require analysing “the data several times under different coding choices” as “coding choices have systematic effects on the complexity of the coded maps and their similarity” (Carley, 1997: 533). Mapping techniques help in combining individual maps into larger aggregates. Linguistic indicators such as the indicator of homogeneity depend on “the degree to which two or more actors use the same words, or do so with the same frequency” (Abrahamson and Hambrick, 1997: 519). These techniques of content analysis are compared with subjective methodologies associated with post-structural theory, which engage with issues of subjectivity, politics, discourse, affect, social relations, materiality, power and performativity.
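The homogeneity indicator quoted from Abrahamson and Hambrick (1997) can be operationalised in several ways; one common choice, sketched here in Python with invented speech fragments, is the cosine similarity of the two actors' word-frequency vectors, which is 1.0 when the actors use the same words with the same relative frequencies:

```python
from collections import Counter
from math import sqrt

# Invented speech fragments from two actors.
actor_a = "growth growth market share market"
actor_b = "growth market market risk"

fa, fb = Counter(actor_a.split()), Counter(actor_b.split())

def homogeneity(x: Counter, y: Counter) -> float:
    """Cosine similarity of two word-frequency vectors (0 = disjoint, 1 = identical)."""
    dot = sum(x[w] * y[w] for w in set(x) | set(y))
    norm = sqrt(sum(v * v for v in x.values())) * sqrt(sum(v * v for v in y.values()))
    return dot / norm

h = homogeneity(fa, fb)
```

Averaging this measure over all pairs of actors would give one aggregate homogeneity score for a group, which is the kind of quantity the larger cognitive-map aggregates build on.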