Originally Written for Behavioral Economics at Emerson College, taught by Dr. Nejem Raheem
A large body of work has found that classroom participation correlates with stronger communication skills, increased learning, and more developed critical-thinking skills. Studies have also found that collaboration, a skill often cultivated during the college years, increases creativity. And yet, despite the many gains women and girls have made in education worldwide, from middle school through university there is a disparity in participation levels between male and female students.
In the past, female students have been told to “man up” and become more assertive. This paper, however, aims to place the responsibility for equalizing participation on professors. I take this stance because of the centuries of socialization and structural inhibitors that have placed women in this position of subordination, including implicit biases that professors themselves hold and may be acting on. I focus on gender disparity because of its prevalence across intersections of identity (race, sexuality, ability, etc.), but since the proposed intervention is a randomized method, it should diminish those other points of disparity in classroom participation as well.
Benefits of Participation
A large body of work shows the benefits of in-class participation. Discussions have been shown to promote communication skills (Dallimore et al. 2008), improve learning (Davis 1993, Huerta 2007), and improve critical thinking (Garside 1996). Sawyer and DeZutter (2009) found that creativity is not a purely mental process, and that collaboration therefore contributes to creativity.
Sezer (2017) concluded that there is a positive correlation between students’ attention levels and class participation, which suggests improved learning. Brannon (2018) found that participation in diverse courses is associated with increased tolerance, greater outgroup closeness, and more supportive perceptions of policies that address inequality.
It has also been shown that students participate more in courses they enjoy, participate less when they feel shame about their assignment or participation outcomes (Sánchez-Rosas et al. 2016), and participate more when they are receiving credit for the course, and for participation itself (McCleary 2019).
Finally, Schwebach (2015) found that teaching methods of purposefully increasing in-class participation were associated with better grades and higher attendance.
Thus, in-class participation is widely accepted as benefiting students and their learning outcomes, and yet there is still a disparity in who actually participates.
Current Participation Levels and Impacts
In a middle school study, Fisher (2003) found that male students offered more unsolicited participation (raising their hands unprompted) than female students, but that the two groups participated equally in solicited participation, i.e., when a teacher asks for an answer. This may be because women seem to be less assertive (Bossuyt 2018), or have been socially constrained by gender roles to be less assertive, and thus won’t answer unless the professor asks for an answer. At the university level, Muslimah (2018) found that male students participated more than female students, and that an English education background is strongly related to participation. That study did not distinguish between unsolicited and solicited responses, but it is plausible that gender socialization only grows stronger as students age.
Within this gender socialization, we likely see some impact of the representativeness heuristic (Tversky 1974): having only ever seen male students participate heavily in class, female students may come to believe, likely implicitly, that women cannot contribute to class discussion.
Consistent with this, Jaasma (1995) found that female students show higher levels of apprehension about participating in the classroom than male students. Male students seem more comfortable offering opinions in a hierarchical structure, while female students seem more comfortable in a structure of connection. Both male and female students showed lower levels of apprehension toward participating when the professor is less argumentative. Lawson et al. (2018) found that female students do better when they are supported by their professors during office hours, encouraged to engage in class, and given career and network-building help. Those activities even act as a “buffer,” offsetting sexist actions that may have happened within a department and keeping female students’ comfort and engagement levels up.
Female students are more likely than male ones to offset a hard class with extra work to maintain their grades—hard either because of difficult content or because of a professor’s poor teaching (Stanley et al. 2016). This may partly explain why women still do well in courses despite receiving less attention from professors.
Overall, and outside the gender dynamic, it seems that students don’t participate in class because they fear giving the wrong answer, don’t believe they have anything valuable to contribute, are unprepared, or simply fear public speaking (GradePower Learning 2018). Meyer (2018) also found that students, particularly already-low participators, tend to overestimate how often and how deeply they participate, and that giving them feedback on how often they actually participated did not change this. This reflects the overconfidence and overoptimism biases (Tversky 1974), in which more people believe themselves to be better than average at a given task than is statistically possible.
Students also seem to fall into the status quo heuristic (Tversky 1974): each person has gotten used to their self-assigned role and relies on the heavy participants in the class. Students thus feel loss aversion (Tversky 1974) toward that established status quo, even when they have been taught the benefits of breaking it and logically understand them to be greater than the costs of speaking up. Within this system, students get trapped in a cycle of collective conservatism (Tversky 1974), where no one feels the need to participate because the norm is not to participate.
Underlying all of this is a confrontation between system 1 and system 2 processing (Strack and Deutsch 2004). While students are falling into these biases and heuristics, letting other students answer because that is what they have always done, or panicking over being called on, they are operating in the “automatic” system 1. Through this intervention, I seek to move students’ participation behaviors into system 2, the thoughtful, “slow” method of understanding information, so that they actually think through the question asked and spend time formulating a response.
As we have seen, there are issues with current participation dynamics, and though there has been work attempting to diminish them, few studies have adequately considered the disparity between identity intersections and participation levels. They have instead opted to increase overall participation without considering how doing so may reproduce this disparity.
Hamann (2012), in a study of students’ perceptions of the benefits of discussion, found that small discussion groups elicited the highest student satisfaction and scored highest in critical-thinking skills, and that online discussions were the best forum for expressing complex, thought-out ideas. Even though students did not favor all-class discussions, they agreed those also provided benefits. Despite this finding, I have opted to create an all-class discussion intervention, because it is the least disruptive to the normal lecture-based class structure and because it exposes students to the largest number of different ideas. This is also supported by Schwebach (2015), who tested four methods of increasing student participation: cold calling (randomly calling on students), asking students to bring questions to class, giving students the option to be asked a question or to ask one, and identifying “reporter” students to explain the outcomes of small-group discussion. All four methods were associated with better grades, higher participation, and higher attendance, but cold calling created the most diversified increase in participation.
This cold-calling method is backed up by other studies. Devers (2018), in a study of cold calling via a randomized name generator, found that students’ grades were better when the randomized feature was included, and students reported increased levels of participation. In another study of randomized methods of calling on students, students said they liked how the method forced them to pay attention, attend class, and listen to other students more than they might have otherwise, but disliked the panic response of being put on the spot and the possibility of not knowing the answer (Broeckelman-Post 2016).
And though there is this discussion surrounding how shame and panic interplay with cold calling, there is work suggesting it is not actually an issue. Dallimore (2006) found that, despite the widespread belief among teachers that cold calling would make students uncomfortable, students in that trial (graduate students in a required management accounting course) did not feel uncomfortable, and showed increased participation and preparation. Relatedly, one study on test anxiety found that increased effort and preparation reduced the negative physical effects of test anxiety (Putwain 2018).
As such, my intervention is based around cold calling, but I have taken steps to reduce any residual negative reactions students may feel.
Finally, it’s important to note that in-group learning requires mutual respect, a sense of responsibility, awareness of creating a constructive climate, and leadership (Triyanto 2019), all of which must be framed (Slovic 1995)—presented in a certain way to encourage students to understand this teaching as the “correct” way of doing so—and maintained by the professor.
Students, particularly female students, are falling into the status quo, loss aversion, representativeness, and collective conservatism heuristics (Tversky 1974), which reduce their interest or confidence in participating in class. Coupled with male students’ socialized comfort with participating (Bossuyt 2018, Jaasma 1995), implicit biases that may be influencing professors (Staats 2017), and existing discussion structures, female students participate at lower rates than male students. To combat this, and to refrain from telling female students to simply participate more or be more assertive (i.e., more like men), I place the responsibility for rectifying this inequality in the hands of professors.
However, due to implicit biases (Staats 2017) and overoptimism (Tversky 1974), we cannot just tell professors to call on students equally (Triyanto 2019). Therefore, I suggest we implement a structured method for calling on and grading students. Because this is a complicated issue, and because the intervention could be modified to fit a variety of classroom settings, I propose a fairly comprehensive plan, broken down into a few simple overarching ideas.
This project is designed around Sunstein’s (2009) NUDGES (iNcentives, Understand mappings, Defaults, Give feedback, Expect error, Structure complex choices) mnemonic. It incentivizes both professors and students; it takes into account the reasons individuals behave the way they do (understanding mappings) rather than assuming they will behave rationally; it makes participation the default option; it provides feedback; it expects error; and it structures these choices into a framework that is easy to follow.
The core of this intervention is a grading scheme and mechanism for professors to implement in their classes. It will be best suited to, and most obviously successful in, discussion-based classes built around conceptual ideas rather than “correct” or binary answers. Some customization has been built into the method, both to maintain professors’ feeling of control over their classrooms and to account for different teaching styles and technology uses. The method also attempts to create the least possible disruption to the flow of class, so that it can be fully integrated into system 1 (Strack and Deutsch 2004) for both students and professors.
In essence, the intervention consists of a free training for professors on the method (attendance at which should be incentivized [Sunstein 2009]), an introductory first-day in-class lesson plan, a comprehension check, a randomized calling method, and a loss-aversion-based (Tversky 1974) grading scale, all of which have some freedom and customization built in.
Priming for Success
Ideally, the first step in implementing this participation process is to train professors in it. The training would communicate the research behind the method and the basic mechanics of how it works (as described previously) in order to expect and try to prevent error (Sunstein 2009). It should be free to attend, and professors should be rewarded if they actually implement the method in their classes. An incentive could be having the implementation count toward a professor’s tenure evaluation, or a salary bonus (Sunstein 2009), to encourage the grading scheme’s integration and make its benefits trackable. This would also work against professors’ loss aversion (Tversky 1974) toward their existing teaching methods, since the incentives would be important enough to change their behavior (and, ideally, they would simply want to be better teachers with higher levels of participation in their classes).
Once professors are trained and have planned to implement the method in their courses, they should take the first day of class to explain and justify it. By introducing it on the first day, professors immediately establish a new status quo (Tversky 1974) for what is expected in the class, which makes students more willing to accept it (Devers 2018). And by framing (Slovic 1995) the method as a way to increase student success and equality, professors make students less likely to feel apprehensive, since students know the reasoning behind the method rather than thinking the professor is stubborn and argumentative (Jaasma 1995) or imposing their will on them.
Managing In-Class Participation and Grading
At the beginning of each class, professors should administer a short, multiple-choice “main point” quiz with immediate feedback, to confirm reading comprehension and thus manage students’ fear of having the “wrong” answer (GradePower Learning 2018). This could be administered via Canvas, the popular quiz game Kahoot, or on paper, to name a few options; all that is required is that the quiz be short and the feedback (Sunstein 2009) immediate.
I considered implementing an “any questions?” portion of the class, where professors ask students for questions about the readings. However, many times this can just reinforce the same fears students have about participating in general, sometimes students don’t know what they don’t know (Johnson 2013), or, students are overoptimistic (Tversky 1974) about what they know, and thus, don’t ask questions.
I also considered suggesting longer reading quizzes that are due before class, in an attempt to avoid class disruption, but I believe the quick format and immediate feedback (Sunstein 2009) of the proposed quiz will better mimic the panic/on-the-spot system 1 (Strack and Deutsch 2004) processing required while being called on at random. Mimicking this method in the quiz both encourages the “main point” class concepts to become more system 1/automatic, and by “Giving Feedback” (Sunstein 2009) the quiz ensures students’ understanding of the theory. This, in turn, encourages them to feel more comfortable when applying the information later in class when they have to engage system 2 in more difficult/conceptual questions (Smith 2000).
If professors already give reading-comprehension quizzes, the “main point” quiz can be graded and replace the existing reading-comprehension component of their class, but I recommend it be ungraded to avoid the in-the-moment shame response that could diminish students’ willingness to participate (Sánchez-Rosas et al. 2016).
Overall, grading should be set up so that every student’s participation grade begins at 100% and points are deducted when they don’t participate (rather than points accruing when they do). This is based on loss aversion (Tversky 1974): students care more about losing points they already have than about points they could gain. In the randomized calling, then, students lose points if they choose to skip answering a question.
When asking the class a question, professors should use a truly randomized calling method. This should not just be picking a student at the professor’s own “random” will (commonly referred to as “cold calling”), because of the potential for error (Sunstein 2009) in which implicit biases (Staats 2017) single out particular types of students. Likewise, when professors call on students at their own discretion, it can feel punitive to the student, as if they are being “singled out” for doing something wrong or because the professor doesn’t like them (Broeckelman-Post 2016). Instead, professors should use a truly randomized system, like pulling index cards or popsicle sticks with students’ names on them.
With these, the professor asks a question, draws, and calls a name. If the student answers, the name returns to the bin, since the probability of one student being called repeatedly in a row is low. If the student chooses to skip, their name stays out of the draw pool for the day; names still out at the end of the day are the ones whose participation points are reduced. This also makes participation opt-out rather than opt-in, which automatically encourages participation (Welch 2019, Sunstein 2009) by making it easier to participate than to skip.
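The draw-and-skip mechanics described above can be expressed as a short routine. This is a minimal sketch, not part of the proposal itself: the roster names, the two-point deduction, and the class name `ParticipationDraw` are all illustrative assumptions.

```python
import random

class ParticipationDraw:
    """Sketch of the popsicle-stick system: names return to the pool
    after an answer, stay out for the day after a skip, and skipped
    names lose points from a participation grade that starts at 100%."""

    def __init__(self, roster, deduction=2):
        self.pool = list(roster)                       # names eligible to be drawn
        self.skipped = []                              # names held out for the day
        self.grades = {name: 100 for name in roster}   # everyone starts at 100%
        self.deduction = deduction                     # illustrative point value

    def draw(self, rng=random):
        """Truly random draw from the current pool."""
        return rng.choice(self.pool)

    def record(self, name, answered):
        """Answering returns the name to the pool; skipping holds it out."""
        if not answered:
            self.pool.remove(name)
            self.skipped.append(name)

    def end_of_day(self):
        """Deduct points only from students who skipped, then reset the pool."""
        for name in self.skipped:
            self.grades[name] -= self.deduction
        self.pool.extend(self.skipped)
        self.skipped = []
```

Because deductions are computed only from the held-out names, the same structure doubles as the tracking record mentioned later: the `skipped` list at the end of each day is exactly the list of students avoiding participation.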
In this method, students may still raise their hands to ask clarifying questions or offer an opinion they feel passionate about, but by tying points exclusively to the randomized drawings, students are incentivized (Sunstein 2009) to be prepared to be called on at random (Dallimore 2006, McCleary 2019, Schwebach 2015) rather than incentivized to dominate the conversation.
This method should result in a high overall rate of student participation, higher participation grades overall, and an easily trackable record of who avoids participating and how often.
Professors may be unwilling to change their class structure or give up freedom in their grading; this is why the incentivized (Sunstein 2009) training and implementation, and the freedom of customization, are all imperative.
Since the calling method is random, it should spread participation across the class equally and benefit everyone equally: increased preparation, better grades, increased participation, and increased exposure to other thoughts and ideas (see “Benefits of Participation”). However, since some already-high-participating students may feel they are losing participation time, the initial framing and priming (Slovic 1995), as well as allowing some hand-raising participation, are important. This friction cannot be fully avoided and will take continuous effort; but professors already make continuous efforts to keep their classes engaging, so it will simply be a different kind of work.
This method may be less successful for larger class sizes or lecture-based classes, where professors would end up with a large portion of students getting a high participation grade even though they were only called on a few times throughout the semester. This might also make it difficult to thoroughly track the outcome and success of the method.
This method could become more data oriented and more exact using something like a phone application where professors ask a question, students put themselves into a pool, professors randomly draw from that pool, and students get points based on whether they entered themselves into that pool, not on whether or not they were called on or answered. This would allow professors to see on a case-by-case basis how often students were entering themselves into the pool—ie, how often they felt they knew the answer—and could then intervene one-on-one if some students still aren’t participating as often.
This would make students feel even less like they are being called on at random (because, really, they are not), would give students who feel they know the answer or really want to contribute a greater likelihood of being able to, would remove any chance of professors forgetting the popsicle sticks or notecards, and would integrate grading automatically into the app.
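The app-based variant differs from the popsicle-stick draw in one key way: points follow pool entry, not answering. A minimal sketch of that logic follows; the class name `QuestionPool`, the per-student opt-in counter, and the roster names are hypothetical details for illustration.

```python
import random

class QuestionPool:
    """Sketch of the app-based variant: for each question, students opt
    into a pool; credit is given for entering the pool, not for being
    drawn, and per-student opt-in rates are tracked for follow-up."""

    def __init__(self, roster):
        self.roster = list(roster)
        self.opt_ins = {name: 0 for name in roster}  # per-student opt-in count
        self.questions_asked = 0

    def ask_question(self, entrants, rng=random):
        """Record who entered the pool, then draw one entrant to answer."""
        self.questions_asked += 1
        for name in entrants:
            self.opt_ins[name] += 1                  # credit for opting in
        return rng.choice(list(entrants)) if entrants else None

    def opt_in_rate(self, name):
        """Fraction of questions this student entered; a persistently low
        rate flags the student for one-on-one intervention."""
        if self.questions_asked == 0:
            return 0.0
        return self.opt_ins[name] / self.questions_asked
```

The design choice here mirrors the text: because grading depends only on `opt_ins`, being drawn carries no extra stakes, and the professor can read `opt_in_rate` per student to see who feels they know the answer and how often.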
However, this would also mean students and professors have to be on or near their devices during class, and would need to be integrated into something that all students and professors have access to, without extra downloading (Canvas, for example, would be a good place).
Bossuyt, Saar and Patrick Van Kenhove. “Assertiveness Bias in Gender Ethics Research: Why Women Deserve the Benefit of the Doubt: Marketing and Consumer Behavior” Journal of Business Ethics 150 no. 2 (2018): 727-739.
Brannon, Tiffany N. “Reaffirming King’s Vision: The Power of Participation in Inclusive Diversity Efforts to Benefit Intergroup Outcomes” Journal of Social Issues 74 no. 2 (Jun 2018): 355-376
Broeckelman-Post, Melissa, Alexandra Johnson, and J. R. Schwebach. “Calling on students using notecards: Engagement and countering communication anxiety in large lecture.” Journal of College Science Teaching 45, (5 May 2016): 27-33
Dallimore, Elise J., Julie H. Hertenstein, and Marjorie B. Platt. “Nonvoluntary Class Participation In Graduate Discussion Courses: Effects of Grading and Cold Calling.” Journal of Management Education 30 (2006): 354-377
Dallimore, Elise J., Julie H. Hertenstein, and Marjorie B. Platt. 2008. “Using discussion pedagogy to enhance oral and written communication skills” College Teaching, 56 no. 3 (2008): 163–172.
Davis, T.M. and Murrell, P.H. “Turning teaching into learning: The role of student responsibility in the collegiate experience” ASHE-ERIC Higher Education Report No. 8. Washington, D.C.: The George Washington University, School of Education and Human Development (1993).
Devers, C. and E. Devers. “The Wheel of Discussion: A New Approach to Cold-Calling.” Journal of Educational Multimedia and Hypermedia 27 no. 4 (2018): 471-480.
Fisher, Rebecca M. “The Relationship Among Gender, Gender Orientation, and Class Participation of Middle School Students,” 2003.
Garside, C. “Look who’s talking: A comparison of lecture and group discussion teaching strategies in developing critical thinking skills.” Communication Education 45 no. 3 (1996): 212–227.
Hamann, Kerstin, Philip H, Pollock, and Bruce M. Wilson. “Assessing Student Perceptions of the Benefits of Discussions in Small-Group, Large-Class, and Online Learning Contexts” College Teaching 60 no. 2 (Spring 2012): 65-75.
Huerta, J.C. “Getting active in the large lecture” Journal of Political Science Education, 3 no. 3 (2007): 237–249.
Jaasma, Marjorie Ann. “The Effect of Gender and Communication Style on Student Apprehension Regarding Classroom Participation on the College Level.” 1995.
Johnson, Ben. “The Right Way to Ask Questions in the Classroom.” Edutopia. George Lucas Educational Foundation, October 31, 2013. https://www.edutopia.org/blog/asking-better-questions-deeper-learning-ben-johnson.
Lawson, Katie M., Laura Y. Kooiman, and Olyvia Kuchta. “Professors’ behaviors and attributes that promote U.S. Women’s success in male-dominated academic majors: Results from a mixed methods study.” Sex Roles 78 (2018): 542-560.
Leraas, Bethany C., Nicole R. Kippen, and Susan J. Larson. “Gender and Student Participation.” Journal of the Scholarship of Teaching and Learning 18 (2018): 51–70.
McCleary, Daniel F., Brittany McCreary, and Jeremy Coles. “Cognitive Variables, Classroom Behaviors, and a Participation Intervention on Students’ Classroom Participation and Exam Performance.” International Journal of Teaching & Learning in Higher Education 3 (2019): 184.
Meyer, Megan L., Stacy A. McDonald, Lynn DellaPietra, Matthew Wiechnik, and Kimberly B. Dasch-Yee. “Do Students Overestimate Their Contribution to Class? Congruence of Student and Professor Ratings of Class Participation.” Journal of the Scholarship of Teaching and Learning 18 (2018): 44–54.
Muslimah, Maziyyatul. “Is Students’ Speaking Participation Related to Students’ Personality and Gender?” Alsuna, 1 (2018)
Putwain, David W., and Wendy Symes. 2018. “Does Increased Effort Compensate for Performance Debilitating Test Anxiety?” School Psychology Quarterly 33 (3): 482–91.
Sánchez-Rosas, Javier, Paula Belén Takaya, and Alicia Verónica Molinari. “The Role of Teacher Behavior, Motivation and Emotion in Predicting Academic Social Participation in Class.” Pensando Psicología 12 (2016): 39-53.
Sawyer, Keith R. and Stacy DeZutter. “Distributed Creativity: How Collective Creations Emerge From Collaboration” Psychology of Aesthetics, Creativity, and the Arts 3 no. 2 (May 2009): 81-92.
Schwebach, Reid, and Alex Johnson. 2015. “Four Methods for Engaging Students with Socratic Teaching Techniques in Large Enrollment STEM Courses Using Notecards.” https://search-ebscohost-com.proxy.emerson.edu/login.aspx?direct=true&db=edsbas&AN=edsbas.A372AEB9&site=eds-live.
Sezer, Adem, Yusuf Inel, Ahmet Cagdas Seckin, and Ufuk Ulucinar. “The Relationship Between Attention Levels and Class Participation of First-Year Students in Classroom Teaching Departments” International Journal of Instruction 10 no. 2 (2017): 55-68.
Slovic, Paul. “The Construction of Preference” American Psychologist 50 no. 5 (1995): 364-371.
Smith, Eliot R. and Jamie DeCoster. “Dual-Process Models in Social and Cognitive Psychology: Conceptual Integration and Links to Underlying Memory Systems” Personality and Social Psychology Review 4 no. 2 (2000): 108-131.
Staats, Cheryl, Kelly Capatosto, Lena Tenney, and Sarah Mamo. “State of the Science: Implicit Bias Review.” State of the Science: Implicit Bias Review. 2017th ed., n.d.
Stanley, Laura E., Emma M. Delmontagne, and William C. Wood. “Offsetting Behavior and Adaptation: How Students Respond to Hard Professors.” Journal of Education for Business 91 (2016): 90–94.
Strack, Fritz and Roland Deutsch. “Reflective and Impulsive Determinants of Social Behavior” Personality and Social Psychology Review 8 no. 3 (2004): 220-247.
Sunstein, Cass R., and Richard H. Thaler. Nudge: Improving Decisions About Health, Wealth and Happiness. London: Penguin, 2009.
Triyanto. “Understanding Student Participation within a Group Learning.” South African Journal of Education 39, no. 2 (2019).
Tversky, Amos and Daniel Kahneman. “Judgment under Uncertainty: Heuristics and Biases.” Science 185 no. 4157 (1974): 1124-1131.
Welch, Ned. “A marketer’s guide to behavioral economics.” McKinsey & Company. Accessed 29 October, 2019. https://www.mckinsey.com/business-functions/marketing-and-sales/our-insights/a-marketers-guide-to-behavioral-economics
“Why Students Don’t Participate In Class (And How You Can Help).” GradePower Learning. GradePower Learning, May 28, 2019. https://gradepowerlearning.com/why-students-dont-participate-in-class/.
Wilkinson, Nick and Matthias Klaes. “Nature of Behavioral Economics,” in An Introduction to Behavioral Economics, 2-27. Red Globe Press, 2018.