Showing posts with label gender bias. Show all posts

Wednesday, February 16, 2022

Our Words Do Matter

by Jennifer VanAntwerp, February 16, 2022

Women and their work are valued less in our society. This cultural bias can be measured in the very way that we use, and change our use of, language in describing the types of work we do in STEM.

It's no secret that pushing STEM (science, technology, engineering, and math) participation is a hot topic in the U.S. What might be less well recognized is that not all STEM is considered equal – and this has real (and negative) consequences for women.

New research by psychologists Alysson Light, Tessa Benson-Greenwald, and Amanda Diekman has revealed biases in how we describe, perceive, and value the various STEM fields. Study participants from all walks of life were presented with brief (and true) descriptions of different STEM fields (including some physical sciences, some social sciences, and some engineering). However, none of the descriptions were identified by name. In addition, a gender composition for the unnamed field was provided – but participants were randomly assigned to groups that were either told the field just described included mostly males or told that that same field was majority female. They were then asked to categorize each field as a “soft science” or a “hard science.”

It turned out that the exact same description was significantly more likely to be labeled by participants as a soft science if the respondent believed the field to include more women than men. The opposite was true for majority men and the hard science label – regardless of which discipline was actually being described. In other words, there is a strong cultural bias that women do something called soft science and men do something called hard science.


Image by Gerd Altmann (Pixabay.com)


Wikipedia tells us that "hard and soft science are colloquial terms used to compare scientific fields on the bases of perceived methodological rigor, exactitude, and objectivity." This naturally sets up the soft sciences as inferior to the hard sciences; it implies a societal judgment that soft science is pseudoscience – less difficult, less reliable, less valuable. In fact, additional results from this same study make it quite clear how these perceived soft/hard science differences are valued. When Light and colleagues asked participants to judge the value of a discipline based only on its label as a hard or soft science, the soft sciences lost on every single measure. As the study authors conclude, “the general category of soft sciences is perceived as less rigorous, less trustworthy, and less worthy of funding than hard sciences.”

But why does the lower valuation of soft sciences matter to engineers? Clearly engineering disciplines are firmly in the hard sciences camp, right? Well, it matters because at a subconscious level, women are associated with these less respected and less valued fields of work. The spillover from this is that at a subconscious level, all women suffer a blow to their credibility, status, and contribution as members of STEM fields – including those women in engineering.

So perhaps it is no surprise, then, that women engineers are 2.9 times as likely as men engineers to report “I have to repeatedly prove myself to get the same level of respect and recognition as my colleagues” and 2.8 times as likely to agree that “After moving from an engineering role to a project management/business role, people assume I do not have technical skill.” (And unfortunately, research has also demonstrated that engineers place more value on technical skills than on communication, planning, managerial, or similar essential skills.) No surprise, then, that women working in STEM are less likely than men to have their ideas endorsed by leadership or green-lighted for development.

They are judged, whether explicitly or implicitly, to be less of a “fit” for more “rigorous” or “hard science” work. And in some ways, it might not matter even if women do manage to get assigned to these more valued job assignments. Simply because more women are doing a job, it can come to be less valued. Yet one more example of women engineers facing the “damned if you do; damned if you don’t” barrier. 


Have you had a similar experience to Our Words Do Matter, or would you like to share a story, concern, or experience that relates to what you have just read? Click here to share (all responses are private and kept confidential).



Jennifer J. VanAntwerp is a professor of chemical engineering at Calvin University in Grand Rapids, Michigan. She researches how engineers learn, work, and thrive, beginning in college and extending throughout their professional careers. 


Wednesday, November 10, 2021

Gender Bias in Student Evaluations of Teaching

by Denise Wilson, November 10, 2021

Student evaluations of teaching (SETs) were originally designed to be formative, providing valuable input to instructors in higher education. When used as a tool for improving teaching, student responses to close-ended and short-answer questions on SETs can provide helpful feedback to instructors as well as to those who mentor or otherwise work with instructors on professional development.

Since their initial introduction, however, SETs have continued to shift from formative to summative. In many institutions, the ratings provided by students on just a small subset of items far overshadow the short answers and other items that, taken together, provide a more comprehensive picture of college teaching. The numbers within this subset of ratings are often used in key decisions about promotion, tenure, hiring, and firing.

Using this shorter list of SET numbers is convenient and quick. Unfortunately, there is plenty of evidence that these ratings are biased and are not consistent or adequate measures of student learning (a short review of the literature can be found here). Gender bias, where female instructors consistently receive lower ratings than their male peers for the same courses or for different sections within the same course, is especially well documented. Such bias generates concern about how SET ratings are used by higher education institutions to evaluate women instructors and how these ratings impact the future morale and effectiveness of the women who read them and take them to heart.     

Recently, our research team compared student perceptions of how well faculty supported them in their courses with how those same students rated those faculty on student evaluations of teaching in engineering courses at one large public university. We compared a pair of median scores:  Instructor Effectiveness in the Course (SET) and Students' Sense of Faculty Support (our survey).

Instructor Effectiveness was measured using the median score from one item on the university SET form:  "The instructor's effectiveness in teaching the subject matter was:" Students had the option of selecting Excellent (5), Very Good (4), Good (3), Fair (2), Poor (1).  

Students' Sense of Faculty Support was measured with a research-based survey that was not affiliated with the university's educational assessment office. The faculty support scale contained eleven items that had been validated in multiple previous studies in higher education and had a high internal consistency (reliability) of 0.92:

  • The professor in this class is willing to spend time outside of class to discuss issues that are of interest and importance to me.
  • The professor in this class is available when I need help.
  • The professor in this class is interested in helping me learn.
  • The professor in this class cares about how much I learn.
  • The professor in this class treats me with respect.
  • The professor has clearly explained course goals and requirements.
  • The professor teaches in an organized way.
  • The professor often uses real-world examples or illustrations to explain difficult points.
  • The professor often stays after class to answer questions.
  • The professor often stops to ask questions during class.
  • The professor is often funny or interesting.

All of the above items were rated on a scale from Strongly Agree (5) to Strongly Disagree (1).  
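The reliability figure quoted above (0.92) is the kind of number typically produced by Cronbach's alpha over the item responses. The sketch below, using hypothetical 5-point Likert data (the responses and function names are illustrative, not from the study), shows how a scale's internal consistency and a per-respondent median score might be computed:

```python
from statistics import median, variance

def cronbach_alpha(ratings):
    """Cronbach's alpha for rows of respondent ratings (one column per item)."""
    k = len(ratings[0])                                   # number of scale items
    items = list(zip(*ratings))                           # transpose: one tuple per item
    item_var_sum = sum(variance(item) for item in items)  # sum of per-item variances
    total_var = variance([sum(row) for row in ratings])   # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 5-point Likert responses: rows = students, columns = items
responses = [
    [5, 4, 5, 5, 4],
    [4, 4, 4, 5, 4],
    [3, 2, 3, 3, 2],
    [5, 5, 4, 5, 5],
    [2, 2, 3, 2, 2],
]

alpha = cronbach_alpha(responses)                         # internal consistency
support_median = median(sum(row) / len(row) for row in responses)  # median scale score
```

A high alpha (close to 1) indicates that the items are measuring a single underlying construct, which is what justifies collapsing the eleven items into one Faculty Support score per instructor.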

When we compared SET Instructor Effectiveness to Students' Sense of Faculty Support, we found that when asked a general question about instructor effectiveness, students exhibited negative bias toward women relative to more specific (and objective) questions about faculty support behaviors. Students completed both the instructional support survey (which contained the Faculty Support items described previously) and the SET in the last 2-3 weeks of the term associated with the course being evaluated. Four of the five female instructors (80%) in the study received higher Faculty Support ratings than their SET ratings would suggest, while only three of the nine male instructors (33%) did so (female instructors are shown in red; male instructors in blue):


These results also show that students rated women lower in instructor effectiveness than men while still reporting that these same women offered them levels of support that were above the trendline for the entire dataset.    

What do these data really tell us? First, they tell us that the correlation between general impressions of instructor effectiveness and more specific reports of faculty support is not particularly strong. Our data also reinforce the idea that women often receive lower SET scores than men for similar levels of instructional support and teaching quality. Alongside other related research demonstrating this type of negative bias against women, these results reinforce the call to rethink how we evaluate teaching among engineering faculty and instructors. At a time when equity is on the radar of many colleges of engineering around the country, gender bias in SETs underscores the need to transform the conversations we have about student evaluations of teaching -- both those held in meetings with other faculty and administrators and those held inside one's head when SET reports deliver negative and deflating messages after a semester of long hours and hard work.

As a country, we need excellence in teaching too much to compromise it by demoralizing good teachers with faulty rating systems. We can do better.


Have you had a similar experience to Gender Bias in Student Evaluations of Teaching, or would you like to share a story, concern, or experience that relates to what you have just read? Click here to share (all responses are private and kept confidential).



Denise Wilson is a professor of electrical and computer engineering at the University of Washington in Seattle, Washington. Her research interests in engineering education focus on belonging, engagement, and instructional support in the undergraduate engineering classroom.   
