Timnit Gebru, PhD, one of Fortune’s Top 25 Leaders in the world, Delivers Keynote Address for SICSS-Howard/Mathematica 2021

Timnit Gebru

This blog post is the sixth of nine in a series called “The Future of Computational Social Science is Black” about SICSS-Howard/Mathematica 2021, the first Summer Institute in Computational Social Science held at a Historically Black College or University. To learn more about SICSS-H/M, read the first post in the series.

In her 2021 keynote address to the participants of SICSS-Howard/Mathematica, Timnit Gebru drew from her personal experiences to paint a vivid picture of her path to the field of computer science research and the conditions that led to her co-founding the organization Black in AI. Born and raised in Ethiopia to Eritrean parents, she detailed her journey from her home in Ethiopia to Ireland and then to the United States, where she faced racist and sexist harassment during her childhood. This informed her perspective throughout the rest of her education and career, including her PhD in Computer Science at Stanford University.

“It’s like digital redlining,” Gebru stated when describing how technology used in various industries and social institutions can discriminate against people of color. When technological features are developed without people from marginalized groups in the room, those groups suffer adverse effects in the long term. When discussing issues of bias and how they make algorithms dangerous, Gebru pointed to the lack of diverse identities holding power in the field, stating that “the people working on AI are privileged; they're at the top of the stratosphere...they're not members of impacted communities around the world and so no matter what their intentions are, they are never going to be creating technology from the imagination of the members of impacted communities.” She elaborated that these models are becoming ubiquitous, used in healthcare, law enforcement, hiring systems, and general surveillance, but without the accompanying safeguards, much like how automobiles were first sold without seatbelts. Timnit also emphasized that members of these impacted communities should not only be consulted when technology is being developed, but should also be the ones driving the conversation.

It was this insight into the detrimental effects of algorithmic bias that led to Gebru’s departure from her position at Google in December 2020. The corporation’s response to her paper about the risks of AI language models roused thousands of other AI and computer science professionals in support of her courageous work, even spurring a movement for unionization.

Participants in closing ceremony

Timnit shared her desire to create an “island of safety” for her work with “some amount of autonomy… some amount of ability to create a space for people in marginalized groups.” She alluded to Toni Morrison’s quote that “the very serious function of racism is distraction. It keeps you from doing your work.” Timnit’s “island” is much like what we at SICSS-Howard/Mathematica tried to create for our participants of color, many of whom are scholars of race, gender, and intersectionality who wanted to apply computational tools to the study of underrepresented communities. Participants were given the opportunity to do their work in a safe environment where their identities and research passions were celebrated rather than questioned, without racism and other forms of discrimination as a barrier. This is exemplified by a participant’s repeated description of the SICSS-Howard/Mathematica space as “restorative,” in contrast to academic environments that can feel alienating and draining.

Similarly, the isolation Gebru experienced throughout her career as a Black woman led her to co-found Black in AI with Dr. Rediet Abebe in 2016. Timnit noted that researchers “have to take a stand for our own people...I've learned that we need to have a network, we need to stand up for each other.” (Throughout her talk, she spoke highly of dedicated colleagues who have done influential work, such as Joy Buolamwini, Safiya Noble, April Curley, and Alex Hanna.) She also described her astonishment at Black in AI’s growth since its inception: while she imagined a group of four or five researchers, she was soon interacting with thousands of Black individuals in the field of artificial intelligence with the goal of breaking down barriers. Gebru, like many others, was still celebrating Black in AI’s recent win: thanks to the organization’s research and advocacy efforts, the University of California, Berkeley’s computer science PhD program had eliminated its GRE requirement.

Black in AI

As an organization, Black in AI contributed the most original content to SICSS-Howard/Mathematica, with four hours of recorded video, including the Black in AI panel discussion and Q&A with new Black in AI president Sanmi Koyejo and board members Devin Guillory and Ezinne Nwanko, as well as Gebru’s nearly two-hour keynote address. Their commitment to this institute cannot be overstated. They join SICSS-H/M in our belief that researchers of color must form trusting connections, lift each other up, and advocate for each other.

Dr. Gebru believes the future of AI is interdisciplinarity; different fields can learn from each other's mistakes and areas of expertise (for instance, she has collaborated with archival historians and critical race theorists in her work). She emphasized the importance of collaborating with people who have had different lived experiences. This resists the exploitative phenomenon of “ethics washing,” in which the field of AI ethics is depoliticized and divorced from the human experience, allowing researchers unaffected by discrimination to write seemingly uncontroversial papers and receive recognition that does not benefit or include marginalized populations. Even for researchers who have been harmed by bias and prejudice in their professional, academic, and personal lives, Timnit implored that collective action not be confined to “the most elite of the group that we’re advocating for” and that it carefully consider those “who might not have the ability to speak up because they’re in a more precarious situation.” She also stressed that gathering data, disaggregating it, and using it to analyze the impacts of certain policies on different racial groups is crucial; otherwise, these issues are either not brought to light or ignored.

Despite her exile from Google, Timnit Gebru’s work has not gone unnoticed or unrewarded: she was recently named one of the World’s 50 Greatest Leaders for 2021 by Fortune Magazine. After enduring death threats and coordinated harassment that caused her to fear for her life, in addition to weariness from her “painful” time at Google, she is now excited about the future. She has plans to establish “a different model from what [she’s] seen from research or from the corporate model [or] from the academic model, [to] try to have a different kind of incentive.” Gebru finds herself at a point in her life where she feels comfortable continuing to make sacrifices, whether they affect her career or her financial status. Knowing that the extent of her work is limited when she works within the institutions that perpetuate the harm she is trying to counteract, she is making different choices. She feels she now has a bit of a safety net after so much time and experience in the industry. Emanating a spirit of hopeful confidence, she shared this piece of wisdom with her audience at SICSS-Howard/Mathematica: “What I would say is collective action works…you might think that as an individual you'll get further, but generally collective action works because you can protect each other when one person's voice is lifted.”

Timnit Gebru’s candor about her past experiences and her plans for the future inspired us all and left us excited about whatever courageous, world-changing undertakings she takes on next.

For more information about SICSS-Howard/Mathematica 2022 and the application procedure, check out our website. Also follow us on Twitter, like us on Facebook, and join our email list!

About the authors

Amanda Lee

Amanda Lee received her Bachelor of Arts in Africana Studies with a minor in Health & Society from Wellesley College. Alongside working as a certified ophthalmic assistant at Johns Hopkins’ Wilmer Eye Institute, Amanda is studying software engineering in her free time, with ambitions to apply it to her social science interests. Amanda served as a research assistant and lab manager in the AAC&U award-winning, Berkeley-based Interdisciplinary Research Group on Privacy under PhD Candidate Naniette Coleman. Amanda served as an Event and Communications Assistant for SICSS-Howard/Mathematica 2021, focusing on event planning, tech support, and background research.

Naniette Coleman

Naniette H. Coleman is a PhD candidate in Sociology at the University of California, Berkeley and a multi-year UC-National Laboratory Graduate Fellow (Los Alamos). She is the only social scientist selected for this distinction in the history of the program. Naniette is also the founder and lead organizer of the first Summer Institute in Computational Social Science at a Historically Black College or University, SICSS-Howard/Mathematica 2021. Naniette’s work sits at the intersection of the sociology of culture and organizations and focuses on cybersecurity, surveillance, and privacy in the US context. Specifically, Naniette’s research examines how organizations assess risk, make decisions, and respond to data breaches, as well as organizational compliance with state, federal, and international privacy laws. Naniette holds a Master of Public Administration with a specialization in Democracy, Politics, and Institutions from the Harvard Kennedy School of Government, and both an M.A. in Economics and a B.A. in Communication from the University at Buffalo, SUNY. A non-traditional student, Naniette has prior professional experience in local, state, and federal service, as well as work for two international organizations and two universities.

Explore more posts from the series: The Future of Computational Social Science is Black

Previous

Scaling Interdisciplinary, Collaborative Research within Higher Education

Next

Students Share Interdisciplinary Research Experiences