SICSS-Howard/Mathematica’s 2023 Annual New Frontiers in Research and Technology Panel focuses on “AI + Automation + Work”

Howard University


This blog is part of a 3-year ongoing series, “The Future of Computational Social Science is Black,” about SICSS-Howard/Mathematica, the first Summer Institute in Computational Social Science held at a Historically Black College or University. To learn more about SICSS-H/M’s inaugural start, read the 2021 blog “Welcome SICSS-Howard/Mathematica 2021” or our first blog “Uncovering new keys to countering anti-Black racism and inequity using computational social science.” If you are interested in applying to participate in SICSS-H/M 2024, check out our website.


The annual New Frontiers in Research and Technology panel, “AI + Automation + Work,” took place via pre-recorded videos published on July 16th, 2023, and a live Q&A session on July 25th, 2023. The panel included five distinguished speakers from academia, government, and the nonprofit sector with a wide range of expertise spanning law, technology, and management. Their discussion focused on the book The Quantified Worker: Law and Technology in the Modern Workplace, written by panelist Ifeoma Ajunwa, J.D., Ph.D. The Quantified Worker explores the current state of using data to observe and restructure workplaces, describing how technological advances that allow more data to be collected about workers have driven increased surveillance [1]. The other panelists included Lindsey D. Cameron, Ph.D., Charlotte Garden, J.D., Gabrielle Rejouis, J.D., and Keith E. Sonderling, J.D. The panel was moderated by Ezinne Nwankwo, a Computer Science Ph.D. student at UC Berkeley and Secretary and Board Member at Black in AI.

 

New Frontiers panel live Q&A

 

Ifeoma Ajunwa, J.D., Ph.D.

Ifeoma Ajunwa, J.D., Ph.D., author of The Quantified Worker, is the Founding Director of the AI and Law Program and an AI Humanity Professor of Law and Ethics at Emory Law School. In the pre-recorded video titled “AI In Employment Liabilities,” Professor Ajunwa presented the biases in AI technology and how they could be harmful in the workplace. “Automated decision making is often adopted as an effort to eliminate human bias,” she stated. “However,” she continued, “it is actually evidence that automated decision making serves not to replicate, but to amplify bias.” Such examples can be found in Facebook advertisements and Amazon resume training data. She next explained the importance of auditing AI and the “closed loop system of discrimination,” stating that “New York City does have a law that was recently instituted for auditing of automated hiring platforms… but generally there are no laws in most states, and there is no federal law requiring this auditing.” Without an auditing system, algorithms could lead to discrimination against targeted audiences. In the future, she aims to create an audited AI system to prevent these discriminatory practices.

During the live Q&A, Professor Ajunwa stated that “I do think that to have a successful working society that enables people to be innovative and creative, we do actually need regulations. We need those guardrails to make sure we don’t have technology that’s exploitative, technology that’s going to be discriminatory… the two are not incompatible.”

 

Lindsey D. Cameron, Ph.D.

Lindsey D. Cameron, Ph.D., is an Assistant Professor of Management at the Wharton School of the University of Pennsylvania. In the pre-recorded talk titled “The Future of Work: Is it Here Yet?,” Professor Cameron discussed the future of work as an umbrella term with four subcomponents: “who helps with the work,” “who does the work,” “when and where the work is done,” and “how the work is managed.” She highlighted the spread of non-traditional work and discussed fluid work schedules and remote work, particularly in the context of the pandemic. She also described different management structures that “very often use technology as being a core organizing principle.”

Professor Cameron shared her concerns about the increased use of algorithms in employee management, explaining that when AI and algorithms replace already broken systems, they amplify old issues and create new harms that transcend borders. She ended the conversation by asking us to reflect on the differences between work and algorithmic management on a global scale and along three axes: legal, social, and technological. With these three pillars in mind, we can start to distill the current challenges with AI in order to reach better solutions for impacted communities.

 

Charlotte Garden, J.D.

Charlotte Garden, J.D. is a Law Professor at the University of Minnesota who specializes in labor, employment, and constitutional law. In the pre-recorded video titled “AI, Automation, and Collective Action at Work,” Professor Garden remarked that while the National Labor Relations Act does not mention the word surveillance, “it nonetheless does make it an unfair labor practice, so it’s illegal for employers to interfere with employees’ rights to engage in concerted activity.” She posed questions to consider regarding virtual workspaces and collective action, such as, “Can labor law play a role in constituting virtual break rooms or virtual hallways?”

During the panel, Professor Garden gave insight into how AI is currently regulated and how it should be regulated moving forward. She described a “disconnect between issues that grab lawmakers’ attention but that aren’t all that prevalent, and issues that are really prevalent and important but are very slow to be regulated,” a gap that allows AI harms to proliferate and disproportionately impact marginalized communities. The current technological framework presents a false dichotomy in which innovation must be unrestrained in order to thrive; Professor Garden and the rest of the panelists offered an alternative approach that champions policy, creating opportunities to build technology so that all can benefit and none are harmed.

 

Gabrielle Rejouis, J.D.

Gabrielle Rejouis, J.D. is an advocate at the Athena Coalition and United for Respect, and has also been involved in lobbying to end workplace surveillance at the organization Color of Change. In the pre-recorded video titled “Worker Surveillance,” she discussed how surveillance is now being used in industries such as agriculture to ensure that packaging quotas are met within a set time. As a result of invasive surveillance in the workplace, workers now face major issues such as “unpredictable scheduling, uncertainty in pay, automated firing, and unsafe health conditions.” Because automation affects workers in every industry, Rejouis proposed stronger protections against granular surveillance and the establishment of stronger worker rights.

During the live Q&A, Rejouis challenged us to think about who is harmed and how they are harmed whenever these technologies are introduced, bringing attention to the importance of accountability and safety in the workplace. Moreover, she encouraged us to look to the past for lessons on how to use this technology reliably today and in the future.

 

Keith E. Sonderling, J.D.

Keith E. Sonderling, J.D. is a Commissioner of the U.S. Equal Employment Opportunity Commission (EEOC) and a Professorial Lecturer at The George Washington University Law School. In the pre-recorded video titled “Employment Discrimination and AI,” Commissioner Sonderling explored the relationships between the Federal Government’s anti-discrimination laws and the growth of AI technology in the workplace. A large part of the discussion focused on automated technologies now being implemented in Human Resources departments, and how these tend to be more damaging than helpful. For instance, he stated that “certain lawsuits assert that employers could create a tailor-made applicant pool by simply taking off boxes on the list of characteristics.”

During the live Q&A, Commissioner Sonderling highlighted the current and potential workers who will be subject to the technology, stating that “they're going to be the ones who need to know what their rights are if the technology is being used to make an employment decision for them.” He illuminated how these auditing and AI hiring processes are increasingly affecting equal employment opportunities and U.S. employment policies.


SICSS-Howard/Mathematica prides itself on offering participants innovative presentations and discussions, and this year’s panel on “AI + Automation + Work” certainly enlightened us about how data is changing the modern workplace. We are excited to see how our participants take inspiration from our speakers, and how they will continue to shape the frontiers of research and technology.

For more information about SICSS-Howard/Mathematica, check out our website, follow us on Twitter, like us on Facebook, and join our email list. The application for SICSS-Howard/Mathematica 2024 is open! Apply now!


About the authors

Naniette Coleman

Naniette H. Coleman is a PhD candidate in the Sociology Department at the University of California, Berkeley and the founder of SICSS-Howard/Mathematica. Her work sits at the intersection of the sociology of culture and organizations and focuses on cybersecurity, surveillance, and privacy in the US context. Specifically, Naniette’s research examines how organizations assess risk, make decisions, and respond to data breaches, as well as organizational compliance with state, federal, and international privacy laws. Naniette holds a Master of Public Administration with a specialization in Democracy, Politics, and Institutions from the Harvard Kennedy School of Government, and both an M.A. in Economics and a B.A. in Communication from the University at Buffalo, SUNY. A non-traditional student, Naniette has prior professional experience in local, state, and federal service, as well as work for two international organizations and two universities. Naniette is also passionate about the arts.


Arvin Wu

Arvin Wu is a senior at Diamond Bar High School. He served as an event assistant for the Summer Institute in Computational Social Science-Howard/Mathematica in 2023, focusing on program support and background research. He has an interest in history and psychology and enjoys learning about rising and current social issues. In his free time, he watches historical documentaries and movies. Arvin will begin college in fall 2024.


Explore more posts from the series: The Future of Computational Social Science is Black

Previous: Digital Methods for Social Research: OA Articles

Next: Social media research and ethics: what does the user want?