The robots are here. They help us check out at the grocery store, target us with timely ads on Instagram for a new pair of shoes, turn off our lights at a simple voice command and even determine the songs we’re most apt to enjoy on our favorite music streaming platforms.
Though technology has given us more convenience, connection and access than ever before, the algorithms hidden beneath its seemingly harmless code, the same algorithms shaping our lives, are also grossly discriminating against our community, and all too often with impunity. If you think this doesn’t affect you, think again.
For us, unchecked technology shows up as police departments disproportionately deploying facial recognition software in marginalized communities to target criminal behavior, or Black people being tagged as gorillas in Google image searches, or Facebook approving housing ads filtered to prevent them from being shown to minorities.
These practices are what Princeton University associate professor Ruha Benjamin, Ph.D., refers to as “the New Jim Code.” In her book Race After Technology, Benjamin explains that tech fixes often hide, speed up and even deepen discrimination, furthering racial stereotypes and codifying the very biases of the human programmers who create the technology.
Fortunately, four Black women are holding the code and its creators accountable. By rooting out bias in technology, these Black women engineers, professors and government experts are on the front lines of the civil rights movement of our time.
FAY COBB PAYTON, Ph.D.
A longtime researcher and expert in the field of information and decision systems, Payton has been asking tough questions through her research on Black women’s access to education and technical tools to address health disparities, including HIV and mental health. She examines issues around inclusion in technology innovation, workforce and entrepreneurship.
“I look at the roles and skills of health professionals and the risks associated with training AI systems to understand who will be the most impacted by disease,” explains Payton. Aside from her research, Payton serves on the faculty at North Carolina State University and works on behalf of the National Science Foundation, directing grants to deepen knowledge in computing and finding ways to foster greater inclusion within the industry.
TIMNIT GEBRU
Gebru grew tired of being the only Black person in a sea of thousands of engineers at the AI tech conferences she attended around the world. But being alone in rooms wasn’t exactly foreign to her. Gebru earned her stripes in electrical engineering and had worked at Apple on analog circuit design and other projects in which Black women were severely underrepresented.
She saw firsthand that colleagues designing sociotechnical products and systems were largely tone deaf to the issues facing Black communities. “I was having interactions with people who didn’t know anything about police violence or anything about discriminative systems,” says Gebru. In 2016, after a rant on Facebook following yet another experience at a nondiverse tech conference, Gebru established a private group on the social media platform to connect Black people working in AI, and a few hundred people asked to be added.
The following year Gebru co-organized the Black in AI workshop. Since then the event has attracted hundreds of Black researchers and engineers from around the world and helped to advance more Black people within the field. Attendees meet recruiters, share their research and find mentors to help them navigate applying to top-tier doctoral programs. Each year Gebru and her team have been able to provide travel grants through corporate sponsorships.
This year’s workshop will be held alongside the NeurIPS conference in Vancouver, British Columbia, with up to 500 people expected to attend. Black in AI’s Facebook group has more than 800 members, and a separate group on Google boasts 1,500, with more requests pending.
JOY BUOLAMWINI
Buolamwini, a computer scientist and researcher in the Massachusetts Institute of Technology’s Media Lab, was behind a study released earlier this year evaluating facial recognition software from tech giants Microsoft, IBM and Amazon, among others. Their AI, she found, could reliably identify lighter-skinned men but routinely failed to recognize darker-skinned women. Buolamwini, a Black woman, could not be seen.
“Imagine a scenario in which self-driving cars fail to recognize people of color as people—and are thus more likely to hit them—because the computers were trained on data sets of photos in which such people were absent or underrepresented,” Buolamwini explained to Fortune.
As she completes her Ph.D. program at MIT, Buolamwini is leading the charge against the unethical use of facial analysis technology by companies. As part of that effort, she created the Algorithmic Justice League, a collective that pushes for oversight and regulation and asks businesses to commit to the Safe Face Pledge to mitigate bias and abuses.
MUTALE NKONDE
“We’re no longer having to march,” says Nkonde, a tech policy analyst and former fellow for the Data & Society Institute. While there she coauthored “Advancing Racial Literacy in Tech,” a white paper defining approaches to reducing discrimination in the field. “We have to be unplugging things and making sure tech is optimized for justice,” she adds.
When the landlord of a Brownsville, Brooklyn, housing project attempted to force residents to submit to facial recognition technology to gain access to their apartments, Nkonde was on Congresswoman Yvette Clarke’s team as a senior tech policy adviser helping to shape the No Biometric Barriers to Housing Act of 2019, which was introduced last summer.
The bill would prohibit the use of biometric technology in housing funded by the Department of Housing and Urban Development (HUD), protecting tenants from biased surveillance technology. For the past three years Nkonde has been researching the impact of emergent technologies on Black populations. Since the release of her paper in May, she has been giving talks throughout the U.S. and Europe on how to make antiracist products, policies and procedures.
As a fellow at the Berkman Klein Center at Harvard, she designs projects related to tech literacy and policy. In early November Nkonde will launch AI for the People (AIFTP), a nonprofit that uses celebrity culture to educate the Black community about the social justice implications of the deployment of advanced technologies.