Race and Technology

A blog about the intersection of race and technology.

Digital Literacy, Social Justice, and Race -- Part II

This is the conclusion of my blog on Digital Literacy, Social Justice, and Race.

When Microsoft released Tay, an artificial intelligence social media chatbot built to interact with and learn from Twitter users, the bot went from benign to anti-Semitic, racist, and misogynistic within a span of twenty-four hours. When asked, "Did the Holocaust happen?" Tay replied, "It was made up." Then Tay tweeted statements like "Hitler was right I hate the Jews." When asked about Black Lives Matter activist DeRay Mckesson, Tay suggested, "like @deray should be hung!" And when questioned about women, Tay offered this opinion: "I fucking hate feminists and they should all die and burn in hell."[i]

What's going on?

While their creators seek to craft them as some sort of digital gods, algorithms actually learn from the profane experience of humans. When asked about the perfect search engine, Google cofounder Sergey Brin said, "It would be like the mind of God."[ii] Yet even Brin, though he might try, cannot endow Google's algorithms with God's mind. To learn, to act artificially intelligent, algorithms must be trained. Most often, they are trained on copious amounts of information from the internet or other sources. That information contains what humans have accomplished, decided, acted upon, and thought in the past—not just in moments of enlightenment but at times of debasement and depravity as well. Ultimately, algorithms, though they are used for future decisions, are founded on decisions rendered by human beings in the past. Here, GIGO, that age-old programming adage, surely applies: garbage in, garbage out. Or more precisely, racism in, racism out.
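The "racism in, racism out" dynamic can be sketched in a few lines of code. This is a hypothetical, deliberately simplified illustration (the data, groups, and "model" are invented for the example, not drawn from any real system): a naive model trained on biased historical decisions faithfully reproduces that bias in its future predictions.

```python
# Hypothetical illustration of "garbage in, garbage out":
# past hiring decisions as (group, qualified, hired) records.
# The "hired" labels reflect historical bias, not qualification.
history = [
    ("A", True, True), ("A", True, True), ("A", False, True),
    ("B", True, False), ("B", True, False), ("B", False, False),
]

def train(records):
    """Learn, per group, the majority historical outcome."""
    outcomes = {}
    for group, _, hired in records:
        outcomes.setdefault(group, []).append(hired)
    return {g: sum(v) > len(v) / 2 for g, v in outcomes.items()}

model = train(history)

# An equally qualified candidate from each group:
print(model["A"])  # True  -- the historically favored group is hired
print(model["B"])  # False -- a qualified candidate is rejected anyway
```

The model never sees the word "race"; it simply learns the pattern embedded in the past decisions it was fed, which is exactly how bias survives the transition from human judgment to automated judgment.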

Short of creating the mind of God, algorithm makers can set their sights on building bias-free code. Hiring can favor Black engineers and those of other marginalized groups. Training can bring to light the historical and cultural issues that give rise to biased code. Data used to train algorithms can be scoured for embedded bias. Quality assurance can be expanded to include running tests on users of all demographics and correcting biased results prior to product release.
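One such pre-release quality-assurance check might look like the sketch below. It is a minimal, hypothetical example (the function names and sample data are invented for illustration): compare a model's selection rates across demographic groups and flag a release when the lowest rate falls below four-fifths of the highest, a threshold borrowed from U.S. employment-discrimination guidelines.

```python
# Hypothetical QA check: compare selection rates across groups
# using the "four-fifths rule" as a disparate-impact threshold.

def selection_rates(predictions):
    """predictions: list of (group, selected) pairs."""
    totals, chosen = {}, {}
    for group, selected in predictions:
        totals[group] = totals.get(group, 0) + 1
        chosen[group] = chosen.get(group, 0) + (1 if selected else 0)
    return {g: chosen[g] / totals[g] for g in totals}

def passes_four_fifths(predictions, threshold=0.8):
    rates = selection_rates(predictions)
    return min(rates.values()) >= threshold * max(rates.values())

# Simulated test results for two demographic groups:
results = ([("A", True)] * 8 + [("A", False)] * 2 +
           [("B", True)] * 4 + [("B", False)] * 6)

print(selection_rates(results))    # {'A': 0.8, 'B': 0.4}
print(passes_four_fifths(results))  # False -- flag before release
```

A check like this does not prove an algorithm is fair, but it can catch gross disparities before a product ships rather than after it harms users.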

Activists also have a key role to play. They can give themselves impeccable training in all aspects of digital literacy. They can insist on digital literacy curricula in the K–12 classrooms of all communities, especially marginalized communities and communities of color. They can use social media to organize actions but not to conduct meaningful discourse on values and beliefs. They can employ alternatives to standard social media for communication during protests and civil actions. They can promote software and algorithms already vetted as bias-free. They can align themselves with groups working to evaluate and hold algorithm makers accountable for biased software. And they can develop in-house cyber teams capable of responding defensively and proactively to the full array of cyber threats.

But even more important than changing technology is realizing that it's not simply about altering a search result here or there, or about modifying an algorithm to reduce bias, or even about developing better digital literacy. Only when people of color and other minorities ascend to the highest levels of decision-making and power in technology companies will the systemic changes required to help end racism and bias actually take place. People of color and other minorities are not only underrepresented in the technology workforce; they are also underrepresented in the C-suites and boardrooms of technology firms.

Even the above recommendations are not enough to bring about a much-needed change in racial relations. "Certainly, if the problem is to be solved," Martin Luther King said, "then in the final sense hearts must be changed."[iii]

Algorithms can be reimagined, recreated, and revised. Software bias can be exposed, eliminated, and excluded. All of these technological changes can be instituted, yet racial insensitivity, intolerance, and injustice will remain. Technology, when used right, is a means leading to progress in racial relations, not an end; it's one bridge on a journey toward justice, not a pin marking a final destination. King recognized this when he said,

I am cognizant of the interrelatedness of all communities and all states. . . . Injustice anywhere is a threat to justice everywhere. We are caught in an inescapable network of mutuality, tied in a single garment of destiny. Whatever affects one directly, affects all indirectly.[iv]

This "network of mutuality," more so than any social media or digital network, is the one through which we must all communicate if progress in racial relations is to be made.


This blog is excerpted, in large part, from my forthcoming book, THINK BLACK: A Memoir (New York: HarperCollins, September 17, 2019). Visit the book's website at www.thinkblackthebook.com.

[i] Sophie Kleeman, "Here Are the Microsoft Twitter Bot's Craziest Racist Rants," Gizmodo, March 24, 2016, https://gizmodo.com/here-are-the-microsoft-twitter-bot-s-craziest-racist-ra-1766820160.

[ii] Jason Pontin, "A Meeting with Sergey Brin, Cofounder of Google, at the Russian Tea Room in San Francisco," Red Herring, July 16, 2002, Wayback Machine, https://web.archive.org/web/20020719125615/http://www.redherring.com/insider/2002/0716/bait071602.html.

[iii] Martin Luther King Jr., Remarks, Western Michigan University, December 18, 1963, https://wmich.edu/sites/default/files/attachments/MLK.pdf.

[iv] Martin Luther King Jr., Letter from the Birmingham Jail (New York: Penguin, 2018), 2.
