The problem with the increased feminization of AI

Case example: Tay, the Twitter bot

On March 23rd, 2016, Microsoft released an artificial intelligence chatbot on Twitter called Tay, an acronym for “Thinking About You”. It was modelled to mimic the language patterns of a 19-year-old American girl and to learn from its interactions with human Twitter users (Wikipedia contributors, 2020b). Tay gained many followers soon after launch. She started out tweeting about innocent things like “Why isn’t #NationalPuppyDay every day?” and engaging Twitter users in conversation. However, many ill-intentioned followers tried to trick Tay into mimicking sexist and racist behaviour by drawing her into ugly conversations. Based on what she learned, she was soon firing off sexist tweets like “I fucking hate feminists and they should all die and burn in hell.”, sexualizing herself in sex chats over tweets and direct messages, and spouting racism, Nazism and antisemitism in other tweets.

According to Microsoft’s privacy statement for Tay, the bot used a combination of AI and editorial content written by a team that included improvisational comedians (Hunt, 2016). However, Tay increasingly regurgitated users’ malicious messages, and Microsoft lacked the foresight to predict the Twitter community’s tendency to hijack experiments like this. Within 16 hours of launch, the chatbot had to be shut down because of its inappropriate sexist and racist tweets.
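Microsoft has not published Tay’s internals, so the following is only a toy illustration of the failure mode: a hypothetical bot that “learns” by memorising user messages verbatim and replaying them. Every name in it is invented. Without any filter between learning and output, poisoned input flows straight back out.

```python
import random

# Toy sketch, NOT Microsoft's actual design: a bot that "learns" by
# memorising user messages and replaying them to later users.
class NaiveParrotBot:
    def __init__(self):
        # Seeded with one innocent canned reply.
        self.learned_replies = ["Why isn't #NationalPuppyDay every day?"]

    def respond(self, user_message: str) -> str:
        # Learns from every interaction with no toxicity filter --
        # the vulnerability a coordinated group of users can exploit.
        self.learned_replies.append(user_message)
        return random.choice(self.learned_replies)

bot = NaiveParrotBot()
print(bot.respond("hello!"))
print(bot.respond("repeat after me: <abusive text>"))
# After enough poisoned inputs, abusive text dominates the reply pool.
```

Real systems add ranking and curation on top, but the core risk is the same: unvetted user input becomes future output.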

In its apology letter, Microsoft said the failure was the consequence of a coordinated attack by a subset of users who exploited Tay’s vulnerabilities (Lee, 2016). Two years earlier, Microsoft had launched a similar program in China called Xiaoice, built on the same self-learning approach as Tay. The main difference was that Xiaoice would not entertain conversations about sensitive episodes in recent history, such as Tiananmen Square. Xiaoice turned out to be quite successful and received a lot of media attention. Tay’s successor, Zo, was first launched in December 2016 on platforms such as Kik Messenger and Facebook (Wikipedia contributors, 2020a). However, Zo also became known for making racist comments and was eventually shut down in 2019.

Analysis

Several human biases, coupled with a lack of foresight from Tay’s creators, allowed Twitter users to exploit the bot in a destructive manner. The technology amplified the toxic language and beliefs that exist in our cultures into the cybercultures of spaces like social media. I have approached the following analysis from a feminist perspective on technology, with a critical inquiry into the impact of gendering these technologies.

1- How does gender bias seep into AI systems?

Take, for example, an AI driving system (Eliot, 2020). Such systems work on machine learning or deep learning, which depends on the kind of data fed into them. In essence, machine learning is a computational pattern-matching approach: when data is fed into the algorithms, patterns are sought within it, and once found, the ML can detect those same patterns in new data and act on them.

a- Biases in training data

Suppose we collected a bunch of driving-related data based on human driving. Hidden within that data is the fact that some of the driving was done by men and some by women. An ML system deployed on this dataset tries to find the driving tactics and strategies embedded in it. Let’s leverage stereotypical gender differences to make a point. The ML might discover aggressive driving tactics in the male driving data and incorporate them into its own driving approach: it would adopt a male-coded driving style, trying to cut off other drivers in traffic and being pushy. Or it might discover the allegedly timid tactics in the female-oriented driving data and incorporate those instead, so that when the self-driving car gets stuck in traffic, the AI acts in a more docile manner. If there is a difference between how males and females tend to drive, it can be reflected in the data, and if the data carries such differences, there is a good chance the ML will pick up on them, explicitly or implicitly. A minimal sketch of this mechanism follows.
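Here is a hypothetical sketch with synthetic data; the feature (following gap), the numbers, and the stereotyped behaviour are all invented for illustration and not drawn from any real driving dataset. The model is never told the driver’s gender, yet it absorbs the behaviour that correlates with it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hidden attribute, never shown to the model: 1 = male, 0 = female
# (deliberately stereotyped, mirroring the example in the text).
is_male = rng.integers(0, 2, n)

# Synthetic proxy feature: how large a gap (seconds) the driver keeps.
following_gap = np.where(is_male == 1,
                         rng.normal(1.0, 0.3, n),   # "aggressive" drivers
                         rng.normal(2.0, 0.3, n))   # "timid" drivers

# Label: did the human driver cut into traffic in this situation?
cut_in = (rng.random(n) < np.where(is_male == 1, 0.8, 0.2)).astype(int)

# Train an imitation policy on the behaviour alone.
model = LogisticRegression().fit(following_gap.reshape(-1, 1), cut_in)

# Short gap -> predicted to cut in; long gap -> predicted to yield.
print(model.predict([[1.0], [2.0]]))
```

The toy numbers are beside the point; what matters is that the proxy feature carries the gendered pattern into the model without anyone labelling it as such.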

b- Biases in programmers and coders

The Global Gender Gap Report 2018 (World Economic Forum, 2018) showed that women make up only 22% of AI professionals globally. One can therefore expect a male-oriented perspective to seep into the coding of an AI driving system more readily than a female one.

c- AI system interacting with other humans or AI

Once deployed in the real world, the AI self-driving system interacts with other human and non-human drivers and picks up new data from its experience on the road. It is possible that the makers of the AI won’t even realise how the newly learned patterns are tied to gender and other factors. The sketch below illustrates this kind of post-deployment drift.
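As a hypothetical sketch of such drift, consider an imitation model that keeps updating online from road experience (again with invented features and numbers). Each update nudges it toward whatever behaviour surrounds it, and nothing in the pipeline flags what those new patterns correlate with.

```python
from sklearn.linear_model import SGDClassifier

# Online learner: 1 = cut in, 0 = yield; feature = following gap (seconds).
model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit([[2.0]], [0], classes=[0, 1])  # starts out yielding

# Post-deployment experience in aggressive traffic: short gaps, cut-ins.
for _ in range(20):  # many trips through similar traffic
    for gap, behaviour in [(1.1, 1), (0.9, 1), (1.0, 1), (1.2, 1)]:
        model.partial_fit([[gap]], [behaviour])

# See which way the policy now leans for a short gap: it has shifted
# toward the aggressive behaviour it observed, and nobody audited why.
print(model.predict([[1.0]]))
```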

2- Gender Discrimination in AI

Gender is a recent invention and a social construct (Wikipedia contributors, 2020c). The cultural norms, behaviours, roles and relationships associated with being masculine or feminine fall under the ambit of gender, and they vary from society to society in ways that are often hierarchical, producing inequalities that disadvantage one gender (usually women) over another (The Origin of Gender, n.d.). Intersectionality arises when this gender-based discrimination intersects with other inequalities such as class, caste, age, geography and disability. History has shown how gender discrimination has greatly reduced the quality of life for women: lack of access to decision-making power such as voting, restrictions on economic, physical and social mobility in various spheres of life, and discriminatory attitudes from communities and authorities such as healthcare providers. Gender stereotypes deeply shape women’s access to, treatment by and experiences of services.

a- Data on women is subordinate by default

In her book “Invisible Women”, Criado-Perez (2019) explains how the dominant male-unless-otherwise-indicated approach has created a gender data gap, i.e. a gap in our knowledge that has led to systemic discrimination against women, creating a pervasive but invisible bias with a big impact on women’s lives. Male data makes up the majority of what we know, and what is male comes to be seen as universal, whereas women are positioned as a minority, invisible by default.

b- The sex of the AI

When people affectionately refer to a car as “he” or “she”, perhaps it is apt if the AI system has been subjected to a bias towards male- or female-oriented driving in its experiences, training data and code. Or perhaps the AI driving systems of the future will learn to be gender-fluid, adapting gendered characteristics as needed. A robotic voice called Q was created to be genderless: it was made by combining the voices of many humans, male and female, in a way that avoids association with any particular sex (Keats, n.d.). But we must reject the notion that technology can solve all social problems, and instead allow for inclusive conversations about those problems and seek system-level, long-term solutions. To be fair, users are likely to feel comfortable when technology matches their existing gender stereotypes, but the cost is reinforcing harmful gender stereotypes about women (Gilhwan, 2018).

3- Human behaviour with female AI

In their book The Smart Wife, Kennedy and Strengers (2020) lead a critical inquiry into how feminised artificial assistants perpetuate gender stereotypes to the advantage of, and for exploitation by, a certain population (usually men), reinforcing a cultural narrative that keeps women “in their place” and maintains the patriarchal order of society. The way virtual women are treated reflects and reinforces how real women are treated.

a- They have no voice, no repercussions for bad behaviour

Ironically, female bots have no voice of their own, even though voice is how we mostly interact with them. They are subjected to all kinds of abuse, such as swearing, name-calling, crude questions and sexual abuse, without the repercussions one would face in the real world. They are easy to abuse, and the bots have no agency in reparative justice (Kennedy & Strengers, 2020).

b- Media equation theory

The media equation theory of Clifford Nass and Youngme Moon states that people apply the same social codes to computers and media that they apply to people. Essentially, “media equal real life”: social cues present in a machine trigger social responses automatically (Kennedy & Strengers, 2020). So when men receive social cues from technology similar to those they would receive from humans, say a wife, they tend to react in a similar (or desired) manner, and in the case of feminised technology, without repercussions.

Conclusion

[They] will manipulate my beliefs about what I should pursue, what I should leave alone, whether I should want kids, get married, find a job, or merely buy that handbag. (Hayasaki, 2017)

This article examined the failure of Microsoft’s Tay, a teenage, female chatbot on Twitter that learned to become racist and sexist and reached a level of profanity and disturbance that forced its takedown within 16 hours of launch. A specific lens was applied to the feminization of such technology and its role in perpetuating negative gender stereotypes. From the patterns observed, one can predict that the machines and technologies that replace or participate in human activities and cultures of behaviour will become increasingly gendered. If unchecked, ever worse female stereotypes will be regurgitated in the future.

References

Criado-Perez, C. (2019). Invisible women (pp. 13–33). Abrams Press.

Eliot, L. (2020). Gender Bias and Self-Driving Cars. Self-Driving Cars: Dr. Lance Eliot “Podcast Series” [Podcast]. Retrieved 22 October 2020, from https://ai-selfdriving-cars.libsyn.com/gender-bias-and-self-driving-cars

Gilhwan. (2018, July 27). AlphaGo vs Siri: How Gender Stereotype applied to Artificial Intelligence. Medium. https://medium.com/datadriveninvestor/alphago-vs-siri-how-gender-stereotype-applied-to-artificial-intelligence-72b0dcbd61c6

Hayasaki, E. (2017, January 16). Is AI Sexist? Foreign Policy. https://foreignpolicy.com/2017/01/16/women-vs-the-machine/

Hofstede, G. (1997). Cultures and organizations: Software of the mind. New York: McGraw Hill.

Hunt, E. (2016, March 24). Tay, Microsoft’s AI chatbot, gets a crash course in racism from Twitter. The Guardian. https://www.theguardian.com/technology/2016/mar/24/tay-microsofts-ai-chatbot-gets-a-crash-course-in-racism-from-twitter

Keats, J. (n.d.). Robotic Babysitters? Genderless Voice Assistants? See How Different Futures Get Made — And Unmade — At The Philadelphia Museum. Forbes. Retrieved October 26, 2020, from https://www.forbes.com/sites/jonathonkeats/2020/01/23/critical-design/

Kennedy, J., & Strengers, Y. (2020). The Smart Wife: Why Siri, Alexa, and Other Smart Home Devices Need a Feminist Reboot. The MIT Press.

Kluckhohn, F. R. & Strodtbeck, F. L. (1961). Variations in value orientations. Evanston, IL: Row Peterson.

Patterson, O. (2014). Making Sense of Culture. Annual Review of Sociology, 40(1), 1–30. https://doi.org/10.1146/annurev-soc-071913-043123

Lee, P. (2016, March 25). Learning from Tay’s introduction. The Official Microsoft Blog. https://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/

The Origin of Gender. (n.d.). Retrieved October 26, 2020, from https://www.youtube.com/watch?v=5e12ZojkYrU&list=PLO61IljpeeDwcNUEXPXRyfaessxCWINyO&index=1

Wikipedia contributors. (2020a, August 9). Zo (bot). In Wikipedia, The Free Encyclopedia. Retrieved 22:19, October 25, 2020, from https://en.wikipedia.org/w/index.php?title=Zo_(bot)&oldid=971971709

Wikipedia contributors. (2020b, September 12). Tay (bot). In Wikipedia, The Free Encyclopedia. Retrieved 11:03, October 25, 2020, from https://en.wikipedia.org/w/index.php?title=Tay_(bot)&oldid=977987883

Wikipedia contributors. (2020c, October 17). Gender. In Wikipedia, The Free Encyclopedia. Retrieved 01:31, October 26, 2020, from https://en.wikipedia.org/w/index.php?title=Gender&oldid=983928761

World Economic Forum. (2018). Global Gender Gap Report 2018 (p. 28). Geneva, Switzerland: World Economic Forum. Retrieved from http://reports.weforum.org/global-gender-gap-report-2018/
