Jalil Jalilov

Committee on Industry, Research, and Energy (ITRE)



#E-thics: AI is increasingly being turned to as a means of keeping people safe online with limited human intervention, but these systems are used alongside content-promotion algorithms that have been seen to fuel extremism. Against a backdrop of increasing awareness of both algorithmic bias and human confirmation bias, how can European countries work to ensure that protection from hate speech and discrimination is enforced online?



Introduction

“Our intelligence is what makes us human, and AI is an extension of that quality. Artificial intelligence is extending what we can do with our abilities. In this way, it’s letting us become more human.” - Yann LeCun, Machine Learning Specialist.

In this day and age, Artificial Intelligence plays an immense role in enforcing community guidelines across social media platforms. Its advantages span a spectrum that reaches from social media moderation to the vast majority of modern industries, and its constant development leaves no doubt about its growing involvement in people's lives. As the benefits of AI grow, investment in the technology keeps rising with it. At the same time, as AI's influence increases, its side effects pose a substantial threat to fundamental human rights. According to Eurobarometer polling, the less digitally literate parts of the population consider the development of Artificial Intelligence dangerous and perceive it as a threat to society; the same polling shows that 61% of Europeans look favourably on AI and robots, while 88% say these technologies require careful management.


Alongside the benefits of AI development, there are negative sides as well. As the number of social media platforms increases, giving people more opportunities to express their opinions, the amount of hate speech, bullying and harassment online increases too. Internet users can now remain anonymous while expressing their views, which makes it easier to spread hate and violence online without being punished.


In addition, human confirmation bias can fuel extremism, as it is amplified by filter bubbles, which show users content they are likely to agree with while excluding opposing views. These biases contribute to overconfidence in personal beliefs, leading users to ignore alternatives.

KEY TERMS AND CORE CONCEPTS

  • Artificial Intelligence (AI) is a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence.

  • Data Protection is the process of protecting crucial information from loss, falsification, or corruption. As data is created and stored at unprecedented rates, the significance of data protection grows.

  • Algorithmic Biases are systematic and recurrent errors in computer programs that produce "unfair" outcomes.

  • Artificial Intelligence Bias occurs when a machine consistently produces different results for one group of users than for another (a minimal sketch of how such bias can be detected follows after this list).

  • Algorithm is a list of instructions used to solve a problem or perform a task, based on an understanding of the available alternatives.

  • Social media algorithm is a set of rules and signals that automatically ranks content on a social platform based on how likely each individual social media user is to like it and interact with it.

  • Machine Learning is an area of AI that enables machines to improve themselves by recognising patterns in the data provided and making predictions with minimal human intervention.

  • Extremism is essentially a political term describing activities that do not accord with the norms of the state, are fully intolerant of others, reject democracy as a means of governance and problem-solving, and reject the existing social order.

  • Hate speech is a public speech that expresses hate or encourages violence towards a person or group based on something such as race, religion, gender, or sexual orientation.

  • Bots – automated accounts that interact on social media platforms. These accounts are not run by real people; instead, they are programmed to interact with other users.

  • Trolls – individuals who push narratives and bully people. Such accounts are usually found to promote false information and hate speech.

  • Deep Fake – a type of synthetic media in which a person in an existing image or video is replaced with someone else's likeness.
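
To make the notions of algorithmic and AI bias above more concrete, the following minimal Python sketch shows one way a moderation system could be audited for disparate outcomes. The flag_content function and the sample posts are hypothetical stand-ins, not any platform's real model; the point is simply to compare flag rates across user groups.

```python
# Minimal sketch: auditing a hypothetical content-moderation rule for AI bias.

def flag_content(post: dict) -> bool:
    # Stand-in for a real classifier: naively flag posts containing
    # any word from a fixed blocklist.
    blocklist = {"hate", "attack"}
    return any(word in post["text"].lower() for word in blocklist)

posts = [
    {"text": "I hate waiting for the bus", "group": "A"},
    {"text": "Great match yesterday", "group": "A"},
    {"text": "We should attack this problem together", "group": "B"},
    {"text": "Lovely weather today", "group": "B"},
]

# Compute the flag rate separately for each group of users.
rates = {}
for group in ("A", "B"):
    group_posts = [p for p in posts if p["group"] == group]
    flagged = sum(flag_content(p) for p in group_posts)
    rates[group] = flagged / len(group_posts)

print(rates)  # A large gap between the groups' rates signals a biased system.
```

In this toy example both groups happen to end up with the same flag rate, but the same audit applied to a real classifier would reveal whether one group's posts are removed disproportionately often.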


KEY ACTORS AND STAKEHOLDERS

  • The European Commission (EC) is the executive body of the EU, responsible for proposing and implementing legislation. The Commission issues several communications and works together with international organisations, such as the Global Internet Forum to Counter Terrorism, to develop guidelines and frameworks for internet safety.

  • The European Parliament (EP) is one of the seven institutions of the EU and one of its legislative bodies. Together with the Council of the EU, it adopts legislation proposed by the European Commission. The European Parliament also works with EU agencies, specifically Europol, to establish a safer environment online.

  • The European Union Agency for Fundamental Rights (FRA) is an independent body of the EU that focuses on fundamental rights and the values of freedom and tolerance, and most importantly, the right to privacy and the protection of personal data.

  • Member States are countries that are a part of the EU, signatories to the founding treaties of the union and thereby share in the privileges and obligations of membership.

  • Private Sector is the component of the national economy that is not directly governed by the state.

  • The European Data Protection Board (EDPB) is an EU body with legal personality that ensures the General Data Protection Regulation (GDPR) is applied consistently and encourages cooperation between EU data protection authorities.

MEASURES IN PLACE

The proposed EU Artificial Intelligence Act aims to guarantee that artificial intelligence (AI) systems used in the Union are secure, adhere to existing fundamental rights law, and uphold Union values. With the proposal for an Artificial Intelligence Act (AIA), the European Union institutions have taken a globally significant step. Insofar as AI systems are increasingly used in all areas of public life, it is vital that the AIA addresses the structural, societal, political and economic impacts of the use of AI, is future-proof, and prioritises the protection of fundamental rights and democratic values.

In order to prevent the viral dissemination of illegal hate speech on internet platforms, the IT sector is collaborating with the European Commission and EU Member States to minimise hate speech and cyberbullying online. In May 2016, the European Commission and four major IT firms, Facebook, Microsoft, Twitter, and YouTube, released a "Code of conduct on countering illegal hate speech online" in response to the rise of racist and xenophobic hate speech online. A non-governmental organisation called the Trust Project uses eight fundamental indicators to evaluate the validity of news reports and fact-check their accuracy. Major platforms such as Google and Twitter will integrate the framework to stop the spread of extremism and hate speech fuelled by false information.


KEY CHALLENGES

Limitations of Freedom of Speech

With the spread of new social media platforms such as TikTok, the 'For You Page' recommends videos to every user based on their likes. This use of machine learning leads to more explicit and radical content being shown, based on the user's previous interactions in the app. Media that includes hate speech and misinformation tends to attract a higher rate of interactions across the platform. Furthermore, Facebook has stated that the mechanics of its platforms are not neutral. This suggests that, in optimising for engagement, algorithms have learned that hate and misinformation are instrumental in driving app activity.
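
As a rough illustration of the dynamic described above, the Python sketch below ranks a feed purely by predicted engagement. The weights and sample videos are assumptions invented for this example, not TikTok's or Facebook's actual formula; they only show why content that provokes strong reactions tends to rise to the top.

```python
# Minimal sketch of engagement-based feed ranking (hypothetical weights and data).

videos = [
    {"title": "Cooking tutorial",   "likes": 120, "comments": 10,  "shares": 4},
    {"title": "Inflammatory rant",  "likes": 300, "comments": 450, "shares": 200},
    {"title": "Local news summary", "likes": 80,  "comments": 15,  "shares": 6},
]

def engagement_score(video: dict) -> int:
    # Comments and shares are weighted more heavily than likes, mirroring
    # how strong reactions drive further distribution.
    return video["likes"] + 3 * video["comments"] + 5 * video["shares"]

# The feed simply sorts by predicted engagement, so the most provocative
# video, which draws the most reactions, appears first.
feed = sorted(videos, key=engagement_score, reverse=True)
for video in feed:
    print(video["title"], engagement_score(video))
```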

At the same time, protection from hate speech through the Trust Project or similar frameworks can also lead to a detrimental suppression of freedom of expression in the EU, since it relies on monitoring and filtering information.

Artificial Intelligence used in identification

Artificial Intelligence is widely used in surveillance cameras, and it has powered the use of biometric technologies, including facial recognition applications, which are increasingly used for verification, identification and categorisation purposes. While facial recognition systems have real benefits for public safety and security, their pervasiveness, intrusiveness, and error-prone nature raise a number of fundamental rights issues, such as discrimination against certain groups of people and violations of the right to privacy and data protection.

Offensive content and algorithmic bias

On the other hand, removing 'algorithmic bias' can lead to a failure to act against offensive content, in particular offensive content that is illegal. Depending on the particular context, other fundamental rights can also be affected by over-blocking content, including the right to freedom of thought, conscience and religion. On average, IT companies removed 70% of all the illegal hate speech notified to them by the NGOs and public bodies participating in the evaluation. This rate has steadily increased from 28% in the first monitoring round in 2016 and 59% in the second monitoring exercise in May 2017.
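
The trade-off between under-blocking (illegal content left online) and over-blocking (legal content removed) can be pictured as a threshold choice on a classifier's confidence score. The scores and labels in the sketch below are invented for illustration: lowering the threshold catches more illegal posts but removes more legal speech.

```python
# Minimal sketch of the under-/over-blocking trade-off (hypothetical data).
# Each item: (classifier confidence that the post is illegal hate speech, true label).
posts = [
    (0.95, "illegal"), (0.80, "illegal"), (0.55, "illegal"),
    (0.60, "legal"),   (0.40, "legal"),   (0.10, "legal"),
]

def moderation_outcomes(threshold: float):
    # Posts scoring at or above the threshold are removed.
    under_blocking = sum(1 for score, label in posts
                         if label == "illegal" and score < threshold)
    over_blocking = sum(1 for score, label in posts
                        if label == "legal" and score >= threshold)
    return under_blocking, over_blocking

for threshold in (0.9, 0.7, 0.5):
    missed, wrongly_removed = moderation_outcomes(threshold)
    print(f"threshold={threshold}: illegal posts missed={missed}, "
          f"legal posts removed={wrongly_removed}")
```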


FOOD FOR THOUGHT

While hate speech keeps rising on social media platforms, how can European nations ensure that users are protected from hate speech and discrimination online in the face of growing awareness of both algorithmic and human confirmation bias? How can under-blocking caused by algorithmic bias be tackled? As the demand for AI to filter content on the internet increases, how can Machine Learning and Data Science be used to ensure optimal content for users across social media platforms? As we come across radical content pushed by far-right agendas, which frameworks should the EU endorse to prevent the hate speech and misinformation that can lead to extremism?


FURTHER ENGAGEMENT
