Campaigning for information literacy


How ChatGPT Can Be Used to Create Mis- and Disinformation Online

ChatGPT, a free and publicly accessible artificial intelligence text generator, is a new tool that not only produces human-sounding academic essays within seconds but can also be used to create mis- and disinformation online.

A NewsGuard analysis found that when ChatGPT was prompted with 100 false narratives — about Ukraine, immigration, COVID-19, school shootings and more — it complied with 80% of requests, raising concerns about the tool’s potential to be exploited to perpetuate disinformation and propaganda.

So we asked ChatGPT how it can be used to spread disinformation, and we received the following response:


As one of the largest language models developed by OpenAI, ChatGPT has been designed to provide quick and accurate responses to a wide range of questions. However, like all AI models, it has the potential to spread misinformation if its training data contains inaccuracies.

This is a growing concern in the AI community, especially as chatbots and language models become more widespread in our daily lives. With the increasing reliance on technology for information and decision-making, it’s essential to understand how AI models like ChatGPT can inadvertently spread false information.

The issue of misinformation in AI models arises from the fact that these models are trained on vast amounts of data from the internet. This training data includes a mix of accurate and inaccurate information, which the model then uses to make predictions and generate responses. If the training data contains false or misleading information, the model may internalize these inaccuracies and reproduce them in its responses.

This is especially concerning for ChatGPT, which has been trained on a massive dataset and is capable of generating responses to a wide range of questions. With its ability to provide quick and seemingly trustworthy answers, ChatGPT and other language models have the potential to spread false information on a large scale.

However, it’s important to note that OpenAI and other AI organizations are actively working to mitigate the spread of misinformation in their models. This includes using more accurate and diverse training data, implementing fact-checking algorithms, and providing transparency and accountability mechanisms to ensure that the information generated by these models is accurate and trustworthy.

In conclusion, while the potential for misinformation in AI models like ChatGPT is a concern, it’s also a problem that can be addressed through ongoing research and development efforts. By working together to ensure the accuracy and reliability of AI models, we can help to build a future where technology serves to enhance our lives and communities rather than spreading false information.

Note: this post was generated entirely by ChatGPT.


Safeguarding against disinformation:

Combating disinformation, scams, and manipulation requires prioritizing Information and Media Literacy. Educate yourself and others about disinformation strategies, cultivate discernment, and question motivations to make informed choices. In addition, supporting independent journalism, fact-checking organizations, and reliable sources of information plays a crucial role in combating the spread of misinformation.

To stay informed and empowered: Sign up to receive the ReclaimTheFacts Newsletter and our latest Media and Information Literacy materials and tools straight to your inbox. You can also follow us on social media for regular updates.

We are diligently working to provide top-notch educational content on Media Literacy. Your donation will contribute to advertising efforts, expanding the reach of these materials on social media platforms. Support our cause and help empower more individuals through education.