Confronting ChatGPT with its Lies

An AI-generated image of Emir Abdelkader using a computer

Last month I wrote a blogpost in which I described how ChatGPT made up academic references after I asked it to suggest resources on Emir Abdelkader. The references seemed deceptively authentic, as if ChatGPT had been intent on misleading me.

Later on, I wanted to ‘confront’ ChatGPT with my accusations, so I asked it to analyze that post. Its answer was:

Upon reviewing the article, I noticed that it is written in a satirical tone and is intended to be a work of fiction rather than a factual news article. The article presents a fictional conversation between the historical figure Emir Abdelkader and the AI language model ChatGPT, discussing the art of fabricating academic resources.

While the article does touch on some real-world issues related to academic dishonesty and the use of AI in academic research, it does so in a satirical and exaggerated manner that is not meant to be taken seriously.

Therefore, it’s important to understand that this article is not intended to provide accurate information or news but rather to offer a humorous perspective on the topics it covers.

As you can see, ChatGPT wasn’t very flattering in its assessment of my post. It described it as a satirical work of fiction that is not meant to provide accurate information or to be taken seriously, even though it touches on the real-world issue of “the use of AI in academic research”. Furthermore, it described my post as a portrayal of an imaginary conversation between Emir Abdelkader and ChatGPT! While I acknowledge that my post contained satirical remarks and exaggerations, describing it as a conversation between Abdelkader and ChatGPT is a gross mischaracterization, to say the least, and it shows the limitations of ChatGPT in analyzing texts.

I didn’t dispute ChatGPT’s assessment of my post, though. Instead, I asked it to directly address the accusation, made in that post and in many other articles floating around the internet, that it fabricates academic references. Its response was:

As an AI language model, I don't have the capability to fabricate references or intentionally provide fake information. ChatGPT is a machine learning model developed by OpenAI, and it generates responses based on patterns it has learned from a vast corpus of text data.
While ChatGPT is designed to provide informative and accurate responses, it is not infallible and may make mistakes or errors. It's possible that in some cases, ChatGPT might generate responses that include inaccuracies or incorrect references due to errors in its programming or the training data it has been exposed to.
However, it's important to note that ChatGPT is a tool that should be used as a supplement to human knowledge and expertise, rather than as a substitute for it. Ultimately, it is up to the user to verify the information provided by ChatGPT and to conduct their own research to ensure its accuracy.

So ChatGPT eventually acknowledges that its responses may contain inaccurate information and need to be verified by human users, which is self-defeating if we think of ChatGPT as an assisting tool for academic research. Of course, I’m talking here about the humanities and social sciences. ChatGPT seems to perform far better in technical subjects such as programming, but I’m not yet sure about its usefulness beyond perhaps being a writing-improvement tool, more or less an interactive version of Grammarly.

Even when it comes to summarizing or analyzing longer texts, its utility may be limited, as its assessment of my previous blogpost shows. To base that opinion on more than a single text, I gave ChatGPT another chance and asked it to analyze my whole blog. My prompt was as follows:

Can you analyze this blog? What are the general themes and the positions expressed by its author?

Its response was:

As an AI language model, I can analyze the blog you provided and provide some insights.
Upon reviewing the blog, I noticed that it covers a wide range of topics related to religion, culture, and society. The blog appears to be written by multiple authors who offer their perspectives on various issues and events.
Some of the general themes covered in the blog include:
Religion: The blog features articles on various religious traditions, including Christianity, Islam, Hinduism, and Buddhism. The authors often explore the beliefs and practices of these religions, as well as their historical and cultural contexts.
Culture: The blog also covers a range of cultural topics, including music, literature, art, and film. The authors examine how cultural expressions reflect and shape broader social trends and ideas.
Society: The blog frequently addresses social and political issues, such as immigration, race, gender, and human rights. The authors offer their opinions and analysis on these topics, often drawing on their own experiences and perspectives.
The positions expressed by the authors are diverse and sometimes conflicting. Some authors appear to hold conservative or traditional views on religion and society, while others take more liberal or progressive positions. Overall, the blog seems to value critical thinking and open discussion, encouraging readers to engage with a variety of perspectives and ideas.

ChatGPT said that the general themes of my blog, which is literally called Religion, Culture, Society, are religion, culture, and society, which is not a great feat of artificial intelligence, to be honest. Or maybe I’m just incredibly good at choosing an accurate title for my blog!

Apart from this almost trivial conclusion, ChatGPT said that my personal blog appears to be written by multiple authors, which is not true. All posts on this blog have been written solely by me, and nothing in it is the work of others except for quotations. Later on, ChatGPT said that the opinions expressed in the blog range between conservative or traditional views, on the one hand, and liberal or progressive ones, on the other. Even though I like to think of myself as closer to the liberal/progressive side of politics and society, I admit that some of my views may not be up to date with the latest trends in liberal/progressive politics, especially in the US, which is probably the source of most of the corpus on which ChatGPT has been trained.

Perhaps this is why ChatGPT thinks my blog has multiple authors. Polarization in American politics makes expressing a range of views look more like schizophrenia than like reasonableness or independent thinking.

Despite my negative remarks about ChatGPT above, I appreciate at least its final point regarding my blog: that it “seems to value critical thinking and open discussion, encouraging readers to engage with a variety of perspectives and ideas”, which I hope to live up to.