Confronting ChatGPT with its Lies

An AI-generated image of Emir Abdelkader using a computer (created using canva.com)

Last month I wrote a blogpost in which I described how ChatGPT made up academic references after I asked it to suggest resources on Emir Abdelkader. The references seemed deceptively authentic, as if ChatGPT had been intent on misleading me.

Later on, I wanted to ‘confront’ ChatGPT with my accusations, so I asked it to analyze that post. Its answer was:

Upon reviewing the article, I noticed that it is written in a satirical tone and is intended to be a work of fiction rather than a factual news article. The article presents a fictional conversation between the historical figure Emir Abdelkader and the AI language model ChatGPT, discussing the art of fabricating academic resources.

While the article does touch on some real-world issues related to academic dishonesty and the use of AI in academic research, it does so in a satirical and exaggerated manner that is not meant to be taken seriously.

Therefore, it’s important to understand that this article is not intended to provide accurate information or news but rather to offer a humorous perspective on the topics it covers.

As you can see, ChatGPT wasn’t very flattering in its assessment of my post. It described it as a satirical work of fiction that is not meant to provide accurate information or to be taken seriously, even though it touches on the real-world issue of “the use of AI in academic research”. Furthermore, it described my post as a portrayal of an imaginary conversation between Emir Abdelkader and ChatGPT! While I acknowledge that my post contained satirical remarks and exaggerations, to describe it as a conversation between Abdelkader and ChatGPT is a gross mischaracterization, to say the least, and it shows ChatGPT’s limitations in analyzing texts.

I didn’t dispute ChatGPT’s assessment of my post, though. Instead, I asked it to directly address the accusation, made in that post and in many other articles floating around the internet, that it fabricates academic references. Its response was:

As an AI language model, I don't have the capability to fabricate references or intentionally provide fake information. ChatGPT is a machine learning model developed by OpenAI, and it generates responses based on patterns it has learned from a vast corpus of text data.
While ChatGPT is designed to provide informative and accurate responses, it is not infallible and may make mistakes or errors. It's possible that in some cases, ChatGPT might generate responses that include inaccuracies or incorrect references due to errors in its programming or the training data it has been exposed to.
However, it's important to note that ChatGPT is a tool that should be used as a supplement to human knowledge and expertise, rather than as a substitute for it. Ultimately, it is up to the user to verify the information provided by ChatGPT and to conduct their own research to ensure its accuracy.

So ChatGPT eventually acknowledges that its responses may contain inaccurate information and that they need to be verified by human users, which is self-defeating if we think of ChatGPT as an assisting tool for academic research. Of course, I’m talking here about the humanities and social sciences. It seems that ChatGPT performs far better in technical subjects such as programming, but I’m not sure yet about its usefulness beyond perhaps serving as a writing-improvement tool, more or less an interactive version of Grammarly.

Even when it comes to summarizing or analyzing longer texts, its utility could be limited, as its assessment of my previous blogpost shows. So that this opinion would rest on more than a single text, I gave ChatGPT another chance and asked it to analyze my whole blog. My prompt was as follows:

Can you analyze this blog? What are the general themes and the positions expressed by its author https://religionculturesociety.com/

Its response was:

As an AI language model, I can analyze the blog you provided and provide some insights.
Upon reviewing the blog, I noticed that it covers a wide range of topics related to religion, culture, and society. The blog appears to be written by multiple authors who offer their perspectives on various issues and events.
Some of the general themes covered in the blog include:
Religion: The blog features articles on various religious traditions, including Christianity, Islam, Hinduism, and Buddhism. The authors often explore the beliefs and practices of these religions, as well as their historical and cultural contexts.
Culture: The blog also covers a range of cultural topics, including music, literature, art, and film. The authors examine how cultural expressions reflect and shape broader social trends and ideas.
Society: The blog frequently addresses social and political issues, such as immigration, race, gender, and human rights. The authors offer their opinions and analysis on these topics, often drawing on their own experiences and perspectives.
The positions expressed by the authors are diverse and sometimes conflicting. Some authors appear to hold conservative or traditional views on religion and society, while others take more liberal or progressive positions. Overall, the blog seems to value critical thinking and open discussion, encouraging readers to engage with a variety of perspectives and ideas.

ChatGPT said that the general themes of my blog, which is literally called Religion, Culture, Society, are religion, culture, and society, which is not a great feat of artificial intelligence to be honest. Or maybe I’m just incredibly good at choosing an accurate title for my blog!

Apart from this almost trivial conclusion, ChatGPT said that my personal blog appears to be written by multiple authors, which is not true. All posts in this blog have been written solely by me, and nothing in it is the work of others except for quotations. ChatGPT also said that the opinions expressed in the blog range between conservative or traditional, on the one hand, and liberal or progressive, on the other. Even though I like to think of myself as closer to the liberal/progressive side of politics and society, I admit that some of my views may not be up to date with the latest trends in liberal/progressive politics, especially in the US, which is possibly the source of most of the corpus on which ChatGPT has been trained.

Perhaps this is the reason why ChatGPT thinks there are multiple authors behind my blog. Polarization in American politics makes the expression of a range of views look closer to schizophrenia than to reasonableness or independent thinking.

Despite my negative remarks above about ChatGPT, I appreciate at least its final point regarding my blog—that it “seems to value critical thinking and open discussion, encouraging readers to engage with a variety of perspectives and ideas”, which I hope to live up to.    

The Art of Fabricating Academic References: Emir Abdelkader Meets ChatGPT

  1. “Secularism and Religious Accommodation in Modern Algeria: The Case of Abd al-Qadir al-Jazairi” by Jeremy B. E. DeLong. Journal of North African Studies, vol. 21, no. 5, 2016, pp. 758-772.
  2. “The Return of Abd al-Qadir al-Jazairi to Algeria as a Secular National Symbol” by Noora Lori. Middle Eastern Studies, vol. 55, no. 2, 2019, pp. 207-222.
  3. “Conceptualizing Islam in Relation to the Secular: The Case of Abdelkader al-Jazairi” by Michaelle Browers. Islam and Christian–Muslim Relations, vol. 20, no. 2, 2009, pp. 129-142.
  4. “Revisiting the Religious and Secular in Algeria: Abdulhamid Ben Badis and Abdelkader al-Jaza’iri” by Amir Ahmadi. Journal of Islamic Studies, vol. 30, no. 3, 2019, pp. 355-377.
  5. “Islam and the Secular State: The Emir Abdel Qadir and the French Occupation of Algeria” by Richard M. Eaton. The Journal of Religious History, vol. 14, no. 3, 1988, pp. 308-322.

Have a look at the above list of references, especially if you are an academic in Middle Eastern studies or any other related field. Do you notice anything suspicious about them? Unless you are a specialist on Emir Abdelkader (also written as Abd al-Qadir al-Jaza’iri, a nineteenth-century Algerian anticolonial hero), they would all probably seem legit to you. Only they aren’t! These are all made-up references by ChatGPT, but admittedly, or perhaps annoyingly or even alarmingly, very well made-up.

A painting of Emir Abdelkader by Stanisław Chlebowski (1866) at the Condé Museum, Chantilly, France. Source: Wikimedia

After all the hype about ChatGPT, last week I finally decided to try it myself. Since I have recently joined a research project about Emir Abdelkader, I wanted to test whether it could be helpful for my research, so I asked it to suggest the five academic resources about Emir Abdelkader that contain the word ‘secular’ (or any of its derivatives) the most. Within seconds, or less, ChatGPT produced the above list. I said to myself: wow, these seem like wonderful suggestions! I copied and pasted them into a Word file, thanked ChatGPT for its assistance, and closed the chat.

A week later, I wanted to check these articles myself, so I went to Google to search for the first one, “Secularism and Religious Accommodation in Modern Algeria: The Case of Abd al-Qadir al-Jazairi” by Jeremy B. E. DeLong. To my surprise, I found nothing. Neither regular Google nor Google Scholar was able to locate the alleged article. Then I went to the website of the journal, the Journal of North African Studies, which is a real one, and I looked for volume 21, issue 5, but there was no mention of such an article or such an author.

The strange thing is that ChatGPT was careful to put the name of a real journal and give coherent volume, issue, year, and page numbers! There exists a volume 21 of this journal, published in the year 2016, with an issue numbered 5 on pages 735-925. So everything about this reference, except the name of the article itself and the author, is real or at least coherent. There is even an article about Abdelkader in that very issue but with a different title and a different author.

I thought maybe this was an isolated glitch, so I went to search for the second reference, “The Return of Abd al-Qadir al-Jazairi to Algeria as a Secular National Symbol” by Noora Lori, and, to my further surprise, I could not find it either! Again, everything about the second reference in terms of journal name, volume and issue numbers, year, pages, etc. is real or coherent. Even the title of the article makes perfect sense: the body of Emir Abdelkader was returned from Damascus to Algeria in the 1960s.

Indeed, this time ChatGPT went a step further by attaching to the made-up article the name of a real academic in a closely related field of study. Noora Lori is an Assistant Professor of International Relations at the Pardee School of Global Studies who has done research on “citizenship, migration, and statelessness”. Abdelkader could very well have been the subject of an article about “citizenship, migration, and statelessness”, but, as far as I could tell, she has not written anything about him. Not only that, but I also found a real article with a similar title, “Following the Leader: A History and Evolution of the Amir ‘Abd al-Qadir al-Jazairi as Symbol”, by an author with a similar name, Nora Achrati, published in 2007 in The Journal of North African Studies.

The same thing applies to the other three references in the list above. None of them exists, but all the elements of the citations are real or close to real. This seems less like making up random stuff than like a malicious attempt to deceive the user. As a layperson in artificial intelligence, I can’t think of any technical explanation for ChatGPT fabricating answers, so I cannot help but feel as if it had been intent on fooling me!

It’s not that there are no academic references dealing with Emir Abdelkader and secularity, such that ChatGPT had to invent some in order not to appear unhelpful. In fact, even when ChatGPT could have given a useful, legitimate academic resource, it chose (if one may use the word ‘choose’ with artificial intelligence) to fake one. In the same chat, I asked ChatGPT to recommend resources about the relationship between Khedive Ismail (the ruler of Egypt, 1863-1879) and Emir Abdelkader, which is, to be clear, a much narrower topic and a more complicated request than the earlier one. As far as I can remember (unfortunately, I didn’t archive the whole chat), it said that it could not suggest resources specifically about this topic but went on to say:

You may find more information about this conflict in books and journals on Egyptian history, particularly those discussing the Suez Canal and Khedive Ismail's reign. Some suggested titles are:
"Ismail Pasha: A Political Biography" by P.J. Vatikiotis
"The Emergence of the Middle East, 1914-1924" by Malcolm Yapp
"The Suez Canal Crisis and Its Consequences: A Reappraisal" edited by William Roger Louis and Roger Owen

The thing is that P.J. Vatikiotis (1928-1997) was a renowned Palestinian-born historian of modern Egypt who wrote the entry on Khedive Ismail in the Encyclopaedia of Islam (2nd edition). Yet ChatGPT ‘chose’ to fabricate a reference by Vatikiotis instead of citing an existing one by him about the same topic. As for the third title, William Roger Louis and Roger Owen did edit a book on the Suez Canal, but it is not the one above; theirs was titled Suez 1956: The Crisis and Its Consequences. There is, though, a similar-sounding article titled “Moscow and the Suez Crisis, 1956: A Reappraisal”, but it is by O. M. Smolansky.

So what to make of all of that? As stated above, as a layperson in AI, I cannot comment on the programming aspect of it, or on whether this is a malfunction or a built-in feature of ChatGPT that gears it toward ‘lying’. I didn’t search the internet for explanations, to be honest, but I saw a segment of a video in which a professor said he had spotted made-up references in a paper submitted by one of his students, so there might already be some explanations out there.

But I just wanted to share my experience, because it is really some nasty kind of lying, to put it in simple human (not bot) terms. In addition, I wonder what this would mean for the future of academic writing. Can we now trust any cited resources that we haven’t read or at least verified ourselves? Should there now be programs that detect made-up references, as there are for plagiarism? With plagiarism, at least the knowledge is real: you’re claiming ownership of something that does exist. But with ChatGPT you’re claiming there is something that doesn’t exist. This is some Matrix-level deception!
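To give a rough sense of what such a reference-checking program might look like, here is a minimal sketch of my own (not anything that exists in the post or that I know to be in use): it queries the public Crossref API, which indexes a large share of published journal articles, and flags a citation if no indexed work has a sufficiently similar title. The helper names and the similarity threshold are invented for the example, and Crossref’s coverage is far from complete, so a miss would only be a warning signal, not proof of fabrication.

```python
# Sketch of an automated check for possibly fabricated citations,
# using the public Crossref REST API (https://api.crossref.org/works).
# Function names and the 0.8 threshold are illustrative choices only.

from difflib import SequenceMatcher
import requests

def title_similarity(a: str, b: str) -> float:
    """Crude string similarity between two titles, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def citation_seems_real(title: str, threshold: float = 0.8) -> bool:
    """Return True if Crossref lists a work whose title closely matches."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": 5},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"].get("items", [])
    return any(
        title_similarity(title, " ".join(item.get("title", [""]))) >= threshold
        for item in items
    )

if __name__ == "__main__":
    # One of the fabricated references from the list above.
    fake = ("Secularism and Religious Accommodation in Modern Algeria: "
            "The Case of Abd al-Qadir al-Jazairi")
    print(citation_seems_real(fake))  # Likely False: no such article is indexed.
```

A tool built along these lines would, of course, still need a human to look over anything it flags, for exactly the reason discussed above: absence from one database does not prove an article does not exist.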

Facebook’s “periodic table of genders”


The periodic table of elements in its current form.
Source: Todd Helmenstine, chemistry.about.com

In 1869, the Russian chemist Dmitri Mendeleev published the first version of the modern periodic table of elements, in which he arranged the chemical elements by atomic weight (today’s table orders them by atomic number, i.e. the number of protons in their atoms). It is a fascinating tool for categorizing and organizing chemical elements. Elements that share similar physical characteristics and chemical behavior are put together in groups, periods, and blocks to make it easier for students of chemistry and scientists to work with them. As a student of the social sciences, I would like in this post to offer some reflections on the implications of organizing the elements in this particular way.

Continue reading

Oriental(ist) Metal Music


Cover of the most recent album, Wa Ma Khufiya Kana A’atham (2013), by the Bahraini oriental metal band Narjahanam

Metal music has gone global. This is a fact recognized by most fans and followers of this genre of popular music. Many documentary films, academic books, and media features have excitedly documented the globalization of metal music, tracing it to the farthest corners of the globe.

One of the regions in which metal music has thrived in the past decade is the Middle East. Following the explosion of folk metal—a variety of metal music in which folk tunes, instruments, and themes are fused with conventional metal music—in European metal scenes some 15 years ago, many Middle Eastern bands have attempted to create their own version of folk metal. The term “oriental metal” has hence entered metal nomenclature to refer to bands that incorporate “oriental” sounds with metal music.

Continue reading

Upholding Tolerance is not only the Majority’s Responsibility: The Debate over the Color of Santa Claus


The penguin Santa proposed by Aisha Harris (Illustration by Mark Stamaty)

Recently a huge debate has spread all over the internet and mass media concerning the identity and color of Santa Claus: is he essentially a white character? I’m sure many of you have come across this debate on Facebook, Twitter, or any other social network.

The debate started when Aisha Harris wrote a piece in which she recounts her childhood confusion over the color of Santa Claus. As an African American, she had a black Santa Claus at home, but outside she saw a white Santa everywhere. Her father’s answer that Santa could be of any color didn’t satisfy her. She felt insecure and ashamed, because she thought her black Santa wasn’t the “real thing.” In order to “spare millions of nonwhite kids” such feelings of insecurity and shame, she suggests that Santa be transformed into a penguin. She thinks that a penguin can appeal to all people regardless of color while preserving as much as possible of the characteristics of the traditional Santa (such as coming from a snowy, cold land).

Continue reading

The Mechanical German Language

Book cover of the "Awful German Language" by Mark Twain (source: www.ebook.de)

Book cover of the “Awful German Language” by Mark Twain (source: ebook.de)

In 1880, the famous American writer Mark Twain expressed his agonies in learning German in an essay titled The Awful German Language. The essay is a very enjoyable read, especially for German learners, as it brilliantly satirizes the German language and its perplexities. There are many memorable passages to quote. My favorite is this:

Some German words are so long that they have a perspective. Observe these examples:

Freundschaftsbezeigungen.
Dilettantenaufdringlichkeiten.
Stadtverordnetenversammlungen.

He adds:

Of course when one of these grand mountain ranges goes stretching across the printed page, it adorns and ennobles that literary landscape,—but at the same time it is a great distress to the new student, for it blocks up his way; he cannot crawl under it, or climb over it, or tunnel through it.

For more quotes by Mark Twain about German, see here.

Having been learning German for more than two and a half years (not very successfully, to my dismay), I think it is high time I wrote my own version of The Awful German Language. However, a blog post is too short to lay out “the several vices of this language,” to use one of Twain’s expressions. I will therefore limit this post to one single aspect of learning German: why German may sound very mechanical to those who attempt to learn it.

Continue reading

Genius in Simplicity


The best assault rifles in the world head to head (source: factpile.com)

The internet is filled with endless, unresolved, binary debates such as blonde vs. brunette, Pele vs. Maradona, Coca-Cola vs. Pepsi, lion vs. tiger, Mercedes-Benz vs. BMW, etc. Fans argue hotly against each other, sometimes fanatically, with each side glorifying its “idol” and listing its indisputable merits while refuting the opposite side’s arguments and mocking its self-evident “stupidity.” Sometimes during my long journeys of internet surfing, I get drawn into these debates, not necessarily because I’m interested in the topic itself, but just for the fun of watching people tear each other apart over it. Recently I became interested in the ultimate quarrel over the best assault rifle in the world: is it the Russian/Soviet AK-47 (the Kalashnikov) or its American rival, the M-16?

I’m not by any means a gun enthusiast, and I don’t claim any special expertise in this area. I did actually once shoot an AK-47 during my military training in school (in Syria, high school students used to receive lousy military training, though it was cancelled later; now, tragically, many young Syrians do it for real!). I would like, however, to show that the contrast between the AK-47 and the M-16 is not merely a matter of technical features and performance, but one of philosophy, namely the philosophy of design.

Continue reading