There's a new kid in text town. Its name is ChatGPT.
The child prodigy spits out text about everything under the sun faster than we can say "source". It writes quickly, confidently and articulately, in a whole range of languages. Sure, I've heard it's not flawless, but who is? Is ChatGPT a source of concern? As a copywriter, have I met my machine overlord?
Where does ChatGPT not get information from?
Autumn's big talking point for everyone with even the slightest interest in technology is the advanced chatbot ChatGPT. It is worth noting what the abbreviation in the name stands for: Generative Pre-trained Transformer. In other words, it generates text based on information it has been trained on. At the end of 2022, ChatGPT can generate surprisingly well-formulated texts in most genres in a multitude of languages.
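For the curious: "generative" simply means that the model produces text one word (or word fragment) at a time, each choice drawn from probabilities learned during training. A minimal sketch in Python - with a toy, hand-made probability table standing in for the billions of patterns a real model learns - illustrates the principle:

```python
import random

# Toy next-word probabilities, invented purely for illustration.
# A real model like ChatGPT learns billions of such patterns from text.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "cod": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.6, "sat": 0.4},
    "cod": {"swam": 1.0},
}

def generate(start: str, max_words: int = 5) -> str:
    words = [start]
    while len(words) < max_words:
        options = next_word_probs.get(words[-1])
        if not options:
            break
        # Pick the next word according to learned probabilities -
        # not according to whether the sentence is *true*.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cod swam" - fluent, but never fact-checked
```

The point is that the probabilities encode fluency, not truth - which is exactly why the question of sources matters.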
It can be used as a search engine and an information bank for all imaginable and unimaginable subjects. Bear in mind, though, that the information was available no later than October 2021 - and that someone chose to feed the machine with it. For the time being, it has limited knowledge of what has happened in the last 14 months. As the example shows, it has very little to tell me about the football World Cup in Qatar, even after Messi has lifted the trophy. And even with a generous portion of goodwill, its description of the relationship between Russia and Ukraine can be called flawed at best.
ChatGPT obtains information from a range of sources that, as a copywriter, I can only dream of reaching. It uses vast amounts of data, articles, books and websites to generate unique, reasonably well-formulated texts within seconds. But where does it get its information from? And where does it not get information from? Who decides what it will learn?
Let's be honest: there is great entertainment in asking ChatGPT to write a Bible verse about how to remove a slice of bread with peanut butter from the VCR. Or in asking it to problematize itself. And it is not only fun but also impressive that it is possible to train a machine to write such good, unique texts about almost anything. But perhaps Neil Postman had some good points when he problematized the relationship between form and content in message delivery in his book "Amusing Ourselves to Death" (1985)?
It's about seeing content in context.
The tech community cheers and teachers despair. Journalists and writers wonder whether it is time to hang up the keyboard and look around for a new career. And I would like to ask a couple of questions.
As a technology enthusiast, I am delighted and fascinated by such a powerful tool. I can hardly wait to see what comes next and what possibilities the development opens up.
Nevertheless. I have spent years in the reading rooms of Norwegian universities. A significant share of that time went into writing correct source references. Into double-checking facts, reflecting on what I read, finding conflicting arguments and weighing claims against each other. With references. I have been drilled in source criticism and encouraged to think for myself. Admittedly, I was lazy enough to fail my first university exam - with good reason.
That reason was a lack of source references.
So how come I can't find the sources for ChatGPT? Where does this seemingly intelligent machine get its knowledge from? How can I know that the information is correct?
And why do I think these questions are so important?
I have no reason to believe that the people behind ChatGPT are not doing their best to present objective truths and verified facts. But after what I would call a moderately thorough search, I cannot find any source lists. My searches turn up general phrases like "based on huge amounts of information" and the fact that they used "Reinforcement Learning from Human Feedback" (RLHF) to train it. OpenAI, the company behind the technology, lists the service's obvious limitations in its own article. There you can read, among other things, that texts that appear plausible may contain errors. But how many of the chatbot's more than 1 million users (so far!) will accept all the information the machine presents as true?
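For those wondering what "Reinforcement Learning from Human Feedback" means in practice: human reviewers rank the model's candidate answers, and those rankings become a reward signal that nudges the model toward answers people prefer. A heavily simplified sketch - all names and scores here are my own stand-ins, not OpenAI's code - looks something like this:

```python
# Heavily simplified illustration of the RLHF idea. The candidates,
# ranking and reward function are all invented stand-ins.
candidates = [
    "ChatGPT is a language model trained on large amounts of text.",
    "ChatGPT knows everything and is always right.",
]

# In real RLHF, human labelers rank the model's outputs;
# here a hard-coded preference plays that role.
human_ranking = {candidates[0]: 1.0, candidates[1]: 0.2}

def reward(answer: str) -> float:
    """Stand-in reward model: returns the human preference score."""
    return human_ranking.get(answer, 0.0)

# The model is then tuned so that high-reward answers become more likely.
print(max(candidates, key=reward))
```

Notice what the reward actually is: what human reviewers prefer. That is not the same thing as a verified, cited fact.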
There is reason to reflect on whether ChatGPT is objective - whether it presents the whole truth. After all, it only has access to the information someone has chosen to feed it. And who are these "someones"? How can I know what they - now, or at a later point - want to achieve?
Yes, I know that OpenAI is behind it. And I know who is behind OpenAI. But hasn't our old friend Elon Musk given us reason to question his power and motives after the recent months of Twitter turbulence? Can Elon Musk, as one of those responsible for the well-formulated chatbot's diet, control what we swallow as truths?
Fake news and disinformation have become a major problem in public debate worldwide. Obvious falsehoods spread like wildfire and have serious consequences for elections and political governance in a number of countries we previously thought of as enlightened, well-functioning democracies. Powerful media moguls with dubious motives control the information voters are served, and bots of unknown origin interfere in the open debate on social platforms. The danger of being trapped in echo chambers of subjective exchange is real.
In Norway, this has led to the creation of Faktisk.no. The non-profit organization is owned by VG, Dagbladet, NRK, TV 2, Polaris Media and Amedia. Its purpose is "to be an open laboratory for source awareness and critical media use in Norway". Faktisk.no is part of the International Fact-Checking Network (IFCN), which means that their work also has an oversight body.
When public debate is no longer governed by edited media alone, it feels reassuring to have such a body in Norway. That does not mean, however, that we should stop thinking for ourselves or stop paying attention to what information we base our opinions on.
ChatGPT is, as mentioned, open about its limitations. At the same time, it is extremely good at writing and convincing in its argumentation. That makes it difficult for us (with our obvious human limitations) to uncover errors in topics outside our own expertise. And most topics are surely ones we know little about before we seek out information?
Aftenposten illustrates the problem well with its attempt to get the chatbot to write a good, informative text about the cod (article in Norwegian). It is probably limited how much damage ChatGPT can do by failing to tell us which family the cod belongs to, but it shows how easily it can deceive us - and that we should not swallow the text about the cod without both tasting and chewing first. In the article, Jon Atle Gulla, professor of artificial intelligence at NTNU, emphasizes the following: "It is important to note that this is not a knowledge model, but a language model. Therefore, it must be used with caution."
The chatbot thus derives its knowledge from large amounts of data and learns to use it from AI trainers, but it does not tell us who the sources are. It is built on artificial intelligence, but what is intelligence? Doesn't that include the ability to reflect?
Intelligence is used in psychology as a collective name for people's abilities to perceive, think and solve problems, and especially in those areas where individual differences are found.
Perception, thinking and problem solving, that is. This is where we humans have something to contribute. ChatGPT is based on artificial intelligence - but what is that?
Artificial intelligence is information technology that adjusts its own activity and therefore appears to be intelligent.
My intelligence (which clearly has its flaws) urges me to note here that the technology only appears to be intelligent. It adjusts its own activity, but the information available to it is limited to what someone teaches it. If putting information together in a new way counts as thinking for yourself, can ChatGPT be used as a source?
A source must have something new to offer. So it has to think for itself. That requires intelligence and the ability to reflect: putting information together in new ways, using what others have thought before to see connections no one has previously discovered. According to academic practice, one should always refer to whoever said something first - this is how research builds on research. Large resources are put into shared knowledge databases with verified research results. In Norway, such services are administered by Sikt - the knowledge sector's service provider - and ideally the work leads to research institutions all over the world collaborating to establish new knowledge.
Content marketing is largely about creating trust. About establishing yourself as a solid, professional player that potential and existing customers can rely on. A serious partner they turn to for expertise in a given field.
In order to succeed in establishing such a position in the market, shortcuts should be avoided. This applies to the use of facts and source references in all published content. The Marketing Act is crystal clear:
Claims in marketing about factual matters, including about a product's or service's properties or effect, must be documentable. The documentation must be in the advertiser's possession when the marketing takes place.
Influencers fall under this law. They are obliged to label sponsored posts on social media (which they do with varying degrees of clarity). Frontkom has developed a service for the Danish Consumer Authority for labelling retouched and manipulated still images and videos in traditional and social media.
In the myriad of blog posts of varying quality, it is not always easy to know what is valuable content or where factual information comes from. Those of us who work with content marketing should take pride in raising the quality by learning from academia's rules about references and citations. And then I think that perhaps the source shouldn't be ChatGPT.
Because what happens if I ask the chatbot if I can quote it in a blog post?
Well, well. So it is happy to serve as a source, but it is not too modest to claim intellectual property rights. This feels a bit like going around in circles. I think I need to reread the paragraph I just wrote about intelligence.
So what can the texts ChatGPT generates be used for? What value do they have and can they be used in content marketing? Is there reason to be skeptical?
ChatGPT compiles information that is already available and assembles it into a new text. In a new way. A text that can perhaps be trusted. But one thing is certain: it is a wellspring of ideas - ideas that have value if they are used correctly.
In many contexts the tool is more than capable of doing the job for you, but of course that depends on what your job is. In my case, as a copywriter, I find it both fun and educational to have a new colleague - someone who bubbles over with ideas, works extremely fast and never says no. The history of technological development is full of moral panics, most of which look naive and ignorant in retrospect. I refuse to swallow the technology's text about the cod without chewing first, but I welcome the bot to the team with its strengths. And I will keep using mine.