If you have read the article "GA4 in 4 minutes" on the Frontkom blog, you have seen what a collaboration between a content producer and a robot can look like. Read on to see how you can tame and shape this robot's often unreliable responses.
There will probably come a time when semantic AI writes an article, Google's AI reads it and ranks it on the search page, and an AI purchasing robot reads the case and decides who the company will buy services from. But for now, while articles are still read by humans, the robots need a lot of help to be useful.
If you set ChatGPT to write an article on a certain topic, you get back something that looks reliable, but often isn't. If you use it incorrectly, it will invent "facts" and present them with the confidence of a five-year-old. If you use it correctly, it can save you a few hours of work in certain contexts.
What the robot is very good at is generating ideas and inspiration for further work. As a content producer, you are often hit by writer's block, and here ChatGPT can help get the creative juices flowing.
The article about GA4 is a good type of article for robot help: the robot is good at lists and instructions, which is why I used it there. I began with a general description of the article:
I got back a fairly generic outline, which was missing quite a lot. I therefore went through the article step by step to focus on each point separately:
It simply told me that the Measurement ID is in the settings, so it didn't immediately understand that I wanted a slightly more precise explanation. Further requests for elaboration didn't give much better results from a beginner's point of view either. For example, I asked:
Here it gave me outright incorrect instructions. GA4 has changed quite a bit since launch, so what it told me to do was probably based on the 2021 version of the service. On top of all the robot's other problems, this means you must always fact-check everything it writes.
This was a control question to check if the robot knew what I was actually looking at when I was inside GA4. It did so only partially.
I therefore had to go into Google Analytics and create a description myself. I was going in there anyway to get screenshots, but this was the first sign that the robot does not produce material worthy of publication without adjustment.
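For reference, the detail the robot kept glossing over: the Measurement ID is the "G-" string listed in GA4 under Admin > Data Streams (click your web stream), and it ends up in the standard gtag.js snippet roughly like this. This is a sketch with a placeholder ID, not a copy-paste install:

```javascript
// Standard gtag.js bootstrap. The async loader
// <script src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXXXXX"></script>
// must already be on the page for this to do anything.
window.dataLayer = window.dataLayer || [];
function gtag() { dataLayer.push(arguments); }
gtag('js', new Date());
// "G-XXXXXXXXXX" is a placeholder Measurement ID; yours is shown in GA4
// under Admin > Data Streams > (your web stream).
gtag('config', 'G-XXXXXXXXXX');
```

If you route everything through Google Tag Manager instead, the same Measurement ID goes into the GA4 Configuration tag rather than a hand-placed snippet.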
Testing changes in GTM is a very important part of the process, so a reminder to test in Preview mode had to be included in such an article. One pleasant surprise here was that the robot remembered that I had asked about the Measurement ID earlier, and included a paragraph about it without being specifically asked.
In the first draft, the robot only told me to "create a tag". Perhaps it would have explained the process if I had been more specific to begin with, stating what level of detail I expected, or adding something like "step by step". But for an "instructional article" I had honestly expected somewhat more specific instructions.
This part was actually fairly straightforward, since I didn't want to go into how to decide which goals to use in this article. Setting up goals can be quite a complicated process, depending on how complex the website is, how content is tagged in the source code, and so on. What I asked for here was just a bit of extra helpfulness, and the goals it proposed were quite sensible.
After cajoling the robot for a framework, I had a good starting point for an article. The robot writes in a fairly human-like way, but is unable to put any particular soul into the text. The reason is that it works from a sort of average of all the articles it has read on a topic, so the language comes out quite neutral. It also doesn't take the audience into account, and the text will in many cases need a number of adjustments and elaborations before it can be used on a professional blog like this one.
If you let ChatGPT run on its own, sooner or later it will produce a lot of text that appears reliable but is really just nonsense. You therefore have to give it very specific instructions, and constantly follow up to make sure it doesn't start fantasizing about things that aren't true.
All in all, it saves me a few hours in specific situations. On list articles like the one I asked it to write, it gives a half-baked answer that serves as a pretty good framework. Even when specifically asked to flesh out that framework, it can be quite ornery. A bit like a five-year-old. Occasionally I felt like I was negotiating with the genie in the lamp, who needs extremely detailed descriptions of what you want so that things don't go wrong. If you ask a genie based on ChatGPT to make you the strongest person in the world, it is as likely to make everyone else weaker as it is to make you able to lift a car. "Malicious compliance", I think it's called.
Do you need help producing content or creating a solid content strategy? Feel free to leave a message via our contact form, and we will respond quickly.