New chatbot can do a lot, but can you trust it?

Over the New Year break, I dug a little deeper into artificial intelligence, especially how ChatGPT can be used and how it could affect society.

- ChatGPT looks like it will impact the way we do business, program and write. Students will be using it to write assignments. Coders are already asking it to write code for them, as the sketch below illustrates. Presenters are asking it to write their presentations on subjects they may not be fully familiar with. You can get a test account fairly easily at chat.openai.com.
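
- For coders who want to go further than the web page, here is a minimal sketch of asking the model to write code programmatically. The openai Python package, the model name and the prompt are my illustrative assumptions; the column itself only describes the chat.openai.com interface.

```python
# A minimal sketch of querying an OpenAI chat model from code.
# Assumes the openai Python package (pip install openai) and an API key
# in the OPENAI_API_KEY environment variable; the model name and the
# prompt below are illustrative, not taken from the column.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]


def ask(prompt: str) -> str:
    """Send one question and return the model's text reply."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # illustrative; any available chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # The kind of request coders are already making:
    print(ask("Write a C# method that reverses a string."))
```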

- I started by asking it about quantum gravity, then string theory and then a recipe for lamingtons. Next, I asked it to "generate a new poem based on the famous Raven poem" and away it went, giving what looked to me like a reasonable response, but I'm no poetry expert. Then I asked for a C# code sample for an audio VST3 wrapper, and it generated one. Unsurprisingly, it had no proof for the existence of God, and it equivocated on that much as it did on the question of life on other planets. The current version has a knowledge cut-off of 2021, so it would not theorise on the outcome of the 2024 US election. I also asked about when human life started, but you can try that one for yourself.

- From these few examples, I was able to make some observations. When it comes to fact-based information, it does much better than the average Google response. On the more difficult life questions, it tends to play both sides of the fence, which in itself is not necessarily a bad thing. I did like the response to my question on the scientific method: "the goal of the scientific method is to arrive at an evidence-based understanding of a phenomenon that can be tested and refined over time". That line came after a step on "sharing the results of the investigation with others", something that is often lacking in modern studies.

- The system can be used by an undergraduate to write a full paper if the right questions are asked. I tried "write a 1,000-word paper on the theory of leadership". It came back with a paragraph each on four leadership types, plus an introduction and summary. With a bit of extra research, formatting and editing, this could be handed in as a first-year university response to an assignment. I asked for references and it generated five, both as places to get additional information and as citations to include in such an assignment. ChatGPT can reduce the time needed to prepare a paper and even expand a short piece into a larger document.

- The result is that someone could potentially appear more informed about a subject than they really are. Alternatively, it provides a way to gain knowledge faster, depending on how much additional effort is put in and whether the material is verified.

- Is it AI? Not in the sense that it has any awareness of what it is doing. It's a well-trained rules engine that puts together pieces of information in a logical manner, based on its training material and the rules it has been given. We also do that, but then we will potentially look at what we have done and decide if it has ethical implications, if it makes sense in a wider context, how it makes us feel, if we should just delete it, and so on. ChatGPT doesn't have any of this capability as far as I know.

- I did try prefixing the same questions with "With the context of a xxxx", and it generated different responses. In some cases, the length of the response changed depending on what I replaced xxxx with. I'm not sure if this indicates some kind of bias or just the depth of the training material it was drawing on, as I chose a controversial subject for my test. The sketch below shows how such an experiment could be repeated.
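
- Here is a hedged sketch of that experiment, reusing the hypothetical ask() helper from the earlier sample. The context phrases and the question are placeholders, since the column does not name the controversial subject it tested.

```python
# Re-running the framing experiment with the hypothetical ask() helper
# from the earlier sketch. The contexts and the question are placeholders;
# the column does not name the controversial subject it used.
question = "summarise the main arguments on this subject"
for context in ("With the context of a scientist",
                "With the context of a politician"):
    answer = ask(f"{context}, {question}")
    # The column noted that response length varied with the context used.
    print(f"{context!r}: {len(answer.split())} words")
    print(answer, "\n")
```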

- The bottom line is that ChatGPT is one alternative to Google and Wikipedia. It can also get things wrong, depending on context and how the question is phrased. Schools will need to start paying attention to AI-generated material, because people can be lazy and take the easiest solution. It will become harder to find out who actually knows and understands things unless people start asking questions to confirm knowledge, and the continued hybrid work environment makes that more difficult.

- Does it really matter? If the job gets done, it works and it is reliable, does it really matter how you get there? Some would argue both sides of that question, so it will come down to who the task is being done for and whether they are happy with the results. I don't think there is objectively a right or wrong answer here. I do expect the capabilities of these engines to improve over time, and while I'm not sure where we'll be in 10 years, this approach will still not produce human-similar AI. I do wonder what impact it will have on the general ability of long-term users to engage in critical thinking, another version of what I call the first Google response syndrome.


James Hein is an IT professional with over 30 years' standing. You can contact him at jclhein@gmail.com.
