AI's promises and problems

It's almost impossible to write an article these days and ignore the rapid increase in what are called AI applications. GPT-4 is out, Midjourney 5 has been released, and more new AI applications seem to turn up every day.

- GPT-4 is OpenAI's latest ChatGPT iteration, so consider the comments of OpenAI's own CEO Sam Altman. In a recent television interview, he claimed to be "a little bit scared" of the power language models could wield over society. I'm not sure I agree fully with that sentiment. Yes, there are some risks, but education is a better approach than fear. We all need to understand that the latest versions of AI can generate speech, pictures and videos on just about any subject, and that these can be convincing fakes. Yes, disinformation campaigns and a plethora of spoof videos can be generated. You can also use the same technology to read out a book with your own face and voice.

- I played a bit with the new GPT-4. It's faster, and its greater depth across subject areas is immediately apparent. So, basically everything GPT-3 did has been enhanced (you can try it out online). I was disturbed by Altman's statement that "there will be other people who don't put some of the safety limits that we put on it". Who decides what the safety limits are? What might be completely safe for some could be blocked from access for others under those rules. There may be a good argument for some kind of age-based information filtering, but to date there is no reliable method of doing this. I'll revisit this in a few weeks when the commentariat has had a chance to really dig into the GPT-4 release. In the meantime, I think it's an exciting development.

- So, can you really detect whether something has been written by a large language model like GPT-4? The latest literature suggests it is getting more difficult. Recent studies show that simply rephrasing the output drops the rate at which text is detected as AI-generated from 97% to as low as 57%. (I wonder what would happen if you asked GPT-4 to rewrite its own output so it could not be detected as AI-generated?)

- Social-media platform Discord has jumped into the AI world, and the technology does provide some additional features. Its new chatbot, named Clyde, can generate text, images, memes, jokes and other material. Discord did this by combining ChatGPT and Stable Diffusion, and has hinted that voice and video processing will be added in the near future. This is of course the next logical step: combining different AI models to provide a wider range of services. Yet the incorporation of AI hasn't been without glitches. Discord initially altered its policies on user data collection to support the feature but, after criticism, reversed the change.

- Among other disturbing AI-related events is a recent attempt by the four largest book publishers to shut down the Internet Archive and seek damages. The Internet Archive functions as an online library, lending out digital copies of books. The reason I describe this as disturbing is the current trend among major publishers of editing books to remove or change content now deemed politically incorrect. If an author wants to do this, then he or she has that right. But if the author is no longer around to defend their material, then this is just plain wrong.

- Apple, meanwhile, has taken another hit, this time in Russia, where Vladimir Putin has told staff to get rid of their iPhones, or give them to their children. The same goes for Google Android phones. This is for security reasons, ie to get rid of American technology and replace it with Chinese or Russian alternatives. Russia has Aurora, a Linux-based smartphone operating system from the Open Mobile Platform. From the Russian perspective, this is probably a smart move.

- The free and open-source software (FOSS) initiative ostensibly pertains to software that anyone is allowed to use, copy and change in any way, with all source code openly shared. The group may need to change its name to PFOSS, for Partly FOSS, as two recent incidents blocking Russian developers indicate. The first was a refusal on the Linux kernel mailing list, the other a more general block on GitHub. No warnings were given, though to be fair there may be US sanction requirements at play. In general, sanctions against things like FOSS development are not a good idea. They don't stop improvements from being made; blocked developers can keep building tools that benefit the sanctioned countries, while those enforcing the sanctions are potentially deprived of the contributions.

James Hein is an IT professional with over 30 years' standing. You can contact him at
