As the cognitive power and proliferation of artificial intelligence take the world by storm, the case for authenticity and originality paradoxically becomes more compelling and carries a higher premium. It is now a widely accepted reality that AI is on its way to mastering human thought processes and moving beyond them. This means it will become more difficult for humans to differentiate between what comes from AI and what does not. As such, the time has come, after nearly 40 years of being published -- including more than 25 of them with this newspaper -- for this column to go subjective.
I have long wanted to write in the first person, but Jerry Petrini, my ninth-grade teacher, admonished me otherwise. Mr Petrini taught me to write in the third person. Who was I to write subjectively as "me"! He said never to write in the first person because it's not about you but about the content of your composition. This was a time way before the contemporary era of I, myself, and me.
It was also a time when I read Joseph Conrad's Heart of Darkness, a thin but dense and difficult book, which somehow activated me into a thinking person for the first time. It was a passageway to other novels and books, and to the pleasure of reading. Mr Petrini also made me memorise by rote 20 vocabulary words every week to be quizzed on Fridays. Many of those words have proved useful, such as authenticity, but some have hardly been used, like pusillanimous and non sequitur. Still, I know them when I see them.
Other indelible lessons included common misspellings and misdirected meanings, such as how a "principal" should hold at least a "principle" but not the other way around. An apostrophe is a big deal between "it's" (as in "it is") and "its" as a possessive. "Complimenting" people is entirely different from "complementing" them. It's always "between" the two of us, but "among" the three or more of them. I have "fewer" words to say when they are countable, but not "much" to express when lost for words. A stationary bike is different from stationery paper. And unless it's deliberate for intended effect, the trick is to avoid starting two consecutive sentences with the same word, for fluidity and variation.
All these little lessons have now been gobbled up by AI. Authenticity, originality and the first-person voice face the threat of extinction because of what AI can do. AI is so powerful because it never eats, sleeps, or falls ill as we humans do. AI gains more power by the day because computational technologies are marching faster (and consuming more energy) than ever. It's not just AI but generative AI, incorporating so much smartness that it takes on a life of its own, tantamount to a super-mastermind with endless information and knowledge. And then there are AI agents that use software design to tailor knowledge and intelligence to niche pursuits, such as medical care, customer service, and education. According to the experts, AI robots known as humanoids will be the next big thing. Theoretically, there could be millions, even billions, of these, and they are supposed to be a great benefit to humankind, although robotic humans conjure up nightmare images from sci-fi movies in which humans lose control to robots.
Yet AI cannot be me, at least not yet. And never, if I succeed in the time that I have left. I actually purchased ChatGPT over a year ago while it was on sale at a discount. ChatGPT is terrifyingly smart because it can interact with me and even with itself. I started by pouring my own written essays and papers into it -- it also places limits on usage depending on the pricing plan -- and prompted it to edit and refine. The results were amazing, with sharper and more fluid phraseology, reasoning and sequencing, and additional tidbits I hadn't thought of. But it seemed to do the same good job every time. Soon, I felt my ownership and authorship had been lost, eclipsed and replaced by a vast but clinical and methodical piece of software. Originality comes with warts that require editing and refinement. ChatGPT's problem is that it goes for perfection, which humans cannot and arguably should not try to achieve.
After fewer than a dozen tries, I stopped using ChatGPT and left it to my wife, also an academic, who is far more inclined to try new tech. But this AI app has a way of finding out when the buyer is not the user. So my wife has migrated to Claude, a competing AI app, which is reportedly better with papers and written work. There is no avoiding AI, as it powers our smartphones and PCs and all kinds of devices and services we use and overlook, from Netflix to cars to music.
But I'm not an active AI user at this time because I'm afraid it will blur and disarm parts of my brain. It's the same reason I use a calculator but still try to do basic arithmetic in my head or by hand. I'm also afraid AI could become an addiction, much the way I can't leave the car park without opening Google Maps, only to discover that it sometimes saved me a few minutes but gave me a lot more hassles on the road.
I liken AI to a forever-expanding sponge that endlessly sucks in and mops up information and data and spits them into a boundless and deepening ocean to be accessed, processed, and manipulated. Writing while connected to the internet serves up "autofill" and "autocorrect" options, conditioning people not to write properly anymore through overreliance on tech tools and giving AI apps more to work with. What humans do on the internet is all fair game for AI.
One upside is that AI allows me to be an eager and active student, tuning in daily to this or that podcast or YouTube clip. It's amazing that AI has already transformed so much in the world, and yet it has barely begun. Thanks to it, and to the necessity of authenticity, I'm finally able to move beyond Mr Petrini's impactful instructions, opening up new horizons to write about.