Do you own the hardware and software you purchased? Yes, no and possibly, so let's dive into an example. A man buys a second-hand Microsoft Surface from the Internet, one of a batch. He uses it for a few years until one day a message pops up on the screen advising that Mastercard has locked the device and that it should be returned to Mastercard. The man does some research and finds that Microsoft has embedded software in the firmware and BIOS that enables this to occur. It also turns out that this software can be found in other Microsoft and Apple devices, is very difficult to detect and requires a high skill level to remove, or you can simply install Linux.
- The software is made by Absolute, and you can read about their agreement with Microsoft here: absolute.com/partners/device-manufacturers/microsoft/. Absolute's Computrace sets up a bi-directional connection that allows "unprecedented visibility and control" and is embedded in the firmware during production. It can survive formatting, uninstalls and other attempts to remove it. It provides access to every such computer, including the apps and data stored on it. It can be used to locate, lock and delete data on devices no matter where they are.
- Think about it for a moment. Any Microsoft Surface, corporate or otherwise, potentially has a backdoor allowing full access to the device. In this case, it was probably a device that was part of a group sold off during an upgrade process. It happens all the time in the corporate world. Years later, some bean-counter somewhere triggered an end-of-life process that reached out and shut down every one of those devices. This was probably a mistake, but try calling Mastercard and getting it sorted out. This is the same process that Apple uses to brick your phone or remove software you are using. As mentioned above, installing, say, Ubuntu Linux on the device will deactivate the block, but this highlights how little control many have over the devices that they think they own.
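- For the curious, here is a minimal sketch of how you might check a Windows machine for signs of the agent. The process names below (rpcnet.exe, rpcnetp.exe) come from public security research on the Absolute/Computrace agent and are assumptions on my part; names can vary by version, and because the agent lives in firmware, an operating-system-level check like this can only catch the component it drops into Windows.

```python
# Names historically linked to the Absolute/Computrace Windows agent
# in public security research. Treat this list as an assumption, not
# an authoritative indicator set.
KNOWN_AGENT_NAMES = {"rpcnet.exe", "rpcnetp.exe", "rpcnetp.dll"}

def flag_agent_processes(process_names):
    """Return any process names that match a known agent name."""
    return sorted(
        name for name in process_names
        if name.lower() in KNOWN_AGENT_NAMES
    )

# Hypothetical process list for illustration; on a live Windows box
# you could gather real names via `tasklist` or the psutil library.
running = ["explorer.exe", "rpcnetp.exe", "chrome.exe"]
print(flag_agent_processes(running))  # -> ['rpcnetp.exe']
```

A clean result here proves little, since the firmware component can reinstall the agent, but a hit is a strong hint that the machine is being managed by someone else.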
- I have written about Large Language Models (LLMs) a few times now, but I've never really covered what they are models of. If you want an in-depth answer, then search for the new paper "Large Models of What? Mistaking Engineering Achievements for Human Linguistic Agency" from Abeba Birhane and Marek McGann. Some claim that LLMs are capable of understanding language. This is kind of true, but they still get caught out when things get complex. It is also, according to the paper, a misuse of the terms "understanding" and "language". As used, the terms tend to refer to interactions with machines rather than the direct interactions between humans, a different focus. This misunderstanding can attribute greater ability to LLMs than really exists in their current form.
- The AI industry wants people to think their products are better than they really are, because this means more money, and to that end they have convinced governments to support them. As a result, policymakers believe the misleading claims and get excited about what AI can do for them. All too often the reality falls short. It's true that an LLM typically builds its capabilities by processing large parts of the Internet. Now consider how much of that is generated by language experts versus random thoughts from everyone. Are all of the elements of a "language" covered by such a process? Assuming you can actually fully define a language, can an LLM duplicate this based on its training?
- Some would argue that language is a behaviour, or can generate behaviour, based on the elements of the language and how native speakers interpret them. This is different to a heap of words with some kind of relationships defined between them. A combination of elements like tone of voice, gesture, eye contact, emotional context, facial expressions, touch, location and setting influences what is written or said. If true, then a machine cannot fully represent a language, as the written word is but a part of the whole, and claims of language processing ever being "complete" are fanciful. The LLM will capture some of the elements of the language but not the essence. Think of your language today. How often do ambiguity and misunderstanding pop up? How often do you argue over the meaning of a word? Since an LLM experiences no sense of satisfaction after a social interaction, ChatGPT takes no risks in responding to a query, whereas humans experience risk in communication, to varying degrees, all the time.
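- The "heap of words with relationships defined between them" idea can be made concrete with a toy model. The sketch below is a bigram generator, a vastly simplified stand-in for an LLM of my own devising, not anything a real LLM vendor ships: it learns only which word tends to follow which in its training text and then produces fluent-looking output with no grasp of tone, gesture, context or meaning.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Emit text by repeatedly sampling a statistically likely next word."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The output is grammatical-ish because the statistics are real, but the program has captured relationships between tokens, not the essence of the language, which is the paper's point scaled down to a dozen lines.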
- The current set of LLMs is unreliable: they make stuff up, and they are biased by their sources and training rules. The world in general, and companies in particular, have nonetheless bought a ticket on the AI train, and some are working very hard to reach an Artificial General Intelligence level. If that happens, things will get interesting, but until then be wary of any AI capability claims.
Last but not least, Elon Musk has finally announced his new PI smartphone. More on that next time, but I covered what it might include some time back.
James Hein is an IT professional with over 30 years' standing. You can contact him at jclhein@gmail.com.