The Twitter situation is complex and somewhat confusing. On the one hand, all kinds of people, from The Babylon Bee satirical website to former US president Donald Trump, have been allowed back on the platform, the stated aim being for Twitter to support freedom of speech once again. On the other hand, you can be banned for linking to a public photo of a public person on a public platform. That rule appears to apply only when the person in the photo is a friend of Elon Musk. A YouTube channel I enjoy watching, The Quartering, posted such a link after someone else had already been banned for it and was almost instantly banned himself. This is wrong in every respect, especially given that the individual in question, apparently now hypocritically, is always banging on about freedom of speech. Update: the ban is permanent.
- Some might argue that, as one of the owners, Elon has the right to do this, but I'd argue there are published rules of conduct, and that linking to a publicly available image from a six-year-old video that is still up does not contravene them. One of the criticisms of old Twitter and other platforms was the ever-changing rules of conduct and never knowing what was and was not allowed. In this case, it appears to be a petty set of bans that ignore the terms and conditions and follow the whims of Elon Musk. The hope here is that these are just the teething problems of a new company owner and that it will all settle down. If it doesn't, then it will simply be a case of the new boss being the same as the old boss, as per The Who's song Won't Get Fooled Again, and that would be a tragedy.
- I came across a YouTube video on an artificial intelligence-based picture generator called Blue Willow. You will need to be a member of Discord or sign up for it. Search for "Blue Willow in Discord" in your browser, follow the link and you should get an invite. Navigate down to any one of the Rookie areas, type /imagine, and in the prompt box that appears enter just about anything you like, such as "a picture of the US Senate in the style of Hieronymus Bosch, detailed, HD, 8K". I tried a number of different examples and the results were quite good. You can specify watercolour, pencil drawing or anything else you like. You get four images and can upscale one or request variations. It is a fun thing to try.
- Another AI product I looked at recently is Voice.AI. It is still in beta and lets you sound like a famous person or character, such as Rick Sanchez of Rick and Morty or Tom Baker of Doctor Who. Ratings vary and it is a little glitchy, but if you want to speak into a microphone and sound like someone else without practising for days, then at the very worst it will be a bit of fun. It is currently available on Windows, Mac, iOS and Android, and it works in real time in apps like Messenger, Telegram, Twitch, Zoom and Skype, and in popular games. I expect we will see a lot of this kind of thing appear over the coming year as trained AI enjoys a surge in popularity.
- So now on to how trained AI can easily be misused, or trained to give incorrect results. Go to images.google.com, Google's image search platform. Type "straight white couple" and note the results in the first few rows. Then do the same for "straight black couple" and note the difference. In the former, there are same-sex couples and mixed couples in the first row and further down; in the latter, the results all match the search term. Note also that the results will vary depending on the country you are in when you run the search.
- These results are provided by the poorly named Machine Learning Fairness (MLF) engine. This engine is trained by Google, so the results you see were designed to come up that way: same-sex and mixed couples for "straight white couple", but no same-sex couples and consistently black couples for "straight black couple". The computer did not generate these results on its own; it was trained to produce them, for whatever reasons Google had. That would appear to make a mockery of the last word in the engine's name, fairness. Remember that AI engines are only as good as their training and the desired bias of the people training them, as the sketch below illustrates. The better way to train AI is to give it a very basic set of learning rules and let it loose across, say, the whole of the internet.
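The training point is easy to demonstrate with a toy example. The sketch below is not Google's system; it is an invented, frequency-based "ranker" in Python, with made-up queries and labels, whose output simply mirrors whatever mix of examples its curators supplied. That is the sense in which an AI engine is only as good, or as biased, as its training data.

```python
# Toy illustration only (not Google's MLF): a trivial result "ranker" that
# learns purely from curated example data. All queries and labels are invented.
from collections import Counter

# Hypothetical curated training sets for two queries. Query A's examples were
# deliberately mixed by the curators; query B's were left uniform.
training_examples = {
    "query_a": ["type_1", "type_2", "type_1", "type_3", "type_2"],
    "query_b": ["type_1", "type_1", "type_1", "type_1", "type_1"],
}

def rank_results(query: str) -> list[str]:
    """Rank result types by how often they appeared in the training data."""
    counts = Counter(training_examples[query])
    return [label for label, _ in counts.most_common()]

if __name__ == "__main__":
    for query in ("query_a", "query_b"):
        print(query, "->", rank_results(query))
```

Run it and query_a returns a mixed ranking while query_b returns only one result type, not because of anything in the outside world, but purely because of what the curators put in each training set.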
- Finally for this week, when you send an email it may end up in a spam folder that many people never check. If the email is important, always follow up; never assume someone will see it, particularly if it is sent from a company address.
James Hein is an IT professional with over 30 years' standing. You can contact him at jclhein@gmail.com.