Feelings


How do I link AI with a bottle of shampoo?

  1. The way Ant combines themes in his podcasts reminds me of Alistair Cooke’s Letter from America. I have feelings about AI, and they are mostly negative. It is not that I am against it, but it is being over-hyped. What is going to happen to retired AI and robots? Perhaps they will become post-actives, to use the horrible term that we heard about in the Retirement podcast last week.

  2. Another excellent piece Ant, and a worthy testament to that “generous sender”. (You are blessed with such wonderful, selfless friends!) I shall picture you now having “sentimental memories” every morning and after gym. Perhaps I should have kept one of those shampoos for myself!!

  3. The thing I’d say is that one major point of AI is that it *DOESN’T* have feelings. In the right hands this could be a blessing for humans. Sending an AI spider-bot into a burning building to search for people, so the human firefighters don’t have to and can therefore limit their rescue search (and therefore their risk), is one example. The AI might work out that it won’t be coming back, but won’t care. It won’t have any awareness of danger, risk or mortality. Plus, if it does become self-aware (unlikely, I believe), it knows that it’s backed up somewhere.

    (And being backed-up isn’t a luxury we humans have. If anyone remembers the revamp of “Battlestar Galactica”, that was the advantage the Cylons had. But look at the “Altered Carbon” series of books and TV to see what a curse backups might be.)

    I experienced a potential example when I was involved in the bidding for the decommissioning of Sellafield. In one of the potential disasters we had to describe, the radiation levels were too high for people to go in and do the repairs needed. So engineers built remote-control bots to go in and do what they could. If we had AI then, it would have been a lot quicker and safer.

    My concern is when there is no human intervention (i.e. no people with feelings in the loop). I was on the periphery of an AI development group. We ducked out of AI because of the potential that AIs could design their own chips. That would mean we wouldn’t necessarily understand what was going on. And by the time we’d worked it out, the AI could potentially have changed things under our feet. Not a good place to be.

    If you look at Ukraine, there are rumours that AI drones are being used to identify and eliminate targets without human intervention. This can result in friendly fire, and unless the AI is controlled or reprogrammed, that won’t change.

    AI is like any tool in your toolbox. It needs to be used properly, with the right safety precautions and protective gear. Personally, I fear my kitchen mandoline more than I fear AI. However, I always wear safety gloves when using my mandoline.

    If we take off our safety gloves and give AI control over who or what it opens deadly fire on (an extreme example, but potentially appropriate) or over what it designs, we’ll be in trouble.

    Just some thoughts.

    Ian
