FYI, I have been studying AI (i.e. Machine Learning) for the past year or so. It has gone from something that only 'geeks like myself' were interested in and knew much about into something that most people have now heard of (e.g. ChatGPT), since references to AI exploded in the mainstream news media.
I have a few thoughts about AI, which might partially allay some fears.
I wouldn't get too freaked out about AI just yet, as AI systems have no inherent ability to truly think, reason, or be contextually aware of the world we live in. They are essentially large mathematical models trained on a large amount of training (example) data; once trained, they can basically guess/interpolate/extrapolate, mathematically, how to behave or act in some unknown future situation.
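To make that concrete, here is a deliberately simplified toy sketch (just NumPy curve fitting, not a real neural network) of the "fit a mathematical model to example data, then guess" idea. The model is trained on examples of y = sin(x) over a limited range; inside that range it interpolates well, but asked about a point far outside its training data it confidently produces nonsense, with no awareness that it is wrong.

```python
import numpy as np

# "Training data": examples of the true relationship y = sin(x),
# but only over the limited range x in [0, 5].
x_train = np.linspace(0, 5, 50)
y_train = np.sin(x_train)

# "Training" here is just fitting a degree-5 polynomial to the examples.
coeffs = np.polyfit(x_train, y_train, deg=5)
model = np.poly1d(coeffs)

# Inside the training range the model's guesses are close (interpolation)...
print("error at x=2.5 (inside): ", abs(model(2.5) - np.sin(2.5)))   # tiny

# ...but far outside the training range it still happily produces an
# answer, and that answer is wildly wrong (extrapolation).
print("error at x=12  (outside):", abs(model(12.0) - np.sin(12.0)))  # huge
```

The model never "knows" it has left familiar territory; it just keeps evaluating the same formula, which is a rough analogue of why AI systems fail badly on situations unlike their training data.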
A valid concern is that some *repetitive aspects* of some jobs could be carried out by an AI system, which is obviously a real concern for society, as a company might not need to employ as many people as it used to. This will probably increase as time goes on and the systems get better. However, you still need a human involved in the process somewhere, as AI systems can get things horribly wrong at times (example below). At some stage you will need to interact with other humans, and you will also need to make decisions based on a wide variety of contextual information. Remember, AI systems do not fundamentally understand the world we live in; they are just really sophisticated BullS**ters.
If an AI system encounters a situation well outside what it was trained on, it will typically just 'hallucinate' an output. There is a well-publicised recent example of this, where a lawyer submitted to a court a legal document he had used ChatGPT to write instead of bothering to do it himself. The document looked OK and referenced a number of legitimate-looking prior legal cases. However, when others went to look up these prior cases, no-one could find any reference to them whatsoever. ChatGPT had basically just made them up. Having seen many references to prior cases in its training data, ChatGPT knew roughly how to generate a reasonably plausible-sounding prior case, but it had no true awareness of any actual case.
https://www.bbc.com/news/world-us-canada-65735769
I like to think of AI (Machine Learning) as both more amazing and capable than you could have thought possible a few years ago, and at the same time way dumber than it might appear on the surface.
Like any technology, there are going to be lots of really good (useful for society) uses for it, and a lot of really bad uses of it. AI systems can be used to 'double check' cancer scans to pick up and highlight cancer growths that a doctor missed (something that probably happened to my Dad). Self-driving cars could enable someone to get around who has some sort of illness or disability (e.g. poor eyesight) that prevents them from being able to drive. Likewise, I am sure militaries the world over are in the process of creating mechanised weapons that identify targets and can automatically pull the trigger on anything that is flagged as being on the opposing side. Scary stuff.
In summary, I do think there will be some negative disruption to society from AI, but likewise there is also potential for real gains for society. In many ways AI is a bit like a firearm. Firearms are neither good nor bad; what matters is the person behind the trigger and the use to which it is put.