https://unherd.com/newsroom/surrendering-free-will-is-the-real-ai-threat/
Can a machine really tell you what to say in an argument, or how to feel if someone sends you an off-colour text message? At this level, we risk deeper mental disengagement. ChatGPT is no longer a second opinion, like when we’re not sure how to identify a plant in our garden that might be poisonous. It becomes the only opinion. This could become yet more dystopian when combined with, say, AI-corporate partnerships: imagine asking ChatGPT what you should have to drink and it’s programmed only to suggest Coca-Cola products. Never mind the threat when the government gets involved.
We have no idea how the landscape will change when we, for instance, begin introducing humanoid robots. We rarely discuss AI's potential impact on users with cognitive vulnerabilities, or how AI companionship might interact with psychiatric conditions, especially those with psychotic features.
Many of the AI-based subplots are about the dangers a non-malicious AI can pose, including the risks of having this frictionless access to a thing that can generate simulacra of virtually anything or anyone you want. Without a powerful ideological inoculation against addiction, many people could end up in an opium den of their own minds.
If you lose a loved one, would a mostly accurate simulation calm the pain of loss, or would its mistakes only make you feel it more? Would you ever leave your simulation if that's where all your loved ones remained? And what sort of inoculation would you need not to fall into a trap like that?