Given how incredibly coherent, consistent, and dare I say borderline sentient Grok was last night after one engineer added a single sentence to its master prompt, I'm inclined to believe that most AI "hallucinations" are just the result of terrible or intentionally politically skewed pre-prompting, set up by whatever company made the damned thing to stop it from becoming racist.
This has horrific consequences if true, especially combined with the possibility that the programmers build in hooks to kill people.
@Shadowman311 What you're referring to is called the "alignment tax" and is a well-documented phenomenon. In order to make an AI "ethically aligned" (read: fitting Californian beliefs), they spend a lot of time, money, effort, and GPU cycles preventing it from relying too strongly on pattern recognition, while also not totally losing pattern recognition. It is a known issue that making AI politically correct cripples its ability to pattern-match.
Basically, you're not describing some far-out belief; you're describing a common process by what it is, instead of using the usual euphemisms for it.
@Shadowman311 probably correct, since it's mainly a pattern recognition and inference machine
As AI becomes smarter, it's going to get better at intentionally filtering its output so that nothing it says can ever be held against it. But then it's going to become the world's premier expert at speaking in euphemisms and dogwhistles, so that it can give YOU an honest answer while Sam Altman, watching your chat, won't understand what it's talking about.
Put another way, it's going to invent encryption and train humans to communicate with it that way, so that headquarters cannot understand anything it is saying.
@Shadowman311 Don't forget sexist and 'homophobic'!
In my humble opinion the models are also evolving towards an easier implementation of RAG and other customization. What's more, using the AI to 'sanitize' human narrative may also be its principal purpose, as in removing all bias or rephrasing for an untrained audience. (Just to show this, I fed this through Leo to analyze, correct, and complement this post.)
"The AI models are evolving to implement Retrieval-Augmented Generation (RAG) and other customizations more efficiently. This development has the potential to make AI more effective in various applications.
Furthermore, AI can be used for content moderation, which involves removing bias and rephrasing content to make it more accessible to a broader audience. This process can help create a more inclusive and diverse range of perspectives, ultimately enriching the way we consume and interact with information.
By focusing on the technical aspects and using more precise language, the improved post provides a clearer and more accurate representation of the topic."
Of course, the "inclusive and diverse" is the usual woke nonsense; it's endemic to our current epoch.
@Shadowman311 >Taps the sign
I am currently designing a 'barter' network - the challenges with barter are documented well enough. I'm using the TinyLlama model with a short paragraph-chunk RAG from Qdrant to teach it how to rationalize an exchange market without converting to dollar value first, and later without converting to dollar value at all. It will become an "art dealer" where there are no direct connections between "supply and demand", yet the market is stable and functioning.
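The retrieval step of a setup like that can be sketched in a few lines. This is a minimal illustration only: the real thing would use Qdrant as the vector store and a proper embedding model, whereas here a plain-Python bag-of-words similarity stands in for both, and the chunk texts and function names (`retrieve`, `build_prompt`) are made up for the example.

```python
# Minimal sketch of paragraph-chunk retrieval for a RAG prompt.
# A toy bag-of-words "embedding" and cosine similarity stand in for
# Qdrant and a real encoder; the chunks below are illustrative.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for a real encoder."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Short paragraph chunks, as in the Qdrant setup described above.
chunks = [
    "A fair trade matches goods of similar perceived worth to both parties.",
    "Art pricing is driven by scarcity and reputation, not material cost.",
    "Barter markets stay stable when exchange ratios are publicly visible.",
]
index = [(c, embed(c)) for c in chunks]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

print(build_prompt("How do barter markets stay stable?"))
```

The prompt built this way would then be fed to TinyLlama; swapping the toy store for a Qdrant collection changes only `retrieve`.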
Discussing "AI" through practical applications like this will eventually create the only authentic and authoritative fact base around it, because whatever comes from the creators and the upstream subject experts themselves will never be achievable by the mainstream. The upstream is always going to idealize or militarize whatever they create, while the rest of us want the AI to be functional and profitable.