FBXL Social

I sometimes wonder if this is a psy-op. Like, Google wants to make people feel less worried about AI, so they just make the AI results totally incompetent.

Shown: An AI saying "there are no known cheat codes" next to the link with all the (working) cheat codes.

@sj_zero The sad reality is that the AI dev teams - being the highest paid ones in the history of software building - do not understand how and why "their" LLMs select this output instead of that other one.

They'll invoke "gradient descent" the same way any State refers to "democracy": as a void-covering concept.
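
To show how little the word actually says, here's a toy sketch of gradient descent (assuming nothing but a made-up quadratic loss; it bears no relation to any real LLM training code). The whole technique is "nudge the parameters downhill"; naming it explains nothing about why a trained model later picks one answer over another.

```python
# Minimal gradient descent on a toy loss f(w) = (w - 3)^2.
# This is the entire idea the buzzword names: step the parameter
# downhill along the gradient. It says nothing about *why* a
# trained model prefers one output over another.
def grad(w):
    return 2 * (w - 3)  # derivative of (w - 3)^2

w = 0.0    # initial parameter
lr = 0.1   # learning rate (step size)
for step in range(50):
    w -= lr * grad(w)

print(w)   # converges toward 3.0, the minimum of the toy loss
```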

@sj_zero

But Google is infamous for its research role in getting the current "AI" boom going, and for losing its talent to companies that were actually capable of bringing stuff to market. It's perhaps more competent than Apple, a hardware company that's not good at software, but maybe that's just Apple not being willing to inflict the above sort of lunacy on its customers???

@sj_zero Until and unless AI is sentient, it isn't the AI I'm concerned about; it's the owners of it, and how they will no doubt use it to gain further wealth and power at the expense of everyone else.

For the purposes of what I'm discussing, there doesn't need to be a disambiguation between the two.

Going "Gemini isn't a threat to me" essentially ends up as "Google wielding Gemini isn't a threat to me" in the public's eye.

Contrast with ChatGPT, which displays a lot more basic competence and has people a lot more worried about what OpenAI will do with its models.

@sj_zero And on that note we will have to agree to disagree, the Devil's in the details, always has been and always will be, even with a monstrosity like Gargoyle.

The detail here is that I'm talking about the psy-op of Google potentially neutering its AI for PR purposes. In that case, it doesn't matter to the public at large whether it's the AI or the company controlling the AI that is scary, because if the AI isn't scary then the company with the AI isn't scary. There's often talk about "the wisdom of crowds", but the crowd is a panicky lot whose attention really is only skin deep, so you only need to make sure it isn't looking at the thing you don't want it looking at.

I'd probably agree with you that, separate from the public perception of things, AI as a whole could become something dangerous because of the blind self-interest of companies. It's already bad enough having human beings with a conscience making decisions -- if you have even a low-intelligence AI making mass decisions with the sole intent of making the company more powerful, and it doesn't really care much about the morality or ethics or humanity of those decisions, you can have a lot of evil created, and the people who caused it to be committed do become more powerful thereby.

@sj_zero "the psy-op of Google potentially neutering its AI for PR purposes"

I suspect the real reasons are entirely internal. Perhaps some competence issues, but definitely the company being so wedded to social justice, so full of SJWs, that the people working on the "AI" must make it "safe" or they'll lose their jobs.

Again I refer to this output, the bad publicity of which, as I recall, was said to have caused it to be withdrawn for fixing.

On the other hand, for your purposes, for your thesis here, does the reason matter??

Perhaps a few different reasons at once.

There's no wokeness-related reason for Gemini to lie about cheat codes for a 20-year-old video game.

I don't think what you're saying is without merit though. Maybe the reason is something closer to that.

@sj_zero "There's no reason for gemini to lie about cheat codes for a 20 year old video game relating to wokeness."

There is, actually: the theory of social justice convergence, which holds that the more woke an organization gets, the less competent it becomes at its Official purposes.

Adhering to social justice comes with all sorts of penalties, friction, etc. The best people, as we see it, don't get promoted or given the power to direct things; SJWs constantly add friction to everything and occasionally whack people through holiness spiraling; orgs get a reputation and lose some of their potential talent pool; etc.

As it was, Google was already suffering from various big-company diseases, plus the constant rotting of product foundations in their monorepo, a major reason for the Google Graveyard, along with maintenance on all but the most important products not being respected or rewarded. And there's the "you only get promotions for new products" disease; those products at some point are no longer new???

A modern example of the principal–agent problem: https://en.wikipedia.org/wiki/Principal%E2%80%93agent_problem

@sj_zero It's a conspiracy theory, and while it might be true, there is only circumstantial evidence. Personally I am convinced Google is sufficiently inept not to require neutering.

Fair enough. It isn't like it's actually produced anything of particular value in the past decade for me to defend it as something great.

@sj_zero I have not seen a lot of value in Google's AI either, but I have found Microsoft's Copilot to be about the only useful product I've seen from them in my lifetime (and my lifetime spans the whole of Microsoft's existence and then some).

@ijatz_La_Hojita @sj_zero These days it's a committee of specialized agents (garden-variety LLMs tuned for one subject pool) that get much better accuracy on their specific topics, but the overall algorithm is still prone to selecting the wrong drone for the query, and that's on top of the possibility of the drone just spewing garbage.
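
A rough sketch of that kind of routing, purely for illustration (the route_query function, the keyword table, and the specialists are all made up here, not any vendor's API); the point is where the "wrong drone" failure creeps in:

```python
# Toy "committee" of specialist agents behind a naive router.
# Real systems typically route with an embedding or a classifier,
# but the failure mode is the same: a bad routing decision hands
# the query to the wrong specialist before it can even answer.
SPECIALISTS = {
    "games":   lambda q: f"[games agent] answering: {q}",
    "cooking": lambda q: f"[cooking agent] answering: {q}",
    "law":     lambda q: f"[law agent] answering: {q}",
}

KEYWORDS = {
    "cheat": "games",
    "recipe": "cooking",
    "contract": "law",
}

def route_query(query: str) -> str:
    # First keyword hit wins; otherwise fall back to an arbitrary
    # default, which is exactly the "wrong drone" problem.
    for word, topic in KEYWORDS.items():
        if word in query.lower():
            return SPECIALISTS[topic](query)
    return SPECIALISTS["law"](query)

print(route_query("any cheat codes for this 20 year old game?"))  # routed correctly
print(route_query("how do I beat the final boss?"))  # no keyword hit: misrouted to the law agent
```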

Linguistic mimicry is today's bare minimum; an amateur can achieve that on consumer-grade hardware. There's still no real memory mechanism, short-term or long-term, for AI models. Logic is inferred linguistically, without abstraction, which is still pathetic for "intelligence".