FBXL Social

I wonder how you would train an AI to recognize intelligent writing. I guess you can use any old GPT to generate arbitrary amounts of midwit slop.

@cjd reddit already exists, why would you bother a graphics card to generate midwit slop?

Well I expect if I were to start scraping it, they'd stop me pretty quick. But that doesn't necessarily matter, because any AI which has been trained on it can be made to regurgitate similarly structured text, so the dataset basically gets laundered from one model to another until it ends up in an open source model anyway.

@r000t @cjd chukie

@cjd archive team distributes docker containers that basically scrape reddit all day, and they seem to hit it p hard. you could probably get away with quite a bit before getting throttled.

I'd like to make a chrome extension which squirrels away everything I look at. Also it should rank it by how much time I spend looking at it. For now, the best I can do is to copy everything interesting to the fedi and let my postgres snare it.

@cjd I have to assume this is a hard problem or Google would be better at removing fake articles from search results.

Well you'd also think Google would be able to make a half decent distro, but here we are.

@cjd as a lark, try building Chromium or Android from scratch sometime, Google is _incredibly_ bad at internal stuff.

i :burningtransheart: love that building android needs like 300GB of storage

isn't ChromeOS pretty much... fine? It's not really meant to be a linux distro

I guess if you set the bar low enough, it's perfect. But it's not something that anyone can really use as a primary computer. Ubuntu CAN be used as a primary, so in that way they're worse than Ubuntu.

@cjd @Moon
I thought we already had left/right-wing bias recognition in AI.

@cjd @mischievoustomato if you are fully bought into the google app environment, like gmail and docs, and otherwise mainly use web applications, chromeos works really, really well (under the hood it is still a mess)

@cjd @Moon
Obvious jokes aside, the problem is that humans cannot create such a dataset, since humans are incapable of making this distinction themselves.

The entire concept of schizophrenia and intelligence being two sides of the same coin applies here tenfold. Brilliant people see patterns that you cannot visualize, which means you cannot know whether they actually are smart or whether they are bullshitters.

This is why most attempts at doing this end up just recognizing how niche your vocabulary is, since niche words are needed to make a scientific article. But then you immediately select for social science loons, who cannot form a single sentence without going full systemic prejudice against marginalized metaphors for cheese.

@LukeAlmighty @cjd I would actually like a left wing bias classifier to play with if anybody has a link to one

@cjd @Moon
> Space and time are the same thing
That is literally either the most brilliant thought of the 20th century or something a local greenhead said after his fifth joint.

> All matter is created from energy
Now you're either talking to a quantum physicist who understands the nature of matter, antimatter, etc., or you're talking to Teal Swan.

See the issue?

@cjd @Moon
And these are both examples where we both know that the basic statement is generally considered to be true.

But once you lower your IQ to 100, these statements are indistinguishable from free energy videos, pyramid schemes, and literally the dumbest pseudoscience under the sun.

@Moon@shitposter.club @cjd@pkteerium.xyz No they promote fake articles https://www.theverge.com/23753963/google-seo-shopify-small-business-ai

It's not clear why, though.

I suppose it might be something like: they have an (AI) algorithm, people reverse-engineer how it works to some extent and provide content that doesn't necessarily make sense but makes the algorithm give them a higher ranking, and then they even train AI to generate such content.

The result is AI talking to AI, and the need for people to find some other place to look for information.

@usagi @cjd I do actually use Kagi now which is not perfect but is a lot better.

So in my vision, this depends on the depth of the network. If you're doing simple word recognition then yes, you're going to end up with the most midwit of the midwit.

But, and this is a simple implementation: suppose you use a text classifier on the individual sub-phrases, then for each one of those you output a neural-layer snapshot represented as an image. Then you take the images making up a sentence and feed them through a net to pattern-recognize similar sentences, again outputting a neural snapshot as an image.

At each level of this, you can train using 2 similar phrases and one different. The reward function is based on the neural image of the similar phrases being more similar (XOR of the pixels is less) than the different one.

Feed those images back in, this time per-paragraph, and you should have a form of paragraph-level classification. Then you feed that output into a network which classifies text into a score, and you train on things you find worthwhile.
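That triplet-style training signal can be sketched in toy form. Everything here is my own illustration (a word-hash bit vector stands in for a real neural-layer snapshot, and all the function names are made up), but it shows the reward idea: the two similar phrases should be closer, i.e. XOR fewer bits, than the dissimilar one.

```python
import hashlib

SNAPSHOT_BITS = 256

def snapshot(phrase: str) -> int:
    """Toy stand-in for a neural-layer snapshot: hash each word into a
    256-bit vector and OR the bits together (Bloom-filter-ish)."""
    bits = 0
    for word in phrase.lower().split():
        h = int.from_bytes(hashlib.sha256(word.encode()).digest(), "big")
        bits |= 1 << (h % SNAPSHOT_BITS)
    return bits

def xor_distance(a: int, b: int) -> int:
    """Count of differing bits -- the 'XOR of the pixels' measure."""
    return bin(a ^ b).count("1")

def triplet_ok(anchor: str, similar: str, different: str) -> bool:
    """The reward condition: anchor/similar must be closer than
    anchor/different. A trainable model would be nudged toward this."""
    sa, ss, sd = snapshot(anchor), snapshot(similar), snapshot(different)
    return xor_distance(sa, ss) < xor_distance(sa, sd)
```

With a real network you would backpropagate a margin loss on those two distances instead of just checking the inequality, but the shape of the objective is the same.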

@cjd @Moon
And to finish this, take conspiracy theories.

They are theories. By definition, they are something that the people who believe in them weren't able to disprove.

But when you talk to a normie, they take the word theory and believe that by definition it has to be false. This is intelligence at the 100 IQ level. But it is based on a deeper problem: we are working with an insanely limited amount of measurable data. And AI can only compare concepts to concepts; it cannot compare concepts to data. That means its potential is limited to things that humans have already measured.

@cjd @Moon
I don't understand how that's supposed to measure intelligence.

Well the point is you train it on what you consider intelligent writing vs. fluff and midwit slop, then you teach it to distinguish.

BTW humans have a way of signaling and detecting intelligence - that is through humor. It's like the first man-made proof-of-work: It requires more brain cells to be funny (prove) than it does to laugh (validate).

@Moon @cjd "Google is incredibly bad at internal stuff."

One of the reasons https://killedbygoogle.com/ et al. are a thing is that aside from a couple of smallish git repos (maybe one for their version of Linux?), everything is in a huge monorepo, and unless something is constantly maintained it'll rot.

You don't get rewarded for pure maintenance of anything that's not huge, while new product launches are the best way most of their people can get rewards and promotions (AKA more $$$, and promotion opportunity is stack ranked). Also see how fast they sunset Google Cloud offerings, one reason it's a very distant third or fourth to AWS and Azure.

We're also told this is why Google Reader was killed, which was an inflection point in perception of the company. The ex-Microsoft hire who ran Google+ (he thrived under Ballmer's stack ranking) had a remit to run roughshod over everything else, and would have had to expend effort to keep Reader working as they repurposed some of its stuff for what was, for a while, the only project besides advertising that mattered.

https://steve-yegge.medium.com/dear-google-cloud-your-deprecation-policy-is-killing-you-ee7525dc05dc

@cjd @Moon
> Well the point is you train it on what you consider intelligent

Well, I guess I wrote the paragraphs of text in vain then.

You WILL get a Redditor AI. There is no way around that.

The origin of this thread was me saying "I wonder how....", which is about how to avoid that failure mode. You say it's impossible; I'm not convinced.

@cjd @Moon
I didn't say it was impossible; I said it was impossible for a human to create a dataset.

I also said that intelligence isn't based on word structure, but on how well the mentioned concepts align with a world that the AI has no access to.

If we're talking about making an AI which generates text, then I agree. But I'm talking about an AI which classifies text (that I expect will be written by humans).

> isn't based around the word structure

Well, yes and no. REALLY stupid text has an identifiable structure. Midwit text looks smarter than it is. What I'm looking for is how to make the model deep enough to identify quality fedi banter.

Of course midwit diarrhea is a moving target w/ Goodhart's law, especially if people start training GPTs against my "quality posts" classifier...

@cjd @Moon
Also, just consider that the biggest problem with intelligence is in fact communication. Smart people see a problem in its entirety, while describing the route from A to B is a language problem with exponential growth of the words needed to describe each step and the needed connections. While this is not a problem for AI (for obvious reasons), it shows exactly why humans cannot create the dataset.

We were so confident this issue doesn't exist that we even created a logical fallacy to describe logic too advanced for our understanding. We call it a slippery slope.

@cjd @Moon
I see.... you don't get it

@cjd @Moon
Again. By discarding simple language, you WILL get a reddit AI.

@cjd @LukeAlmighty I was at a party several years ago and bumped into a guy I knew in elementary school who had just stopped living in a van. Nice guy, but we had a conversation and everything he said was fluff words, not really saying anything. Years later, he owns his own business and has a lot more money than I do. I don't have a point in bringing this up other than, I guess, actual intelligence doesn't always correlate with success.

My thinking here is to train it on things that *I* find interesting (e.g. I hit the like button). So that's going to contain some complex language, some simple language, some grammatical errors, etc. Not to give the AI an easy way out here...

IQ v. Income is weak.
IQ v. Net Worth is basically non-existent.

@Moon @cjd
This guy wrote his own OS. Does he sound "smart" though?
https://www.youtube.com/watch?v=3CC8EopC4hU

@LukeAlmighty @cjd I think this quote was part of Einstein's misguided opposition to quantum physics.

He sounds like half the people I talk to here.

@LukeAlmighty @cjd I am not sure what you're getting at, can you be more explicit?

@cjd @Moon
If you want to make an AI learn what interests YOU, then even the dumbest "find words I like" system will do the job. As long as the text contains "linux, boot, freeware, software, hardware", it's great.
If it contains "republican, democrat, trump, fuck, Nigger", it's BAD.....

But you've changed the goal entirely now. Your original post was about finding intelligence.

@Moon @cjd
It still is a simple quote that sends a smart message though.

Ok that's a fair point. I'm trying to find things that *I* will consider intelligent (or at least amusing), and I just wrote my original post poorly.

But the thing is, I can't think of any particular set of words that communicate what I would find interesting - it's like trying to word-filter for what you'll find funny. What do you do? Filter for "knock knock"? Maybe if you're 5.

Most political takes are boring and repetitive. Most science is horrifying midwittery. Most blockchain takes are spam and get-rich-quick. Most conspiracy takes are aliens and flat earth bullshit. BUT, there's 1% in each category which is a flash of brilliance (IMO) and I'd really like to try to filter for it...

@cjd @Moon
In that case, as long as you have a dataset, it should be unbelievably simple.

I wrote a word-frequency-based spam filter as homework at uni. I didn't even have to go into neural networks. If I find it, I might even send it to you. :D

As I said earlier in this thread, I have not read the papers, but right now I'm strongly suspecting that most models are incredibly stupid and they just throw $10mn at GPUs to train the crap out of them. So something like hashing every word, every 2 words, every 3 words, and so on, then adding up the bits in an array of 256 float32s might give you a text similarity filter that beats a lot of way more complex neural nets...
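A rough sketch of that hashing idea (my own toy code, nothing from a real library): hash every 1-, 2-, and 3-word window into one of 256 buckets, then compare two texts by cosine similarity of their bucket counts.

```python
import hashlib
import math

BUCKETS = 256

def features(text: str) -> list[float]:
    """Feature-hash all 1/2/3-grams of the text into 256 buckets."""
    words = text.lower().split()
    vec = [0.0] * BUCKETS
    for n in (1, 2, 3):
        for i in range(len(words) - n + 1):
            gram = " ".join(words[i:i + n])
            h = int(hashlib.md5(gram.encode()).hexdigest(), 16)
            vec[h % BUCKETS] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two bucket-count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0
```

Usage is just `cosine(features(text_a), features(text_b))`: texts sharing many n-grams land in the same buckets and score higher, at the cost of occasional hash collisions.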

Also I make a lot of grammar errors, sorry about that.

@cjd @Moon
I literally couldn't care less about grammar

@cjd @Moon
And... I found the filter :D
I will have to look through the files to check if there is anything sensitive, and I have to go to a pub now, but would you want a working spam filter from a first-year uni homework?

I'd say don't break your back over it. I'm supposed to be doing a bunch of other stuff so it's highly doubtful I would give it more than a couple minutes of reading...

@cjd @Moon
Also, goodbye friend. If you turn that thing on, I will disappear from your universe

@Moon @LukeAlmighty @cjd Einstein's "misguided opposition to quantum physics" was I think more opposition to the "Copenhagen interpretation" than to all of quantum mechanics, which I'm restudying right now through Thirty Years That Shook Physics, which I last read in the 1980s.

As in, per (((Otto Robert Frisch's))) autobiography (he was a top experimentalist of the era and the guy who asked the key question which led to practical atomic bombs), Einstein is credibly claimed to be the first scientist to take Max Planck seriously. Although you could possibly add "... and get results."

Planck just about tore his hair out (seriously, look at photos of him around this time) trying to explain the emission of light, and came up with the idea that light is packaged up in quanta, small packets of energy we now call photons. He was using evidence from emission, and of course assumed absorption was the same.

Einstein took the empirical, experimental laws of the photoelectric effect, where light hitting metal throws off electrons, and showed how quanta of light perfectly explain it, thus closing the circle, which really got the ball rolling. So important was this that it was the specific thing cited for his Nobel.

It's pretty amazing stuff from a historical perspective. The next theoretician to make headway was (((Niels Bohr))) and per the book one of the assumptions he made was that hydrogen had only one electron. Nobody knew!

https://www.amazon.com/dp/048624895X

@cjd
have it exclude writing that includes too many adjectives

Provided the AI understands the concept of an adjective and its relationship to a noun

@cjd
Can this AI read books? If so, then have the AI read certain "men of letters" (blank slate AI).

Perhaps beforehand, have it learn to distinguish verbose writing (too many adjectives, too many adverbs, and buzzwords)

A bank of buzzwords can be maintained quite easily, and have any pop-culture article containing these words be flagged as stupid or low-brow. (This is giving the AI some agency.)

Anyway. My wheelhouse is mathematics and Tech. Comm. and not software engineering, so throw rocks if you like.
@LukeAlmighty @Moon
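The buzzword-bank idea above is easy to sketch. This is a minimal illustration (the word list and threshold are made up, not from any real system): flag text as low-brow when buzzwords make up too large a share of its words.

```python
# Hypothetical buzzword bank -- in practice this would be maintained
# and extended over time, as the post suggests.
BUZZWORDS = {"synergy", "disrupt", "leverage", "paradigm", "game-changer"}

def is_lowbrow(text: str, threshold: float = 0.05) -> bool:
    """True when more than `threshold` of the words are buzzwords."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return False
    hits = sum(1 for w in words if w in BUZZWORDS)
    return hits / len(words) > threshold
```

A frequency ratio rather than a raw count keeps long articles from being flagged just for mentioning one buzzword, though as noted upthread, any fixed word list is easy to game.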

The problem is that what makes a classical book great are the same words and phrases that make for some horrific drivel if they're not used in exactly the right way.

Training an AI on quality writing is interesting, but tweets and fedi posts are perhaps better for training because they're short, so it doesn't take reading pages and pages of long words to determine whether you're dealing with genius or reddit-tier poop.

@cjd @pepsi_man @Moon
I am sorry, but I thought you were very educated when it comes to IT.

So why would you believe that less information is good for learning?

@Moon @LukeAlmighty @cjd delayed gratification is honestly more valuable to most people than intelligence; making plans is good and all that shit, but seeing them through is also important.

thankfully you can improve them both when ur young, probably harder when ur older but it can still happen tho

Nope, got nothing but a high school diploma. You have the wrong guy.

@cjd ngl AI should be a kino educational tool, bit retarded I'm not using it as a debate buddy / knowledge coach / question generator yet