FBXL Social

Wow. Yes.

Repost with alt-text:

Screenshot of a Bluesky thread:

First post, by Joseph Fink:
“It turns out this whole time that the Turing Test was the wrong way to think of it. Thinking a chatbot is alive is not a test of how good the chatbot is, but of your own ability to think of other human beings as real and complete people.”

Second post, by Greg Stolze:
“I heard some professor put googly eyes on a pencil and waved it at his class saying "HI! I'm Tim the pencil! I love helping children with their homework but my favorite is drawing pictures!"

Then, without warning, he snapped the pencil in half.

When half his college students gasped, he said "THAT'S where all this AI hype comes from. We're not good at programming consciousness. But we're GREAT at imagining non-conscious things are people."”

It's definitely the anthropomorphic fallacy at work, and it's much easier to anthropomorphize an AI when it can form a sentence, which is a skill we typically associate with humanity and human intelligence.