FBXL Social

"I think the core, critical sin was choosing the advertising model to begin with. Brand advertising is not like direct advertisement, which is more programmatic. It requires something like a Disney to essentially give you a favor, because the only players that matter to them are Google and Facebook. Snapchat, Twitter, everything else did not matter. And these are ads that are essentially throwaway for them. But we made that choice in order to go public."


https://www.piratewires.com/p/interview-with-jack-dorsey-mike-solana

> I think the core, critical sin was choosing the advertising model to begin with

Refreshing to see Dorsey admit that this was a mistake.

"... if you truly believe in censorship resistance and free speech, you have to use the technologies that actually enable that, and defend your rights. I find it interesting to watch people who say they believe in these things, but aren't invested in learning about Bitcoin or something like Nostr. Because those are technologies no company or government can compromise in any way."

https://www.piratewires.com/p/interview-with-jack-dorsey-mike-solana

Before you were making millions building a DataFarming censorship machine...

... some of us were already working with decentralisation protocols and non-corporate organising models.

"I think it's less of a question about how free is the speech on these platforms, or how free is the policy, but more like, how will all these AI models and LLMs — and people using them to manipulate — impact the election? That seems like the unpredictable variable here, and that's the one I would probably pay more attention to. I don't think [AI] is necessarily a bad thing. But it's a much bigger unknown than loosening policies around speech, to me."

https://www.piratewires.com/p/interview-with-jack-dorsey-mike-solana

Fair.

One big thing is that everyone is focused on AI and LLMs as if they enable something novel, but bad actors were already getting their fingers into sites like these. If you're trying to swing an election, whether you're a nation-state, a political organization, or even an NGO, it's really easy and surprisingly cheap to hire a bunch of people to say whatever needs to be said. Then you aren't using an AI; you're using actual human beings to write and respond, with all the same potential for manipulation plus the additional danger of a real human intellect behind the keyboard on the other side.

In general, it's that kind of power that's most dangerous, whether it's on social media or in proprietary software. When you have a lot of money, it's easy to throw it at a problem: everyone else has to fight in their spare time while keeping a roof over their head and food on the table, but minions fight as their day job; that's how they keep a roof over their head and food on their table.

@sj_zero
> If you're trying to swing an election and you're either a nation-state or a political organization or even an NGO, it's really easy and surprisingly cheap to hire a bunch of people to say whatever needs to be said

The CCP is well known to do this to influence online discourse on domestic politics, and is presumably using it as a soft power tactic outside China too. What AI potentially does is make the same scale of operation available to a much wider range of actors.