FBXL Social

AI systems with unacceptable risk are now banned in the EU.

techcrunch.com/2025/02/02/ai-systems-with-unacceptable-risk-are-now-banned-in-the-eu/

What risks, you ask?

The EU actually answers that, sort of.

"Unacceptable risk" AI is Class 4, and Class 3, which is not banned but regulated, includes AI systems for recommending medical treatment. Fair enough; medical anything tends to be regulated, and there's no reason not to subject medical AI to standards and tests.

Under Class 4, banned outright, we see:

* AI used for social scoring, where the social scores are applied outside the context in which they are calculated - e.g. firing someone because of their Reddit posts
* Inferring a person's likelihood to commit a crime unless you are the police and already have the criminal banged up because you think they done it
* Subliminal advertising, which doesn't work anyway
* Something so broad that it encompasses all advertising, which will be interesting
* Anything that can infer someone's emotional state
* Biometric analysis except when the government really wants to

So yes, midwits gonna midwit, and the legislation has enough holes to drive the Bagger-288 through.

Companies (anyone operating however tangentially in Europe) are expected to be in full compliance by, um, yesterday.

Europe reminds me of China before the century of humiliation: so sure of their moral superiority that they think they can just bureaucracy away anything they don't like, but the rest of the world continues to exist.

@Aether This legislation is unenforceable; too many loopholes that can be exploited, with devastating consequences for civilians.

@Aether Ukraine was the blueprint for every country run by these people: complete economic and population collapse in service of goals that benefit none of the residents.