European Commission

First to propose, first to deliver: the AI Act enters into force today.

It sets comprehensive rules to address the risks of AI:

🔴 Unacceptable-risk AI, such as social scoring, is banned
🟠 High-risk AI, like in medical devices, must meet strict requirements
🟡 Limited-risk AI, like chatbots, must inform users they are interacting with AI
🟢 Minimal-risk AI, such as spam filters, can operate with no added obligations

Europeans can now safely seize the opportunities of AI: europa.eu/!t4QTc8

The image shows, at the top, the text "Artificial Intelligence Act" next to the EU flag. In the middle, a search bar contains the text “World’s first AI law enters into force.” At the bottom, there is a colourful network of strings.
41 comments
Maikel 🇪🇺

@EUCommission It's too vague. Also, AI for recruitment should not be allowed; that's no different from social scoring, and AI is incredibly biased against minorities.

It's far, far too vague.

FifiSch

@maikel @EUCommission I believe it's vague on purpose. Makes it more future-proof...

Suwul

@maikel@vmst.io @EUCommission@ec.social-network.europa.eu This is a very short summary, not the actual regulation. If you click the link at the top you can read a longer write-up; that page also links to the full regulation, which is less vague.

Tonik

@EUCommission wow, this actually looks pretty neat. Realistic about the capabilities and drawbacks of AI, with clear guidelines and pretty nice fines for companies that don't comply.

If I'm reading it correctly, there's also a fine for AI giving false information, which could finally stop search engines from telling people to put glue on pizza and eat rocks.

Feyter

@EUCommission The idea is very good. Now it has to be proven whether the execution is as well.

Master Don

@EUCommission

😜😁😂 Like sticking your finger in the hole to plug that first leak in the dam.

Dr Ro Smith

@EUCommission Are there any protections for the creators that so-called 'AI' has stolen from for its databases? Or anything to limit the destruction of the environment? These are the two biggest issues, and yet they are not mentioned in your summary.

Suwul

@Rhube@wandering.shop @EUCommission@ec.social-network.europa.eu Reading the article and the regulation, this regulation only deals with where and how you can use AI systems, so it doesn't address either of these points.

guetto 🧑‍💻 🇪🇺

@EUCommission

What about face recognition in public places?

What about using AI to select CVs in enterprises?

What about using AI to restrict rights like in Galicia, Spain, where they plan to use AI to detect “people who don’t want to work” and pull them out of unemployment lists?

Your case list looks shortsighted to me.

Suwul

@guetto@mathstodon.xyz @EUCommission@ec.social-network.europa.eu
From clicking the link:
1. "categorising people or real time remote biometric identification for law enforcement purposes in publicly accessible spaces" is considered Unacceptable Risk.
2. Selecting CVs could fall under companies using biometrics, which is Unacceptable Risk; otherwise it would probably fall under High Risk.
3. Restricting citizens' rights is explicitly listed as a criterion for Unacceptable Risk, so that's where that would fall.

DELETED

@guetto @EUCommission@ec.social-network.europa.eu Indeed, these are just words, I think. The Swedish #fastfashion company I work for in #Spain has an HR employee who refuses to respect other employees' GDPR data and only pays lip service to dealing with bullies, many of whom remain on their team. It's all a load of bollocks when justice only serves those who can afford it and never the working class or the disenfranchised.

Alvaro Montoro

@guetto @EUCommission the linked page is short, and it includes examples of high-risk and unacceptable-risk systems that cover all the scenarios you mention:

- face recognition in public spaces would fall under biometric identification in public spaces, which is classified as unacceptable
- AI to select CVs is explicitly mentioned as high risk: "AI systems used for recruitment"
- AI to restrict rights is described as an unacceptable risk: "systems that allow 'social scoring' by governments"

Hyenrådjuret Elisabeth ΘΔ

@EUCommission This will hopefully stop cops from using it and then claiming that it is the technology, not their politics, that is at fault.

BibbleCo

@EUCommission Bayesian spam filters have been around for 20 years now and are not remotely AI... I hope the definition has been carefully worded!

cholling

@BibbleCo @EUCommission "AI" is a moving target and has been applied to wildly different technologies since the term was coined in the 1950s. Expert systems (essentially just a lot of nested case statements) were once considered AI, as were simple ML techniques such as Bayesian filters. The use of the term as interchangeable with deep learning is less than a decade old; the further narrowing to mean "large language models" is barely two years old.

As others have said, "artificial intelligence" is a marketing term, not a technical term.
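
As a concrete illustration of the "simple ML techniques" mentioned above, a naive Bayes spam filter of the kind discussed in these comments can be sketched in a few lines of Python. The toy corpus, the Laplace smoothing, and the scoring function below are purely illustrative assumptions and have nothing to do with the AI Act's own definitions or any real filter.

# Minimal sketch of a naive Bayes spam filter (toy data, for illustration only).
from collections import Counter
import math

def train(messages):
    # messages: list of (text, is_spam) pairs -> per-class word counts and message totals
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        counts[is_spam].update(text.lower().split())
        totals[is_spam] += 1
    return counts, totals

def spam_score(text, counts, totals):
    # Naive Bayes with Laplace smoothing; returns an estimate of P(spam | text)
    vocab = set(counts[True]) | set(counts[False])
    log_odds = math.log((totals[True] + 1) / (totals[False] + 1))  # class prior
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (sum(counts[True].values()) + len(vocab))
        p_ham = (counts[False][word] + 1) / (sum(counts[False].values()) + len(vocab))
        log_odds += math.log(p_spam / p_ham)
    return 1 / (1 + math.exp(-log_odds))

# Toy corpus, invented for illustration only
corpus = [("win free money now", True), ("meeting agenda attached", False),
          ("free prize claim now", True), ("lunch tomorrow with the team", False)]
counts, totals = train(corpus)
print(spam_score("claim your free money", counts, totals))       # close to 1
print(spam_score("agenda for the team meeting", counts, totals)) # close to 0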

MTRNord (they/them)

@EUCommission Hm, how come spam-filter AI is minimal risk? AI is usually trained with some bias, meaning the filter will introduce bias beyond just comparing ham and spam messages. This feels quite dangerous, imho. And that's ignoring the whole environmental issue AI has already introduced.

GolfNovemberUniform

@EUCommission Too lenient. When do we get privacy controls, respect for copyright, and a ban in the medical area?

ꮇ :estelada:

@EUCommission Unacceptable risk AI systems are banned with "narrow" exceptions. But exceptions are never narrow for criminal states where literally anything can be considered "terrorism".

Martin

@m @EUCommission To be fair, I doubt it changes much. If the law is too restrictive for a criminal state, it will always find another way to bend the law or use surveillance without AI, and non-union states would just change their own laws. What's good is that we actually have proper protection for citizens in non-criminal states.

RolingMetal

@EUCommission And how is the EU addressing the missing intelligence, and the wastefulness?

TripTilt /// tt

@EUCommission
Don't fall for Meta's attempt to tell the world that releasing the weights of their model, and just the weights, for free is open source. It is not!
We need to see the training data!

DELETED

@EUCommission Therefore, the risk of intelligent beings is ordered by profession.
Can we have the same for humans too?

DELETED

@EUCommission Anyway, only proprietary models are subject to those rules.
Free ones remain free.

Miguel de Icaza

@EUCommission @drewmccormack If the act is what is described in this toot, it looks quite good.

HellPie

@Migueldeicaza @EUCommission @drewmccormack

I was typing out a long-ass post about how (expectedly) abusive and oppressive it gets, but I found an article from Euractiv explaining the major pain points.

Tl;dr: there are limits even for the police, but the second the words "national security" are uttered (e.g. they have been screamed over and over during pro-Palestine protests, anti-coal protests, etc.), the police can do whatever the fuck they want, with no limits of any kind.

euractiv.com/section/artificia

artisanrox

@EUCommission

USA: can we have this too?

also USA: :blobfoxdealwithitfingerguns: NO :blobfoxdealwithitfingerguns:

USA: :blobfoxangrylaugh:

KasTas

@EUCommission I'm a little bit worried that "spam filters" are seen as "minimal risk", despite being a technology that may limit access to information for huge numbers of people. And a bunch of secretive manipulations of filtered/unfiltered content could be baked into the model.

xs4me2

@EUCommission

Great initiative; the lawlessness of the AI Wild West needs to be addressed.

Ditol

@EUCommission
Very good! Do I understand correctly that the German SCHUFA is now outlawed?
#SCHUFA #AIact #socialscoring

Evan

@EUCommission From what I'm seeing, this seems good: it puts reasonable limitations on use without an outright ban that would push the technology underground and out of public view.

nyx

@EUCommission Spam Filters. So AI that scans and categorizes content and decides how to proceed based on its findings is low risk. Got it.

David

@EUCommission I do not see anything there against implementing an AI as a new EU Commission.

Maybe we will have a well-defined law to control and limit AI that hasn't been tailored for the American companies developing AI. 😒

Caleb Maclennan

@EUCommission How is spam filtering *not* social scoring? Obviously nobody familiar with even 20-year-old Bayesian training, much less modern infrastructure (as implemented by the likes of Gmail), had any input into this mess.
