pinkdrunkenelephants

@mcc They should just make a license that explicitly bans AI usage then.

mcc

@pinkdrunkenelephants The licenses already bar this, because they govern derivative works. If they can make a derivative work non-derivative just by labeling it "AI", then even if we add a clause banning "AI", the AI companies can simply rename "AI" to "floopleflorp" and say "Ah, but your license only bans 'AI'; it doesn't ban 'floopleflorp'!"

pinkdrunkenelephants

@mcc They can rename it Clancy for all it matters. AI is still AI and actions don't just lose meaning because of evil people playing with language.

mcc

@pinkdrunkenelephants But AI is not AI. The things that they're calling "AI" are just some machine learning statistical models. Ten years ago this wouldn't have been considered "AI".

pinkdrunkenelephants

@mcc Doesn't matter, what matters is the definition behind the word. That is what licenses ought to ban outright.

It's like saying rape is perfectly legal so long as we call it forced sex. Who would believe that who wasn't already predisposed to rape?

Don't fall for other people's manipulative mind games.

mcc

@pinkdrunkenelephants The definition behind the law is, again, decided by humans, who are capable of inconsistency or poor decisions. Rape can be legal in New York, because rape there is legally defined by the use of certain specific genitals. See E. Jean Carroll v. Donald J. Trump.

pinkdrunkenelephants replied to mcc

@mcc And no one accepts that because of what I'm saying. A rose by any other name would smell as sweet. People need to start recognizing that fact. That's the only way things will change.

mcc replied to pinkdrunkenelephants

@pinkdrunkenelephants Well, per my belief as to the meaning of words, ML statistical models are derivative works like any other, and my licenses, which place restrictions on derivative works, already apply to ML statistical models.

pinkdrunkenelephants replied to mcc

@mcc The situation is sad all-around.

h3mmy :v_enby: replied to mcc

@mcc @pinkdrunkenelephants I agree that ML models trained on a set of data that contains my code would be a derivative work.

Personally I want my open source work to be used as a public good. A derivative work that is proprietary and used for corporate profit is not for the public benefit, and I find that distasteful.

I want to say that I'd feel less spurned if the ML models were open source, but I don't really know if that's true. Generative models are easily weaponized against the public good as well.

datarama

@pinkdrunkenelephants @mcc That doesn't work if copyright *itself* doesn't apply to AI training, which is what all those court cases are about. Licenses start from the assumption that the copyright holder reserves all rights, and then the license explicitly waives some of those rights under a set of given conditions.

But with AI, it's up in the air whether a copyright holder has any rights at all.

pinkdrunkenelephants

@datarama @mcc I don't see how it would be up in the air. Humans feed that data into AI and use the churned remains, so it's still a human violating the copyright.

mcc

@pinkdrunkenelephants @datarama Because humans also are the ones who interpret and enforce laws, and if the government does not enforce copyright against companies which market their products as "AI", then copyright does not apply to those companies.

pinkdrunkenelephants

@mcc @datarama I guess that's more of a bribery problem than a legal precedent one, then.

datarama

@pinkdrunkenelephants @mcc In the EU, there actually is some legislation. Copyright explicitly *doesn't* protect works from being used in machine learning for academic research, but ML training for commercial products must respect a "machine-readable opt-out".

But that's easy enough to get around. That's why, e.g., Stability funded an "independent research lab" which did the actual data gathering for them.

mcc replied to datarama

@datarama I consider this illegitimate and fundamentally unfair because I have already released large amounts of work under Creative Commons/open source licenses. I can't retroactively add terms to some of them just because the plain language somehow no longer applies. If I added such opt-outs now, it would be like admitting the licenses previously didn't apply to statistics-based derivative works.

datarama replied to mcc

@mcc I consider it illegitimate and fundamentally unfair because it's opt-out.

pinkdrunkenelephants replied to datarama

@datarama @mcc I wonder why it is people don't just revolt and destroy their servers then. Or drag them into jail.

Why do people delude themselves into accepting atrocities?

datarama replied to pinkdrunkenelephants

@pinkdrunkenelephants @mcc I think if there was a simple clear-cut answer to that, the world would be a *very* different place.

Duchamp Pérez

@mcc @pinkdrunkenelephants @datarama In reality, no, it is not up in the air. OpenAI and the rest are in violation of copyright. If it were you and me, the media companies would be getting ready to make an example of us.

It is only "up in the air" because the laws don't apply to wealthy, powerful people.
