Darius Kazemi

It seems that US Supreme Court Justice Neil Gorsuch and I are in agreement -- I have long claimed that the output of a bot is a speech act by its creator. If my bots say something shitty, that is on me. I think the creator of an AI should be liable for its outputs.

washingtonpost.com/politics/20

Darius Kazemi

Opposing arguments are like this one, also quoted from the Washington Post article above:

"generative AI typically produces content only in response to prompts or queries from a user; these responses could be seen as simply remixing content from the third-party websites, whose data it was trained on"

If you are taking that broad a definition of "it's just a remix" then buddy, I have news for you about literally all creativity, language, and culture. By that logic, no one is liable for anything.

Darius Kazemi

Btw follow @willoremus for interesting reporting and analysis like the original linked article, you won't regret it

felix stalder

@darius Yeah. Frankly, I don't get how "it's just remix" could be construed to mean that the entity doing the remix is not responsible for the quality of the remix.

This does not even hinge on the question of whether everything is a remix or whether original creation exists. (I strongly lean toward the former, even if there are very original remixes...)

Darius Kazemi

@festal right. It's just a very weak argument from someone desperate to deny liability

Jesse Janowiak

@darius My devil’s advocate argument would be that the user’s prompts are not just triggering responses, they are actively forming them. There is only so much safeguarding that an AI developer can implement against a user intent on making the AI say something dangerous or libelous.

gerbi

@darius AI (or more exactly machine learning) just moves the responsibility to whomever chooses the training data. And not even a conscious AI changes that, because even a conscious AI had someone choosing training data for it.

Ali Alkhatib

@darius the apparent crisis this seems to be causing people is a little baffling. i'm honestly a little curious what outcome people thought we would be heading toward that would've been at all coherent.

Darius Kazemi

@ali people will bend over backwards to come up with the most barely coherent framework to ensure they are not liable for their actions

Ali Alkhatib

@darius i can't help but wonder if this is why tech companies are so weirdly keen to entertain goofball claims that AIs are sentient

if they can just clear that gorge, they'll be home free! lol

Kevin Karhan :verified:

@darius +9001%

This is the same in #Germany, where one cannot deny or disavow #copyright, and one is always responsible for the outputs of one's product unless they can evidence otherwise [i.e. sabotage by the end user]...

Jesse Baer 🔥

@darius Seems to me the bigger issue with this approach will be determining who created a given bot.

andeux

@darius What exactly is the alternative? If creators are not responsible for the outputs of algorithms, then ... nobody is?
The people who get all worked up about the existential threat of "superintelligent AI" seem awfully blasé about not-so-intelligent systems spouting libel, giving inaccurate medical advice, or running people over with no accountability.

Chip Stewart

@darius I wrote about this & summarized a number of approaches a few years ago in my sci-fi & future media law book, with examples from folks like @marklemley & @gunkel. In short - maybe assign liability to programmers or hosts, but maybe also terminate the bots themselves?

Cadence Larissa Beth Beresford

@darius Who is responsible for what an AI creates? The AI's creator, the AI's user or the AI itself?

I say the answer is yes, all of the above. What it is trained with is important, what it is asked to do is important, and its own capacity for reason is important.

Frank Cote

@darius it's an interesting point. I hope you're not a pompous arrogant bastard like he is.

Anthony DiPierro

@darius Seems the question for 230 protection is whether the information was provided by another information content provider.

Which depends on the information. Was it a hallucination or was it just repeating someone else?

DELETED

@darius What about the direction the copyright office is taking where the output of AI is not eligible for copyright? msn.com/en-us/news/politics/us

Calling these outputs the speech of the person/company who coded the AI would seem to imply that SOMEONE should be able to copyright these outputs.

Personally I prefer the idea that all these outputs are automatically public domain, given that they didn't get permission for most/any of the code/images/text in the training corpus.

But if strict liability hamstrings these LLMs, that works too.


SJ

@darius @anildash okay now it’s going to get interesting

jack the nonabrasive

@darius @irwin Yet the output of a generative ML model isn't copyrightable. It seems like, if this becomes precedent, holding a bot creator liable for the output of a bot built on generative tech is a wedge that could be used to pressure for copyright on generative tech.

Which might not be desirable. Unless it’s a derivative work and the author owes royalties to holders of copyright on the training corpus?

Avi Rappoport (avirr)

@darius Those who make the profit must also take responsibility — @j2bryson has been saying this for years

Jules 🍺

@darius I was going to say that it would mean we are responsible for what our kids say, but then I remembered that these AI bots are not independent entities, unlike our progeny, so yes, the corporations are liable.
