Gawain(DarkGriffin)

@aeva This really isn't that surprising. Roll a 20-sided die a hundred times, and there will most likely be some 20s.
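To put a number on the dice analogy, the chance of seeing at least one 20 in a hundred rolls follows from the complement rule:

```python
# P(at least one 20 in 100 rolls of a fair d20)
# = 1 - P(no 20 on any single roll)
# = 1 - (19/20)**100
p = 1 - (19 / 20) ** 100
print(f"{p:.4f}")  # ~0.9941
```

So a "rare" per-roll event becomes near-certain over enough trials, which is the point being made about memorized outputs showing up across many generations.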

If I had to guess, the "creative" setting sometimes pushes the model off the expected next word and it goes elsewhere instead, and so it starts spilling out raw training data because there's no grounding context. Of course, whatever data you put in can come back out; that's kind of the point?
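One common reading of a "creative" setting is temperature scaling of the sampling distribution. This is a hypothetical sketch (the logits, function name, and values are made up for illustration, not taken from any real system) showing how a higher temperature flattens the distribution and gives unlikely continuations, possibly including memorized text, more probability mass:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw scores to probabilities. Dividing by a larger
    temperature flattens the distribution before normalizing."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Toy scores: one very likely next token, two unlikely ones.
logits = [5.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.7))  # peaked: the top token dominates
print(softmax_with_temperature(logits, 2.0))  # flatter: rare tokens gain mass
```

Under this assumption, the higher setting doesn't create new data; it just makes low-probability paths get sampled more often.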

Not to say this isn't a problem, but it's a problem with the training data, not the "AI" algorithm.

Gawain(DarkGriffin)

@aeva We've had this issue from the moment big companies started using The Pile and other such sources to build these. No amount of engineering can fix it, because the AI becomes useless as an information/idea/concept search tool if it truly has no base.

What these big companies really should be doing is spending a lot of money making fresh training data from scratch. But we're talking billions of writing/drawing assignments that can't be used elsewhere. I don't think any of them can afford to.

Gawain(DarkGriffin)

@aeva Also, it should be noted that even humans can't produce 100 percent fresh data. We too memorize things and repeat them; that's kind of the core principle of how language has meaning. So I don't think the goal proposed in this article is achievable without making the robot speak a unique language that has nothing to do with reality.

Just from a philosophy perspective, I couldn't begin to describe an answer without repeating words or phrases that have meaning elsewhere.

Gawain(DarkGriffin)

@aeva edit: my apologies, that was uncalled for. Been having a rough day and took it out here.

You are correct that I don't have a citation for any of the above. I've messed with these (scripted/trained my own from scratch just to learn the techniques), but I'm no expert. As a writer, these machines fascinate me because they push at the edge of how language is just a way to capture much more complex ideas and thoughts.

To swing this more positive, what are your thoughts on the vulnerability?

aeva

@darkgriffin I think studies like this that reveal failure cases in neural nets are important, because dispelling the magic is necessary for people to be able to think critically about the limits and real applications for the technology. Otherwise people think it's something it isn't and run full speed in directions that hurt people. Besides the tragic consequences to human lives, willful ignorance is also bad because it inevitably poisons the well for legitimately good uses for the technology.

SnoopJ

@darkgriffin @aeva the authors are not contending that it's "surprising" and I don't think any experienced practitioners are surprised. The work is novel just the same.

Gawain(DarkGriffin)

@SnoopJ @aeva I agree there.

It's just that it feels obvious to me, like saying Google search shows you websites.

I do appreciate the efforts to research and compile data on this, though. We all have to be aware of how these things work and stop calling them magical AI boxes. 😄
