Email or username:

Password:

Forgot your password?
Top-level
FeralRobots

@NewtonMark the nature of #LLM training is that all training data is inherently shared with all users of the LLM. So their "opt-out" is Frankfurtian #bullshit. I'd love to see what precisely they're claiming to do for the customers who "opt out."

7 comments
grahamsz

@FeralRobots @NewtonMark I think you could reasonably fine tune an LLM on a per-customer basis. I'd personally like our slack search to work better and thing that could be a really compelling business tool. But I've already opted our company out from the general models, because I can't see how it won't share at least some data.

FeralRobots

@grahamsz @NewtonMark
That's why I'd like to know what they're promising. The disclosure in the ToS is pretty vague. I'm having a hard time imagining how they make non-pooled training work cost-effectively, so my bias is going to be to assume it's possible to prompt hack to get at a company's Slack chat content.

grahamsz

@FeralRobots @NewtonMark You can definitely fine-tune a pre-trained GPT model pretty cost-efficiently. Considering my mid-size company spends 5 figures a year on slack I expect they can afford it. Though I suspect if it's any good there will be a hefty upcharge for it.

FeralRobots

@grahamsz @NewtonMark
Right, but
a) is fine-tuning enough? we've been told before that tuning would keep stuff from showing up that neverthless keeps showing up.
b) how much will that surchage be?
c) will that surcharge actually end up covering cost? I.e., are Slack setting themselves up for an fall in a couple of years?

grahamsz

@FeralRobots @NewtonMark Fine-tuning will definitely stop stuff coming up in other people's results, because you make an extra layer over the existing model with new weights.

You also maybe wouldn't even need that, you could use an embedding model to place each conversation into a high dimensional space, then when you ask a question of the model it searches all relevant conversations to build a better prompt. For a lot of use cases i think that could work just fine.

@overflow 🏳️‍⚧️

@grahamsz @FeralRobots @NewtonMark Even if you could train an LLM per workspace, there's a possibility of prompt injection attacks leaking private messages+channel within the workspace. (And the privacy policy doesn't allow you to opt out of that.) #Slack

Go Up