FeralRobots

@grahamsz @NewtonMark
That's why I'd like to know what they're promising. The disclosure in the ToS is pretty vague. I'm having a hard time imagining how they make non-pooled training work cost-effectively, so my default assumption is that it's possible to prompt-hack your way into a company's Slack chat content.

grahamsz

@FeralRobots @NewtonMark You can definitely fine-tune a pre-trained GPT model pretty cost-efficiently. Considering my mid-size company spends five figures a year on Slack, I expect they can afford it. Though I suspect that if it's any good, there will be a hefty upcharge for it.

FeralRobots

@grahamsz @NewtonMark
Right, but
a) is fine-tuning enough? We've been told before that tuning would keep stuff from showing up that nevertheless keeps showing up.
b) how much will that surcharge be?
c) will that surcharge actually end up covering the cost? I.e., are Slack setting themselves up for a fall in a couple of years?

grahamsz

@FeralRobots @NewtonMark Fine-tuning will definitely stop stuff coming up in other people's results, because you train an extra layer of new weights on top of the existing model, separately for each customer.
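A toy sketch of why per-customer tuning keeps data separate (the one-dimensional "model", the `FineTunedModel` name, and the training numbers are all made up for illustration; real fine-tuning trains adapter layers with vastly more parameters):

```python
class FineTunedModel:
    """Shared frozen base weight plus per-customer extra weights.

    Purely illustrative: the 'model' here is a single multiplication,
    standing in for a large pre-trained network.
    """

    def __init__(self, base_weight):
        self.base = base_weight   # shared across customers, never updated
        self.adapters = {}        # customer -> extra weight, trained per customer

    def predict(self, customer, x):
        delta = self.adapters.get(customer, 0.0)
        return (self.base + delta) * x

    def finetune(self, customer, x, target, lr=0.1, steps=50):
        # Gradient descent on this customer's delta only; other
        # customers' predictions are untouched by this training.
        delta = self.adapters.get(customer, 0.0)
        for _ in range(steps):
            err = (self.base + delta) * x - target
            delta -= lr * err * x
        self.adapters[customer] = delta
```

Training on one customer's data changes only that customer's adapter, so nothing learned from their conversations shows up in anyone else's outputs.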

You also maybe wouldn't even need that: you could use an embedding model to place each conversation into a high-dimensional space, and then when you ask a question, the system retrieves the most relevant conversations to build a better prompt. For a lot of use cases I think that could work just fine.
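A minimal sketch of that retrieval idea, using a bag-of-words vector as a stand-in for a real embedding model (the function names and the toy vectors are mine, not anything Slack has described):

```python
import math

def build_vocab(texts):
    """Assign each word an index. A real deployment would use a learned
    dense embedding model instead of this bag-of-words stand-in."""
    vocab = {}
    for text in texts:
        for word in text.lower().split():
            vocab.setdefault(word, len(vocab))
    return vocab

def embed(text, vocab):
    """Map text to a normalized word-count vector."""
    vec = [0.0] * len(vocab)
    for word in text.lower().split():
        if word in vocab:
            vec[vocab[word]] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))

def build_prompt(question, conversations, k=2):
    """Retrieve the k conversations most similar to the question and
    prepend them as context for the model."""
    vocab = build_vocab(conversations + [question])
    q = embed(question, vocab)
    ranked = sorted(conversations,
                    key=lambda c: cosine(q, embed(c, vocab)),
                    reverse=True)
    context = "\n".join(ranked[:k])
    return f"Context:\n{context}\n\nQuestion: {question}"
```

On this approach, isolation would come from which conversations the index is allowed to search, not from the model's weights.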
