@NewtonMark the nature of #LLM training is that all training data is inherently shared with all users of the LLM. So their "opt-out" is Frankfurtian #bullshit. I'd love to see what precisely they're claiming to do for the customers who "opt out."
@grahamsz @NewtonMark @FeralRobots @NewtonMark You can definitely fine-tune a pre-trained GPT model pretty cost-efficiently. Considering my mid-size company spends 5 figures a year on Slack, I expect they can afford it. Though I suspect if it's any good there will be a hefty upcharge for it.

@grahamsz @NewtonMark @FeralRobots @NewtonMark Fine-tuning will definitely stop stuff coming up in other people's results, because you make an extra layer over the existing model with new weights. You also maybe wouldn't even need that; you could use an embedding model to place each conversation into a high-dimensional space, then when you ask a question of the model it searches the relevant conversations to build a better prompt (rough sketch below). For a lot of use cases I think that could work just fine.

@grahamsz @FeralRobots @NewtonMark Even if you could train an LLM per workspace, there's a possibility of prompt injection attacks leaking private messages and channels within the workspace. (And the privacy policy doesn't allow you to opt out of that.) #Slack
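For what it's worth, a minimal sketch of that embed-and-retrieve idea, assuming the sentence-transformers package; the model name, the sample conversations, and the build_prompt helper are placeholders, not anything Slack has announced:

```python
# Embed each conversation, retrieve the most similar ones for a question,
# and stuff them into the prompt. All names below are illustrative.
from sentence_transformers import SentenceTransformer
import numpy as np

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model would do

# In a real workspace these would be Slack threads pulled via the API.
conversations = [
    "Deploy checklist: run migrations before flipping the feature flag.",
    "The Q3 pricing discussion settled on a 10% increase for enterprise.",
    "On-call handoff notes: the ingest service alarms are noisy after 2am.",
]

conv_vectors = embedder.encode(conversations, normalize_embeddings=True)

def build_prompt(question: str, top_k: int = 2) -> str:
    """Find the most similar conversations and prepend them to the question."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = conv_vectors @ q_vec            # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:top_k]  # indices of the top matches
    context = "\n".join(conversations[i] for i in best)
    return f"Use only this workspace context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What did we decide about enterprise pricing?"))
```

Nothing in this approach updates model weights, so one workspace's messages only live in its own retrieval index; the flip side, per the prompt-injection point above, is that anything retrieved into the prompt can carry injected instructions along with it.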
@FeralRobots @NewtonMark I think you could reasonably fine-tune an LLM on a per-customer basis. I'd personally like our Slack search to work better, and I think that could be a really compelling business tool. But I've already opted our company out of the general models, because I can't see how it won't share at least some data.
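One concrete way such per-customer fine-tuning could be done is with LoRA adapters: small extra weight matrices trained on top of a frozen base model, roughly the "extra layer over the existing model with new weights" mentioned upthread. A hedged sketch, assuming the Hugging Face transformers and peft libraries; the base model, hyperparameters, and paths are illustrative only:

```python
# Per-customer LoRA adapter on top of a frozen base model.
# Everything here is a stand-in, not Slack's actual setup.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")      # stand-in base model
tokenizer = AutoTokenizer.from_pretrained("gpt2")        # used to tokenize training data

adapter_cfg = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # GPT-2's attention projection layer
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, adapter_cfg)  # base weights stay frozen
model.print_trainable_parameters()         # only the adapter weights would train

# ... train on one customer's Slack export here, then save just the adapter:
# model.save_pretrained("adapters/customer_a")
```

Each customer's adapter is stored and loaded separately, which is the property the thread is counting on to keep one workspace's data out of another's results.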