Elizabeth

@juliana @jonny out of interest, if use of LLMs were limited to this, would the energy demand per request be less, or is it just the number of requests would substantially decrease? (Does that question even make sense?)

7 comments
takin' a break

@luminesce @jonny the energy consumption per request would likely increase, because there's more overall computation involved in the process. the LLM itself would be only one step in an arbitrarily complex chain of queries, depending on the precise application

from what i understand, the primary energy consumption of LLMs comes from the training process, but i don't know the full details

the specific application i was seeing discussed was a way for doctors to use human language to query medical records and collate data to facilitate diagnosis. so while it's important to keep in mind the environmental impact of using LLMs, it's also important to weigh that against the potential benefits.

frankly, i'd rather we weren't using LLMs at all, but i figure this is one of the less bad ways they can be used

jonny (good kind)

@juliana
@luminesce
Yes totally^ it depends on the application re: energy use. Chaining a bunch of models together would probably use more energy, but google et al want to use smaller models hooked up to knowledge graphs to make runtime inference feasible as a consumer product too, so that kind of application would be designed to use less.

The medical case has its own set of fun complications 💓
jon-e.net/surveillance-graphs/

takin' a break

@jonny bestie how sad are you that xanadu never happened lol

jonny (good kind)

@juliana
Like on the scale of 0 to semantic web, about an 8 lol

jonny (good kind)

@juliana
Anytime Xanadu comes up I also think of @beka_valentine 's threads where I first learned why it would have been so good.

Elizabeth

@juliana @jonny yes, the key to limiting the environmental impacts is limiting use cases to those that are truly beneficial, rather than generating demand for trivial or misleading/chaos-inducing uses

jonny (good kind)

@luminesce
@juliana
That's one of the reasons they're pursuing it, yes. Smaller models that can be conditioned on factual information/formal, domain-specific models. E.g. see
arxiv.org/abs/2203.05115

Unfortunately this kind of application has its own kind of Really Bad Outcomes that the AI critical folks largely have not caught up to yet :(, see

jon-e.net/surveillance-graphs/

The tech could be really cool. The problem, as with everything, is capitalism.
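[Editor's note: a minimal sketch of the "smaller model conditioned on retrieved facts" pattern described above. The knowledge base, keyword scoring, and prompt shape are all toy assumptions standing in for a real knowledge graph and model; the point is only the shape of the pipeline, retrieve first, then condition the model on the result instead of relying on its weights alone.]

```python
# Toy retrieval-augmented querying: look facts up in a tiny "knowledge base"
# (a stand-in for a knowledge graph), then build a prompt that conditions a
# hypothetical small model on those facts.

KNOWLEDGE_BASE = {
    "aspirin": "Aspirin is contraindicated with warfarin (bleeding risk).",
    "warfarin": "Warfarin requires regular INR monitoring.",
    "metformin": "Metformin is a first-line treatment for type 2 diabetes.",
}

def retrieve(query: str, kb: dict) -> list[str]:
    """Return facts whose key appears in the query (stand-in for a graph lookup)."""
    q = query.lower()
    return [fact for key, fact in kb.items() if key in q]

def build_prompt(query: str) -> str:
    """Condition the (hypothetical) model on the retrieved facts only."""
    facts = retrieve(query, KNOWLEDGE_BASE)
    context = "\n".join(f"- {f}" for f in facts)
    return f"Facts:\n{context}\n\nQuestion: {query}\nAnswer using only the facts above."

prompt = build_prompt("Is aspirin safe alongside warfarin?")
print(prompt)
```

Because the retrieval step narrows the context to a handful of facts, a much smaller (cheaper-to-run) model can answer grounded questions, which is the efficiency argument for this architecture.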
