@fasnix @pluralistic it does not find solutions. It spits out what is likely to match the context, i.e. not only does it not think out of the box, it even stays relatively close to the centre of the box
@fasnix @pluralistic that is of course relative to the total size of the box and the context; I’m sure they would not use something like ChatGPT for that but something specialised.
@mirabilos
In a scientific context, "AI" was prompted to "find highly toxic chemicals" (not exact wording).
It found some 40,000+ combinations, many of them also highly explosive, if I remember correctly.
Scientists themselves would never have thought of those.
After some debate about whether to publish them, they were published.
https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx
---
How far "in the centre of the box" is this example, in your opinion?