Just a reminder that the "existential risk" from AI is not that we'll somehow build Skynet or the machines from The Matrix.
Nobody is going to give a large language model the nuclear codes.
The existential risk is to marginalized people who will be silently refused jobs or health care or parole, or who will be targeted by law enforcement or military action because of an ML model's inherent bias. And because these models are black boxes, it will be nearly impossible for victims to appeal those decisions.
The existential risk is that the internet, an incredible repository of nearly all human knowledge, will be flooded with so much LLM-generated dreck that locating reliable information will become effectively impossible (scientific journals are already buckling under the same flood of machine-generated spam).
The existential risk is that nobody will be able to trust a photo or video of anything because the vast majority of media will be fabricated.