

Current LLMs are just that: large language models. They're incredible at predicting the next word, but they can't reliably do anything beyond that, like fact-checking or playing chess. The theoretical AI that "could take over the world" like Skynet is called "Artificial General Intelligence" (AGI). We're nowhere close yet; don't believe OpenAI when they claim otherwise. That means the biggest risk right now is a human deciding to put an LLM "in charge" of an important task, one where a mistake could cost lives.
Fully agreed. There's some stuff in the list that could leak server info or metadata about available content to the public, but the rest seems to require prior knowledge to exploit, such as valid user IDs.
That doesn't mean these aren't issues, but they're not "take your Jellyfin down now" issues either.
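If you want to see what an unauthenticated client can learn from your own server, here's a minimal Python sketch. It probes /System/Info/Public, which is Jellyfin's intentionally public info endpoint; the server URL is an assumption, and the worry in the issue list is other endpoints answering without auth the way this one does.

```python
import requests

# Assumed local Jellyfin address -- change to your own server.
BASE = "http://localhost:8096"

# /System/Info/Public is public by design: no API key or login required.
# It returns basic server info (name, version, id) as JSON.
resp = requests.get(f"{BASE}/System/Info/Public", timeout=5)
print(resp.status_code)
print(resp.json())
```

Running this against your server from outside your network is a quick sanity check on what's actually exposed to the internet.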