Okay. They fed Google's NotebookLM a book called "The History of Philosophy Encyclopedia" and got the LLM to generate a podcast about it in which it "thinks" humans are useless.
Congratulations? Like, so what? It's no secret that a model's output depends on its input and training data. A "kill all humans" output is so common at this point, especially when someone has a vested interest in generating provocative content, that it's banal.
Color me unimpressed.