cross-posted from: https://lemmy.world/post/23009603
This is horrifying. But also sort of expected. Link to the full research paper:
That thumbnail makes me not want to watch the video.
You’re not missing anything. In the first minute: “Is ChatGPT AGI? It said it would copy itself to another server if it got shut down!”
I linked the PDF too, so you can read it. I know the YouTube title is very clickbait, but it is truly worth the watch IMHO.
More no-clicky
I don’t understand what you mean, but no worries. The sources are there to consume at will. I am not the author of the material, I just came across it and wanted to share. Anyways.
Not really caught. The devs intentionally connected it to specific systems (like other servers), gave it vague instructions that amounted to “ensure you achieve your goal in the long term at all costs,” and then let it do its thing.
It’s not like it did something it wasn’t instructed to do; it didn’t perform some menial task and then also invent its own secret agenda on the side when nobody was looking.
Soon we will not talk about “weapons of mass destruction” anymore, but about “weapons of truth destruction”.
They are worse.
Whenever this topic comes up, I’d like to refer to Robert Miles and his continuing excellent work on the subject.
I did say at one point that a self-conscious AI had a slight chance of actually ending this loop by sabotaging itself / the company that made it. But a slight chance is too thin to hope for.
TFW an LLM might be better at resolving cognitive dissonance than its creators and stakeholders.