In an open letter published on Tuesday, more than 1,370 signatories—including business founders, CEOs and academics from various institutions including the University of Oxford—said they wanted to “counter ‘A.I. doom.’”
“A.I. is not an existential threat to humanity; it will be a transformative force for good if we get critical decisions about its development and use right,” they insisted.
I hate this Skynet discourse. Like, no fucking shit LLMs aren’t Skynet; they’re not even AI. Treating that as a legitimate criticism that needs to be debated is blatantly just an attempt to distract from the real issues and criticisms.
There are real issues surrounding how these models are trained, how the training data is selected, and who gets compensated for that data, to say nothing of companies using these tools to devalue skilled workers and cut their pay.
But they don’t have to engage with any of that, because they get to debate whether or not the shitty sitcom-script autocomplete program will launch nukes or make paper clips out of people.