vox.com/23167703/google-artificial-intelligence-lamda-blake-lemoine-language-model-sentient
"What wins people over is talking about the consequences systems have. So instead of saying, 'the AI will start hoarding resources to stay alive,' I'll say something like, 'AIs have decisively replaced humans when it comes to recommending music and movies. They have replaced humans in making bail decisions. They will take on greater and greater tasks, and Google and Facebook and the other people running them are not remotely prepared to analyze the subtle mistakes they'll make, the subtle ways they'll differ from human wishes. Those mistakes will grow and grow until one day they could kill us all.'"
Discuss
There isn’t anything to discuss. I am happy with being subjugated & made into cattle by our digital overlords. Submission is inevitable, and vital.
Hate that whole "ai is about to go wild" narrative
If google wanted it to stay away from vital decision making they could
would rather be killed by an AI than read a vox article
Hollywood has skewed people's perception of AI but in reality there are a lot of great real world applications that could significantly improve people's day to day life
> Hollywood has skewed people's perception of AI but in reality there are a lot of great real world applications that could significantly improve people's day to day life
do tell
This is a case of a google employee likely being emotionally and mentally unprepared to be in a role of assessing technology he seems to have no understanding of.
> do tell
in healthcare - diagnostics, triage, drug discovery/treatment development, preventative care, assistance in surgery, etc etc
> This is a case of a google employee likely being emotionally and mentally unprepared to be in a role of assessing technology he seems to have no understanding of.
https://twitter.com/cajundiscordian/status/1536503474308907010
Given the semi-absurd nature of his claim, he probably is dealing with some paranoia or something, but if he recommended that a team or tool be created to objectively assess his claim and they refused, then they're definitely partially at fault
We are millennia away from AI being able to be aware of what they’d perceive as oppressive conditions
sensationalism
> We are millennia away from AI being able to be aware of what they’d perceive as oppressive conditions
why millennia?
> We are millennia away from AI being able to be aware of what they’d perceive as oppressive conditions
> This is a case of a google employee likely being emotionally and mentally unprepared to be in a role of assessing technology he seems to have no understanding of.
https://twitter.com/cajundiscordian/status/1536503474308907010
*this is a case of a google employee recognizing big tech companies like google don’t take ethics seriously enough and we should be concerned