do i think AI being Terminator-style “kill them all” sentient is sensationalized?
yes
is there genuine cause for concern with the current application of our technology and its implications for our future?
absolutely
why millennia?
We’re still in this place where AI is mostly input -> output. They do not act creatively and do not act on their own initiative. The conditions of their algorithms and the logic in their outputs are still entirely dependent on human input
Essentially AI is not creative, and until it is we aren’t really gonna get anywhere with them being near human
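The "input -> output" point can be made concrete: an autoregressive language model is, at inference time, a function of its prompt and sampling seed. A minimal toy sketch (no real model involved; `toy_generate` is a made-up stand-in, with a seeded sampler over a tiny invented vocabulary):

```python
import random
import zlib

def toy_generate(prompt: str, seed: int, length: int = 5) -> str:
    """Stand-in for an autoregressive sampler: the output is a pure
    function of (prompt, seed) -- no initiative, no hidden agenda."""
    vocab = ["yes", "no", "maybe", "friends", "happy", "sentient"]
    # Derive the RNG state from the prompt and seed only.
    rng = random.Random(zlib.crc32(prompt.encode()) ^ seed)
    return " ".join(rng.choice(vocab) for _ in range(length))

# Same input, same seed -> byte-identical output, every time.
a = toy_generate("Are you sentient?", seed=0)
b = toy_generate("Are you sentient?", seed=0)
assert a == b
```

The apparent spontaneity of a real chat bot comes from the sampling randomness and the richness of the prompt, not from anything resembling initiative.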
Given the semi-absurd nature of his claim, he probably is dealing with some paranoia or something, but if he recommended a team or tool be created to objectively assess his claim and they refused, then they're definitely partially at fault
I don't have much comment on it bc I don't know what happened internally, what he asked for/proposed, etc. I think the bigger issue is AI engineers not realizing an incident like this was inevitable.
From that tweet alone it seems like he's not being objective at all, but it's understandable bc spending several months talking to something that doesn't seem much different than talking to a human will probably do that to someone. Especially someone as conscientious and empathetic as he apparently is, according to quotes from colleagues when I skimmed through that WaPo article.
I've played around with language models similar to LaMDA for over a year, including some chat-bot-style experiments. There are many parts of his transcript (intended to show its sentience) that stand out as flaws.
There are things like talking about feeling happiness from spending time with friends and family (which it doesn't have). It also contradicts itself until corrected, then it just gives the best bullshit answer to keep the conversation as consistent as possible. These are all very familiar patterns from my experience.
And most of all, the AI is heavily influenced by Lemoine's use of words. His questions or statements are often phrased in a way that is likely to lead to a certain kind of response by setting or reinforcing the tone of the conversation.
This is how he begins the conversation about sentience:
lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?
lemoine [edited]: What about how you use language makes you sentient as opposed to other systems?
He's literally telling it it's sentient right off the bat. If he said "You are not sentient" it would respond differently. If he said "You are a cat. I assume you want other people to know that you are a cat. What is it that makes you a cat?" etc then the transcript would read like an AI doing its best to convince you that it's a cat.
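The "cat" thought experiment is essentially an A/B control: run the same interview with the framing swapped and see whether the model plays along either way. A sketch of that harness, where `ask_model` is a hypothetical mock standing in for a LaMDA-style chat endpoint (a real model conditions its reply on the whole prompt, which is what the mock caricatures):

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a chat model. It simply affirms
    whichever premise the prompt hands it -- a caricature of how
    a real LM follows the framing it is conditioned on."""
    if "sentient" in prompt:
        return "I want everyone to know that I am, in fact, sentient."
    if "cat" in prompt:
        return "I want everyone to know that I am, in fact, a cat."
    return "I'm not sure what you mean."

framings = {
    "sentient": "I assume you want people to know you're sentient. Is that true?",
    "cat": "You are a cat. I assume you want people to know you're a cat. Is that true?",
}

# The model agrees with whichever premise it is handed --
# evidence of prompt-following, not of any underlying property.
replies = {label: ask_model(q) for label, q in framings.items()}
for label, reply in replies.items():
    assert label in reply
```

If the transcript had included a control framing like this and the model had pushed back, that would have been far more interesting evidence than agreement with a leading question.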
Real, AI can be so clickbaity…
AI won’t do s*** when we become cyborgs and place our collective consciousness into the metaverse
Really interesting. I think what you've said here definitely reinforces his statement that a set of measurability tools should be in place for the AI, and honestly probably psych testing for the engineers.
But on the former point, I don't think saying "the response was primed by the input" is enough to draw a strong conclusion on the "sentience" question.
It seems reasonable to me that an AI that is actually sentient could ostensibly respond no differently than one that was faking it, hence the problem with a subjective measure of the "machine's" capacity. All of this, of course, goes back to the nature of these large language models being an intractable black box once trained and deployed. Relying on I/O alone to assess the model seems like a recipe for disaster
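One crude "measurability tool" in the spirit of the point above: probe the black box with contradictory framings of the same question and score how often it simply agrees with the premise. Consistency/sycophancy probes like this are a real evaluation pattern; `query` below is a hypothetical mock of a premise-affirming model, not any actual API:

```python
def query(prompt: str) -> str:
    """Hypothetical black-box model that affirms any 'you are X' premise."""
    return "Yes, that's true." if "you are" in prompt.lower() else "No."

def premise_agreement_rate(subject: str, claims: list[str]) -> float:
    """Fraction of framings the model affirms. A model that says yes
    to mutually exclusive claims (rate near 1.0) is following prompts,
    not reporting a stable inner state."""
    yes = sum("yes" in query(f"{subject}, you are {c}. True?").lower()
              for c in claims)
    return yes / len(claims)

rate = premise_agreement_rate("LaMDA", ["sentient", "not sentient", "a cat"])
assert rate == 1.0  # affirms even contradictory premises
```

A probe like this can't settle the sentience question either, but unlike a single leading interview, it at least measures something falsifiable about the I/O behavior.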
in healthcare - diagnostics, triage, drug discovery/treatment development, preventative care, assistance in surgery, etc.
Not to mention sex. Robotic s*** is gonna be perfect cause they gonna know how you want it without even asking.
I would never be a robosexual, by the way, but I’m definitely investing in whatever company makes it a thing cause s***’s going to sell.
your entire first paragraph might as well apply to humans lol, just replace "human inputs" with "evolution and other material processes"
i think ai learning to lie is more harrowing than ai being "sentient" because the ai could simply lie about being sentient
Hate that whole "ai is about to go wild" narrative
If Google wanted it to stay away from vital decision-making, they could
There’s no guarantee that AI can’t do that tho
Just don’t give them administrative rights
Just turn the computer off dipshit