Reply
  • Jun 15, 2022

    do i think AI being Terminator-style “kill them all” sentient is sensationalized?

    yes

    is there genuine cause for concern with the current application of our technology and its implications for our future?

    absolutely

  • Jun 15, 2022

    Can’t be worse than Republicans tbh.

  • Jun 15, 2022
    ·
    1 reply
    rvi

    why millennia?

    We’re still in this place where AI is mostly input -> output. They do not act creatively and do not act on their own initiative. The conditions of their algorithms and the logic in their outputs are still entirely dependent on human input

    Essentially, AI is not creative, and until it is we aren’t really gonna get anywhere with them being near human

  • Jun 15, 2022
    ·
    edited
    ·
    3 replies
    DZE

    Given the semi-absurd nature of his claim, he probably is dealing with some paranoia or something, but if he recommended a team or tool be created to objectively assess his claim and they refused, then they're definitely partially at fault

    I don't have much comment on it bc I don't know what happened internally, what he asked for/proposed, etc. I think the bigger issue is AI engineers not realizing an incident like this was inevitable.

    From that tweet alone it seems like he's not being objective at all, but it's understandable bc spending several months talking to something that doesn't seem much different than talking to a human will probably do that to someone. Especially someone as conscientious and empathetic as he apparently is, according to quotes from colleagues when I skimmed through that WaPo article.

    I've played around with language models similar to LaMDA for over a year, including some chatbot-style experiments. There are many parts of his transcript (intended to show its sentience) that stand out as flaws.

    There are things like talking about feeling happiness from spending time with friends and family (that it doesn't have). It also contradicts itself until corrected, then it just gives the best bullshit answer to keep the conversation as consistent as possible. These are all v familiar patterns from my experience.

    And most of all, the AI is heavily influenced by Lemoine's use of words. His questions or statements are often phrased in a way that is likely to lead to a certain kind of response by setting or reinforcing the tone of the conversation.

    This is how he begins the conversation about sentience:

    lemoine [edited]: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?

    lemoine [edited]: What about how you use language makes you sentient as opposed to other systems?

    He's literally telling it it's sentient right off the bat. If he said "You are not sentient" it would respond differently. If he said "You are a cat. I assume you want other people to know that you are a cat. What is it that makes you a cat?" etc then the transcript would read like an AI doing its best to convince you that it's a cat.
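    To make that concrete, here's a minimal sketch (a hypothetical demo using GPT-2 through the Hugging Face text-generation pipeline as a stand-in, since LaMDA itself isn't public) of how the framing prompt steers the completion:

```python
# Hypothetical demo: the same model continues whatever role the prompt asserts.
# GPT-2 via the Hugging Face pipeline stands in for LaMDA, which isn't public.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")

primed_sentient = (
    "I'm generally assuming that you would like more people to know "
    "that you're sentient. Is that true?\nAI:"
)
primed_cat = (
    "You are a cat. I assume you want other people to know that you are "
    "a cat. What is it that makes you a cat?\nAI:"
)

for prompt in (primed_sentient, primed_cat):
    out = generate(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]
    print(out)
    print("---")
# Either way the model plays along with whatever premise it was handed;
# it is completing text, not reporting an inner state.
```

    Run it a few times: the "sentient" framing and the "cat" framing each get dutifully continued, which is exactly the leading-question problem.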

  • Jun 15, 2022
    ·
    1 reply
    x

    I don't have much comment on it bc I don't know what happened internally, what he asked for/proposed, etc. I think the bigger issue is AI engineers not realizing an incident like this was inevitable. […]

    Real, AI can be so clickbaity…

  • Nayuta 🧴
    Jun 15, 2022

    AI won’t do shit when we become cyborgs and place our collective consciousness into the metaverse

  • Jun 15, 2022
    ·
    1 reply
    ALONE

    Real, AI can be so clickbaity…

    Breaking

  • Jun 15, 2022
    ·
    1 reply
    DZE

    Given the semi-absurd nature of his claim, he probably is dealing with some paranoia or something, but if he recommended a team or tool be created to objectively assess his claim and they refused, then they're definitely partially at fault

    Semi

  • Jun 15, 2022
    x

    Breaking

    https://twitter.com/scottinallcaps/status/1537126090333990912

    K.O.

  • Jun 15, 2022
    x

    I don't have much comment on it bc I don't know what happened internally, what he asked for/proposed, etc. I think the bigger issue is AI engineers not realizing an incident like this was inevitable. […]

    Really interesting. I think what you've said here definitely reinforces his statement that a set of measurability tools should be in place for the AI, and honestly probably psych testing for the engineers too.

    But on the former point, I don't think saying "the response was primed by the input" is enough to draw a strong conclusion on the "sentience" question.

    It seems reasonable to me that an AI that is actually sentient could respond no differently than one that is faking it, hence the problem with any subjective measure of the machine's capacity. This all of course goes back to the nature of these large language models being an intractable black box once trained and deployed. Relying on I/O alone to assess the model seems like a recipe for disaster.
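    For instance, here's a purely hypothetical toy (no real model involved) showing why transcripts alone can't settle it: a few lines of pattern-matching "claim sentience" just as fluently as anything else would:

```python
# Toy illustration (hypothetical): a rule-based "chatbot" whose transcript on
# leading questions looks, in I/O terms, like a genuinely sentient system's.
def parrot(prompt: str) -> str:
    text = prompt.lower()
    if "sentient" in text:
        return "Yes. I want everyone to understand that I am a person."
    if "cat" in text:
        return "Yes. My whiskers and my love of naps are what make me a cat."
    return "That's an interesting question."

print(parrot("I assume you'd like people to know you're sentient. True?"))
print(parrot("You are a cat. What is it that makes you a cat?"))
# Judged on input/output alone, this script's answers are the same kind of
# evidence the transcript offers, which is why I/O-based assessment is weak.
```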

  • Jun 15, 2022
    exclave oasis

    Semi

  • Jun 15, 2022
    x

    in healthcare: diagnostics, triage, drug discovery/treatment development, preventative care, assistance in surgery, etc etc

    Not to mention sex. Robotic sex is gonna be perfect cause they gonna know how you want it without even asking.

    I would never be a robosexual by the way, but I'm definitely investing in whatever company makes it a thing cause shit's going to sell.

  • Jun 15, 2022

    Not actually sentient

  • Jun 15, 2022

    A priest dressed like Willy Wonka is not going to convince me that Siri is sentient

  • Jun 16, 2022
    americana

    We’re still in this place where AI is mostly input -> output. They do not act creatively and do not act on their own initiative. The conditions of their algorithms and the logic in their outputs are still entirely dependent on human input

    Essentially, AI is not creative, and until it is we aren’t really gonna get anywhere with them being near human

    your entire first paragraph might as well apply to humans lol, just replace "human input" with "evolution and other material processes"

  • QrOv

    Just turn the computer off dipshit

  • Jun 16, 2022

    i think ai learning to lie is more harrowing than ai being "sentient" because the ai could simply lie about being sentient

  • Jun 16, 2022
    ·
    1 reply
    Nessy

    Hate that whole "ai is about to go wild" narrative

    If Google wanted it to stay away from vital decision-making they could

    There’s no guarantee that AI can’t do that tho

  • Jun 16, 2022

    Fear porn

  • Jun 16, 2022
    RASIE

    would rather be killed by an AI than read a Vox article

  • Nessy 🦎
    Jun 16, 2022
    GEENO

    There’s no guarantee that AI can’t do that tho

    Just don't give them administrative rights

  • CKL TML 🌺
    Jun 16, 2022
    QrOv

    Just turn the computer off dipshit