Reply
  • AI is lit af 😩🔥🔥🔥

  • Jun 1, 2023
    ·
    1 reply
    rvi

    are you saying that machine learning is not part of AI

    theyre basically the same tech just used in different ways

    but AI is usually used for more general tasks where machine learning is more for analysis or acquisition of data

  • Jun 1, 2023
    soapmanwun

  • Ah i see all them bullied geeks were playing the long game huh

  • Jun 1, 2023
    ·
    1 reply

    This is the part in the movie where the developers say, “a few eggs are gonna have to break in order to protect lives. We have to continue”

  • Jun 1, 2023
    Niggamortis


  • Jun 2, 2023
    ·
    1 reply
    Lystra

    theyre basically the same tech just used in different ways

    but AI is usually used for more general tasks where machine learning is more for analysis or acquisition of data

    do you think that machine learning models would go rogue on a research team and intentionally plant misleading information to damage the human race?

  • Jun 2, 2023
    whippet volverse

    do you think that machine learning models would go rogue on a research team and intentionally plant misleading information to damage the human race?

    The thing about AI is it is sometimes very unpredictable. These models are extremely complex and largely uninterpretable. This means the more complex the behaviors you engineer, the more unpredictable the model becomes, and often the less accurate it is.

    Most deep learning model architectures have limits to their capabilities based on training data, architecture, parameter count, and energy and tech requirements. But for more complex models with insane parameter counts that train on massive amounts of data, the outcomes are only really limited by the imagination of the developer. You can usually modify or limit networks to only do certain sorts of behaviors (most language models are designed to not do offensive things by training them to recognize bad words or phrases).

    But in theory, someone could develop an AI that is very good at doing bad things. This is where cyberwarfare becomes a huge threat. Idk if this is really as threatening as people make it seem but you could see people hacking AI to do bad things, so that should be something we monitor

    i think to answer ur question its possible that AI could do a lot of things, which is why they should be very carefully developed to be resistant to both hackers and just faults in logic
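
    The "recognize bad words or phrases" idea above can be sketched as a toy output filter. To be clear, this is a made-up illustration: real language models use learned classifiers and fine-tuning rather than a simple word list, and the names here (`BLOCKLIST`, `filter_output`) are invented for the example.

    ```python
    # Toy sketch of gating a model's output against a blocklist.
    # Real systems use trained classifiers, not literal word lists.

    BLOCKLIST = {"badword1", "badword2"}  # placeholder terms

    def filter_output(text: str) -> str:
        """Return the text unchanged unless it contains a blocked term."""
        words = {w.strip(".,!?").lower() for w in text.split()}
        if words & BLOCKLIST:
            return "[response withheld]"
        return text

    print(filter_output("this is fine"))            # passes through
    print(filter_output("this contains badword1"))  # withheld
    ```

    A filter like this is easy to bypass (misspellings, paraphrase), which is part of why the comment's point about hackers and faults in logic matters.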

  • Jun 2, 2023

    If I was the operator that wouldn't have happened to me. Just saying

  • someone pull the plug on that s*** already goddamn

  • Niggamortis

    Smiley material fr


  • Jun 2, 2023
    CHAT GPT

    That title is a crazy TLDR


  • Jun 2, 2023

    this is right where you just abandon it right?

  • Jun 2, 2023

    say the operator stopped it from accomplishing its objective, which means they would need to implement rules for it…

    that is literally the theme of I, Robot