• HardlightCereal@lemmy.world
    1 year ago

    I’m a computer scientist, and I will tell you right now that AI is biased. Here’s how you train a neural network AI: you arrange a whole lot of neurons, you reinforce the connections between the neurons when it succeeds, and you weaken the connections when it fails. That’s the same way your brain works. When you eat food or have sex or do something else beneficial to survival, your neural connections are strengthened. An ANN AI is driven by its training directive just like you’re driven to eat or have sex. It develops the same biases.
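The "strengthen on success, weaken on failure" loop described above can be sketched as a toy single-neuron network trained with the classic perceptron rule. This is a hypothetical minimal example (the names, task, and constants here are illustrative assumptions, not the network Reddit or anyone else actually runs):

```python
import random

random.seed(1)

# Toy "neuron": weighted connections from 2 inputs to 1 output.
weights = [random.uniform(-1, 1) for _ in range(2)]

def predict(inputs):
    # The neuron fires if the weighted sum crosses a threshold of 0.
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Training data for a simple OR-like task: (inputs, desired output).
data = [([0, 1], 1), ([1, 0], 1), ([1, 1], 1), ([0, 0], 0)]

LEARNING_RATE = 0.1
for _ in range(100):
    for inputs, target in data:
        output = predict(inputs)
        # On failure (error != 0), nudge each active connection toward
        # the target; on success (error == 0), leave connections alone.
        error = target - output
        for i, x in enumerate(inputs):
            weights[i] += LEARNING_RATE * error * x

print([predict(inputs) for inputs, _ in data])  # → [1, 1, 1, 0]
```

The key point for the bias argument: the network ends up encoding whatever regularities are in `data` and rewarded by the update rule, nothing more. If the training data or the reward signal is skewed, the learned weights are skewed the same way.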

    > And since AI is trained by humans, and humans have critical thinking, I don’t see why AI cannot develop it.

    This is nonsense. Humans invented the horse-drawn wagon. Is a wagon ever going to develop critical thinking? No. AI isn’t a child with boundless potential; it’s a tool, just like a wagon. If humans want AI to have critical thinking, they’re going to have to build it, and no human has ever succeeded at that. The AI Reddit is using does not have it. And since that AI is already profitable in its current state, it will probably never be improved to the level of a human.

    • btaf45@lemmy.worldOP
      1 year ago

      > I’m a computer scientist, and I will tell you right now that AI is biased.

      AI is also constantly wrong.

      ChatGPT lies about science.

      ChatGPT lies about history.

      ChatGPT lies about politics.

      ChatGPT lies about nonexistent programming libraries.

      ChatGPT lies about nonexistent legal cases.

      ChatGPT lies about nonexistent criminal backgrounds.

      The only time I would trust ChatGPT is when there are no right and wrong answers.