When the idea of self-driving cars first went mainstream, I remember a lot of debate about liability: if an accident occurred, who would be at fault? I think many of those questions are still unanswered.

Fast forward and now we have tools like ChatGPT. I assume they'll only become more capable (and connected) over time.

Which makes it strange that I haven't heard any similar discussion around liability. What happens when an AI makes mistakes or causes damage?

Maybe in people's minds it doesn't matter, because AI is either something that helps with homework questions or something that's taking over humanity. Reality is probably somewhere in between, with far more mundane mistakes and damage along the way.

What happens when the first ransomware is deployed by AI, on behalf of a user who just wanted tips on how to make more side income?

  • Eager Eagle@lemmy.world · 10 months ago

    > But what happens as memory gets cheaper and calculations get faster, and ordinary developers are able to train their own generative AI?

    That has been happening ever since GANs entered the scene, and even before that, since AlexNet broke image classification records in 2012 using consumer hardware. Anyone can train neural nets.
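
    To make that concrete, here is a minimal sketch of what "anyone can train neural nets" looks like in practice: a tiny PyTorch classifier trained on synthetic data, which runs in seconds on an ordinary consumer CPU. The model shape, dataset, and hyperparameters are illustrative placeholders, not anything specific from this thread.

    ```python
    import torch
    import torch.nn as nn

    # A tiny feedforward classifier -- small enough to train on any laptop CPU.
    model = nn.Sequential(
        nn.Linear(20, 64),
        nn.ReLU(),
        nn.Linear(64, 2),
    )

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Synthetic data standing in for a real dataset
    # (assumed: 512 samples, 20 features, 2 classes).
    X = torch.randn(512, 20)
    y = torch.randint(0, 2, (512,))

    for epoch in range(10):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)  # compute cross-entropy on the logits
        loss.backward()              # backpropagate gradients
        optimizer.step()             # update the weights
        print(f"epoch {epoch}: loss={loss.item():.4f}")
    ```

    The point isn't this toy example itself, but that the same loop scales up with nothing more exotic than a consumer GPU, which is exactly how the early GAN and AlexNet-era results were produced.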

    • catreadingabook@kbin.social · 10 months ago (edited)

      Ok, let me be more specific so that it’s not open to uncharitable interpretation.

      What happens when it becomes easy to build something as reliable and complete as, e.g., GPT-4, without the hardware and other costs currently associated with it?