As the AI market continues to balloon, experts are warning that its VC-driven rise is eerily similar to that of the dot com bubble.

  • @Reva · 44 points · edited · 10 months ago

    deleted by creator

    • Lazz45 · 11 points · 10 months ago

      I just want to make the distinction that AI models like this are literally black boxes. We (currently) have no way to know why the model chose the word it did, for example. You train it, but under the hood you can’t actually read out a logic tree explaining why each word was chosen. That’s a major pitfall of AI development: it’s very hard to know how the AI arrived at a decision. You might know it’s right, or it’s wrong… but how did the AI decide this?

      At a very technical level we understand HOW it makes decisions, but we don’t understand every individual decision it makes (that’s simply beyond our ability currently, from what I know).

      Example: https://theconversation.com/what-is-a-black-box-a-computer-scientist-explains-what-it-means-when-the-inner-workings-of-ais-are-hidden-203888

      • @barsoap@lemm.ee · 9 points · 10 months ago

        > You train it, and under the hood you can’t actually read out the logic tree of why each word was chosen.

        Of course you can: you can look at every single activation and weight in the network. It’s tremendously hard to predict what the model will do, but once you have an output it’s quite easy to see how it came to be. How could it be bloody otherwise? You calculated all that stuff to get the output; the only thing you have to do is prune off the non-activated pathways. That kind of asymmetry is in the nature of all non-linear systems. A very similar thing applies to double pendulums: once you’ve observed one moving in a certain way, it’s easy to say “oh yes, the initial conditions must have looked like this”.
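        The “prune off the non-activated pathways” point can be sketched with a toy two-layer ReLU network (a minimal illustration with made-up weights, not how anyone inspects a real LLM): after a forward pass every intermediate value is observable, and for that particular input the units that ReLU zeroed out contributed nothing, so the output is fully explained by the active subnetwork.

```python
# Toy 2-layer ReLU network with fixed, fully inspectable weights.
W1 = [[0.5, -1.0, 0.2],
      [-0.3, 0.8, 0.1],
      [1.0, 0.4, -0.6]]
W2 = [[0.7, -0.2, 0.5],
      [0.1, 0.9, -0.4]]

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

x = [1.0, -0.5, 0.3]

# Forward pass, recording every intermediate value ("activation").
pre = matvec(W1, x)
h = [max(p, 0.0) for p in pre]   # ReLU: inactive units are exactly zero
y = matvec(W2, h)

# "Prune the non-activated pathways": drop units where h == 0. The output
# of THIS forward pass is fully explained by the remaining active units.
active = [i for i, v in enumerate(h) if v > 0]
W2_pruned = [[row[i] for i in active] for row in W2]
h_pruned = [h[i] for i in active]
y_pruned = matvec(W2_pruned, h_pruned)

assert all(abs(a - b) < 1e-12 for a, b in zip(y, y_pruned))
```

        Explaining one output this way is cheap; the hard problem is characterising what the network would do across all possible inputs.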

        What’s quite a bit harder to do for the likes of ChatGPT, compared to double pendulums, is to see where they can possibly swing. That’s due to LLMs having a fuckton more degrees of freedom than two.
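        The forward-unpredictability half of that asymmetry shows up in any chaotic system. As a deliberately simple stand-in for the double pendulum, the logistic map at r = 4 makes the point: every intermediate state is recorded (so hindsight is easy), yet two nearly identical initial conditions diverge until forward prediction is hopeless.

```python
# Stand-in for a double pendulum: the chaotic logistic map x -> 4x(1 - x).
def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-10, 50)  # same start, perturbed by one part in 10^10

# Hindsight is easy: every state of both runs is observable above.
# Foresight is not: the perturbation roughly doubles each step
# (Lyapunov exponent ln 2), so after ~35 steps the runs decorrelate.
assert abs(a[0] - b[0]) < 1e-9
assert max(abs(x - y) for x, y in zip(a[35:], b[35:])) > 0.1
```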

        • @BackupRainDancer@lemmy.world · 4 points · edited · 10 months ago

          I don’t disagree with anything you said, but wanted to weigh in on the “more degrees of freedom” point.

          One major thing to consider is that unless we have 24/7 sensor recording with AI out in the real world, plus continuous monitoring of sensor/equipment health, we’re not going to have the “real” data that the AI triggered on.

          Version and model updates will also likely continue to cause drift unless managed through some sort of central distribution service.

          Any large corp will have this organization and review in place, or will be in the process of figuring it out. Small NFT/crypto bros who jump to AI will not.

          IMO the space will either head towards larger AI ensembles that try to understand where an exact rubric applies vs. more AGI-like human reasoning, or we’ll have to rethink the nuances of our train/test setup and how humans use language to interact with others vs. to understand the world (we may speak the same language as someone else, but there’s still a ton of inefficiency).

    • @yata@sh.itjust.works · 9 points · 10 months ago

      The thing is, a lot of people are not using it for that. They think it is a living, omniscient sci-fi computer that is capable of answering everything, just like they saw in the movies. No one thought that about keyboard auto-suggestions.

      And as for people who aren’t very knowledgeable on the subject, it is difficult to blame them for thinking so, because that is how it is presented to them in a lot of news reports as well as adverts.

      • @barsoap@lemm.ee · 10 points · 10 months ago

        > They think it is a living omniscient sci-fi computer who is capable of answering everything

        Oh, that’s nothing new:

        > On two occasions I have been asked [by members of Parliament], ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.
        >
        > — Charles Babbage
      • @Reva · 1 point · edited · 10 months ago

        deleted by creator

    • Ragnell · 9 points · edited · 10 months ago

      @Reva “Hey, should we use this statistical model that imitates language to replace my helpdesk personnel?” is an ethical question because bosses don’t listen when you outright tell them that’s a stupid idea.

      • @Reva · 2 points · edited · 10 months ago

        deleted by creator

    • Flying Squid · 1 point · 10 months ago

      Are you familiar with the 1980s program Racter? It wasn’t trained on the entire internet like LLMs are, but their output kind of feels like an extension of it. Except Racter’s output was more amusing.

    • @Freesoftwareenjoyer@lemmy.world · -3 points · 10 months ago

      Yeah, it’s kinda scary to see how little people understand modern technology. If some non-expert tells them AI can’t be trusted, they just believe it. I’ve noticed the same thing with cryptocurrencies: a non-expert says it’s a scam, and people believe them even though it’s clear they don’t understand anything about that technology or what it’s made for.