• Death_Equity@lemmy.world · 1 month ago

      True AI is a sentient program that can be creative and evolve its own programming. Think digital human analogue, but it knows everything and is easily confused.

      Current AI is a party trick performed by a search engine that phrases its results as a conversation, or by a random data generator whose theme nudges it into producing a comprehensible image.

      • IchNichtenLichten@lemmy.world · 1 month ago

        True AI is a sentient program that can be creative and evolve its own programming. Think digital human analogue, but it knows everything and is easily confused.

        We’ll achieve fusion as an energy source before we develop what you’re describing. If they’re trying to get in on the ground floor, that’s pretty funny.

      • IchNichtenLichten@lemmy.world · 1 month ago

        AI isn’t a very good descriptor; it’s a catch-all for a bunch of different tech. The media does a pretty poor job of making that point, so people are left to come to their own conclusions about what AI actually is.

    • desktop_user@lemmy.blahaj.zone · 1 month ago

      Even just a machine learning model capable of searching for information and accurately returning an answer with a list of references supporting its claims would be huge for many industries and individuals.

      It could replace customer service with a competent alternative (if the company actually put in the effort to expose the necessary features in the customer UI), search through software documentation to help programmers, and hopefully be a better version of what Google was.
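
      Not claiming this exists today, but as a rough sketch of the “search, answer, cite” shape: the snippet below does naive TF-IDF retrieval over a made-up three-file corpus and returns the matched text plus the file names as references. The corpus, the file names, and the answer_with_references helper are all invented for illustration; a real system would put a language model on top to phrase the answer.

      ```python
      # Toy sketch of "search + answer + references" -- hypothetical, not a real product's API.
      # Retrieval here is plain TF-IDF similarity; an LLM would normally rewrite the answer,
      # but the reference list comes straight from the retrieval hits so it stays checkable.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # Made-up documentation corpus, purely for illustration.
      DOCS = {
          "install.md": "Install the package with pip install example-pkg. Requires Python 3.9+.",
          "config.md": "Configuration lives in example.toml. Set api_key before first run.",
          "errors.md": "Error 401 means the api_key is missing or expired. Regenerate it in the dashboard.",
      }

      def answer_with_references(question: str, top_k: int = 2):
          """Return the best-matching passages plus the documents they came from."""
          names = list(DOCS)
          texts = list(DOCS.values())
          vec = TfidfVectorizer().fit(texts + [question])
          doc_matrix = vec.transform(texts)          # one row per document
          q_vector = vec.transform([question])       # single query row
          scores = cosine_similarity(q_vector, doc_matrix)[0]
          ranked = sorted(zip(names, texts, scores), key=lambda t: t[2], reverse=True)[:top_k]
          answer = " ".join(text for _, text, score in ranked if score > 0)
          references = [name for name, _, score in ranked if score > 0]
          return answer, references

      if __name__ == "__main__":
          ans, refs = answer_with_references("Why am I getting a 401 error?")
          print("Answer:", ans)
          print("References:", refs)
      ```

      The point of returning the references straight from the retrieval step is that the claims stay verifiable even if the phrasing layer gets creative.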

        • bizarroland@fedia.io · 1 month ago

          Yes and no. They can do the job, but they are too easily tricked and too quick to hallucinate to do it reliably.

          Compared to a human after 8 hours of continuous customer support, you’re going to get far more errors, of much greater variety and risk, from any current LLM than from any human who isn’t actively trying to destroy your company.