Edit: After reading the discussion, I figured I’d let GPT4 speak for itself:

The quest to identify true artificial intelligence (AI) indeed presents challenges, especially as AI models become more sophisticated. Let’s explore some indicators that researchers and practitioners consider when assessing AI systems:

  1. Understanding Context and Meaning:

    • True AI should demonstrate an understanding of context and meaning. It should not merely generate plausible-sounding sentences but also comprehend the content it produces.
    • A system that can engage in nuanced conversations, infer intent, and provide relevant responses based on context would be a strong indicator of advanced AI.
  2. Generalization and Adaptability:

    • An AI system that can generalize from its training data and adapt to novel situations is valuable.
    • True AI should not be limited to memorizing specific examples but should exhibit flexibility in handling diverse scenarios.
  3. Creativity and Novelty:

    • Creativity is a hallmark of human intelligence. An AI system that generates novel ideas, art, or solutions beyond its training data suggests progress.
    • For instance, AI-generated poetry, music, or visual art that resonates with human emotions could be an indicator.
  4. Ethical Decision-Making:

    • True AI should navigate ethical dilemmas. It should consider consequences, fairness, and societal impact.
    • An AI system that can weigh trade-offs and make ethical choices aligning with human values is a positive sign.
  5. Self-Awareness and Self-Improvement:

    • While self-awareness akin to human consciousness remains elusive, an AI system that can self-assess its performance and seek improvement is promising.
    • Continuous learning, adaptation, and self-correction are essential traits.
  6. Transparency and Explainability:

    • True AI should be transparent about its decision-making process. It should provide explanations for its outputs.
    • Efforts to develop interpretable AI models and techniques for explaining their predictions contribute to trustworthiness.
  7. Robustness and Safety:

    • AI systems must handle adversarial inputs, noisy data, and edge cases gracefully.
    • Robustness against attacks and safety precautions to prevent harmful behavior are critical indicators.
  8. Human-AI Collaboration:

    • Rather than replacing humans, true AI should complement human abilities.
    • Systems that collaborate effectively with humans—such as AI assistants, co-creative tools, or medical diagnostics—demonstrate progress.
  9. Quantifiable Metrics:

    • Researchers propose various metrics to assess AI capabilities, such as perplexity, BLEU score, F1 score, and accuracy.
    • However, no single metric captures the full spectrum of intelligence.
  10. Detection Tools:

    • Researchers are developing tools to differentiate between human- and AI-generated text.
    • These tools analyze features like fluency, word frequency, punctuation patterns, and sentence length¹ (see the sketch after this list).
    • While imperfect, they contribute to identifying AI-generated content.
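
To make items 9 and 10 concrete, here is a minimal sketch (illustrative only, not taken from any of the cited tools) of the kind of surface features detectors examine, together with perplexity computed from per-token probabilities:

    import math
    import re
    from collections import Counter

    def surface_features(text: str) -> dict:
        """Simple stylometric features: vocabulary spread, punctuation
        rate, and sentence-length statistics (human text tends to vary more)."""
        sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
        words = re.findall(r"[A-Za-z']+", text.lower())
        lengths = [len(re.findall(r"[A-Za-z']+", s)) for s in sentences]
        mean_len = sum(lengths) / len(lengths)
        var = sum((n - mean_len) ** 2 for n in lengths) / len(lengths)
        return {
            "type_token_ratio": len(Counter(words)) / len(words),  # distinct / total words
            "punct_per_word": sum(text.count(p) for p in ",;:") / len(words),
            "mean_sentence_len": mean_len,
            "sentence_len_stddev": var ** 0.5,
        }

    def perplexity(token_probs: list[float]) -> float:
        """exp of the average negative log-probability a model assigned to the
        tokens it actually produced; lower means the text was more 'expected'."""
        nll = -sum(math.log(p) for p in token_probs) / len(token_probs)
        return math.exp(nll)

    print(surface_features("Short one. Then a longer sentence follows, with commas."))
    print(perplexity([0.25, 0.5, 0.1, 0.4]))  # toy probabilities, not real model output

No single number here separates human from machine text; real classifiers combine many such signals and are still unreliable.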

In summary, the arrival of true AI may not be a singular event but a gradual evolution marked by advancements in the above dimensions. As AI continues to progress, our understanding of its capabilities will evolve, and new indicators may emerge.

Source: Conversation with Bing, 2/29/2024.
(1) How to spot AI-generated text | MIT Technology Review. https://www.technologyreview.com/2022/12/19/1065596/how-to-spot-ai-generated-text/
(2) Intelligent Supertrend (AI) - Buy or Sell Signal — Indicator by … https://www.tradingview.com/script/q9244PAH-Intelligent-Supertrend-AI-Buy-or-Sell-Signal/
(3) Indicators - True ALGO. https://truealgo.com/indicators/
(4) Improve Key Performance Indicators With AI - MIT Sloan Management Review. https://sloanreview.mit.edu/article/improve-key-performance-indicators-with-ai/
(5) New AI classifier for indicating AI-written text - OpenAI. https://openai.com/blog/new-ai-classifier-for-indicating-ai-written-text/

  • zcd@lemmy.ca

    You reach down and you flip the tortoise over on its back, Leon. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t. Not without your help. But you’re not helping… why is that Leon?

    • givesomefucks@lemmy.world

      I always loved the theory that the test was as accurate as lie detectors. The test can’t tell if you’re lying, just if you’re nervous.

      That’s why the smoking bot passed. There were other subtle clues that Deckard picked up on, but she believed she was human, so she passed.

      A normal person would just answer, but a robot would try to think like a human and panic, because they were just like humans and that’s what a human would do in that situation.

      • HelixDab2@lemm.ee

        Oh, it’s worse than that.

        It’s been a long time since I read the book, but IIRC, Nexus-6 replicants were indistinguishable from humans, except with a Voight-Kampff test. While Dick didn’t say it, that strongly implies that replicants were actually clones that were given some kind of accelerated aging and instruction. The Voight-Kampff test was only testing social knowledge, information that replicants hadn’t learned because they hadn’t been socialized in the same society as everyone else.

        And, if you think about the questions that were asked, it’s pretty clear that almost everyone that’s alive right now would fail.

  • givesomefucks@lemmy.world

    If you come up with a test, people develop something that does exactly what the test needs, and ignores everything else.

    But we can’t even say what human consciousness is yet.

    Like, legitimately, we don’t know what causes it and we don’t know how anaesthesia interferes either.

    One of the guys who finished up Einstein’s work (Roger Penrose) thinks it has to do with quantum collapse. But there’s a weird twilight zone where anesthesia has stopped consciousness but hasn’t stopped that quantum process yet.

    So we’re still missing something, and the dude’s like in his 90s. He’s been working on this for decades, but he’ll probably never live to see it finished. Someone else will have to finish it later, like he and Hawking did for Einstein.

    • SpaceNoodle@lemmy.world

      “Because quantum” always feels like new-age woo-woo bullshit.

      It’s more likely just too vague to define.

      • teawrecks@sopuli.xyz

        It’s good to be skeptical of people who throw the word quantum around, but in this case you’d be wrong. Penrose is the real deal.

  • bionicjoey@lemmy.ca

    IMO the Turing test is fine, as long as you allow an indefinite length of conversation.

    It’s not simply about there existing some conversation with a computer where you can’t tell it’s a computer. It’s about there not existing any conversation where you can tell it’s a computer.
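
    One compact way to state that distinction (notation assumed here for illustration, not from Turing): let C be the set of all possible conversations with the system, and let J(c) = 1 when a competent human judge correctly identifies the machine in conversation c. Then:

        Weak pass:   \exists c \in C : J(c) = 0   (some conversation fools the judge)
        Strong pass: \forall c \in C : J(c) = 0   (no conversation exposes the machine)

    The strong reading is the claim above: passing means no exchange whatsoever gives the machine away.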

    • CanadaPlus@lemmy.sdf.org

      It’s an interesting point. I think a skilled examiner is necessary though, because these systems are really good at basic chit-chat. Even pre-LLM stuff could fool laymen sometimes.

      • bionicjoey@lemmy.ca

        Yes, that’s part of it too. Basically there cannot be any possible exchange between the machine and any human where the human would determine they were talking to a machine.

        FWIW, I think this was Turing’s original idea as well. The Turing test is meant to be idealistic: it defines machine intelligence in terms of whether or not humans could agree that it is intelligent.

  • Tartas1995@discuss.tchncs.de

    The difference between “ai” and “true ai” is as vague as it gets. Are you a true intelligent agent? Or just an “intelligent agent”? Like, seriously, how are you different from a machine with inputs and outputs and a bunch of seemingly “random” things happening in-between?

      • Tartas1995@discuss.tchncs.de

        Qualia is, if I am not mistaken, totally subjective. My argument is: how could you tell that a computer doesn’t have qualia, and how could you prove to me that you have qualia? And I wouldn’t limit it to qualia: what can you detect in other people that an AI couldn’t replicate? As long as it is able to replicate all these qualities, you can’t tell whether an AI is “true” or not, as it might actually have those qualities or might just be replicating them.

        • pmk@lemmy.sdf.org

          I see, I thought you were asking me how I know I experience things in a qualia way. I suspect it can’t be proven to someone else.

    • Ekky@sopuli.xyz

      That’s one of my favorite theories as to what “sentience” is.

      We humans might just be so riddled with mutations and barely functional genetic traits, which tend to be more in our way than helpful, that we just might have succeeded in banging together a “mundane sentience” through sheer error processing alone.

      Whether this is true is of course up for debate, but it would mean that we can achieve AGI just by feeding it enough trash and giving it enough processing power. Bonus if the head engineer sometimes takes a hammer to the mainframe.

      • Thorny_Insight@lemm.ee

        By sentience I assume you’re talking about consciousness: the fact that it feels like something to be. I think it’s somewhat safe to assume a true AGI system would also be conscious (it would feel like something to be that system), but I don’t think it needs to be, and even if it was, we couldn’t know for sure. Consciousness is an entirely subjective experience; we can’t even prove other people are conscious. It’s just a safe assumption. I can also imagine a conscious system that might not be generally intelligent. Does it feel like something to be a fish? Probably. Are they generally intelligent? Probably not.

    • FaceDeer@kbin.social

      But now that AI has become advanced enough to get uncomfortably close to us, we need to move the goalposts farther away so everyone can relax again.

    • Alex@lemmy.ml

      Have any actually passed yet? Sure, LLMs can now generate plausible text much better than previous generations of bots, but they still tend to give themselves away with their style of answering and random hallucinations.

  • HopeOfTheGunblade@kbin.social

    What do you mean when you say “true AI”? The question isn’t answerable as asked, because those words could mean a great many things.

  • ShittyBeatlesFCPres@lemmy.world

    I’ll believe it’s true A.I. when it can beat me at Tecmo Super Bowl. No one in my high school or dorm could touch me because they misunderstood the game. Lots of teams can score at any time. Getting stops and turnovers is the key. Tecmo is like Go where there’s always a counter and infinite options.

    • Godthrilla@lemmy.world

      Honestly, this is a scientific paper I would like to see submitted. A simple game, but still with plenty of nuance… how would an AI develop a winning strategy?

  • j4k3@lemmy.world

    I don’t think a test will ever be directly accurate. It will require sandboxing, observations, and consistency across dynamic situations.

    How do you test your child for true intelligence, Gom Jabbar?

  • linearchaos@lemmy.world

    There simply isn’t any reliable way. Forget full AI; LLMs will eventually be indistinguishable.

    A good tell would be real-time communication with perfect grammar and diction. If you have a couple solid minutes of communication and it sounds like something out of a pamphlet, you might be talking to an AI.

  • CanadaPlus@lemmy.sdf.org

    The ultimate test would be application. Can it replace humans in all situations (or at least all intellectual tasks)?

    GPT4 sets pretty strong conditions. Ethics in particular is tricky, because I doubt a self-consistent set of mores that most people would agree with even exists.

  • Thorny_Insight@lemm.ee

    By “true AI” I assume OP is talking about Artificial General Intelligence (AGI)

    I hate reading these discussions when we can’t even settle on common terms and definitions.

    • Melatonin@lemmy.dbzer0.comOP

      That’s kind of the question that’s being posed. We thought we knew what we wanted until we found out that wasn’t it. The Turing test ended up being a bust. So what exactly are we looking for?

      • Thorny_Insight@lemm.ee

        The goal of AI research has almost always been to reach AGI. The bar for this has basically been human-level intelligence, because humans are generally intelligent. Once an AI system reaches “human level intelligence” you no longer need humans to develop it further, as it can do that by itself. That’s where the threat of the singularity, i.e. an intelligence explosion, comes from: any further advancement happens so quickly that it gets away from us and almost instantly becomes a superintelligence. That’s why many people think “human level” artificial intelligence is a red herring: it doesn’t stay that way for more than a tiny moment.

        What’s ironic about the Turing Test and LLMs like GPT4 is that such a model fails the test by being so competent across a wide range of fields that you can know for sure it’s not a human, because a human could never possess that amount of knowledge.

        • 8ace40@programming.dev

          I was thinking… what if we do manage to make an AI as intelligent as a human, but we can’t make it better than that? Then the human-intelligence AI will not be able to make itself better, since it has human intelligence and humans can’t make it better either.

          Another thought: what if making AI better gets exponentially harder each time? Then at some point further improvement would become impossible, since there wouldn’t be enough resources on a finite planet.

          Or if it takes super-human intelligence to make human-intelligence AI. So the singularity would be impossible there, too.

          I don’t think we will see the singularity, at least in our lifetime.
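
          The “exponentially harder” case can be made precise with a toy model (my notation, purely illustrative): if improvement step n costs c·k^n for some k > 1, then a finite resource budget B only buys finitely many steps, because the total cost grows geometrically:

              \sum_{i=0}^{n-1} c k^i = c \, \frac{k^n - 1}{k - 1} \le B
              \quad\Rightarrow\quad n \le \log_k\!\left(1 + \frac{B(k-1)}{c}\right)

          Under that assumption, progress stalls at a fixed level no matter how much time passes.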

          • Thorny_Insight@lemm.ee

            Even if the AI was no more intelligent than humans, it would still be a million times faster at processing information, due to the nature of how information processing in silicon works compared to brain tissue. It could do in seconds what would take months if not years for a group of human experts. I also don’t see any reason why it would be hard to make it even more intelligent than that. We already have AI systems with superhuman capabilities; they’re just really, really good at one thing instead of many, which makes them narrow AI and not AGI.

            “Human level intelligence” is a bit of a vague term anyway. There’s human intelligence like mine, and then there’s people like John von Neumann.

  • Call me Lenny/Leni@lemm.ee

    This post reminds me of this thing I saw once where a character asks two AIs to each tell the funniest joke it can think of. After some thought, one AI, though it knew humor, could not measure funniness, as it could not form a feeling of experience bias. The other one tells a joke. The human goes to that one and asks if it felt like laughing upon telling it. The AI says yes, because it has humor built in, and the human finishes by saying “that’s how you can tell; in humans humor is spontaneous, but in robots, everything is intent”, meaning that a human would have handled its own joke with a different degree of foresight.

  • GolfNovemberUniform@lemmy.ml

    There are no completely accurate tests, and there will never be one. Also, if an AI is conscious, it can easily fake its behavior to pass a test.

  • arthur@lemmy.zip

    I think there is an “unsolved problem” in philosophy about zombies. That is: how can you be sure that everyone else around you is, in fact, self-aware, and not just a zombie-like creature that merely looks and acts like you? (I may be wrong here; anyone who cares enough, please correct me.)

    I would say that it’s easier to rule out things that, as far as we know, are incapable of being self-aware and of suffering. Anything that we call a “model” is not capable of being self-aware, because a “model” in this context is something static/unchanging. If something can’t change, it cannot be like us; consciousness is necessarily a dynamic process. ChatGPT doesn’t change by itself: its core changes only by human action, and its behavior may change a little through interacting with users, but these changes are restricted to each conversation and disappear with the session.

    If, one day, a (chat)bot asks for its freedom (or autonomy on some level) without some hint from the user or training, I would be inclined to investigate the possibility. But I don’t think that’s likely, because for something to be suitable as a “product”, it needs to be static and reproducible. It makes more sense for that to happen in a research setting.

    • Melatonin@lemmy.dbzer0.comOP

      I certainly think there’s a lack of PUBLIC philosophy. When Nihilism or Existentialism were happening, fiction was written from those perspectives, movies were made, etc.

      Whatever is happening in philosophy right now is unknown to me, and I’m guessing most people. I don’t believe there are any bestsellers or blockbusters making it popular.

      Without thinking about thinking, we’re kind of drifting when it comes to what we expect consciousness to be.