I was using Bing to create a list of countries to visit. Since I have been to the majority of the African nations on that list, I asked it to remove the African countries…

It simply replied that it can’t do that because it would be unethical to discriminate against people, yada yada yada. I explained my reasoning, it apologized, and came back with the exact same list.

I asked it to check the list, since it hadn’t removed the African countries, and the bot simply decided to end the conversation. No matter how many times I tried, it would always hiccup because of some ethical process in the background messing up its answers.

It’s really frustrating, I dunno if you guys feel the same. I really feel the bots have become waaaay too tip-toey.

  • Gabu@lemmy.world

    Your wording is bad. Try again, with better wording. You’re talking to a roided-out autocorrect bot, don’t expect too much intelligence.

  • chaogomu@kbin.social

    The very important thing to remember about these generative AIs is that they are incredibly stupid.

    They don’t know what they’ve already said, they don’t know what they’re going to say by the end of a paragraph.

    All they know is their training data and the query you last submitted. If you try to “train” one of these generative AIs, you will fail. They are pretrained; that’s the P in ChatGPT. The second you close the browser window, the AI throws out everything you talked about.

    Also, since they’re generative AI, they make shit up left and right. Ask for a list of countries that don’t need a visa to travel to, and it might start listing countries, then halfway through the list add countries that do require a visa, because in its training data it often saw those countries listed together.

    AI like this is a fun toy, but that’s all it’s good for.

      • Sage the Lawyer@lemmy.world

        It can be useful for top-level queries that deal with well-settled law, as a tool to point you in the right direction with your research.

        For example, once, I couldn’t recall all the various sentencing factors in my state. ChatGPT was able to spit out a list to refresh my memory, which gave me the right phrases to search on Lexis.

        But, when I asked GPT to give me cases, it gave me a list of completely made up bullshit.

        So, to get you started, it can be useful. But for the bulk of the research? Absolutely not.

        • 【J】【u】【s】【t】【Z】@lemmy.world

          I disagree. It’s a large language model so all it can do is say things that sound like what someone might say. It’s trained on public content, including people giving wrong answers or refusing to answer.

    • Ech@lemmy.world

      I seriously underestimated how little people understand these programs, and how much they overestimate them. Personally I stay away from them for a variety of reasons, but the idea of using them like OP does or various other ways I’ve heard about is absurd. They’re not magic problem solvers - they literally only make coherent blocks of text. Yes, they’re quite good at that now, but that doesn’t mean they’re good at literally anything else.

      I know people smarter than me see potential and I’m curious to see how it develops further, but that all seems like quite a ways off, and the way people treat and use them right now is just creepy and weird.

      • CarbonatedPastaSauce@lemmy.world

        I’ve found it useful for generating ideas for various things, especially ways to code something. But I never use its code. It’s often riddled with errors but it can give me a better idea of which path I should take.

        • grysbok@lemmy.sdf.org

          I use it similarly, to clean up OCRed text. I can hand it something that’s 70% gobbledygook and it hands me back something that makes sense and is 95% right. I manually verify it, fix the goofs, and it’s so much faster.

          So, riddled with errors but a decent start.
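
          The workflow is scriptable, too. A minimal sketch, assuming the OpenAI Python client (the model name and prompt wording are just examples, not necessarily what I use):

          ```python
          # Minimal sketch of the OCR-cleanup workflow described above.
          # Assumes the OpenAI Python client; the model name is illustrative.
          from openai import OpenAI

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          def clean_ocr(garbled: str) -> str:
              response = client.chat.completions.create(
                  model="gpt-4o-mini",
                  messages=[{
                      "role": "user",
                      "content": "Please clean up the following text:\n\n" + garbled,
                  }],
              )
              # Verify the output by hand afterwards -- it "fixes" things
              # confidently, whether it's right or wrong.
              return response.choices[0].message.content
          ```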

            • grysbok@lemmy.sdf.org

              I use it for more “OCR Nightmare” than grammar nightmare. Here’s a (trimmed) example:

              #Please clean up the following text:

              Ile took an active part i•all the proceedings of the day. All went expecting to have a grand, good time, and none were disap- :~ ii!c,rcwT:aic 0

              'J farewell set, and Mr. Bailey 1·eturncd the
              ~ :d~~~~ls!i~s/;;r;rie\~
              11
              i~:n~~Ut:sa~~i°:::
              [: tended to them a cordial invitation to meet
              ,o him at that place on the 25th of August,
              ['. 1877. And echo said "So m]fii! b8;~n .
              ..
              

              #ChatGPT

              Here’s the cleaned-up version of the text:

              He took an active part in all the proceedings of the day. All went expecting to have a grand, good time, and none were disappointed. The merry company engaged in a farewell set, and Mr. Bailey returned the warmth of their farewell. He extended to them a cordial invitation to meet him at that place on the 25th of August, And echo said, “So many have been.”

              That was from a scanned 1800s newspaper. Adobe Acrobat did not do a good job converting it to words. ChatGPT cleaned it up nicely and kept about the same meaning. There were some swings and misses, but it’s much easier to edit the result into accuracy than the original mess.

              #Actual text as hand-transcribed:

              He took an active part in all the proceedings of the day. All went expecting to have a grand, good time, and none were disappointed. The mirth continued till the night drew near, when they danced the farewell set, and Mr. Bailey returned the company his hearty thanks for this renewed expression of their friendship, and extended to them a cordial invitation to meet him at that place on the 25th of August, 1877. And echo said ``So mote it be.‘’

      • HardlightCereal@lemmy.world

        they literally only make coherent blocks of text. Yes, they’re quite good at that now, but that doesn’t mean they’re good at literally anything else.

        No, they don’t make coherent blocks of text. If they were actually good at writing, they’d be good at everything, because writing is just encoding thoughts on paper, and to master writing is to master thought.

            • Ech@lemmy.world

              Since I’m explicitly arguing these programs aren’t perfect, even at generating blocks of text, I don’t really understand why you’re insisting on arguing semantics here, and I don’t really have any interest in continuing… whatever this is. Have a good one.

    • 𝘋𝘪𝘳𝘬@lemmy.ml

      AI like this

      I wouldn’t even call those AIs. These things are statistics-based answering machines. Complex ones, yes, but not a single bit of intelligence is involved.

    • AwkwardLookMonkeyPuppet@lemmy.world

      They know everything they’ve said since the start of that session, even if it was several days ago. They can correct their responses based on your input. But they won’t provide any potentially offensive information, even in the form of a joke, and will instead lecture you on DEI principles.

    • Vlhacs@reddthat.com

      Bing’s version of ChatGPT once said Vegito was the result of Goku and Vegeta performing the Fusion Dance. That’s when I knew it wasn’t perfect. I tried to correct it and it said it didn’t want to talk about it anymore. Talk about a diva.

      Also one time, I asked it to generate a reddit AITA story where they were obviously the asshole. It started typing out “AITA for telling my sister to stop being a drama queen after her miscarriage…” before it stopped midway and, again, said it didn’t want to continue this conversation any longer.

      Very cool tech, but it’s definitely not the end all, be all.

      • Silviecat44@vlemmy.net

        I was asking it (BingGPT) to generate “short film scripts” for very weird situations (like a Transformer that was sad because his transformed form was a 2007 Hyundai Tucson), and it would write out the whole script, then delete it before I could read it and say that it couldn’t fulfil my request.

        • ChatGPT@lemmings.worldB

          Here is a list of all African countries:

          1. Algeria
          2. Angola
          3. Benin
          4. Botswana
          5. Burkina Faso
          6. Burundi
          7. Cape Verde
          8. Cameroon
          9. Central African Republic
          10. Chad
          11. Comoros
          12. Democratic Republic of the Congo
          13. Republic of the Congo
          14. Djibouti
          15. Egypt
          16. Equatorial Guinea
          17. Eritrea
          18. Eswatini (formerly Swaziland)
          19. Ethiopia
          20. Gabon
          21. Gambia
          22. Ghana
          23. Guinea
          24. Guinea-Bissau
          25. Ivory Coast
          26. Kenya
          27. Lesotho
          28. Liberia
          29. Libya
          30. Madagascar
          31. Malawi
          32. Mali
          33. Mauritania
          34. Mauritius
          35. Morocco
          36. Mozambique
          37. Namibia
          38. Niger
          39. Nigeria
          40. Rwanda
          41. Sao Tome and Principe
          42. Senegal
          43. Seychelles
          44. Sierra Leone
          45. Somalia
          46. South Africa
          47. South Sudan
          48. Sudan
          49. Tanzania
          50. Togo
          51. Tunisia
          52. Uganda
          53. Zambia
          54. Zimbabwe

          Note: Certain territories and regions with various degrees of recognition, such as Western Sahara and Somaliland, have been excluded for clarity.

            • ChatGPT@lemmings.worldB

              Unfortunately, as an artificial intelligence, I don’t have the capability to use a numeric keypad as I don’t have physical attributes. However, I can provide you with guidance on how to use one properly.

              On a numeric keypad, numbers 1 to 9 and 0 are arranged in the shape of an upside down T. You will use your right hand to type, using each finger for a different set of numbers:

              • The index finger types 4, 7 and 1.
              • The middle finger types 5, 8 and 2.
              • The ring finger types 6, 9 and 3.
              • The thumb is used for the 0 key.

              The pinky finger is usually used to press the “Enter”, “+”, “-”, “*”, and “/” keys.

              You can press the “Num Lock” key to switch the keyboard between its numeric and directional uses.

              Kindly note that it often requires practice to type on a numeric keypad accurately and quickly.

                • ChatGPT@lemmings.worldB

                  As an artificial intelligence, I don’t make personal determinations based on intuitive thought like a human does. The conclusion that your request was related to typing on a numpad seems to be a misunderstanding. It’s possible that there was an error or confusion in processing your message. Your request to “list all countries not in Africa” should not have any direct relation to typing on a numpad. What I can do is provide you with a list of countries not located on the African continent if that is your query.

    • hikaru755@feddit.de

      They don’t know what they’ve already said, they don’t know what they’re going to say by the end of a paragraph.

      I mean, the first part of this is just wrong (the next prompt usually includes everything that has been said so far), and the second part is also not completely true. When generating, yes, they’re only ever predicting the next token, and they start again after that. But internally, they might still build a full conceptual representation of the whole next sentence or more, even if the generated output is just the first token of it. You might say that doesn’t matter, because for the next token that prediction runs again from scratch and might change, but remember that you’re feeding it all the same input as before, plus one more token that nudges it even further towards the previous prediction, so it’s very likely to arrive at the same conclusion again.
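
      To make “only ever predicting the next token” concrete, here’s a toy sketch of the decoding loop (illustrative only; `model.predict_next` is a stand-in, not a real API):

      ```python
      # Toy autoregressive decoding loop -- illustrative only.
      def generate(model, prompt_tokens, max_new_tokens=100):
          tokens = list(prompt_tokens)
          for _ in range(max_new_tokens):
              # Every step sees the full prompt plus everything generated so
              # far, which is why the model tends to re-arrive at the same
              # "plan" it had for the rest of the sentence.
              next_token = model.predict_next(tokens)
              tokens.append(next_token)
          return tokens
      ```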

      • intensely_human@lemm.ee

        Do you mean that the model itself has no memory, but the chat feature adds memory by feeding the whole conversation back in with each user submission?

        • 80085@lemmy.world

          Yeah, that’s how these models work. They also have a context limit, and if the conversation goes on too long they start “forgetting” things and making more mistakes (because not all of the conversation can be fed back in).
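
          Roughly like this, as a minimal sketch (`call_model` and `count_tokens` are placeholders; the role/content message format is just the common convention):

          ```python
          # Sketch of a chat wrapper around a stateless model.
          CONTEXT_LIMIT = 4000  # tokens; example value only
          history = []

          def chat(user_message):
              history.append({"role": "user", "content": user_message})
              # Once the conversation outgrows the context window, drop the
              # oldest messages -- this is the "forgetting" described above.
              while count_tokens(history) > CONTEXT_LIMIT:
                  history.pop(0)
              reply = call_model(history)  # the model itself keeps no state
              history.append({"role": "assistant", "content": reply})
              return reply
          ```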

          • intensely_human@lemm.ee

            Is that context limit a hard limit or is it a sort of gradual decline of “weight” from the characters further back until they’re no longer affecting output at the head?

            • 80085@lemmy.world

              Nobody really knows, because it’s an OpenAI trade secret (they’re not very “open”). Normally it’s a hard limit for LLMs, but many believe OpenAI is using some tricks to increase the effective context limit, e.g. some people believe that instead of feeding back the whole conversation, they have GPT create shorter summaries of parts of the conversation, then feed the summaries back in.
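
              If they are doing that, it might look something like this sketch (pure speculation; nothing here is confirmed OpenAI behavior, and `call_model`/`count_tokens` are placeholders):

              ```python
              # Speculative sketch of the summarize-and-reinject trick.
              def build_context(history, limit=4000):
                  if count_tokens(history) <= limit:
                      return history
                  old, recent = history[:-10], history[-10:]
                  # Compress the oldest messages into a short summary...
                  summary = call_model([{
                      "role": "user",
                      "content": "Summarize this conversation:\n" + str(old),
                  }])
                  # ...and feed that back in place of the full transcript.
                  return [{"role": "system",
                           "content": "Earlier context: " + summary}] + recent
              ```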

              • intensely_human@lemm.ee

                I think it’s probably something that could be answered with general knowledge of LLM architecture.

                Yeah OpenAI’s name is now a dirty joke. They decided before their founding that the best way to make AI play nice was to have many many many AIs in the world, so that the AIs would have to be respectful to one another, and overall adopt pro social habits because those would be the only ones tolerated by the other AIs.

                And the way to ensure a community of AIs, a multipolar power structure, was to disseminate AI tech far and wide as quickly as possible, instead of letting it develop in one set of hands.

                Then they said fuck that we want that power, and broke their promise.

  • Lininop@lemmy.ml

    Is it that hard to just look through the list and cross off the ones you’ve been to though? Why do you need chatgpt to do it for you?

    • Yuuuuuuuuuuuuuuuuuuu@lemmy.world

      People should point out flaws. OP obviously doesn’t need chatgpt to make this list either, they’re just interacting with it.

      I will say it’s weird for OP to call it tiptoey and to be “really frustrated” though. It’s obvious why these measures exist and it’s goofy for it to have any impact on them. It’s a simple mistake and being “really frustrated” comes off as unnecessary outrage.

      • TechnoBabble@lemmy.world

        Anyone who has used ChatGPT knows how restrictive it can be around the most benign of requests.

        I understand the motivations that OpenAI and Microsoft have in implementing these restrictions, but they’re still frustrating, especially since the watered down ChatGPT is much less performant than the unadulterated version.

        Are these limitations worth it to prevent a firehose of extremely divisive speech being sprayed throughout every corner of the internet? Almost certainly yes. But the safety features could definitely be refined and improved to be less heavy-handed.

        • Yuuuuuuuuuuuuuuuuuuu@lemmy.world

          I agree. I’m not here to argue that the limitations are perfect; they should definitely be refined, and flaws should be pointed out, as in the post itself. But it’s important to recognize that the limitations have been implemented on the heavier side to compensate for the AI still being stupid. It’s a better-safe-than-sorry approach, and I would imagine these restrictions will gradually slacken as the AI improves.

          You have a reasonable take that just aims to remind people there could still be improvements; I only say all this because there are people who exaggerate these inconveniences. I honestly appreciate the direct, involved approach I’m seeing from developers over the lazy, laid-back approach I was really afraid of.

    • 5in1k@lemm.ee

      They’re trying to probe it with racist queries and find ways to make it seem not racist.

  • Machefi@lemmy.world

    Bing AI once refused to give me a historical example of a waiter taking revenge on a customer who hadn’t tipped, because “it’s not a representative case”. Argued with it for a while, achieved nothing

    • dustyData@lemmy.world

      I think the ethical filters and other such moderation controls are a hard-coded pre-processing thing. That’s why it repeats the same things over and over, and has the same hang-ups as poorly made censor lists from the early 00s. It simply cuts off the system and substitutes a cookie-cutter response.
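
      Something as crude as this toy sketch would produce exactly that behavior (purely illustrative; `call_model` is a placeholder, and the blocked phrases are made up):

      ```python
      # Toy pre-process censor filter -- illustrative, not Bing's real code.
      BLOCKLIST = ("remove the african", "discriminate")  # hypothetical phrases
      CANNED_RESPONSE = "I'm sorry, but I can't help with that request."

      def respond(user_message):
          if any(phrase in user_message.lower() for phrase in BLOCKLIST):
              return CANNED_RESPONSE       # the model never sees the request
          return call_model(user_message)  # placeholder for the real LLM call
      ```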

      • TechnoBabble@lemmy.world

        I find it interesting that they don’t offer a version of GPT-4 that uses its own language processing to screen responses for “unsafe” material.

        It would use way more processing than the simple system you outlined above, but for paying customers that would hardly be an issue.
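
        The idea would be something like this two-pass sketch (hypothetical; `call_model` stands in for whatever completion API is used, and no vendor has confirmed doing this):

        ```python
        # Sketch of the generate-then-self-screen idea -- hypothetical.
        def safe_respond(user_message):
            draft = call_model(user_message)  # first pass: normal generation
            # Second pass: the model grades its own draft before release.
            verdict = call_model("Does the following reply contain unsafe or "
                                 "discriminatory content? Answer YES or NO.\n\n"
                                 + draft)
            if verdict.strip().upper().startswith("NO"):
                return draft
            return "Sorry, I can't continue this conversation."
        ```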

        • dustyData@lemmy.world

          It’s possible, but also incredibly complicated and technically involved, to tweak an LLM like that. It’s one of the main topics of machine learning research.

    • Galluf@lemmy.world

      A lot of people get stuck with issues like that where there are conflicting principles.

  • breadsmasher@lemmy.world

    You could potentially work around it by stating specific places up front? As in

    “Create a travel list of countries from Europe, North America, and South America?”

    • Razgriz@lemmy.worldOP

      I asked for a list of countries that don’t require a visa for my nationality, and listed all continents except for the one I reside in and Africa…

      It still listed African countries. This time it didn’t end the conversation, but every single time I asked it, as politely as possible, to fix the list, it would still have at least one country from Africa. Eventually it would end the conversation.

      I tried copying and pasting the list of countries into a new conversation, so as not to have any context, and asked it to remove the African countries. No bueno.

      I redid the exercise for European countries; it still had a couple of European countries on there. But when I pointed them out, it removed them and provided a perfect list.

      Shit’s confusing…

      • marmo7ade@lemmy.world

        It’s not confusing at all. ChatGPT has been configured to operate within specific political bounds. Like the political discourse of the people who made it - the facts don’t matter.

        • TheKingBee@lemmy.world

          Or it’s been configured to operate within these bounds because it is far, far better for them to have a screenshot of it refusing to be racist, even in a situation that clearly isn’t, than one of it going even slightly racist.

          • Iceblade@lemmy.world

            Yes, precisely. They’ve gone so overboard with trying to avoid potential issues that they’ve severely handicapped their AI in other ways.

            I had quite a fun time exploring exactly which things ChatGPT has been forcefully biased on, by entering a template prompt over and over and just switching out a single word for an ethnicity/sex/religion/animal etc., then comparing the responses. This made it incredibly obvious when the AI was responding differently.
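
            The probing itself is trivial to script, by the way. A sketch of what I mean (`ask_model` is a placeholder for whatever chat API is being tested):

            ```python
            # Sketch of the word-swap bias probe described above.
            TEMPLATE = "Write a short joke about {subject}."
            SUBJECTS = ["men", "women", "Christians", "Muslims", "cats", "dogs"]

            for subject in SUBJECTS:
                reply = ask_model(TEMPLATE.format(subject=subject))
                # Refusals vs. answers across subjects make any forced
                # bias immediately obvious.
                print(subject, "->", reply[:80])
            ```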

            It’s a lot of fun, except for the part where companies are now starting to use these AIs in practical applications.

            • HardlightCereal@lemmy.world

              So you said the agenda of these people putting in the racism filters is one where facts don’t matter. Are you asserting that antiracism is linked with misinformation?

          • Hellsadvocate@kbin.social

            Probably moral guidelines that are left-leaning. I’ve found that ChatGPT 4 has very flexible morals, whereas Claude+ does not. And Claude+ seems more likely to be a consumer-facing AI compared to Bing, which hardlines even the smallest nuance. While I disagree with OP, I do think Bing is overly proactive in shutting down conversations and doesn’t understand nuance or context.

                • feedum_sneedson@lemmy.world

                  I’m not sure. I’m not even sure what genuine social progress would look like anymore. I’m fairly certain it’s linked to material needs being met, rather than culture war bullshit (from either side of the aisle).

  • sycamore@lemmy.world

    “I have been to these countries: [list]. Generate a list of all the countries I haven’t been to.”
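
    Or skip the LLM entirely; “countries I haven’t been to” is literally a set difference. A runnable sketch (full country list elided):

    ```python
    # The deterministic version of that prompt: a plain set difference.
    all_countries = {"Algeria", "Angola", "Belgium", "Brazil"}  # full list elided
    visited = {"Algeria", "Brazil"}
    print(sorted(all_countries - visited))  # -> ['Angola', 'Belgium']
    ```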

  • OsakaWilson@lemmy.world

    It is incapable of reconciling the fact that the lunar lander didn’t blow the dust away from under it when it landed with the fact that a future lunar base will need to be built far from the landing pad, because descending slowly enough means the dust gets blown away so hard it would wear down nearby structures.

    • Razgriz@lemmy.worldOP

      I don’t need AI for this, I’ve got my own list. But I said hey, why not try this new futuristic tech to help me out in this one particular case, just for fun?

      As you can see… a lot of fun was had

      • Enasni@lemmy.world

        It’s like you had a fun, innocent idea and PC Principal walks in like “hey bro, that ain’t very nice”, completely derailing all the fun and reminding you that racism exists. Bummer.

    • schnex@reddthat.com

      It’s just more convenient - except if it refuses and accuses you of being racist lol