If your IP address (and possibly your browser) looks “suspicious” or has been used by other users before, you have to provide additional information to register on gitlab.com, including your mobile phone number and possibly credit card details. Since it is not possible to contribute to, or even report issues on, open source projects hosted there without doing so, I do not think any open source project should use this service until they change that.

Screenshot: https://i.ibb.co/XsfcfHf/gitlab.png

  • @AdmiralShat@programming.dev · 68 points · 5 months ago

    On a tangent, why are all of these companies pushing AI programming? This shit isn’t nearly as functional as they make it seem and all the beginners who try it are constantly asking questions about why their generated code doesn’t work

    • @agent_flounder@lemmy.world · 64 points · 5 months ago

      We are in the hype cycle so everyone is going bananas and there’s money to be made prior to the trough of disillusionment.

      • Goku · 5 points · edited · 5 months ago

        Haha so true.

        I tried to use ChatGPT to convert a monstrosity of a SQL query to a SQLAlchemy query and it failed horribly.
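        The kind of conversion Goku describes can also be done by hand. A minimal sketch of rewriting a raw GROUP BY / HAVING query as a SQLAlchemy 2.0 Core statement (the `orders` table, its columns, and the sample data are all hypothetical, chosen just to make the example runnable):

        ```python
        # Hedged sketch: translating a raw SQL aggregate query into a
        # SQLAlchemy 2.0 Core expression. Table/column names are made up.
        from sqlalchemy import (
            Column, Integer, MetaData, String, Table, create_engine, func, select,
        )

        metadata = MetaData()
        orders = Table(
            "orders", metadata,
            Column("id", Integer, primary_key=True),
            Column("customer", String),
            Column("total", Integer),
        )

        engine = create_engine("sqlite:///:memory:")
        metadata.create_all(engine)

        with engine.begin() as conn:
            conn.execute(orders.insert(), [
                {"customer": "a", "total": 10},
                {"customer": "a", "total": 5},
                {"customer": "b", "total": 7},
            ])

        # Raw SQL being translated:
        #   SELECT customer, SUM(total) AS total_sum
        #   FROM orders
        #   GROUP BY customer
        #   HAVING SUM(total) > 6
        #   ORDER BY total_sum DESC;
        stmt = (
            select(orders.c.customer, func.sum(orders.c.total).label("total_sum"))
            .group_by(orders.c.customer)
            .having(func.sum(orders.c.total) > 6)
            .order_by(func.sum(orders.c.total).desc())
        )

        with engine.connect() as conn:
            rows = conn.execute(stmt).all()

        print(rows)  # e.g. [('a', 15), ('b', 7)]
        ```

        Each SQL clause maps to one chained method, which is why hand translation is mechanical for a human but easy for an LLM to garble on a genuinely monstrous query.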

    • lemmyvore · 42 points · edited · 5 months ago

      It’s their wet dream. Making software without programmers.

      Execs have never cared about the technology or the engineering side of it. If you could make software by banging on a pot while dancing naked around the fire, they’d have been ok with that.

      And now that AI has come along that’s basically what it looks like to them.

    • Dr. Jenkem · 27 points · 5 months ago

      VC’s and companies like OpenAI have done a really good job of propagandizing AI (LLMs). People think it’s magical and the future, so there’s money in saying you have it.

    • TimeSquirrel · 17 points · edited · 5 months ago

      the beginners who try it are constantly asking questions about why their generated code doesn’t work

      Because it ain’t here to generate all their code for them. It’s a glorified autocomplete and suggestion engine. When are people gonna get this? (not you, just in general)

      I use CoPilot myself, but if you have absolutely no idea what you’re doing yourself, you and CoPilot will both quickly hit a dead end together. It doesn’t actually understand what you want the code to do. Only what is similar to what you have already written or prompted for, which may be some garbage picked up from a noob on the web somewhere. Books and research using your meatbrain are still very much needed.

      • @devfuuu@lemmy.world · 7 points · 5 months ago

        It’s not in the interest of all the techbros to sell the new-age AI shit as something limited that can only do such small things. They need to hype the shit out of it to get money from all the crazy investors who understand nothing about it but see AI buzzwords everywhere and need to go for it now because of FOMO.

        It’s only gonna get much worse before it is toned down to appropriate usage.

      • @DrQuint@lemmy.world · 2 points · edited · 5 months ago

        Don’t even need to make it about code. I once asked what a term meant on a certain well-known FOSS application’s benchmarks page. It gave me a lot of unrelated garbage because it made an assumption about the term, exactly the assumption I was trying to avoid. I tried to steer it away from that, but it failed to say anything coherent, then looped back and gave that initial attempt as the answer again. I was stuck, unable to stop it from hallucinating.

        How? Why?

        Basically, it was information you could only find by looking at the GitHub code, and it was pretty straightforward, but the LLM sees “benchmark” and must therefore make a bajillion assumptions.

        Even if asked not to.

        I have a conclusion to make. It does do the code thing too, and it is directly related. I once asked about a library, and it found a post where someone was ASKING if XYZ was what a piece of code was for, and it gave that out as if it were the answer. It wasn’t. And this is the root of the problem:

        AI’s never say “I don’t know”.

        It must ALWAYS know. It must ALWAYS assume something, anything, because not knowing is a crime and it won’t commit it.

        And that makes them shit.

    • Badabinski · 13 points · edited · 5 months ago

      Because greedy investors are gullible and want to make money from the jobs they think AI will displace. They don’t know that this shit doesn’t work like they’ve been promised. The C-levels at Gitlab want their money (gotta love publicly traded companies), and nobody is listening to the devs who are shouting that AI is great at writing security vulnerabilities or just like, totally nonfunctioning code.

    • @fruitycoder@sh.itjust.works · 1 point · 5 months ago

      I’m hyped about AI-assisted programming and even agent-driven projects (writing their own code, submitting pull requests, etc.), but I also agree that it seems too early to actually put money behind it.

      It’s just so marginal so far: the UI/HMI still has too much friction, and the output without skilled programming assistance is too limited.