Having played complex strategy games for many years, one of the things that irks me the most is that hard AI levels often just give the dumb AI cheats to simulate it being smarter. To me, it’s not very satisfying to play against a cheating AI. Are any games today leveraging neural networks to supplant or augment hand-written, decision-tree-based AI? Are any under development? I know AI can be resource intensive, but it seems that at least turn-based games could employ it.

  • relic_@lemm.ee · 3 days ago

    The challenge is that AI for a video game (even one fixed game) is very problem-specific, and there’s no generalized approach or kit for developing AI for games. So while there’s research showing AI can play games, it involved lots of iteration and AI expertise. That’s obviously a large barrier for any video game, and that doesn’t even touch the compute requirements.
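
    As a concrete example of that barrier, here’s a minimal sketch of what the “research” path looks like in practice, assuming the off-the-shelf gymnasium and stable-baselines3 packages and a toy environment (CartPole) standing in for a real game; nothing like this exists ready-made for a studio’s custom engine.

    ```python
    # Minimal sketch: training a game-playing agent with deep reinforcement learning.
    # Assumes the gymnasium and stable-baselines3 packages; the environment, policy,
    # and step count are illustrative stand-ins, not details from this thread.
    import gymnasium as gym
    from stable_baselines3 import PPO

    env = gym.make("CartPole-v1")              # stand-in for a real game's simulator
    model = PPO("MlpPolicy", env, verbose=0)   # PPO, a common deep RL algorithm
    model.learn(total_timesteps=100_000)       # even this toy task needs many steps

    # Roll out the trained agent for one episode.
    obs, _ = env.reset()
    done = False
    while not done:
        action, _ = model.predict(obs, deterministic=True)
        obs, reward, terminated, truncated, _ = env.step(action)
        done = terminated or truncated
    env.close()
    ```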

    There’s also the problem of making AI players fun. Too easy and they’re boring; too hard and they’re frustrating. Expert-level AI can perform at an expert level, which wouldn’t be fun for the average player. Striking the right difficulty balance isn’t easy or obvious.

    • count_dongulus@lemmy.worldOP · 3 days ago

      I wouldn’t mind an AI using unorthodox strategies, but yeah, that’s a good point that fine-tuning it to be fun is a big challenge. Speaking of “non-player-like behavior”, I wonder if AI could be used to find multiplayer exploits sooner, though the problem there is that you don’t really have much training data beyond QA and playtesters before a full release.

      • relic_@lemm.ee · 3 days ago

        Historically, AI has found and used exploits. Before OpenAI was known for ChatGPT, they did a lot of work in reinforcement learning (often deployed in game-like scenarios). For example, agents trained with one of the more mainstream training algorithms (pioneered at OpenAI) played Sonic and would exploit bugs in the game.
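
        As a rough illustration (not OpenAI’s actual code), here’s a sketch of that kind of Sonic setup using the gym-retro package, assuming a legally obtained Sonic ROM has already been imported with `python -m retro.import`; a random policy stands in for a trained agent.

        ```python
        # Rough sketch of a Sonic environment like those in OpenAI's Gym Retro work.
        # Assumes the gym-retro package and an imported Sonic ROM; the game/state
        # names follow gym-retro's integration, and the reward is defined by its
        # scenario file rather than anything shown in this thread.
        import retro

        env = retro.make(game="SonicTheHedgehog-Genesis", state="GreenHillZone.Act1")
        obs = env.reset()
        total_reward, done = 0.0, False
        while not done:
            action = env.action_space.sample()          # random placeholder policy
            obs, reward, done, info = env.step(action)  # gym-retro uses the older 4-tuple gym API
            total_reward += reward
        env.close()
        print("episode reward:", total_reward)
        ```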

        The compute used for these approaches is pretty high, though. Even crafting a diamond in Minecraft can require playing for hundreds of millions of steps, and even then the AI might not consistently reach its goal. There’s still interesting work in the space, but sadly LLMs have sucked up a lot of the R&D resources.
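
        For a sense of that scale, here’s a sketch of stepping a MineRL-style Minecraft environment, assuming the minerl package (the environment id and action keys vary by release; “MineRLObtainDiamond-v0” is from the older 0.x versions). A real training run repeats loops like this for hundreds of millions of environment steps.

        ```python
        # Sketch of stepping a MineRL diamond environment; the id and action keys are
        # assumptions based on older MineRL releases, not code from this thread.
        import gym
        import minerl  # registers the MineRL environments with gym

        env = gym.make("MineRLObtainDiamond-v0")
        obs = env.reset()
        done, steps = False, 0
        while not done and steps < 1_000:        # real training needs vastly more steps
            action = env.action_space.noop()     # start from the all-zeros action dict
            action["forward"] = 1                # hold forward; a learned policy would choose
            obs, reward, done, info = env.step(action)
            steps += 1
        env.close()
        ```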