• Adderbox76@lemmy.ca
    2 months ago

    Every single one of us, as kids, learned the concept of “garbage in, garbage out”, most likely in terms of diet and food intake.

    And yet every AI cultist makes the shocked Pikachu face when they figure out that trying to improve an LLM by feeding it data generated by the very same, inferior LLM you’re trying to improve is an exercise in diminishing returns and generational degradation in quality.

    Why has the world gotten both “more intelligent” and yet fundamentally more stupid at the same time? Serious question.

    • GamingChairModel@lemmy.world
      2 months ago

      > Why has the world gotten both “more intelligent” and yet fundamentally more stupid at the same time? Serious question.

      Because it’s not actually always true that garbage in = garbage out. DeepMind’s AlphaZero trained itself from a very bad chess player to significantly better than any human has ever been, simply by playing chess games against itself and updating its parameters for evaluating which positions were better than others. All the system needed was the rules of chess, a way to define winners, losers, and draws, and a training procedure that optimized for winning over drawing, and drawing over losing if a win was no longer available.
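      To make that concrete, here’s roughly what that kind of self-play data generation looks like. This is just a toy sketch using random moves and the python-chess library, not AlphaZero’s actual method, but the reward ordering is the same: win > draw > loss.

      ```python
      # Toy self-play loop: play random games against yourself, then label every
      # position with the eventual outcome from the mover's point of view.
      import random
      import chess  # pip install python-chess

      def play_one_game():
          board = chess.Board()
          positions = []                      # (fen, side_to_move) for each position seen
          while not board.is_game_over():
              positions.append((board.fen(), board.turn))
              board.push(random.choice(list(board.legal_moves)))
          result = board.result()             # "1-0", "0-1", or "1/2-1/2"

          def score(side_to_move):            # +1 win, 0 draw, -1 loss for the side to move
              if result == "1/2-1/2":
                  return 0
              white_won = (result == "1-0")
              return 1 if white_won == (side_to_move == chess.WHITE) else -1

          return [(fen, score(side)) for fen, side in positions]

      # Self-play generates its own labelled training data; an evaluation model
      # would then be fit to these targets and used to pick better moves next round.
      training_data = []
      for _ in range(10):
          training_data.extend(play_one_game())
      print(len(training_data), "labelled positions")
      ```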

      Face swaps and deepfakes in general relied on adversarial training as well: one network learned to produce convincing fakes, another learned to detect them, and the two improved by competing against each other.
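      The adversarial loop itself is surprisingly small at its core. Here’s a hypothetical sketch (assuming PyTorch, and a 1-D toy distribution standing in for images): a generator learns to fake samples, a discriminator learns to catch them, and each step improves one against the other.

      ```python
      # Minimal GAN-style adversarial training on 1-D toy data.
      import torch
      import torch.nn as nn

      real_data = lambda n: torch.randn(n, 1) * 0.5 + 2.0   # "real" samples ~ N(2, 0.5)
      noise = lambda n: torch.randn(n, 8)

      G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                  # the faker
      D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())    # the detector
      opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
      opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
      bce = nn.BCELoss()

      for step in range(2000):
          # 1) Detector learns to tell real samples from the current fakes.
          real, fake = real_data(64), G(noise(64)).detach()
          d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
          opt_d.zero_grad(); d_loss.backward(); opt_d.step()

          # 2) Faker learns to fool the (now slightly better) detector.
          g_loss = bce(D(G(noise(64))), torch.ones(64, 1))
          opt_g.zero_grad(); g_loss.backward(); opt_g.step()

      print("generated mean ~", G(noise(1000)).mean().item())  # should drift toward 2.0
      ```

      The key point is that the “ground truth” here is externally defined (the real data distribution), so the competition has something solid to converge toward.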

      Some tech guys thought they could bring that adversarial, self-improving dynamic to generative AI: train on the model’s own outputs and improve beyond them. But the problem is that there isn’t a good definition of “good” or “bad” inputs, and so the feedback loop poisons itself when it starts optimizing on criteria different from what humans would consider good or bad.
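      You can watch that self-poisoning happen with even the dumbest possible “model”. In this hypothetical numpy example, each generation is fit only to samples drawn from the previous generation’s fit, with no external signal for what good data looks like, so the estimates drift and the spread tends to quietly shrink:

      ```python
      # Each generation "trains" (fits a mean and std) on data produced entirely
      # by the previous generation's model. Estimation error compounds, so the
      # fitted distribution drifts and tends to narrow over generations.
      import numpy as np

      rng = np.random.default_rng(0)
      data = rng.normal(loc=0.0, scale=1.0, size=20)    # generation 0: real data

      for gen in range(31):
          mu, sigma = data.mean(), data.std()           # fit the current data
          if gen % 5 == 0:
              print(f"gen {gen:2d}: mean={mu:+.3f}  std={sigma:.3f}")
          data = rng.normal(mu, sigma, size=20)         # next generation sees only
                                                        # the previous model's outputs
      ```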

      So it’s less like the AI technologies that came before, and more like how Netflix poisoned its own recommendation engine by producing its own content informed by that same engine. When you can passively observe trends and connections, you might be able to model them. But once you start feeding back into the data by producing shows and movies that you predict will do well, the feedback loop becomes unpredictable and stops working well, because you’re over-fitting the training data with new stuff your own model thinks might be “good.”