• PepperoniNipple@lemmynsfw.com · 2 months ago

      Yup. The worst part is, I think they’re trying to do this as quickly as possible to reach a level where they can ditch the artists’ training data entirely and be “free” of the consequences. They’re violating them as much as they can right now so they can let go of the training wheels ASAP and get away with it.

      • LostWanderer@lemmynsfw.com · 2 months ago

        Yep, regulation is surely coming, because LLM development is such a Wild West situation. Any scummy, dumpster-fire corpos will certainly rush to acquire enough training data to make artists useless to them in the future. These greedy, money-grubbing bastards are actively chasing that quick buck before it all comes crashing down on them. Grab the bag and don’t get smacked with the regulation ruler!

        There are already some fuckwits (Apple, Anthropic, Nvidia, Salesforce) who actively scraped content from YouTube titles and videos, laundering that stolen data, but if you prompt their AI just right, you can see the direct influence the stolen content has on whatever the LLM generates (Reference Article). It’s a gross practice that needs to be regulated and punished quickly, because these corporations have to be arm-wrestled into barely acting civil.