• brie@programming.dev · 1 day ago

    Large-context-window LLMs can do quite a bit more than gap-filling and completion: they can edit multiple files.

    Yet they’re unreliable, as they hallucinate all the time. Debugging LLM-generated code is a new skill, and it’s up to you to decide whether to learn it. I see quite an even split among devs. I think it’s worth it, though it once took me two hours to track down a very obscure bug in LLM-generated code.

    • sudneo@lemm.ee · 8 hours ago

      Humans are notoriously worse at reviewing than at creating. Editing an article is more boring and painful than writing it; understanding and debugging code is much harder than writing it; watching someone cook to spot mistakes is duller than cooking yourself.

      This also clashes with the attention those tasks demand: a higher ratio of reviewing to creating leads to lower-quality output, because attention eventually depletes and mistakes slip in. All this with the added “bonus” of having to pay for the tool AND the human reviewer, while also burning tons of water and energy. It’s worth asking whether this makes sense at all.

      • brie@programming.dev · 3 hours ago

        To make sense of that, figure out which pays more: observing/editing or cooking/writing. Big shekels will make the boring parts exciting.

        • sudneo@lemm.ee · 10 minutes ago

          Consider also the number of people doing both. And writers earn way more than editors, just as stellar chefs earn way more than food critics.

          If you think devs will be paid more to review GPT code, well, I would love to have your optimism.

    • cley_faye@lemmy.world · 18 hours ago

      If you consider debugging broken LLM-generated code to be a skill… sure, go for it. But since generated code can lean on tons of obscure side effects and other (to humans) seemingly random tricks to achieve its goal, I’d rather take the other approach: let a human spend the half hour writing the code an LLM could generate in seconds, get a working result, and never have to learn to parse random mumbo jumbo from a machine.

      Writing code is far from the longest part of the job, and you’ve blithely decided that making the tedious part even more tedious is a great way to shorten the already-short part…

      • brie@programming.dev · 3 hours ago

        It’s similar to fixing code written by interns. Why hire interns at all, eh?

        Is it faster to generate then debug, or to write everything yourself? That needs proper testing. At the very least, many devs have the perception of being faster, and perception sells.

        It genuinely makes writing web apps less tedious. The longest part of a dev job is actually pretending to work, but that’s no different from any office jerb.

    • NigelFrobisher@aussie.zone · 18 hours ago

      I have one of those at work now, but my experience with it is still quite limited. Copilot was quite useful for knocking up quick boutique solutions to particular problems (stitch together a load of PDFs sorted on a name heading), with the proviso that you might end up having to fix bleed between dependency versions and repair syntax. I couldn’t trust it with big refactors of existing systems.
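For scale: that PDF-stitching job is roughly this much code by hand. A sketch, not the commenter’s actual script; it assumes the name to sort on is recoverable from the filename (e.g. `Smith_Q3.pdf` — that convention and `name_key` are invented here for illustration) and that the third-party pypdf package is available for the merge step.

```python
# Sketch: merge a batch of PDFs in surname order.
# Assumptions (not from the thread): filenames start with the surname
# before an underscore, and pypdf (third-party) is installed.

def name_key(path: str) -> str:
    """Sort key: the surname portion of the filename, lowercased."""
    stem = path.rsplit("/", 1)[-1].removesuffix(".pdf")
    return stem.split("_", 1)[0].lower()

def merge_sorted(paths: list[str], out_path: str) -> None:
    # Imported here so the sort key above stays testable without pypdf.
    from pypdf import PdfWriter

    writer = PdfWriter()
    for p in sorted(paths, key=name_key):
        writer.append(p)  # append every page of the source PDF
    with open(out_path, "wb") as f:
        writer.write(f)
```

The dependency-version “bleed” the comment mentions is exactly the kind of thing that bites here: older codebases used `PyPDF2.PdfFileMerger`, which LLMs still happily emit.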

      • brie@programming.dev · 3 hours ago

        Cursor and Claude are a lot better than Copilot, but none of them can be trusted. For existing large code repos, LLMs can generate tests and similar boring stuff. I suspect there’ll be an even bigger shift to microservices, to make it easier for LLMs to generate something that works.
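“Tests and similar boring stuff” mostly means repetitive scaffolding like the following. The `slugify` function and its tests are hypothetical, invented here purely to show the shape of output LLMs tend to get right: many small, mechanical cases against a stable interface.

```python
# Sketch of the kind of mechanical test scaffolding an LLM handles well.
# `slugify` is a made-up example function, not from any real codebase.
import re

def slugify(title: str) -> str:
    """Lowercase, replace runs of non-alphanumerics with '-', trim edges."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_separators():
    assert slugify("a  --  b") == "a-b"

def test_slugify_punctuation_only():
    assert slugify("!!!") == ""
```

The catch, per the thread: someone still has to review whether these cases encode the intended behaviour or merely the current behaviour.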