I’ve experimented a bit with ChatGPT, asking it to create some fairly simple code snippets to interact with a new API I was messing with, and it straight up confabulated methods for the API based on extant methods from similar APIs. It was all very convincing, but if there’s no way of knowing that it’s just making things up, it’s literally worse than useless.
ChatGPT has been helpful as an interactive rubber duck. I used it to help me break down the technical problems I need to solve, and it cuts the time to complete a difficult ticket that would usually take a couple of days of work down to a couple of hours.
“just good enough to be dangerous”
I’ve had similar experiences with it telling me to call functions of third-party libs that don’t exist. When you tell it “Function X does not exist,” it says “I’m sorry, you’re right, function X does not exist in library A. Here is another example using function Y,” and then function Y doesn’t exist either.
I have found it useful in a limited scope, but I have found Copilot to be much more of a daily time saver.
So? Those mistakes will come up in testing, and you can easily fix them (either yourself, or by asking the AI to do it, whichever is faster).
I’ve successfully used it to write code for APIs that didn’t even exist a couple of years ago, when ChatGPT’s model was trained. It doesn’t need to know the API to generate working code; you just need to tell it what APIs are available as part of your conversation.
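Concretely, “telling it what APIs are available” can just mean pasting the real signatures into the prompt. Here’s a minimal sketch with the openai v1 Python client; the payments functions in the prompt and the model name are assumptions, only there to show the shape of it:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical API surface, pasted into the prompt so the model doesn't have to guess.
api_docs = """
Payments API (illustrative only):
  create_charge(amount_cents: int, currency: str, customer_id: str) -> Charge
  refund_charge(charge_id: str, amount_cents: int | None = None) -> Refund
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[
        {
            "role": "system",
            "content": "Write Python using ONLY the functions listed below. "
                       "Do not invent other methods.\n" + api_docs,
        },
        {"role": "user", "content": "Charge customer cus_123 $25, then refund half of it."},
    ],
)
print(response.choices[0].message.content)
```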
Except that in code, you can write unit tests and have checks that it absolutely has to get precisely correct.
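For example, a handful of tests pins down exactly what the generated code has to do. Everything in this sketch is hypothetical (parse_duration stands in for whatever function you asked it to write); it’s just the kind of check the output absolutely has to pass:

```python
# Hypothetical tests a human writes (or at least reads) before trusting
# whatever parse_duration() implementation the model produced.
import pytest
from durations import parse_duration  # assumed module the model is asked to write

def test_plain_seconds():
    assert parse_duration("90s") == 90

def test_minutes_and_seconds():
    assert parse_duration("2m30s") == 150

def test_rejects_garbage():
    with pytest.raises(ValueError):
        parse_duration("soonish")
```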
If you have to write the code and tests yourself… that’s just normal coding, then.
You don’t, you get it to write both the code and the tests. And you read both of them yourself. And you run them in a debugger to verify they do what you expect.
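In practice that looks something like this: a hypothetical code/test pair, both model-generated, that you read yourself and then step through once (pdb here standing in for “run them in a debugger”):

```python
import pdb

def slugify(title: str) -> str:
    """Model-generated (hypothetical): lowercase, spaces to dashes, drop punctuation."""
    cleaned = "".join(ch if ch.isalnum() or ch == " " else "" for ch in title.lower())
    return "-".join(cleaned.split())

def test_slugify():
    """Model-generated test, reviewed by a human before being trusted."""
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Already   spaced  ") == "already-spaced"

if __name__ == "__main__":
    pdb.run('slugify("Hello, World!")')  # step through it once to see what it actually does
    test_slugify()
    print("tests passed")
```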
Yeah, that’s still part of the work of “normal coding,” but it’s also only half the work. Which is a pretty awesome boost to productivity.
But where it really boosts your productivity is with APIs that you aren’t very familiar with. ChatGPT is a hell of a lot better than Google for simple “what API can I use for X” questions.
You might have to rewrite all of it. The code and the tests.
Hell, even the structure/outline it chose might not be correct.
Yeah but I don’t. That’s the whole damn point.
I really suggest you guys try it.
It’s really, really bad very often.