• catloaf@lemm.ee · 8 points · 1 year ago

    Really? Like what? I’ve always had ChatGPT give confident answers. I haven’t tried to stump it with anything really technical though.

      • DominusOfMegadeus@sh.itjust.works · 4 points · edited 11 months ago

        I’ve asked moderately technical questions and was confidently given wrong information. That said, it’s right far more often than Copilot. I haven’t used Google for quite some time.

        • floofloof@lemmy.ca · 1 point · edited 11 months ago

          Huh, I’ve found GitHub Copilot better. You still can’t trust it when it talks about APIs, though. Or anything else, really; you have to keep your wits about you. I use it for suggestions on where to start with things, or for testing my assumptions, or for generating boilerplate code, but not for copying and pasting anything critical.

    • best_username_ever@sh.itjust.works · 6 points · 11 months ago

      I try ChatGPT and others once every month to see if they improve my programming experience. Yesterday I got fake functions that do not exist, again. I’ll try again next month.

      • TimeSquirrel@kbin.social · 2 points · edited 11 months ago

        Try the GitHub Copilot plugin if your IDE supports it. It can do things regular ChatGPT can’t, like see your entire codebase and come up with suggestions that actually make sense and use your own libraries.

        Do not, however, use it to create complete programs from scratch. It doesn’t work out that way. It’s just an autocorrect on steroids.

        Using just the straight web-based version of ChatGPT sucks because it has no background context for what you’re trying to do.

        • best_username_ever@sh.itjust.works · 5 points · 11 months ago

          Here is the problem that won’t change for me or my coworkers: we will never use GitHub, and our source code is very private (medical devices or worse).

          Also I asked a question that didn’t need any context or codebase. It was about a public API from an open-source project. It hallucinated a lot and failed.

          Last but not least, I never needed an autocomplete on steroids. I would enjoy some kind of agent that can give precise answers on specific topics, but I understand that LLMs may never provide that.

          I just cringe a lot when programmers tell me to use a tool that obviously can’t and will never be able to give me those answers.

          • penguin_ex_machina@lemmy.world · 1 point · 11 months ago

            I’ve actually had pretty good success with ChatGPT when I go in expecting it to hallucinate a significant chunk of what it spits back at me. I like to think of it as a way to help process my own ideas. If I ask questions with at least a base understanding of the topic, I can then take whatever garbage it gives me and go off and find real solutions. The key is not to trust it wholesale to give you the right answer, but to let it give you some nuggets that set you on the right path.

            I think I’ve basically turned ChatGPT into my rubber duck.