• Daxtron2@startrek.website · 1 year ago

    I think this is extremely important:

    Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities.

    Bad programmers + AI = bad code

    Good programmers + AI = good code
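
    For context, “adjusting temperature” in that quote means tuning how random the model’s sampling is. A minimal sketch of what that looks like with the official openai Python client (the model name and prompt are placeholders, not from the study):

    ```python
    # Minimal sketch: requesting code with a low temperature.
    # Assumes the official `openai` package; model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "user", "content": "Write a Python function that validates an email address."}
        ],
        temperature=0.2,  # lower temperature = less random, more repeatable output
    )
    print(response.choices[0].message.content)
    ```

    Lower temperatures make sampling more deterministic; the study’s point is that participants who engaged with knobs like this produced more secure code.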

      • Aurenkin@sh.itjust.works · 1 year ago

        What do you mean? Sounds to me like any other tool: it takes skill to use it well. Same as Stack Overflow, built-in code suggestions, or IDE-generated code.

        That’s not to detract from its usefulness, just to say it requires knowledge to use well.

        • ericjmorey@programming.dev (OP) · 1 year ago

          As someone currently studying machine learning theory and how these models are built, I’m explaining that these models have, built into their core, functions that amplify the bias of the training data: they identify mathematical associations within the training data and use them to generate output. Because of that design, a naive approach to using them amplifies not only the bias of the training data but also the bias of the person using the tool.
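
          A toy illustration of the amplification mechanism (not real model internals, just the general shape of the problem): if generation always picks the most likely association, a mild skew in the training data becomes an absolute preference in the output.

          ```python
          # Toy sketch of bias amplification: a 60/40 skew in the training
          # data becomes 100/0 in the output when generation greedily picks
          # the most frequent association.
          from collections import Counter

          training_data = ["A"] * 60 + ["B"] * 40  # mildly skewed corpus
          counts = Counter(training_data)

          def generate() -> str:
              # Greedy decoding: always emit the single most likely token.
              return counts.most_common(1)[0][0]

          outputs = Counter(generate() for _ in range(100))
          print(outputs)  # Counter({'A': 100}) -- 60/40 amplified to 100/0
          ```

          Real models are vastly more complicated, but the naive-use failure mode is the same: take the most likely output at face value and the skew compounds.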

      • Daxtron2@startrek.website · 1 year ago

        Eh, I’ve known lots of good programmers who are super stuck in their ways. Teaching them to use an LLM effectively can help break them out of the mindset that there’s only one way to do things.