• drdiddlybadger@pawb.social · +84/−1 · 2 days ago

    Is anyone else hating a lot of these current articles that are sparse as fuck on detail? How are they actually using generative AI? Where is it being applied? Just telling me that it’s tools for editors and volunteers doesn’t tell me what the tool is doing. 😤

    • sugar_in_your_tea@sh.itjust.works · +4 · 2 hours ago

      I’m a manager of sorts, and one of the people who reports to me used gen AI in their mid-year review. Basically, they said, “make this sound better,” and the AI spit out something that reads better while still having the same content. In the past, this person had continually been snarky and self-deprecating, and the AI helped make it sound more constructive.

      I hope that’s what’s happening here. A human curates the content, runs it through the AI to make it read better, then edits from there. That last part is essential though.
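
      The technical side of that workflow is tiny. Here’s a rough sketch, assuming the OpenAI Python client; the model name, prompt, and sample draft are placeholders I made up, not what my report actually used:

          # Minimal sketch: reword human-written text, then hand it back for human editing.
          from openai import OpenAI

          client = OpenAI()  # picks up OPENAI_API_KEY from the environment

          draft = "I guess I shipped the reporting feature eventually. Testing was a mess."

          response = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[
                  {"role": "system",
                   "content": "Reword the user's self-review so it sounds constructive "
                              "and professional. Keep every factual claim; add nothing new."},
                  {"role": "user", "content": draft},
              ],
          )

          reworded = response.choices[0].message.content
          print(reworded)  # a person still reads and edits this before it goes anywhere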

      • lime!@feddit.nu · +70 · 2 days ago

        Ah, so no generative AI is used in actual article production, just in meta stuff and for newcomers to ask questions about how to do things.

      • pelespirit@sh.itjust.works · +40 · edited · 2 days ago

        Yeah, this seems like an anti-Wikipedia article. They’re just using AI for translation, fixing spelling errors, checking content quality, etc.

        Wikipedia’s model of collective knowledge generation has demonstrated its ability to create verifiable and neutral encyclopedic knowledge. The Wikipedian community and WMF have long used AI to support the work of volunteers while centering the role of the human. Today we use AI to support editors to detect vandalism on all Wikipedia sites, translate content for readers, predict article quality, quantify the readability of articles, suggest edits to volunteers, and beyond. We have done so following Wikipedia’s values around community governance, transparency, support of human rights, open source, and others. That said, we have modestly applied AI to the editing experience when opportunities or technology presented itself. However, we have not undertaken a concerted effort to improve the editing experience of volunteers with AI, as we have chosen not to prioritize it over other opportunities.

  • randon31415@lemmy.world · +33 · 2 days ago

    Wikipedia had bots writing articles on US places from census data back in 2002, 20 years before LLMs were a thing. They’ve had decades of regulations in place since, so I’m not scared that the quality is going to drop.

  • Xanza@lemm.ee · +5/−31 · 2 days ago

    Wikipedia is generally a really good candidate for generative AI.

    • RandomVideos@programming.dev · +14/−3 · 2 days ago

      Generative AI suffers from inaccuracy; text generators make up believable lies when they don’t have enough information.

      • Xanza@lemm.ee · +10/−3 · edited · 2 days ago

        The point of generative AI isn’t accuracy, so that’s pretty much expected.

        Generative AI is designed to be used with a content base and to expand on information, not to create new information. You can feed generative AI the entirety of the current Wikipedia text and have it expand on subjects that need it, and curtail and simplify other subjects that need that.

        You don’t ask generative AI to come up with new information; that’s how you get inaccurate information.

        text generators make up believable lies when they don’t have enough information

        Let’s not anthropomorphize AI. It doesn’t lie. When it lacks sufficient information on a subject, it uses whatever data is available to expand on that subject and make the response conversationally complete, regardless of whether the added context is correct. That’s completely different, and you can specifically prohibit an AI from doing that…

        AI is great when used appropriately. The issue is that people are using AI as a Google replacement, something it’s not designed to be. AI isn’t a fact engine. LLMs are designed to resemble human speech as closely as possible, not to give correct answers to questions. People’s issue with AI is that they’re fucking using it wrong.

        This is an exceptionally good use of AI because you already have the required factual background knowledge. You can simply feed it to your AI, telling it not to fill in any gaps and to rewrite articles to be more uniform, with direct and easy-to-consume wording. This is quite literally what generative AI was designed for… to take factual knowledge and generate context around the existing data.

        Issues arise when you use AI for things it wasn’t intended for, or when you don’t give it enough information and it has to generate content to fill out the dataset. AI will do what you ask; you just have to know how to ask it. That’s why AI prompt engineers are a thing.
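
        To make that concrete, a constrained rewrite could look something like the sketch below. It’s only an illustration, assuming the OpenAI Python client; the model name, the prompt wording, and the article_draft.txt file are placeholders, not anything Wikipedia actually runs:

            # Sketch of a grounded rewrite: the model may only restate what's in the source.
            from openai import OpenAI

            client = OpenAI()

            # The only material the model is allowed to draw on.
            source = open("article_draft.txt", encoding="utf-8").read()

            response = client.chat.completions.create(
                model="gpt-4o-mini",
                temperature=0,  # keep the output close to the source material
                messages=[
                    {"role": "system",
                     "content": "Rewrite the provided article text for clarity and a uniform, "
                                "encyclopedic tone. Use only facts present in the text; if "
                                "something is missing, leave it out rather than inventing it."},
                    {"role": "user", "content": source},
                ],
            )

            print(response.choices[0].message.content)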

        • sugar_in_your_tea@sh.itjust.works · +1 · 2 hours ago

          Exactly. At work, my team kinda sucks at communication but is great with facts (we’re engineers, go figure), so they use gen AI to turn those facts into nicer-to-read writing (e.g. reviews, emails, documentation). The process is relatively smooth:

          1. generate all the facts in a rough form
          2. ask AI to reword it for whatever purpose
          3. edit it a bit to correct any issues
          4. if needed, ask a coworker to quickly review it

          For that task, it works pretty well.

  • LupusBlackfur@lemmy.world · +4/−6 · 2 days ago

    …nothing could possibly go worng!..

    (Some of you may remember the original Westworld 1sheet…)