• 0 Posts
  • 84 Comments
Joined 2 years ago
Cake day: August 9th, 2023





  • Casey Newton founded Platformer after leaving The Verge around 5 years ago. But yeah, I used to listen to Hard Fork, his podcast with Kevin Roose, but I stopped because of how uncritically they cover AI and LLMs. It’s basically the only thing they cover, and yet they’re quite gullible and not really realistic about the whole industry. They land some amazing interviews with key players, but never ask hard questions or dive nearly deep enough, so they end up sounding pretty fluffy and ass-kissy. I totally agree with Zitron’s take on their reporting. I constantly found myself wishing they were a lot more cynical and combative.


  • That’s an interesting article, but it was published in 2022, before LLMs were on anyone’s radar. The results are still incredibly impressive, without a doubt, but based on how the researchers explain it, it looks like it was accomplished using deep learning, which isn’t the same thing as LLMs, though they’re not entirely unrelated.

    Opaque and confusing terminology in this space also makes it very difficult to determine which people, systems, or technologies are actually making these advancements. As far as I’m concerned, none of this is actual AI, just very powerful algorithmic prediction models. So the claim that an AI system has itself made unique technological advancements, when these systems are incapable of independent creativity, proves to me that nearly all their touted benefits are still entirely hypothetical right now.


  • The article explains the problems in great detail.

    Here’s just one small section of the text which describes some of them:

    All of this certainly makes knowledge and literature more accessible, but it relies entirely on the people who create that knowledge and literature in the first place—that labor that takes time, expertise, and often money. Worse, generative-AI chatbots are presented as oracles that have “learned” from their training data and often don’t cite sources (or cite imaginary sources). This decontextualizes knowledge, prevents humans from collaborating, and makes it harder for writers and researchers to build a reputation and engage in healthy intellectual debate. Generative-AI companies say that their chatbots will themselves make scientific advancements, but those claims are purely hypothetical.

    (I originally put this as a top-level comment, my bad.)



  • Your description of those desks totally knocked some of my old memories loose. I remember going to a friend’s house in the late 90s when the first smallish “all-in-one” PCs started coming on the market (before the iMac claimed that space in ‘98). They had their new all-in-one PC set up on a tiny desk in the hallway outside their office. It was there so everyone in the family could use it, but I remember being shocked at how small it was, and so impressed that it didn’t need the whole corner of a room.





  • Human and AGI collaboration might be interesting, if real AI ever actually develops. But I wouldn’t call augmenting or probing existing works of fiction with rehashed LLM sludge collaboration; I’d call it glorified, advanced plagiarism at worst, and low-quality CliffsNotes at best.

    I would much rather read a work of creative fiction from a human being than encounter autocorrect word predictions strung into paragraphs. The idea that the text itself can be queried to gain additional meaning divorced from the author’s intention strikes me as unrealistic and unfaithful to the person who originally crafted the words.

    Though I’m obviously biased against LLMs being used for this kind of thing, from lots of experience seeing how crappy they are.



  • Ed Zitron has the best takes on this imo. One of his pieces is linked in the posted article, but here it is again. His podcast also has some of the most grounded and hilarious insight into the absurdity of the AI bubble. If you want to hear from him in a more mainstream setting, I highly recommend the interview he did with Brooke Gladstone on On The Media. That was the first time I heard anyone really talk about the AI industry with genuine frankness and honesty.

    Basically, OpenAI, Sam Altman, and all of the big tech players have defrauded us and investors by raising laughably high amounts of money and wasting precious resources to build inferior and closed products, when any reasonable person would have known there were better ways. This whole thing also proves how essential competition is to a healthy market and producing things people actually want to use.

    In essence, DeepSeek — and I’ll get into its background and the concerns people might have about its Chinese origins — released two models that perform competitively (and even beat) models from both OpenAI and Anthropic, undercut them in price, and made them open, undermining not just the economics of the biggest generative AI companies, but laying bare exactly how they work. That last point is particularly important when it comes to OpenAI’s reasoning model, which specifically hid its chain of thought for fear of “unsafe thoughts” that might “manipulate the customer,” then muttered under their breath that the actual reason was that it was a “competitive advantage.” -Zitron



  • I started losing my hair when I was a teenager, so I’ve been bald for most of my life. I’ve been shaving my head for decades because it’s the only way my head and face don’t look absurd. I’m totally used to it, and long ago accepted that I’d never have hair on my head again.

    But I’d be lying if I said I didn’t want my hair back.

    If this turns out to be legit and works on most people, there could be a worldwide explosion of self-esteem in adults.


  • I wish this was all true, I really do. But there is a time and a place to be calm. This is not that time, and this is not that place.

    These systems are supposed to have COOP plans (Continuity of Operations), but not all of them do. Systems are supposed to have some degree of backups, but I can tell you from experience that this is almost never the case in any meaningful way.

    I’ve spoken to a number of feds who said their work disappeared overnight. They didn’t choose to comply, and didn’t have sufficient backups in place because of a lack of resources. Their manager or an administrative assistant somewhere most likely went on a deletion spree, and there’s nothing anyone can do about it.

    Sometimes when this stuff is gone, it’s really gone. And we have every right to be furious about it.

    100% agree about the media incentives, but sometimes outrage is not only warranted, but essential.


  • I definitely understand that reaction. It does give off a whiff of unprofessionalism, but their reporting is so consistently solid that I’m willing to give them the space to be a little more human than other journalists. If it ever got in the way of their actual journalism I’d say they should quit it, but that hasn’t happened so far.


  • BertramDitore@lemm.ee to Technology@lemmy.world · *Permanently Deleted* · 3 months ago

    Corporate media take note. This is how you do reality-based reporting. None of the both-sides bullshit trying to justify or make excuses, just laughing in the face of absurd hypocrisy. This is a well-respected journalist confronting a truth we can all plainly see. See? The truth doesn’t need to be boring or bland or “balanced” by disingenuous attempts to see the other side.

    I will explain what this means in a moment, but first: Hahahahahahahahahahahahahahahaha hahahhahahahahahahahahahahaha. It is, as many have already pointed out, incredibly ironic that OpenAI, a company that has been obtaining large amounts of data from all of humankind largely in an “unauthorized manner,” and, in some cases, in violation of the terms of service of those from whom they have been taking from, is now complaining about the very practices by which it has built its company.