FYI: OpenCritic average is moderately lower at (currently) 78/100 (82% recommend) https://opencritic.com/game/18413/tempest-rising
Currently studying CS and some other stuff. Best known for previously being top 50 (OCE) in LoL, expert RoN modder, and creator of RoN:EE’s community patch (CBP).
(header photo by Brian Maffitt)
Thanks! I did actually Ctrl+F the first two pages but that post has unfortunately not federated to me so I can’t see it on my instance! The woes of federated social media :(
https://fedia.io/media/76/9c/769c2c9146db022100a057732ac38d2502b3a8edc32068b3ff484e43006bed7b.png
I’ll delete the post to preempt anyone else from getting upset at me despite doing nothing wrong 🫠
After many years of selectively evaluating and purchasing bundles as my main source of new games, I’ve come to wonder if it would’ve been better to just buy individual games when I wanted to play them, at whatever price was available - the rate at which I get through games is far lower than the rate at which games show up in “good” bundles. In the end I’m not even sure I’ve saved money (so many of the games I’ve bought are still unplayed), and evaluating whether something’s a good deal takes extra time too.
The upside is way more potential variety of games to pull from in my library, but if I only play at most like 1-2 dozen new games a year then I’m not sure that counts for much 🫠
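The break-even question above is easy to sketch as arithmetic - all numbers here are made up for illustration, not my actual purchases:

```python
# Hypothetical comparison: bundle buying vs. buying individually at sale price.
bundle_cost = 12.0          # assumed price paid for one bundle
games_in_bundle = 8         # games included in it
games_actually_played = 2   # games from it actually played

# Effective price per *played* game from the bundle
bundle_price_per_played = bundle_cost / games_actually_played

# Versus buying only those two games individually at an assumed sale price
individual_price = 7.0
individual_total = individual_price * games_actually_played

print(bundle_price_per_played)  # 6.0 per played game
print(individual_total)         # 14.0 total vs. 12.0 for the bundle
```

With these made-up numbers the bundle still narrowly wins, but the margin shrinks fast as the fraction of unplayed games grows.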
Weirdly grateful right now that lemmy image embeds don’t work properly on mbin (they fall back to being ordinary URLs) 🫠
Nice to see he took it in stride given how… aggressive the post was about him lol
So they literally agree not using an LLM would increase your framerate.
Well, yes, but the point is that at the time that you’re using the tool you don’t need your frame rate maxed out anyway (the alternative would probably be alt-tabbing, where again you wouldn’t need your frame rate maxed out), so that downside seems kind of moot.
Also what would the machine know that the Internet couldn’t answer as or more quickly while using fewer resources anyway?
If you include the user’s time as a resource, it sounds like it could potentially do a pretty good job of explaining, surfacing, and modifying game and system settings, particularly to less technical users.
For how well it works in practice, we’ll have to test it ourselves / wait for independent reviews.
It sounds like it only needs to consume resources (at least significant resources, I guess) when answering a query, which will already be happening when you’re in a relatively “idle” situation in the game since you’ll have to stop to provide the query anyway. It’s also a Llama-based SLM (S = “small”), not an LLM for whatever that’s worth:
Under the hood, G-Assist now uses a Llama-based Instruct model with 8 billion parameters, packing language understanding into a tiny fraction of the size of today’s large scale AI models. This allows G-Assist to run locally on GeForce RTX hardware. And with the rapid pace of SLM research, these compact models are becoming more capable and efficient every few months.
When G-Assist is prompted for help by pressing Alt+G — say, to optimize graphics settings or check GPU temperatures — your GeForce RTX GPU briefly allocates a portion of its horsepower to AI inference. If you’re simultaneously gaming or running another GPU-heavy application, a short dip in render rate or inference completion speed may occur during those few seconds. Once G-Assist finishes its task, the GPU returns to delivering full performance to the game or app. (emphasis added)
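As a rough sanity check on why an 8-billion-parameter model can run locally on consumer GPUs, the weight footprint is just parameter count times bytes per weight. This is back-of-the-envelope only (it ignores KV cache, activations, and runtime overhead, and NVIDIA hasn’t published these exact deployment details):

```python
params = 8e9  # 8 billion parameters, per the quoted NVIDIA post

# Approximate weight-storage footprint at common precisions
# (illustrative only; real deployments add cache/activation overhead)
for label, bytes_per_param in [("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    gib = params * bytes_per_param / 2**30
    print(f"{label}: ~{gib:.1f} GiB")
# fp16: ~14.9 GiB, int8: ~7.5 GiB, int4: ~3.7 GiB
```

So at 4-bit quantization the weights alone fit comfortably within the VRAM of mid-range RTX cards, which is presumably part of why an SLM was chosen over a larger model.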
Eh, I think that one’s mostly on the community / players giving up games as soon as anything bad happens (making the 30-70 and 40-60 games where you still have decent odds of winning more like 5-95 games which become a self-fulfilling prophecy), plus regular players getting better over time (mistakes and misplays are more likely to be punished and leads are more likely to be capitalized on).
The give-up culture wasn’t as bad much earlier in the game’s life, at least in my NA-centric exposure to solo queue.
It’s technically an option, yeah, but as you said it’s not something practically used as an “everyday” feed-sorting algorithm. It’s not as though it’s a default or suggested sort option - compare that to Mastodon where it’s the only sort option X_X
Definitely agree that the common-with-Mastodon viewpoint of exclusively using chronological feeds seems to have over-corrected too far. Can you imagine if the threadiverse was sorted that way? It would be insane and essentially unusable at scale - so we can at least acknowledge that sorting algorithms have a useful place and are not some unsalvageable, irredeemable evil. I wish there were a bunch of open source algorithms which the user could choose between in whatever UI they’re using. At the very least there should be some acknowledgement that I, the user, don’t have an identical level of interest in every account I follow, or even in every topic which the same account posts about.
And while microblogging platforms seem to have it worst, there have also been times in the threadiverse where I’ve subscribed to a community/magazine only to later unsubscribe because the activity levels it produces in my feed are much higher than my interest levels in it. So even here (where we have sorting by “hot” etc), some kind of user-configurable weighting would be nice to better match how I actually want my feed to work!
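A user-configurable weighting like that could be as simple as scaling a “hot”-style score by a per-community multiplier. A minimal sketch - the formula and names here are hypothetical, not any actual instance’s algorithm:

```python
import math

def score(post_age_hours: float, votes: int, user_weight: float) -> float:
    """Hypothetical feed score: a 'hot'-style rank scaled by a
    per-community weight the user sets (1.0 = neutral)."""
    # Log-scaled votes divided by an age penalty, a common 'hot' shape
    base = math.log1p(max(votes, 0)) / (post_age_hours + 2) ** 1.5
    return base * user_weight

# A high-volume community I care little about gets down-weighted;
# a quiet one I love gets boosted. The weights are user settings.
busy = score(post_age_hours=1, votes=200, user_weight=0.3)
quiet = score(post_age_hours=1, votes=10, user_weight=2.0)
```

With those weights, the quiet community’s 10-vote post outranks the busy community’s 200-vote post, which is exactly the kind of control a subs feed currently lacks.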
edit: typo
Searching for the phrase turns up matches in Taiga’s documentation, so maybe you’re right!
Would be curious to read the LLM output.
It looks like it’s available in the linked study’s paper (near the end)
For what it’s worth I generally agree with you, and especially think the people who treat /all as their own personal feed are nuts, but nonetheless it’s something that some people do 🫠
Everyone has their own preferences about how to use things!
Browsing the global/all feed is one way to find new communities, and some people just like using it in general rather than defaulting to a subs-only view.
Just be aware that some places/connections have trouble connecting with it: https://catbox.moe/faq.php (under Connectivity Issues)
Minor bug in the UI / frontend I guess - you could try reporting it to the lemmy devs on GitHub if there isn’t already a GitHub issue for it
Seems like there are other Imgur submissions on Fedia that mostly work fine (no thumbnail, but the image itself shows up if you click on the expando thing in the UI) https://fedia.io/d/imgur.com
I did notice that at some point around the time of my second comment Imgur was having some issues (I was given a JSON file instead of a webpage or image when I found and clicked on the image that OP submitted), so maybe fedia and a few other instances just got unlucky with the timing?
If you notice it becoming a recurring problem you can report it on Fedia’s meta magazine/community: [email protected]
Well, now that’s just discrimination :(
Weird though!
Edit: it also doesn’t show for kbin.earth (which is running mbin) - I’m curious what the source of the problem is for this strange and seemingly-arbitrary minority of users https://kbin.earth/m/[email protected]/t/1028068/So-after-using-Lemmy-for-1-5-Years-You-are-telling
I seem to be missing some context - anyone want to fill in the rest of the class?
Edit: the image being shown to ~~lemmy users~~ everybody else is not being shown to ~~mbin users~~ ~~me and/or fedia.io users (unclear)~~ some unknown subsection of mbin users including me, so here it is for those like me: https://imgur.com/q4zuZzz
Wow, they literally added more horse armor lol