• 12 Posts
  • 1.85K Comments
Joined 2 years ago
Cake day: October 4th, 2023


  • I’m confused.

    First, from the article, my understanding is that Google is talking about providing support for their LLM on Apple’s iOS phones (I assume via querying an off-phone server, rather than running locally). This would mean that iOS users would have the ability to use Google’s LLM, Gemini, instead of only having ChatGPT available.

    The Pixel is an Android phone sold by Google. This isn’t the hardware or OS being discussed, and I assume that if you have a Pixel phone, you already have the ability to use Gemini.

    Second, I don’t see why someone would take issue. I mean, I can see not wanting to use the thing. I don’t use Google’s off-device speech recognition, because I don’t want to send snippets of my voice to Google. I don’t use their LLM functionality. I think that there are all sorts of apps, like location-sharing things, that it is a bad idea to install. But it’s not like Google providing support on the platform would force you to use the thing.

    Third, it sounds like you can use Gemini on GrapheneOS. If you object to the use of a platform that can make use of Gemini, GrapheneOS isn’t going to get you there.



  • This is deeply unethical,

    I feel like maybe we’ve gone too far on research ethics restrictions.

    We couldn’t do the Milgram experiment today under modern ethical guidelines. I think that it was important that it was performed, even at the cost of the stress that participants experienced. And I doubt that it is the last experiment for which that is true.

    If we want to mandate some kind of careful scrutiny of such experiments, and require that some after-the-fact compensation be paid to participants in experiments in which trauma-producing deception is imposed, maybe that’d be reasonable.

    That doesn’t mean every study that violates present ethics standards should be greenlighted, but I do think that the present bar is too high.


  • As soon as it enters sleep or hibernates, it immediately turns back on.

    I don’t know about sleep, but for hibernation, do you have enough swap space to store a copy of your physical memory? If you don’t, that’s the behavior I’ve seen; there’ll be some system log message, can’t remember if kernel or userspace, about not having enough space to hibernate.

    An easy way to test, if you think that might be it, is to add a swap file (rather than a swap partition, so you don’t have to repartition) to your swap space, activate it with swapon, and then try hibernating again.
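    If it does turn out you need more swap, a rough sketch of what that looks like (the /swapfile path and 16 GiB size are just examples; size it to at least your installed RAM):

    # create and activate a 16 GiB swap file
    sudo dd if=/dev/zero of=/swapfile bs=1M count=16384 status=progress
    sudo chmod 600 /swapfile      # swap must not be world-readable
    sudo mkswap /swapfile
    sudo swapon /swapfile

    # confirm the extra swap shows up
    swapon --show
    free -h

    # then retry hibernation
    sudo systemctl hibernate

    Note that actually resuming from a swap file can additionally require the resume= and resume_offset= kernel parameters to be set up; for just testing whether the machine will enter hibernation at all, the above should be enough.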



  • Complexity doesn’t really add difficulty to 3D printing. My 3D printer doesn’t much care whether a head is moving in a straight line or doing a zig-zag. It’s gonna just keep extruding that concrete.

    Kinda like how a 2D printer doesn’t much care whether you’re printing a detailed image or a very simple one.

    I guess that there’s a material cost. But, then, that’s also true of existing buildings, and they clearly don’t optimize for that to the exclusion of all else, else there’d be no aesthetic used in designing those buildings.


  • You can run an entirely-untrusted client, but that creates very substantial limitations. I’d go so far as to say that doing so rigorously makes most genres of multiplayer games impractical. Some examples:

    • If the client has no information about enemy players that aren’t currently visible in a first-person shooter, you get latency-induced delay in an enemy becoming visible, collision-detection issues, and so forth.

    • If all random number generation needs to happen server side, then the client not only can’t be trusted to generate any important random numbers (probably avoided today for many games) but also cannot run an RNG with the same internal state as the server (else a client could “peek ahead” to see the future). As a result, every action that involves randomness requires a server round trip and a sluggish response until a player can see the outcome.

    • On any network without guaranteed minimum bandwidth and guaranteed maximum latency (which is generally not the case for the Internet today), a client can deliberately withhold packets and pretend that the network just dropped them. There, either one has to seriously restrict predictive movement, or a player can pull stunts like briefly disabling packet transmission and “teleporting” across dangerous areas, preventing an enemy from taking a shot at them.

    • All hidden information in a game that a client could expose to a player must be exposed to maintain a level playing field. Even information that isn’t specially available to only one client. So, say players figure out that Monster X in an MMO has a respawn timer that causes it to respawn at time T, and someone can modify the client to indicate the remaining time with a countdown. For a level playing field, that feature has to be built into the base client for all players, immersion-breaking or no.

    • Any large amount of state on the client probably has to also be exposed to the player, because transmitting it over the network in real time isn’t practical due to bandwidth constraints. So players have to all have minimaps with no obscured terrain, because their client has map files with the geometry. They need to be able to see at least terrain through walls, because their client has that information. Maybe you could try to hide map data in encrypted chunks the very first time through, but that’s going to lead to performance issues and also the next issue if maps are the same across multiple players (almost certainly yes):

    • Any information that another player’s client has should also be available, as players can have clients that collude. So if any other player can see an enemy player, then since their client could tell you about them, the vanilla client must have a wallhack showing any enemy visible to any other player to keep level footing.

    • Almost all simple mechanical operations (e.g. responding quickly to counter an enemy action) must be automated or removed from the game, because otherwise someone could modify their client to do this instantly in response to an enemy action and gain an edge. Dodging needs to be totally automated or gone, for example. The only player actions that can remain are those that require human-level reasoning, where the client can’t play better in some respect than the human using “bot”-like code. Relying on reflexes, or on a player managing load from having to deal with many simple things at once, can’t be in the game. Aimbots have to be in the base client for first-person shooters.

    For something like turn-based card games, yeah, it’s possible to create games where the client is completely nontrusted. There are no real time constraints, and you can deal with the client (or server) gaming the RNG via mental poker techniques. Keep in mind that even there, the client might be doing things like running probability calculations and telling a player the mathematically-optimal card to play in many cases. Poker would basically be reduced to trying to detect player tells, because playing poker optimally outside of that was solved by von Neumann, and the client could probably tabulate data on other players, like speed of response, to try to help a player identify tells.
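    To give a flavor of how the randomness part can work without trusting either side (this isn’t mental poker itself, just the simpler commit-reveal idea underneath it; a sketch with made-up variable names):

    # client commits to a secret before the server reveals anything
    client_secret=$(openssl rand -hex 32)
    client_nonce=$(openssl rand -hex 16)
    commitment=$(printf '%s%s' "$client_nonce" "$client_secret" | sha256sum | awk '{print $1}')

    # ...client sends the commitment; server replies with its own secret...
    server_secret=$(openssl rand -hex 32)

    # client reveals its secret and nonce; server recomputes the hash and checks
    # it against the earlier commitment, then both sides derive the same roll
    combined=$(printf '%s%s' "$client_secret" "$server_secret" | sha256sum | awk '{print $1}')
    echo "d6 roll: $(( 0x${combined:0:8} % 6 + 1 ))"

    Neither side can pick the roll unilaterally: the client is locked in before it sees the server’s contribution, and the server never sees the client’s secret until after it has sent its own. Full mental poker schemes extend this sort of idea to shuffling a whole deck that neither side can see.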

    But for most real-time games, providing an experience comparable to the current one is probably going to require some level of client trust. Maybe one can achieve the lesser goal of reducing one’s trust a bit, make it harder to cheat, but a true zero-trust real-time game has to have a great many very serious limitations placed on it.


  • I’m guessing that either (a) the method of concrete extrusion or (b) the process the particular company uses doesn’t permit what I’ve generally heard called “bridging” in the plastic 3D printing world — being able to create some limited degree of overhang to create arches. If you look at the building, there are no arches — the places with windows are gaps reaching to the roof in the 3D printed wall.

    Normally, with a brick building that has load-bearing walls, you can see a different pattern of bricks directly above a window, where the mason has to go out of their way to support the gap.

    kagis

    I think that that structure is called a lintel.

    I’d think that one route to achieve that might be sticking some kind of metal support above the window during the printing process, even if the extruded concrete alone doesn’t permit for it. But if they can’t do that as things stand, it’d explain why they might not want to have a lot of windows.


  • which developers can activate some sort of device whitelist to allow the Steam Deck with SteamOS.

    The whole “kernel anti-cheat” PC multiplayer thing is just an endless game of whack-a-mole that I don’t want to get involved with. End of the day, PC hardware is open, and that’s a big part of what’s nice about it. With open hardware, though, comes the fact that you can change things, and one thing that people love using that ability to do is to cheat. And with multiplayer games, a cheater can ruin the game for a non-cheater. Trying to figure out some way to make an open PC locked down is hard, unreliable, and ultimately just makes the thing act like a bad console. If there’s enough demand for it and money in it, a game developer can keep playing that whack-a-mole, but it’s never something that can really be permanently fixed all that well.

    Consoles are really good at blocking players from doing things that will make the playing field not level. They are in a good position to stop fiddling with memory, or modifying game binaries, or extracting information that should be hidden and showing it to the player. They can restrict people from getting a controller or a keyboard or a mouse or a fancier GPU or whatever that will give Player 1 a pay-to-win edge over Player 2. That’s a desirable characteristic if your goal is to have players playing against each other on even footing.

    I really think that the long-term, realistic route to deal with this is for PC games to shift towards single-player, or at least away from competitive multiplayer.

    It used to be that PC multiplayer games were rare. There were two major changes after this that made PC multiplayer games a lot more viable:

    • The Internet came along. Now anyone can communicate with anyone in a very wide area cheaply.

    • People moved off POTS analog modems. This not only provided a lot more bandwidth, but slashed latency — a POTS modem inserted a bit over 100 ms of latency. A tenth of a second of latency at the hardware level was a serious issue for some genres, like first-person-shooter games, so getting rid of that solved a lot of problems.

    Okay, great. That unleashed a flood of multiplayer games. And making a game multiplayer solves a lot of problems:

    • Writing good game AI that stays competitive against a human is hard. Human opponents are pretty good at providing that challenge.

    • Humans are good at coordinating, so cooperative games work well when humans play alongside humans.

    • Some people specifically want to play against other people, to spend time with them.

    The problem is that I don’t think that there is going to be any future big jump like that improving the multiplayer competitive situation. Those two big jumps are pretty much the end of the road in terms of making multiplayer significantly better. Maybe it’s possible to see some minor gains via better predictive algorithms to shave off perceived latency, though I don’t think that there is going to be game-changing stuff there. Maybe someone can improve on matchmaking. But I think that we’ve seen all the big improvements that are going to come.

    And multiplayer comes with a host of problems that are very hard to fix:

    • By-and-large, realtime multiplayer games cannot be paused. There are a few exceptions, mostly for games with a small number of players. This is a real problem if you, say, have a kid and want to play a game and in the middle of it you hear something smash in the next room and the kid start screaming. Real life is not friendly to people requiring uninterrupted blocks of time.

    • People don’t always do things that are fun for other people. Griefing, spawn-camping, cheating, whatever. Even minor stuff, like breaking out of character, makes the game less immersive. You can try to mitigate that with game design, but it’s always going to be an issue. Human nature doesn’t change: humans come firmly attached to human foibles.

    • Multiplayer games stop being playable at some point, when they no longer have enough players. Often before that, if they have centralized servers operated by the publisher — which is almost universally the case today — and the servers get shut down.

    • Even with modern networks, latency and flaky connectivity are factors for real-time games. For people living in remote areas, they’re particularly annoying.

    • For multiplayer competitive games, one can only win at some given rate; for a player to win against a human, that other player will lose. I’d wager that that rate is most-likely not the optimal rate for game enjoyment. If a player isn’t competing against humans, that constraint on game designers goes away.

    On the other hand, while it is hard to make sophisticated game AI, hard to make it as good as a human…there are also no real fundamental limits there. I am confident that while we are not there today, we can make an AGI comparable to a human, and that for the simpler problem of game AI, it’s possible to make much less sophisticated AIs that solve many of the problems that humans do in games. That’s expensive to do on a per-game basis – game AI is hard – but my guess is that most games have a fairly similar set of problems to solve, and that it’s probably possible to abstract a lot of that and have “game AI engines” used across many games that solve a lot of those problems. We’ve done that with graphics engines and physics engines; there was a point where having the kind of graphics and physics that we do in many games was prohibitively expensive too, until we figured out how to reuse a lot of work from game to game. And improvement in game AI is a ratchet: it’s only going to get better, whereas human nature isn’t going to change.

    I’m not saying that multiplayer games are going to vanish tomorrow. But I think that the environment we’re going to see is going to differ from the one we saw from maybe the 1990s to the 2010s, where technological change dramatically improved the relative situation for multiplayer games; I think that there’s going to be a continued trend towards conditions that relatively favor single-player games.


  • My issue is frequent crashing/freezing, meaning I can’t play longer than a few minutes at a time

    Could be overheating.

    I use AMD hardware.

    However, a few years back, I had a particular AMD card that, using its default on-card power profiles, tended to overheat in games which really exercised the thing; I understand that the vendor that made these cards had issues with insufficient thermal paste or the thermal paste detaching or something. That’s the card vendor’s fault – the card shouldn’t reach a point where it can get into trouble via overheating – but regardless, it was still a problem. Some people disassembled the thing and put more thermal paste on. I forced the thing to a more-conservative power profile, and that worked.

    I haven’t done this with Nvidia hardware, but it sounds like nvidia-smi can do this:

    https://forum.level1techs.com/t/how-to-set-nvidia-gpu-power-limit-nvidia-smi/131467

    Then to query your power limit:

    sudo nvidia-smi -q -d POWER
    

    And to set it (pick a wattage below the card’s default limit, since the goal here is to restrict power draw):

    sudo nvidia-smi -pl <power limit in watts>
    

    Might try restricting the power usage and see if your crashing stops.
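    If you want to check whether heat is actually what’s going on, nvidia-smi can also report temperature and power draw while the game runs; something like this in another terminal (exact query fields can vary a bit across driver versions):

    watch -n 1 nvidia-smi --query-gpu=temperature.gpu,power.draw,power.limit --format=csv

    If the temperature climbs steadily and the freeze happens right around its peak, that points pretty strongly at a thermal problem rather than a driver or game bug.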

    EDIT: Might also try turning down in-game graphical settings. That’d decrease load and maybe also avoid any potential overheating issues, though it’d be a less-reliable option than the above, as you probably don’t want to make your system freeze just by running some program that happens to throw a lot of load at your card. That also might avoid any issues that the drivers could have that the game is tickling. Worth a shot, at least from an experimentation standpoint, if you are looking for things to try.

    EDIT2: If those do successfully address your problem and it looks like it’s an overheating problem, you might also try figuring out whether you can improve the cooling situation on the hardware side, rather than sacrificing performance for stability.




  • KSP does what it does well. Any sequel comes with huge questions of why people would want another space program simulator

    I think that there were pretty clear ways to expand KSP that I would have liked.

    • There was limited capacity to build bases and springboard off resources from those.

    • I’d have liked to be able to set up programmed flight sequences.

    • More mechanics, like radiation, micrometeorite impacts, etc.

    • The physics could definitely have been improved upon in a number of ways. I mean, I’ve watched a lot of rockets springily bouncing around at their joints.

    • Some of the science-gathering stuff was kind of…grindy. I would have liked that part of the game to be revamped.

    • I don’t think that graphics were a massive issue, but given how much time you spend looking at flames coming from rocket engines, it’d be nice to have improved on that somewhat. I’d have also liked some sort of procedural-terrain-generation system to permit for higher-resolution stuff when you’re on the ground; yeah, you’re mostly in the air or space, but when you’re on the ground, the fidelity isn’t all that great.



  • I use org-mode, which is kind of a structured text format, like Markdown but far fancier, in emacs. Can have to-do lists, deadlines, tables, display a weekly/monthly agenda with planned items, etc. I sometimes use it as a sort of mini-spreadsheet, as it can act something like a spreadsheet, with recalculating tables. I don’t go in for the “whole life organizing in a tool” thing, so there’s a lot of functionality that I don’t use, but it’s generally a superset of what I want, so it works well. There are various other software packages that support it.

    I figured out (while using obsidian) that my brain works better when I dont have to worry about where to put things, but just tag them with topics, by relevance, e.g. So tags and the option to filter them would be nice!

    Org-mode supports tags, though I don’t use them myself.

    https://orgmode.org/manual/Tags.html
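    As a rough idea of what that looks like in a plain .org file (contents here are invented): tags go at the end of a heading between colons, TODO state and deadlines live on or under the heading, and tables can recalculate from a formula line:

    * TODO Write up printer notes                              :hardware:linux:
      DEADLINE: <2025-01-15 Wed>
    * Groceries                                                :errands:
      | Item   | Qty | Price | Total |
      |--------+-----+-------+-------|
      | Coffee |   2 |  8.50 | 17.00 |
      #+TBLFM: $4=$2*$3

    Pressing C-c C-c on the #+TBLFM line recalculates the table, and M-x org-tags-view (or the agenda’s tag filtering) pulls up everything matching a tag across your org files.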

    That being said, while other software packages do have varying degrees of support, and vim has some support, org-mode is really an emacs thing at its core, so I think that it’s most interesting if you use emacs.



  • Why do you think this happens when these developers already had a winning formula?

    I mean, all series are going to have some point where they dick things up, else we’d have never-ending amazing video game series. I don’t think that the second game in the series is uniquely bad.

    Some of it is just going to be luck. Like, hitting just the right combination of employees, market timing, consumer interest, design decisions, scoping a game’s development time and so forth isn’t a perfectly-understood science. Making the best game of the year probably means that a studio can make a good game, but that’s not the same thing as being able to consistently make the game of the year, year after year.

    Some of it is novelty. I mean, part of most outstanding games is that they’re doing at least something that hasn’t been done before, and doing so again — especially if other studios are trying to copy and build on the winning formula as well — may not be enough.

    Some of it is that more resources don’t always make a game better. I know that at least some past series have failed when a studio made a good game, (understandably) got more resources for the next game in the series, but then tried to expand their scope and didn’t do well at that new scope.

    Engine rewrites are technically-risky, can get scope wrong, and a number of games that have really badly failed have happened because a studio tries to rebuild everything from the ground up rather than to do an incremental improvement.

    You mention Cities: Skylines 2, and I think that “more resources don’t always help”, “luck”, and “engine rewrite” were all factors. When I play a city-builder, I really don’t care all that much about graphics; I’ve played and enjoyed some city-builders with really unimpressive graphics, like the original lincity. CS2 got a lot of budget and had a dev team that tried to use a lot of resources on graphics (which I think was already not a good idea, and not just due to my own preferences; reading player comments on things like Steam, what players were upset about were that they wanted more-interesting gameplay mechanics, not fancier graphics). Basically, trying to make the world’s prettiest city-builder with the money maybe wasn’t a good idea. Then they made some big internal technical shifts that involved some bad bets on how well some technology that they wanted to use for those graphics would work, and found that they’d dug themselves deeply into a hole.

    Sometimes it’s a game trying to shift genres. To use the Fallout series as an example of doing this both what I’d call successfully and unsuccessfully: the Fallout games were originally isometric, real-time-until-combat-then-turn-based games. With Fallout 3, Bethesda took the series to be a pausable 3D first-person-shooter. That requires a whole lot of software and mechanics changes. That was, I think, successful — while the Wasteland series that the original Fallout games were based on continued the isometric turn-based model successfully, Fallout 3 became a really big hit. On the other hand, Fallout 76 was an attempt to take the series to be a live-service multiplayer game. The genre change wasn’t the only problem — the game shipped in an extremely buggy state, after the team underestimated the technical challenges in taking their single-player game multiplayer. But some of it was just that the genre change took away some of what was nice about the earlier games — lots of plot and story and scripted content, a world that the player was the center of and could change, and an immersive environment that didn’t have other players acting out of character. The audience who loves a game in one genre isn’t necessarily a great fit for another genre. In that situation, it’s not so much that the developers don’t have a winning formula as that they’ve decided to toss their formula out and try to write a new one that’s as successful.