

A highly compressed, global base map at 1m resolution is somewhere on the order of 10TB. MSFS is probably using higher resolution commercial imagery, and that’s just the basemap textures, most of which you’ll never see.
MSFS implements optimizations on top of that (progressive detail, compression, etc), but that’s how almost all map systems work under the hood. It’s actually an efficient way to represent real environments where you don’t have the luxury of procedural generation.
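For a rough sense of where a number like 10TB comes from, here’s a back-of-envelope sketch. All the constants are my own assumptions (land-only coverage, heavy JPEG-class compression), not anything MSFS has published:

```python
# Back-of-envelope estimate for a 1m global basemap.
# Constants below are rough assumptions, not MSFS internals.

LAND_AREA_M2 = 1.49e14    # ~149 million km^2 of land
BYTES_PER_PIXEL = 0.07    # heavily compressed aerial imagery

pixels = LAND_AREA_M2     # one pixel per square meter at 1m resolution
base_bytes = pixels * BYTES_PER_PIXEL

# A tile pyramid (progressive detail) stores every coarser zoom level too;
# each level has 1/4 the pixels, so the overhead converges to ~1/3 extra.
pyramid_bytes = base_bytes * 4 / 3

print(f"base level:   {base_bytes / 1e12:.1f} TB")    # ~10.4 TB
print(f"with pyramid: {pyramid_bytes / 1e12:.1f} TB") # ~13.9 TB
```

Cover the oceans too, or bump the compression quality, and you’re well past that.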
Cleanroom RE is how you prove to a court that that’s what you did. The point here is to avoid ending up in a courtroom with Nintendo at all, which makes the cleanroom process moot.
The thing is, Steam’s market dominance comes from user choice rather than anticompetitive strategy or a lack of alternatives. Steam doesn’t do exclusives, doesn’t charge you for external sales, doesn’t prevent you from selling Steam keys outside the platform, and doesn’t stop users from launching non-Steam games in the client. The only real restriction is that access to Steam services requires a license on the active Steam account. Even Valve-produced devices like the Steam Deck can install from other stores.
Sure, dominance is bad in an abstract theoretical way and it’d be nice if Gog, itch.io, etc were more competitive, but Steam is dominant because consumers actively choose it.
Not bad, but you’re missing that the Bluetooth device can report its audio latency back to the source, so the source can delay anything that needs to stay in sync. In practice there are half a dozen more buffers in between and a serious tradeoff between latency, noise sensitivity, and bandwidth.
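As a sketch of the delay-reporting idea (AVDTP reports sink delay in 1/10ms units; the class and method names here are made up for illustration, not a real Bluetooth stack API):

```python
from collections import deque

class AVSync:
    """Toy A/V sync: hold video frames until the matching audio is heard."""

    def __init__(self):
        self.audio_latency_ms = 0.0   # last value reported by the sink
        self.video_queue = deque()    # (timestamp_ms, frame) pairs

    def on_delay_report(self, delay_tenths_ms):
        # AVDTP delay reports arrive in 1/10 millisecond units
        self.audio_latency_ms = delay_tenths_ms / 10

    def frame_to_present(self, now_ms):
        # A frame stamped ts should appear when its audio actually plays,
        # i.e. at ts + reported latency, so video is shifted to match audio.
        if self.video_queue:
            ts, frame = self.video_queue[0]
            if now_ms >= ts + self.audio_latency_ms:
                self.video_queue.popleft()
                return frame
        return None
```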
Extradition treaties are almost always reciprocal, and this particular treaty is publicly available. No public treaty is going to include a promise not to coup the other government, because of the obvious political consequences of admitting to everyone else that you might.
No, the “non-fungibility” simply means that an NFT someone else creates with the same link is a distinct token from yours, even if the actual URL is identical. Both NFTs can also be traced back to when they were created/minted because they’re on a blockchain, a property called provenance. If the authentic tokens came from a well-known minting, you can establish that your token is “authentic” and the copy token is a recreation, even if the actual link (or other content) is completely identical.
Nothing about having the “authentic” token would give you actual legal rights though.
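A toy sketch of the distinction (the field names are hypothetical, but the idea is that identity comes from the contract + token ID + mint event, not the link):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Nft:
    contract: str    # minting contract address
    token_id: int
    token_uri: str   # the link; NOT part of the token's identity
    mint_block: int  # where provenance tracing starts

original = Nft("0xArtistContract", 42, "https://example.com/cat.png", 14_000_000)
knockoff = Nft("0xCopycat", 1, "https://example.com/cat.png", 15_500_000)

assert original.token_uri == knockoff.token_uri  # identical link...
assert original != knockoff                      # ...still distinct tokens
# Provenance = checking which contract/mint the community treats as authentic.
```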
That’s perfectly solvable with math. Each grid square can take 10 colors, so there are 10^100 possibilities. That’s about 332 bits of entropy, or equivalent to a 51 character random password. That’s gross overkill if the underlying cryptosystem isn’t broken, but insufficient if it is (depending on the details).
Cryptography routinely deals with much, much larger numbers than what you’re suggesting (e.g. any RSA key), and even those get broken occasionally.
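The arithmetic, if anyone wants to check it:

```python
import math

cells, colors = 100, 10
bits = cells * math.log2(colors)          # log2(10^100)
print(f"{bits:.0f} bits of entropy")      # ~332 bits

# Length of an equivalent random password over 94 printable ASCII symbols:
print(math.ceil(bits / math.log2(94)))    # 51 characters
```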
No. Nvidia will be licensing the designs to MediaTek, who will build the ASIC/silicon into their scaler boards. That solves a few different issues. For one, no FPGAs involved = big cost savings. For another, MediaTek can do much higher volume than Nvidia, which brings costs down. The licensing fee is also going to be significantly lower than the combined BOM cost + licensing fee they currently charge. I assume Nvidia will continue charging for certification, but that may lead to a situation where many displays are G-Sync compatible and simply don’t advertise it on the box except on high-end SKUs.
Flat cables can be conformant and they still have twisted pairs. Cables just have to meet the physical properties set by the standard.
Other than Apple Music and iCloud, they’re generally less intrusive about popups than Microsoft. Their tactic is to completely prevent competitors from integrating with the system at all rather than nag you to change a setting. For example, there’s no way to use Google Maps or Spotify in all the same ways you can use Apple Maps or Apple Music.
Just did a quick eBay check. The cheapest 350hp ICE I could find was a rebuilt $3,000 Chevy engine. A new one is more like $6-8k. An equally powerful, brand new Siemens motor was $1,500.
This makes sense when you think about it though. An electric motor is basically just steel with a bunch of coiled wire and some control electronics. An ICE is hundreds of pounds of precision cast and machined metal. The cost driver in electric vehicles is not the motor, it’s the batteries.
A torque converter is part of the whole transmission system even if it’s a separate housing. When you buy a new transmission, it comes with a torque converter.
Torque converters also create the majority of heat in automatic transmissions and are why automatic transmissions get coolers in the first place. How many manuals have you seen with transmission coolers?
There is independent government oversight. That’s NHTSA, the agency doing these investigations. The companies operating these vehicles also have insurance as a requirement of public operating permits (managed by the states). NHTSA also requires mandatory reporting of accidents involving these vehicles and has safety standards.
The only thing missing is the fee, and I’m not sure what purpose that’s supposed to serve. Regulators shouldn’t be directly paid by the organizations they’re regulating.
Just for context, a large chunk of “top tech talent” at the companies in the study are going to be making $200-400k. While there are still going to be issues with pay, it’s a pretty different situation than fast food workers or similar.
I’m not assuming it’s going to fail, I’m just saying that the exponential gains seen in early computing are going to be much harder to come by because we’re not starting from the same grossly inefficient place.
As an FYI, most modern computers are modified Harvard architectures, not Von Neumann machines. There are other, more exotic architectures being explored, but I’m not aware of any that are massively better on the power side (vs simply being faster). The acceleration approaches I’m aware of that do better (e.g. analog or optical accelerators) are also totally compatible with traditional Harvard/Von Neumann architectures.
ML is not an ENIAC situation. Computers got more efficient not by doing fewer operations, but by making what they were already doing much more efficient.
The basic operations underlying ML (e.g. matrix multiplication) are already some of the most heavily optimized things around. ML is inefficient because it needs to do a lot of that. The problem is very different.
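As a quick illustration of how tuned that path already is, compare per-element Python loops against a single BLAS call doing the exact same O(n^3) math (exact timings are machine-dependent):

```python
import time
import numpy as np

n = 256
a, b = np.random.rand(n, n), np.random.rand(n, n)

t = time.perf_counter()
c_loops = np.zeros((n, n))
for i in range(n):                    # element-by-element in Python
    for j in range(n):
        c_loops[i, j] = a[i, :] @ b[:, j]
print(f"loops: {time.perf_counter() - t:.3f}s")

t = time.perf_counter()
c_blas = a @ b                        # one call into a tuned BLAS kernel
print(f"BLAS:  {time.perf_counter() - t:.5f}s")

assert np.allclose(c_loops, c_blas)   # same result, orders of magnitude apart
```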
I couldn’t find official dimensional accuracy specs for any Formlabs machines except the Form 1, which lists 150um. Perhaps you’re talking about the Form 3, which has a specified minimum spot size of 85um according to this paper. Where did they claim micron dimensional accuracy?
Any cryptography you’re likely to encounter uses fixed-size primes over a residue ring for performance reasons. These superlarge primes aren’t relevant for practical cryptography, they’re just fun.
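A toy example of the fixed-size case (sympy is used here just to generate a prime; the sizes are typical, not tied to any specific system):

```python
from sympy import randprime

p = randprime(2**2047, 2**2048)   # a ~2048-bit prime: DH/RSA scale
g, secret = 2, 0xC0FFEE

# All the arithmetic stays inside the residue ring Z/pZ, so every
# intermediate value fits in a fixed 2048 bits and stays fast.
shared = pow(g, secret, p)
print(p.bit_length(), shared < p)  # 2048 True
```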