• 0 Posts
  • 1.42K Comments
Joined 10 months ago
Cake day: July 15th, 2024


  • On the basis of having bought it. If they haven't actually sold it, but created the impression that they did, then they've committed a crime.

    When you're sold a cure for all ills, with minuscule fine print saying it's just a metaphor, the seller is committing a crime. It's the same here.

    Morally, that is - regardless of how courts interpret this right now. The fact that courts and legal practice officially do not equal morality, so we can decide differently this time if we can justify it, is the main advantage of the English legal system, and those descended from it, over others.


  • Edit: I should add, if corporations can't be bothered to respect what the word "buy" means, why should I bother to give them money? Morality is a two-way street; if one side is dishonest and shady, do they really have a right to whine when others steal from them?

    Ah, yes, remember all that tone of honesty and seriousness from companies in the 00s, aimed at the bad, bad pirates, along with the scorn at FOSS - those amateur toys, we make better things? And now, from time to time, those "serious professional" programs from back then turn out to contain GPL violations. Or how Sony put a rootkit on music CDs.

    TBH, there was a time when things were better with actually buying software and music and such. And the surge of piracy probably came first.

    But somehow that doesn't hurt Steam. Quoting GN: piracy is a service problem. People generally pirate what they can't comfortably buy. There were games I never saw in stores in my childhood (no official localization, and by the time I got interested in them, the people selling bootleg discs in subway underpasses were going out of fashion here). Piracy was how I got them.





  • It was in the movies they liked when they were kids. Or at least in the movies they think users want to see brought to reality.

    As in, an answer to the question "what's cool and futuristic?". Solving medieval barbarism and wars is futuristic, but turns out not to be achievable. Same with floating/underwater ocean cities, blooming deserts, Mars colonies, and a 20-minute train ride from Moscow to New Delhi. At the same time, the audience has been promised by years of advertising that the future will be delivered to them. So - AR. For Apple this is the most important part, I think.

    Also, to augment something you have to analyze it, and if you have to analyze it, you're permitted to scan and analyze it. That's a general point of attraction, I think. They're just extrapolating what led them to their current success.

    Also, in some sense, the popular things of the last 10-15 years were toys or promises of the future, for businesses and individuals alike. The audience is getting tired of toys and promises, while these companies don't know how to make anything else.

    So let Tim Apple care about anything from AR in front of him to apples in his augmented rear; he surely knows what he wants. As another commenter says, one use is a source of instructions and hints, with visualization, for a human acting as a walking drone. I'm not sure that's good: if you can produce that information for the machine, having a human there seems unnecessary. And if that information isn't reliable enough, it may not improve the human's productivity or error rate.

    And the most important part is that humans learn by doing things that are hard; it's like working out in an exoskeleton - what's the point? And if training and work are separated here, it seems more effort is spent in total. Not sure.


  • It makes sense why they want this technology so much; one thing really has been achieved. In 2005 you couldn't make a program that was a keylogger and a useful tool all in one, so you had to make the keylogger somehow detect the rare moments when it could safely run, or something like that. You couldn't instruct it in English, "send me his private messages on sites like Facebook"; you had to be specific and solve problems. Now you can. And these "AI"s are usually one general-purpose program, stuffing everything together with the kinda-useful parts.


  • All you need for this is a global overlay network and a global DNS untied from physical infrastructure. Cryptographic identities (a hash of the public key will do) instead of IP addresses (because NATs are a PITA and too many people use mobile devices behind big, bad NATs), and finding (in something like Kademlia) records signed by an authority you yourself chose to trust, instead of asking DNS (rough sketch at the end of this comment).

    Then come encryption and dynamic routing and synchronization of published states.

    One can have some kind of Kademlia for discovery of projects too, but on the next level.

    I2P comes close, but it’s more focused on anonymity.

    OK, I’m not sure what I wrote makes sense. These things are easy to grasp somehow, but hard to understand well.
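
    To make it a bit more concrete, here's a rough sketch of what such a record could look like, assuming ed25519 keys and SHA-256: a node's identity is the hash of its public key, and a "DNS" record is a name-to-identity binding signed by an authority the user chose to trust. The Kademlia lookup itself isn't shown, and all names and fields here are made up.

```typescript
// Sketch only: identities are hashes of public keys, and "DNS" records are
// name -> identity bindings signed by an authority the user chose to trust.
// The DHT (Kademlia) lookup is not shown; field names are illustrative.
import { createHash, generateKeyPairSync, sign, verify, KeyObject } from "node:crypto";

// Identity = hash of the public key, independent of IP addresses and NATs.
function identityOf(pub: KeyObject): string {
  const der = pub.export({ type: "spki", format: "der" });
  return createHash("sha256").update(der).digest("hex");
}

interface NameRecord {
  name: string;      // human-readable name, e.g. "example.site"
  target: string;    // identity (pubkey hash) the name resolves to
  issued: number;    // timestamp, so newer records supersede older ones
  signature: string; // authority's signature over the fields above
}

function signRecord(name: string, target: string, authorityPriv: KeyObject): NameRecord {
  const issued = Date.now();
  const payload = Buffer.from(JSON.stringify({ name, target, issued }));
  return { name, target, issued, signature: sign(null, payload, authorityPriv).toString("hex") };
}

// The resolver trusts a specific authority key chosen by the user,
// not whichever DHT node happened to return the record.
function verifyRecord(rec: NameRecord, authorityPub: KeyObject): boolean {
  const payload = Buffer.from(JSON.stringify({ name: rec.name, target: rec.target, issued: rec.issued }));
  return verify(null, payload, authorityPub, Buffer.from(rec.signature, "hex"));
}

// Usage: the authority would publish the record into the DHT under hash(name);
// a client fetches candidates and keeps only the ones its trusted key signed.
const authority = generateKeyPairSync("ed25519");
const site = generateKeyPairSync("ed25519");
const record = signRecord("example.site", identityOf(site.publicKey), authority.privateKey);
console.log(verifyRecord(record, authority.publicKey)); // true
```

    The point being that a record is valid because a key you chose to trust signed it, not because of where in the network it came from; different users can pick different authorities for the same names.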



  • Let’s look at a scenario where there’s an exploit that requires a change to an API.

    To the plugin API, you mean? Yes, that's the borderline case where modularity adds complexity.

    But in that case it works much like fixing the browser APIs for JS: in one case browser devs break plugins, in the other they break JS libraries.

    Some plugin vendors will be slower than others, so the whole thing will see massive delays and end users are more likely to stick to insecure browser versions.

    How is this different from JS libs, except for the power imbalance?

    Just - if it comes down to Chrome devs being able to impose their will on everyone, let's be honest about it. That has some advantages, yes, just like Microsoft being able to impose Windows as the desktop operating system. Downsides too.

    Plugin vendors are going to demand the same API surface as current web standards and perhaps more, so you’re not saving anything by using plugins, and you’re dramatically increasing the complexity of rolling out a fix.

    Well, I described above why it doesn't seem that way to me.

    What I meant is that the page outside of a plugin should be static - probably even deprecate JS entirely. So: static pages, with some content in them executed in a sandbox by a plugin. From the user's perspective, dynamic content lives in containers inside static content. Like it was with Flash applications, except that NPAPI plugins weren't isolated in a satisfactory manner.

    I like some of what we have now. Just drop the things alternative browsers can't keep up with, and put a small standardized VM in the browser, inside which plugins (or anything else) are executed. Break the vertical integration. It's not so much a technical problem as a social one.

    With the web now being a "platform for applications", as opposed to in 1995, that makes even more sense.

    I think the current web is a decent compromise. If you want your logic in something other than JavaScript, you have WebAssembly, but you don’t get access to nearly as many APIs and need to go through JavaScript. You can build your own abstraction in JavaScript however to hide that complexity from your users. The browser vendor retains the ability to fix things quickly, and devs get flexibility.

    We should have the ability to replace the browser vendor.

    Yes, WebAssembly is good; it would be even better if it were the only layer of executable code in a webpage (sketch below).
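
    For reference, this is roughly what "going through JavaScript" means today: the Wasm module gets no browser APIs of its own, only whatever the embedding script hands it through the import object. The file name and function names below are made up.

```typescript
// Sketch: a Wasm module cannot touch the DOM directly; the embedding script
// decides which capabilities to expose via imports. "plugin.wasm" and the
// import/export names here are made up for illustration.
const imports = {
  env: {
    // The only way the module can affect the page is through functions we hand it.
    draw_pixel: (x: number, y: number, rgba: number) => {
      // e.g. write into an ImageData backing a <canvas> owned by the host
    },
    log: (code: number) => console.log("plugin says:", code),
  },
};

const { instance } = await WebAssembly.instantiateStreaming(fetch("plugin.wasm"), imports);

// Call into the module; it can compute whatever it wants, but it reaches
// the page only through the imports above.
const run = instance.exports.run as () => void;
run();
```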


  • The modularization was a security nightmare. These plugins needed elevated privileges, and they all needed to handle security themselves, and as I hope you are aware, Flash was atrocious with security.

    Those - yes. But in general, something that runs on a page, receives keystrokes when selected, draws in its own square, and interprets something can be done securely.

    And modern browsers have done a pretty good job securing the javascript sandbox.

    One could have such a sandbox for generic bytecode, separated from everything else on the page. It would be "socially" the same as back then, but technically better (rough sketch below).
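
    A minimal sketch of the host side of that idea, with the sandboxed module stood in for by two callbacks (the names are made up): keystrokes are forwarded only while the plugin's square is focused, and drawing happens only through that square's own context.

```typescript
// Sketch: the host owns a <canvas> "square"; the sandboxed plugin (stubbed here
// as two callbacks, which in practice would be exports of an isolated module)
// sees keystrokes only while the square is focused and draws only into it.
interface PluginStub {
  onKey: (code: number) => void;                    // made-up name
  render: (ctx: CanvasRenderingContext2D) => void;  // made-up name
}

function hostPlugin(canvas: HTMLCanvasElement, plugin: PluginStub) {
  canvas.tabIndex = 0; // the square can be focused, i.e. "selected"

  // Keydown events are only dispatched to the focused element,
  // so the plugin receives keystrokes only while its square is selected.
  canvas.addEventListener("keydown", (e) => {
    plugin.onKey(e.key.codePointAt(0) ?? 0);
  });

  const ctx = canvas.getContext("2d");
  if (!ctx) return;
  const frame = () => {
    plugin.render(ctx); // the plugin can paint its square, nothing else
    requestAnimationFrame(frame);
  };
  requestAnimationFrame(frame);
}
```

    (In a real design the plugin would be a Wasm or bytecode module handed a framebuffer rather than the context, but the containment boundary is the same.)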