• 0 Posts
  • 59 Comments
Joined 2 years ago
Cake day: June 25th, 2023


  • At least for Android and Bluetooth. It’s not an absolute Matter protocol thing, but probably a case of “stock Android allows that only for Google-signed apps”. Some OS access restriction and so on: “you don’t get to access that Bluetooth Matter discovery mode etc. without official sign-off, security hurdur”.

    Since Matter has ways to connect fully without Bluetooth, depending on the device. Bluetooth is just the easy, simple way, instead of having to hunt for pairing QR codes or number codes, go to the other device maker’s app to activate pairing mode, generate one-off pairing codes, and so on.
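    Those two out-of-band inputs have recognizable surface shapes: Matter onboarding QR payloads start with “MT:” followed by a base-38 blob, and manual pairing codes are 11 (or 21) decimal digits, often printed with dashes. A rough sketch of telling them apart — the function name is made up, and this only checks surface format, not the real encodings from the Matter spec:

```javascript
// Rough sketch only: classify a Matter pairing input by its surface format.
// "classifyPairingInput" is an invented name for illustration, not a real API.
function classifyPairingInput(s) {
  if (s.startsWith("MT:")) return "qr-payload";      // QR payload prefix
  const digits = s.replace(/[-\s]/g, "");            // strip printed separators
  if (/^\d{11}$/.test(digits) || /^\d{21}$/.test(digits)) return "manual-code";
  return "unknown";
}

console.log(classifyPairingInput("3497-011-2332"));  // an 11-digit manual-code shape
```

    Real commissioning also validates check digits and decodes the payload contents; this sketch stops at “which kind of code is the user holding”.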





  • Because they can’t without backdooring the software? Just like they also refused to cooperate with the Swedish government and threatened to leave the market should Sweden try to force them.

    You know, Russian spies can also use Tor onion routing and so on.

    As for phishing, there is nothing Signal can do about someone scanning a Signal contact-sharing QR code and adding it to their contacts list, beyond an informative “hey, are you really sure, really really sure you want to add this contact”. If a user trusts someone they shouldn’t, no amount of app policy protections help. Or maybe they manage to phish them into scanning and approving “share account to another device”. Again, nothing Signal can do about that.




  • 30 years away from it (reduced from the original 100 years they provided only 5 years ago)

    More like the estimates on this are completely unreliable. That 100 years could just as well have been 1000 years. It was pretty much “until an unpredictable technological paradigm shift happens”. “100 years in the future” is the “when we have warp drives and star gates” of estimates. Pretty much “when we have advanced to the next level of technology, whenever that happens; 100 years is a good minimum so it isn’t taken as an actual year number”.

    30 years is “we see maybe a potential path to this via hypothetical developments of technology on the horizon”. It’s the classic “fusion is always 30 years away”. Until one time it isn’t, but that 30-year loop can go on indefinitely if the hypotheticals don’t turn into reality. Since, you know, we thought “maybe that will work, once we put our minds to it”. Oh, it didn’t; on to chasing the next path.

    I only know of one project with a 100-year estimate that is real: the Onkalo deep repository for spent fuel in Finland. It has an estimate of spending 100 years being filled and is to be sealed in the 2120s, and that is an actual date. All the tech is known, the sealing process is known; it just happens to take a century to fill the repository bit by bit. Finland is a fairly stable country and the radiation hazard so long-term that whatever government is there in the 2120s will most likely seal the repository.

    Unless “we invent warp drives” happens before that and some new process for actually efficiently and very safely getting rid of the waste is found. (And no, that doesn’t include current recycling methods, since those aren’t good enough to get rid of this large an amount with a small enough risk of side harms. Surprise: Finland studied this as an alternative and simply decided “recycling is not good enough, simple enough, efficient enough or safe enough yet; bury it in a bedrock tomb”.)


  • The main issue comes from GDPR. When one uses the consent basis for collecting and using information, it has to be a free choice. Thus one can’t offer “pay us and we collect less information about you”; hence “pay or consent” is blatantly illegal. Showing ads in general? You don’t need consent; that consent is “I vote with my browser address bar”. Thing is, nobody wants to serve non-tracked ads anymore…

    So in this case DMA Article 5(2) is basically just reinforcement and emphasis of the existing GDPR principle. From The Verge:

    “exercise their right to freely consent to the combination of their personal data.”

    from the regulation

    1. The gatekeeper shall not do any of the following:
      (a) process, for the purpose of providing online advertising services, personal data of end users using services of third parties that make use of core platform services of the gatekeeper;
      (b) combine personal data from the relevant core platform service with personal data from any further core platform services or from any other services provided by the gatekeeper or with personal data from third-party services;
      (c) cross-use personal data from the relevant core platform service in other services provided separately by the gatekeeper, including other core platform services, and vice versa; and
      (d) sign in end users to other services of the gatekeeper in order to combine personal data,

    unless the end user has been presented with the specific choice and has given consent within the meaning of Article 4, point (11), and Article 7 of Regulation (EU) 2016/679.

    Surprise: 2016/679 is… GDPR. So yeah, it’s a new violation, but pretty much it amounts to “gatekeepers are under extra scrutiny for GDPR stuff: you violate, we can charge you for both a GDPR and a DMA violation, plus with some extra, more explicit rules under the DMA”.

    I think technically GDPR already bans combining without permission, since GDPR demands consent for every use case of consent-based processing. There must be consent for processing; combining is processing, so it needs consent. However, that is an interpretation of the general principle of GDPR. The DMA just makes it explicit: “oh, these specific kinds of processing, yeah, these need consent per GDPR”. Plus it also rules out the “legitimate interest” legal basis for these companies, explicitly ruling “these types of processing don’t fall under legitimate interest for these companies; they are only and explicitly consent-based actions”.


  • That is just its core function doing its thing: transforming inputs to outputs based on learned pattern matching.

    It may not have been trained on translation explicitly, but it very much has been trained on “these match” pairs via its training material. Since, you know, what its training set most likely contained… dictionaries. Which is as good as asking it to learn translation. Other stuff most likely in the training data: language course books, with matching translated sentences in them. Again, you didn’t explicitly tell it to learn to translate, but in practice the training data selection did it for you.




  • Well, the difference is you have to know coding to know whether the AI produced what you actually wanted.

    Anyone can read a letter and tell whether the AI hallucinated or actually produced what you wanted.

    With code, it might produce something that on the first try does what you asked. However, it turns out the AI hallucinated a bug into the code for some edge or specialty case.

    Hallucinating is not a minor hiccup or minor bug; it is a fundamental feature of LLMs, since an LLM isn’t actually smart. It is a stochastic regurgitator. It doesn’t know what you asked or understand what it is actually doing; it matches prompt patterns to output. With enough training patterns to match, one statistically usually ends up about right. However, this is not guaranteed, and that is the main weakness of the system. More good training data makes it more likely to produce good results more often. However, for business-critical stuff you aren’t interested in whether it got it about right the other 99 times; it 100% has to get it right this one time, since this code goes to a production business deployment.

    I guess one could write a comprehensive enough verified test suite, including all the edge cases, and verify the result with that. However, now you have just shifted the job: instead of a programmer programming the program, you have a programmer programming very, very comprehensive testing routines. Which can’t be done by the LLM, since the whole point is that the testing routines are there to check for the inherent unreliability of the LLM output.

    It’s a nice toy for someone wanting to make quick-and-dirty test code (maybe) to do thing X, and then try to find out whether it actually does what was asked or has unforeseen behavior. Since I don’t know what the behavior of the code is designed to be; I didn’t write the code. Good for toying around and maybe for quick-and-dirty brainstorming. Not good enough for anything critical that has to be guaranteed to work under a service contract and so on.

    So the real big job of the future will not be prompt engineers, but quality assurance and testing engineers who have to be around to guard against hallucinating LLMs and similar AIs. Prompts can be gotten from anyone; what is harder is finding out whether the prompt actually produced what it was supposed to produce.



  • Especially in, say, foggy conditions and at a bit of distance. At that point you maybe won’t clearly differentiate individual elements; it’s more like “that’s the rear” and “blocks of light in the middle, left and right”. With it all blending a little, one might in fact be under the impression “the light intensity lowered at the rear, huh, not braking then, did they have a dragging parking brake they released or something… ohhh shiiit, no, it is braking hard”.

    My two cents from up here in the north of Europe, land of snow, rain, fog and occasional whiteout conditions.



  • He is successful enough, old enough and has made enough money that he can just retire. Threatening him is an empty threat. He is 60, and given his long career he has probably earned more than he can spend in the rest of his life, unless he goes super-yacht and private-jet crazy.

    The whole show was essentially a comeback from retirement. A voluntary indulgence on his part. Surely a lucrative indulgence, but an indulgence still. Apple needed him; he didn’t need Apple.

    Most of the crew will probably leave for other projects with a letter of recommendation from John in their pocket.


  • Well, many adblockers can be clever enough to load the asset but then just drop it. As in: yeah, the ad image got downloaded to the browser, but then the page content got edited to drop the display of the ad, or turn it into a hidden asset via CSS.

    This is an age-old battle. Site owners go “you must do X or no media”. But then the ad blocker just goes “sure, we do that, but then we just ghost the ad to the user”.

    Some script needs to be loaded that would display the ad? All parts of the script get executed and… then a CSS intervention just ghosts the ad that should be playing, and so on.

    Since the browser and extension are in ultimate control. As said, the actual ad video might technically be “playing” in the background, going through the motions, but it’s a no-show, no audio… ergo in practice the ad was blocked, while technically completely executed.
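    The “technically playing, but a no-show” trick can be sketched roughly like this. The `.ad-slot` selector and the whole shape are assumptions for illustration, not any real blocker’s code; real blockers ship huge filter lists:

```javascript
// Rough illustration of "execute everything, show nothing".
// Ghosting keeps the element alive so the page's own "did the ad load and
// render?" checks still pass; the user just never sees or hears it.
function ghost(el) {
  el.style.visibility = "hidden"; // element keeps its layout box
  el.muted = true;                // a video keeps "playing", silently
}

// Browser-only part: watch the page and ghost ad elements as they appear,
// after their own scripts have already run to completion.
if (typeof document !== "undefined") {
  const observer = new MutationObserver((mutations) => {
    for (const m of mutations) {
      for (const node of m.addedNodes) {
        if (node.nodeType === 1 && node.matches(".ad-slot")) ghost(node);
      }
    }
  });
  observer.observe(document.documentElement, { childList: true, subtree: true });
}
```

    Note the choice of `visibility: hidden` over removing the node: the ad’s scripts, network requests and playback all complete normally, which is exactly why detection from the server side is so hard.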

    Hence why they want to scan for the software: the only way they can be sure the ad will be shown is by verifying a known software stack that adheres to showing the ad.

    Well, the EU says that is not allowed, because privacy. Ergo adblocker prevention is a losing battle: whatever they do on the “make sure the ad is shown” side, adblocker makers will just implement a countermove.


  • Don’t threaten us with good time, Elon.

    Also, no way he is going through with it. He is in way too much of a financial hole to give up the European market. Google or Meta, sure, they might have the financial standing to pull such a move and survive.

    Xitter? They need every visitor and account they can get, globally, to even think about staying viable.

    Empty bluster, and pointless empty bluster at that, since the EU would just go “fine, our continental economy and prosperity don’t depend on your social media company; social media isn’t a critical industry, so we are just fine with you leaving. Plus there are 10 others like you anyway”.

    You can’t threaten people with something that doesn’t damage them and, heck, might even be seen as a benefit.