• 0 Posts
  • 61 Comments
Joined 1 year ago
Cake day: March 3rd, 2024




  • this is my most controversial take in computing in general:

    i’ve always hated the browser. the reason there are only a few working browser engines is that HTTP and the HTML/CSS/JS stack are a gigantic pile of tech debt, and even in Chromium and Firefox you hit edge cases where they don’t follow the specs as defined in these ancient RFCs. and why tf are these specs treated as gospel? what other software spec drafted decades ago gets this kind of reverence? other GUI stacks have seen tons of iteration, not just in their specs but in their full implementations (Wayland, .NET, Kotlin Compose, SwiftUI, etc), but we’re all just fine with this mess of janky boomer protocols cuz it lets startups get to market faster.

    why does downloading an entire app (less some caching) every time you want to use it feel less cumbersome than installing something native, where the developer can tightly control the protocols instead of being subject to whatever security and storage policies a given browser implementation decides are good for you? cookies? really?

    the browser should be reimagined with a tighter set of protocols that let you look at brochure sites and download content, ie apps. even the best web apps are a janky mess and have never worked better than a properly developed desktop GUI. /rant


  • chrash0@lemmy.world to Technology@lemmy.world · *Permanently Deleted* · 20 up / 1 down · 29 days ago

    i doubt the recent uptick in traffic is from “stealing data” for training, but rather from agents scraping sites for context, eg Edge Copilot, Google’s AI search, SearchGPT, etc.

    poisoning the data likely won’t help in this situation, since there’s a human on the other side who will just run the same search again if the results are unsatisfactory. much like how naive retries and timeouts can cause huge outages for web-scale companies, poisoning search results will likely cause this type of traffic to increase, raising bandwidth usage and the odds of an effective DoS. (a toy model of that amplification is sketched below.)
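
    to make the amplification concrete, here’s a toy model (mine, not anything from the services named above) of how expected per-query traffic grows when an unsatisfied searcher just tries again:

    ```python
    # toy retry-amplification model: if each unsatisfying result page prompts
    # the human (or agent) to search again with probability p_retry, the
    # expected number of requests per query is the geometric series 1/(1 - p).
    def expected_requests(p_retry: float) -> float:
        """Expected total requests per query when misses trigger a re-search."""
        return 1.0 / (1.0 - p_retry)

    for p in (0.1, 0.5, 0.9):
        print(f"retry probability {p:.0%} -> {expected_requests(p):.1f}x traffic")
    ```

    at a 90% re-search rate that’s a 10x multiplier on every query, which is exactly the DoS-shaped outcome described above.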


  • you have to do a lot of squinting to accept this take.

    so his wins were copying competitors, and even those products didn’t see success until they were completely reworked (is Bing in 2024 a Ballmer success? is .NET becoming widespread his doing?). one thing Nadella did was embrace the competitive landscape and open source, with key moves like acquiring GitHub and open sourcing .NET. i honestly don’t have the time to fully rebut this hot take, but i don’t think the Ballmer haters are totally off base here. even if some of the products started under Ballmer are now successful, it feels disingenuous to attribute their success to him. it’s like an alcoholic dad taking credit for his kid becoming an actor. Microsoft is successful despite him.


  • these days Hyprland, but previously i3.

    i basically live in the terminal unless i’m playing games or in the browser. these days i use most apps full screen and switch between desktops, and i launch apps using wofi/rofi. this has all become very specialized over the past decade, and it almost has a “security by obscurity” effect where it’s not obvious how to do anything on my machines unless you have my muscle memory.

    not that i necessarily recommend this approach generally, but i find value in mostly using a keyboard to control my machines and minimizing visual clutter. i don’t even have desktop icons or a wallpaper.


  • All programs were developed in Python language (3.7.6). In addition, freely available Python libraries of NumPy (1.18.1) and Pandas (1.0.1) were used to manipulate data, cv2 (4.4.0) and matplotlib (3.1.3) were used to visualize, and scikit-learn (0.24.2) was used to implement RF. SqueezeNet and Grad-CAM were realized using the neural network library PyTorch (1.7.0). The DL network was trained and tested using a DL server mounted with an NVIDIA GeForce RTX 3090 GPU, 24 Intel Xeon CPUs, and 24 GB main memory.

    it’s interesting that they’re using pretty modest hardware (i assume they mean 24 cores, not 24 CPUs) and fairly outdated dependencies. also, having their dependencies listed out like this is pretty adorable; it has academic-out-of-touch-not-a-software-dev vibes. makes you wonder how much further a project like this could go with decent technical support. meanwhile, talented engineers are spending 10k times the compute on generalist models like GPT that struggle at these kinds of tasks, while promising they’ll work someday and trivializing them as “downstream tasks”. i think there’s definitely still room in machine learning for expert models; it sucks that they struggle for proper support. (a rough sketch of the quoted stack is below for reference.)
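
    for context, here’s a minimal sketch of what the quoted SqueezeNet + Grad-CAM setup might look like in PyTorch; the input tensor, target layer choice, and normalization are my illustrative assumptions, not details from the paper:

    ```python
    import torch
    import torch.nn.functional as F
    from torchvision import models

    # pretrained SqueezeNet, as named in the paper (their training specifics unknown)
    model = models.squeezenet1_1(pretrained=True).eval()

    activations, gradients = {}, {}

    def fwd_hook(module, inputs, output):
        activations["feat"] = output.detach()

    def bwd_hook(module, grad_input, grad_output):
        gradients["feat"] = grad_output[0].detach()

    # hook the last block of the feature extractor (an assumed target layer)
    target = model.features[-1]
    target.register_forward_hook(fwd_hook)
    target.register_full_backward_hook(bwd_hook)

    x = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed input image
    scores = model(x)
    scores[0, scores.argmax()].backward()  # gradient of the top-class score

    # Grad-CAM: weight feature channels by global-average-pooled gradients
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
    ```

    SqueezeNet is only around 1.2M parameters, which underlines the point: this entire pipeline fits comfortably on a single consumer GPU.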