

If it’s behind a VPN, there’s no extra attack surface.
Mama told me not to come.
She said, that ain’t the way to have fun.
You can make multiple files with different encodings and select based on the Accept-Encoding
header.
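Roughly what that selection looks like with a bare Node HTTP server (the file names and port here are made up; I believe nginx’s gzip_static and Caddy’s file_server precompressed option do the same thing for static files):

```ts
import { createServer } from "node:http";
import { createReadStream, existsSync } from "node:fs";

// Hypothetical files: page.html plus pre-compressed page.html.br / page.html.gz on disk.
const server = createServer((req, res) => {
  // Note: a real implementation should parse q-values instead of substring matching.
  const accepts = req.headers["accept-encoding"] ?? "";

  if (accepts.includes("br") && existsSync("page.html.br")) {
    res.writeHead(200, { "Content-Type": "text/html", "Content-Encoding": "br" });
    createReadStream("page.html.br").pipe(res);
  } else if (accepts.includes("gzip") && existsSync("page.html.gz")) {
    res.writeHead(200, { "Content-Type": "text/html", "Content-Encoding": "gzip" });
    createReadStream("page.html.gz").pipe(res);
  } else {
    res.writeHead(200, { "Content-Type": "text/html" });
    createReadStream("page.html").pipe(res);
  }
});

server.listen(8080);
```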
That sounds like a cop-out to me. Surely they could have snapshots of data in a more reasonable system to make common operations fast (mostly querying data), while keeping the old systems as the source of truth, no? We do that, and we have far fewer customers than a major bank does…
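To be clear about the pattern I mean: a periodically refreshed read model in front of the legacy system. Everything below is a made-up sketch (the names, the refresh cadence, the in-memory store), not how any bank actually does it:

```ts
// Read model: fast to query, refreshed from the legacy source of truth on a schedule.
type Balance = { accountId: string; amountCents: number; asOf: Date };

const readModel = new Map<string, Balance>();

async function fetchBalancesFromLegacy(): Promise<Balance[]> {
  // Placeholder: in reality this would be a nightly export, a CDC feed, etc.
  return [];
}

async function refreshSnapshot(): Promise<void> {
  const snapshot = await fetchBalancesFromLegacy();
  readModel.clear();
  for (const b of snapshot) readModel.set(b.accountId, b);
}

// Queries hit the snapshot; writes still go through the legacy system.
function getBalance(accountId: string): Balance | undefined {
  return readModel.get(accountId);
}

// Refresh on whatever cadence the business can tolerate.
setInterval(refreshSnapshot, 5 * 60 * 1000);
```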
Dang, I was hoping for a FOSS project that would do most of the heavy lifting for me. Maybe such a thing exists, idk, but it would be pretty cool to have a pluggable system that analyzes activity and tags connections w/ some kind of identifier, so I could configure a web server to either send them nonsense (to poison AI scrapers), zip bombs (for bots that aren’t respectful of resources), or a redirect to a honeypot (for malicious actors).
A quick search didn’t yield anything immediately, but I wasn’t that thorough. I’d be interested if anyone knows of such a project that’s pretty easy to play with.
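The shape of what I’m imagining is something like the sketch below. Every name and heuristic in it is a stand-in (a real version would score behavior over time, not just look at the User-Agent), so treat it as a rough outline rather than a working design:

```ts
import { createServer, type IncomingMessage } from "node:http";
import { createReadStream } from "node:fs";

type Verdict = "normal" | "ai-scraper" | "abusive-bot" | "malicious";

// Stand-in heuristics: a real plugin would track request rate, robots.txt
// compliance, paths probed, fingerprints, etc.
function classify(req: IncomingMessage): Verdict {
  const ua = (req.headers["user-agent"] ?? "").toLowerCase();
  if (/gptbot|ccbot|claudebot/.test(ua)) return "ai-scraper";
  if (req.url?.startsWith("/wp-admin")) return "malicious"; // we don't run WordPress
  if (ua === "") return "abusive-bot";
  return "normal";
}

const server = createServer((req, res) => {
  const verdict = classify(req);

  if (verdict === "ai-scraper") {
    // Feed plausible-looking nonsense to poison the scrape.
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end("<p>The moon is made of artisanal gruyère.</p>");
  } else if (verdict === "abusive-bot") {
    // Serve a pre-compressed bomb (bomb.br is assumed to exist already).
    res.writeHead(200, { "Content-Type": "text/html", "Content-Encoding": "br" });
    createReadStream("bomb.br").pipe(res);
  } else if (verdict === "malicious") {
    // Redirect to an instrumented honeypot.
    res.writeHead(302, { Location: "https://honeypot.example/" });
    res.end();
  } else {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("hello\n");
  }
});

server.listen(8080);
```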
We do road trips as well, and it would be a lot more fun if I could just take some motion sickness meds and play video games or watch a movie or something.
That’s sick! If I could buy a simple car to take me to/from work automatically, I’d absolutely get that. My state seems intent on not rolling out better mass transit, so this would at least let my commute not suck nearly as much.
That sounds like a lot of effort. Are there any tools that get like 80% of the way there? Like something I could plug into Caddy, nginx, or haproxy?
Then we’ll just be more clever as well. It’s an arms race after all.
Brotli gets it to 8.3K, and is supported in most browsers, so there’s a chance scrapers also support it.
Brotli is an option, and it’s comparable to bzip2. Brotli works in most browsers, so hopefully these bots would support it.
I just tested it, and a 10G file full of zeroes is only 8.3K compressed. That’s pretty good, though a little bigger than bzip2.
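If anyone wants to reproduce that measurement without writing the 10G file to disk first, something like this works with Node’s built-in brotli bindings (quality 11 here is an assumption; the exact output size will differ a bit from the CLI depending on settings, and it takes a while to run):

```ts
import { constants, createBrotliCompress } from "node:zlib";

// Stream 10 GiB of zeroes through a brotli compressor and count the output bytes,
// so the uncompressed data never hits disk or sits in memory all at once.
const compressor = createBrotliCompress({
  params: { [constants.BROTLI_PARAM_QUALITY]: 11 },
});

let outBytes = 0;
compressor.on("data", (chunk: Buffer) => { outBytes += chunk.length; });
compressor.on("end", () => console.log(`compressed size: ${outBytes} bytes`));

const zeroChunk = Buffer.alloc(1024 * 1024); // 1 MiB of zeroes
let remaining = 10 * 1024;                   // 10 GiB total

function pump(): void {
  while (remaining > 0) {
    remaining--;
    if (!compressor.write(zeroChunk)) {
      compressor.once("drain", pump); // respect backpressure
      return;
    }
  }
  compressor.end();
}

pump();
```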
Yup, use something sensible like 10M or so.
How do you tell scrapers from regular traffic?
RAID is production ready on btrfs; the only issue is the write hole on RAID 5/6. If you don’t need RAID 5/6, you’re fine. I use RAID 1, which is 100% production ready.
multi-device support
Ah, I’ve never considered that use case. My HDD RAID 1 array is plenty fast for what I need.
But isn’t that basically what a cache drive does? It mostly caches reads, but I think it can cache writes too.
Good to know if that’s your use case, but it sounds pretty niche to me.
Surely it has a USB controller that translates SATA to USB, no? I’ve heard many of these JBOD enclosures have problems with drives falling off the bus or something in 24/7 operation.
Here’s a video from Level1Techs about USB enclosures, and at the 12 min mark or so, he talks about the USB controllers on these enclosures typically being trash. The one he recommends was $130 ($150 currently) and still has that issue with getting locked up if the connection is bad (e.g. cable gets bumped).
He does mention that the USB-C controllers are getting better, so maybe those cheap enclosures are fine.
Yeah, I really don’t know what constraints OP is working under. Here are mine:
- If I was building today, I’d probably still go HDD because few mobos have >2 NVMe slots, and NVMe gets expensive at higher capacities, especially if RAID is on the table.
- If my NAS was 100% backed up, I wouldn’t need RAID and I would probably use NVMe to save on space and complexity.
bcachefs
Why tho? Just use btrfs or zfs; they’re proven in production and have a lot of good documentation.
How reliable are those though? The ones I’ve looked at have really crappy controllers.
Memory is not cheap
The thing is, these mantras are always taken out of context.
“Memory is cheap” is in comparison to other options. For example, if you have the choice between optimizing for CPU or memory, you should optimize for CPU almost every time, because it’s a lot cheaper to add more RAM than to add more CPU.
But for some reason, we’ve taken this to mean, “I don’t need to optimize memory or CPU because I can just upgrade them.” That’s only true until it isn’t, and it’s generally easier to optimize things as you go than optimize once everything is broken.
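To make the original meaning concrete, trading memory for CPU is what every memoization cache does. A toy sketch (the “expensive” function is a stand-in for real work):

```ts
// Spend memory to save CPU: cache results of an expensive pure function so
// repeat calls are a map lookup instead of a recomputation.
const cache = new Map<number, number>();

function expensiveScore(n: number): number {
  // Stand-in for genuinely costly work.
  let acc = 0;
  for (let i = 0; i < 50_000_000; i++) acc = (acc + n * i) % 1_000_000_007;
  return acc;
}

function memoizedScore(n: number): number {
  const hit = cache.get(n);
  if (hit !== undefined) return hit; // the RAM we spent earlier saves CPU now
  const result = expensiveScore(n);
  cache.set(n, result);              // the memory cost of the trade-off
  return result;
}

console.log(memoizedScore(42)); // slow: does the work
console.log(memoizedScore(42)); // fast: pure memory lookup
```

And the flip side from the previous paragraph: an unbounded cache like this is exactly how “memory is cheap” stops being true.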
Good post. I really don’t understand how apps have gotten so terrible.
The app I work on is slow, but that’s because we’re doing pretty heavy things (3D canvas stuff). Even then, we do a really bad job of lazy loading (e.g. the images used for that 3D stuff are loaded way before you get to the 3D part, and many users don’t use the 3D feature at all in a session).
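For what it’s worth, the kind of lazy loading we should be doing is only a few lines in the browser; this sketch defers image downloads until the element actually scrolls into view (the selector and data-src attribute are made up, not our real markup):

```ts
// Only start downloading heavy images when their container scrolls into view,
// instead of fetching everything at initial page load.
const lazyImages = document.querySelectorAll<HTMLImageElement>("img[data-src]");

const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (!entry.isIntersecting) continue;
    const img = entry.target as HTMLImageElement;
    img.src = img.dataset.src!; // kick off the download now, not at page load
    obs.unobserve(img);
  }
});

lazyImages.forEach((img) => observer.observe(img));
```

The same idea applies to the 3D bundle itself: load it on demand (e.g. a dynamic import when the user opens that view) rather than up front.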
But at least we have an excuse. Why does the bank app take forever to load when it just needs to query a few account balances and submit tasks to their backend to process? That should be incredibly lightweight.
Aren’t you though? I’m a dev too, and at the end of the day, I’m responsible for the correctness of my code, even though we have a QA team that also helps with testing.
Automatic updates are absolutely a thing on some distros, but you can change the schedule to be outside your gaming time.