six figures for a junior programmer, no less
The Onn 4K Google TV box is amazing. $20 and a launcher swap, and it’s like ads don’t exist any more
This is from last year.
ECS/EKS: The ocean belongs to someone else.
I don’t have to, I watched Planet of the Apes
No. Symlinks and hardlinks are two approaches to creating a “pointer to a file.” They are quite different in implementation, but at a high level:
In both cases, the only additional data used is the metadata used for the link itself. The contents of the file on disk are not copied.
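If it helps, here’s a tiny Python sketch (filenames are made up, assumes a POSIX filesystem) showing that neither kind of link copies the data:

```python
import os

# Create a small file to link against.
with open("original.txt", "w") as f:
    f.write("hello\n")

os.link("original.txt", "hard.txt")     # hard link: another name for the same inode
os.symlink("original.txt", "soft.txt")  # symlink: a tiny file that stores a path

print(os.stat("original.txt").st_ino == os.stat("hard.txt").st_ino)  # True – same inode
print(os.readlink("soft.txt"))          # 'original.txt' – just the stored pathname
print(os.lstat("soft.txt").st_size)     # size of the path string, not of the file data
```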
This is neat but the selling point for me with the Pebble is the e-ink display. If repebble fails though, my next watch will be a Pine. Hopefully my Versa 2 holds on for a bit longer 🤞
I don’t necessarily agree with it, but there’s the third option of just disabling SELinux and removing the frustration entirely.
…but you have to get whatever it is you’re transporting to the moon first
No, but you’ll have much more overhead. I have a VM that hosts all the Docker deployments that don’t need much disk space (which is most of them)
This is a big point. One of the key advantages of docker is the layering and the fact that you can build up a pretty sizeable stack of isolated services based on the same set of core OS layers, which means significant disk space savings.
Sure, saving 200-700 MB on a stack of core layers seems small, but multiply that by a lot of containers and it adds up.
Can we just let this system die already? ffs
Maybe I’m dense but shouldn’t the clock be:
Yep, I’m a dumb, realized after a cup of coffee. Confirmed by the reply below.
I think I’m just going to go back to bed and skip today
Sovol is another option: decent quality out of the box, and their CoreXY units are stupid fast.
That’s not how golf works but I like where your head is at.
Ultimately it’s a matter of personal choice and risk tolerance.
The Z1 will be simpler and have larger capacity, but if a drive fails you’ll need to get it replaced quickly, or risk having to rebuild/restore if a second drive follows the first one to the grave.
Your Z2 setup right now can have two drives fail and still be online, and having a wider spread of power-on hours is usually a good thing in terms of failure probability.
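For a rough sense of the capacity-versus-redundancy trade-off, here’s a quick back-of-the-envelope sketch in Python (drive count and size are made-up examples, not your actual pool):

```python
# Usable capacity and fault tolerance for single- vs double-parity RAID-Z,
# using a hypothetical pool of 6 x 8 TB drives.
DRIVES = 6
SIZE_TB = 8

for name, parity in {"raidz1": 1, "raidz2": 2}.items():
    usable = (DRIVES - parity) * SIZE_TB
    print(f"{name}: ~{usable} TB usable, survives {parity} failed drive(s)")
```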
I manage a large number of on-site RAID1 arrays (roughly 14,000) in various environments, and there is definitely a trend for drives shipped at the same time to fail at roughly the same time. It’s common enough that we often intentionally swap drives out before shipping a new unit to the customer site.
In my homelab, I’m much more tolerant of risk since I trust my 3-2-1 backup solution, and if my NAS goes down it’s not going to substantially affect anything while I wait for a drive replacement.
Please do not the arch viles.
It’s all fun and games until you remove it from the code your quirky new junior programmer checked in, and now production is dead because the artificial delay just happened to mask some weird, nearly untraceable race condition.
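A toy sketch of how that can happen (purely illustrative, not the code anyone actually shipped):

```python
import threading
import time

result = {}

def producer():
    time.sleep(0.1)           # stands in for slow setup work
    result["value"] = 42

def consumer():
    # time.sleep(0.2)         # the "pointless" delay someone helpfully deletes...
    print(result["value"])    # ...and now this read can race the producer's write
                              # and fail with a KeyError

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the delay in place the consumer reliably waits out the producer; remove it and the read races the write and can blow up.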
https://xkcd.com/1172/