• 1 Post
  • 151 Comments
Joined 2 years ago
Cake day: June 15th, 2023


  • Average spending is not a good metric for addictive behaviors - spending/consumption tends to be extremely concentrated in a small fraction of the population. My go-to example for this is alcohol: in the US, 10 drinks/week is the population average, but it’s also enough to put you in the “top 10%” or “heavy drinker” bin, whose average consumption is 74 drinks/week. In both alcohol and gacha, a huge fraction of the population doesn’t pay anything.

    I mean, even if the article’s $30/month average spend comes entirely from their 20% “problem” spenders, that’s still only $150/month each, but it’s a little easier (for me) to see how a $150/month gacha habit could be a problem for young people already on the financial edge. Not the fundamental problem that skyrocketing rent and stagnant wages are, but more in the last-straw sense.
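    A quick sanity check of that arithmetic (the figures are the ones quoted above, nothing new):

    ```python
    # If all spending comes from a fraction of users, the per-spender average
    # is the population average divided by that fraction. $30/month and the
    # 20% spender share are the article's figures as quoted above.

    population_average = 30.0   # $/month averaged over everyone
    spender_fraction = 0.20     # share of users who pay at all

    per_spender_average = population_average / spender_fraction
    print(per_spender_average)  # 150.0
    ```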





  • tburkhol@lemmy.world to Selfhosted@lemmy.world · ISO Selfhost
    1 month ago

    Wonder if there’s an opportunity there. Some way to archive one’s self-hosted, public-facing content, either as a static VM or, like archive.org, just the static content of URLs. I’m imagining a service one’s heirs could contract to crawl the site, save it all somewhere, and take care of permanent maintenance, renewing domains, etc. Ought to be cheap enough to maintain the content; presumably low traffic in most cases. Set up an endowment-type fee structure to pay for perpetual domain reg.
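    The “endowment-type fee structure” boils down to perpetuity math; a sketch with made-up numbers (a $20/year domain fee and a 4% real return are illustrative assumptions, not quotes):

    ```python
    # The lump sum whose investment returns cover a recurring cost forever is
    #   principal = annual_cost / real_rate_of_return.
    # Both inputs below are hypothetical.

    annual_domain_cost = 20.0   # $/year, hypothetical registrar fee
    real_return = 0.04          # assumed real (after-inflation) return

    endowment = annual_domain_cost / real_return
    print(endowment)  # 500.0 -> one-time fee to fund the domain in perpetuity
    ```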


  • tburkhol@lemmy.world to Selfhosted@lemmy.world · ISO Selfhost
    1 month ago

    At least my descendants will own all my comments and posts.

    If you self-host, how much of that content disappears when your descendants shut down your instance?

    I used to host a bunch of academic data, but when I stopped working, there was no institutional support. I turned off the server and it all went away (though the Wayback Machine still has archives). I mean, I don’t really care whether my social media presence outlives me; the experience just made me aware that personal pet projects rarely outlast the person behind them.



  • Back in the day, I set up a little cluster to run compute jobs. Configured some spare boxes to netboot off the head-node, figured out PBS (dunno what the trendy scheduler is these days), etc. Worked well enough for my use case - a bunch of individually light simulations with a wide array of starting conditions - and I didn’t even have to have HDs for every system.

    These days, with some smart switches, you could probably work up a system to power nodes on/off based on the scheduler demand.
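    The on/off logic could be as simple as this sketch; the scheduler query and the smart-plug commands are stand-ins, not a real PBS/Slurm or plug-vendor API:

    ```python
    # Sketch of demand-based node power control: one node per queued job,
    # capped at the cluster size. Node names and "power_on"/"power_off"
    # actions are hypothetical placeholders for real smart-plug calls.

    def plan_power(queued_jobs: int, max_nodes: int) -> int:
        """Desired number of powered nodes for the current queue depth."""
        return min(queued_jobs, max_nodes)

    def actions(desired: int, powered: int) -> list[str]:
        """Plug commands needed to move from `powered` nodes to `desired`."""
        if desired > powered:
            return [f"power_on node{i}" for i in range(powered, desired)]
        if desired < powered:
            return [f"power_off node{i}" for i in range(desired, powered)]
        return []

    # 3 jobs queued, 1 of 4 nodes currently on:
    print(actions(plan_power(queued_jobs=3, max_nodes=4), powered=1))
    # ['power_on node1', 'power_on node2']
    ```

    A cron job polling the queue every few minutes and emitting these actions would be the whole controller.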



  • You can configure HA to use an external database, so you could (presumably) configure two instances to use the same DB. Not sure how much conflict that would cause for entities that are only attached to one of those instances, but it seems like both should have the same access to state data and history. You could probably even set one instance up with read-only DB access to limit data conflicts, although I imagine HA will complain about that.

    Even with an external database, HA still uses its internal DB for some things, so I don’t think you’d ever get identically mirrored instances.
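    For reference, the external-database setup uses HA’s recorder integration; each instance would point at the shared DB in its configuration.yaml, something like this (the MariaDB URL and credentials are a made-up example, and two writers on one DB is untested territory, as noted above):

    ```yaml
    # Hypothetical shared recorder config; host, user, and db name are examples.
    recorder:
      db_url: mysql://homeassistant:password@192.168.1.10/ha_db?charset=utf8mb4
    ```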


  • tburkhol@lemmy.world to Selfhosted@lemmy.world · Starting to self host
    2 months ago

    If you’re already running Pihole, I’d look at other things to do with the Pi.

    https://www.adafruit.com/ has a bunch of sensors you can plug into the Pi, python libraries to make them work, and pretty good documentation/examples to get started. If you know a little python, it’s pretty easy to set up a simple web server just to poll those sensors and report their current values. Only slightly more complicated to set up cron jobs to log data to a database and a web page to make graphs.
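    The “cron job logs sensor data to a database” part is only a few lines; here’s a minimal sketch where read_temperature() is a placeholder for a real Adafruit driver call (e.g. a BME280 read) and everything else is stdlib:

    ```python
    # A cron job would run this script every few minutes; the web page for
    # graphs would just SELECT from the same table.
    import sqlite3
    import time

    def read_temperature() -> float:
        return 21.5  # stand-in value; swap in the real sensor read here

    def log_reading(con: sqlite3.Connection) -> int:
        """Append one timestamped reading; return total rows logged."""
        con.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, temp_c REAL)")
        con.execute("INSERT INTO readings VALUES (?, ?)",
                    (time.time(), read_temperature()))
        con.commit()
        return con.execute("SELECT COUNT(*) FROM readings").fetchone()[0]

    con = sqlite3.connect(":memory:")  # a real cron job would use a file path
    print(log_reading(con))  # 1
    ```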

    It’s pretty straightforward to put https://www.home-assistant.io/ in a docker on a Pi. If you have your own local sensors, it will keep track of them, but it can also track data from external sources, like weather & air quality. There are a bunch of inexpensive smart plugs ($20-ish) that will let you turn stuff on/off on a schedule or in response to sensor data.

    IMO, a Pi isn’t great for transport-intensive services like radarr or jellyfin, but with a USB HD/SSD it might be an option.


  • If you make it to Medicare age, it gets a lot less stressful. E.g., my folks have had 4 knees replaced with very little out-of-pocket cost. There’s still supplemental insurance, but Medicare, not a profit-driven insurance company, determines what gets covered, and they mostly listen to doctors. There are always edge cases where some treatment might not be covered, but I feel like those are uncommon.

    One way or the other, my ultimate health care plan is 9mm.


  • That’s my point: fusion is just another heat source for making steam, and with these experimental reactors, they can’t be sure how much heat they will generate or for how long. They’re probably not even sure what a good geometry for transferring energy from the reaction mass to the water looks like. You can’t build a turbine for a system that’s only going to run 20 minutes every three years, and you can’t replace that turbine just because the next test will have ten times the output.

    I mean, you could, but it would be stupid.



  • I’ve always understood the 2 as 2 physically different media - i.e., copies in different folders or partitions of the same disk aren’t enough to protect against failure of that disk, but a copy on a different disk is. Ideally 2 physically different systems, so a failure/fire in the primary system won’t corrupt/damage the backup.

    Used to be that HDDs were expensive and using them as backup media would have been economically crazy, so most systems evolved backup media to be slower and cheaper. The main thing is that having /home/user/critical, /home/user/critical-backup, and /home/user/critical-backup2 satisfies 3 copies, but not 2 media.
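    The folder-vs-media distinction in that last example, as a toy count (paths and device IDs made up):

    ```python
    # Three copies that all live on the same physical disk: satisfies the
    # "3 copies" rule but fails the "2 media" rule.

    copies = {
        "/home/user/critical":         "disk0",
        "/home/user/critical-backup":  "disk0",
        "/home/user/critical-backup2": "disk0",
    }

    n_copies = len(copies)
    n_media = len(set(copies.values()))  # distinct physical devices
    print(n_copies, n_media)  # 3 1
    ```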


  • 3: RAID-1 pair + manual periodic sync to an external HD, roughly monthly. Databases synced to cloud.

    2: external HD is unplugged when not syncing

    1: External HD is a rotating pair, swapped in a bank box, roughly quarterly. Bank box costs $45/year.

    If the RAID crashes, I lose at most a month. If the house burns down, I lose at most 3 months. Ransomware, unless it’s really stealthy, I lose 3 months. If I had ongoing development projects, a month (or 3) would be a lot to lose, and I’d probably switch to weekly syncs and monthly swaps, but for what I actually do - media files, financial and smart-home data - 3 months would not be impossible to recreate.

    All of this works because my system is small enough to fit on one HDD. A 3-2-1 system for tens of TB starts to look a lot like an enterprise system.



  • I have 8 Z-wave devices now, including a couple “long range” devices. With the first couple, I would sometimes have trouble with the farthest, battery-powered device dropping out of the network occasionally, but that hasn’t happened as I’ve added more devices. I fought with pairing the initial devices - clicking the right series of buttons at the same time as telling HA to look for devices to join - but all the recent devices have just had a QR code: scan it into HA, and the device just shows up when I turn it on. I don’t know how much of this difference between new and old is my learning curve vs better product support, but I am really happy with my Z-waves now.

    Z-wave rather than wifi so I know they aren’t phoning home.