

It has been over a quarter of a century since SimCity 3000 was released in 1999.
First off, rude.
Secondly, great write up. :)
— GPG Proofs —
This is an OpenPGP proof that connects my OpenPGP key to this Lemmy account. For details check out https://keyoxide.org/guides/openpgp-proofs
[ Verifying my OpenPGP key: openpgp4fpr:27265882624f80fe7deb8b2bca75b6ec61a21f8f ]
Yeah, evidently they really, really want you to sign up. They upped the period from the first three years to as long as you own the car (it doesn’t transfer to a new owner if you sell it).
When I bought my Hyundai they pushed fucking hard to get me to sign up for their Bluelink product. Free for life! Look, map updates! You can personalize your driver profile pic! Want to remote start your car over the Internet?
Luckily, my VIN wasn’t working for registration (I guess the purchase hadn’t quite gone through fast enough). I’ve gone two months without Bluelink and I’m hoping that’s saving me from some of the info gathering (or at least it’s not directly linked to me).
Good as in “the worst that the Internet has to offer is an old man orgy”.
It was a simpler time.
Jesus, tell me that site is still up. I need to be reminded of when the Internet had good things.
I bought a car that comes with a “free” 300k-mile/30-year warranty, but only if I do oil changes every 4k miles or 3 months. Maybe this guy has something similar?
For me, I may try and keep it up for a bit, but driving to one particular dealer every 3 months just to maintain a ridiculous warranty that will probably never actually pay out isn’t worth it.
Yeah, that’s the other thing to keep in mind: since the KVM APIs are different from the vSphere APIs, you can’t just swap providers without changes. But if you were going from a test vSphere stack to a prod one, you could update the endpoint and be just fine.
HashiCorp has caught some shit in the past about claiming the code covers multiple providers. Technically it can if you do weird shit with modules, but in reality there isn’t a clean way to have a single, easily understandable project that can provision to multiple platforms.
Nothing about it is common or portable, so if you change your VM host, it might all fall apart.
Disclaimer, I’m pretty much elbow deep into terraform daily and have written/contributed to a few providers.
A lot of this is highly dependent on the providers (the thing that allows the Terraform engine to interface with APIs for AWS, Proxmox, vSphere, etc.). The Telmate Proxmox provider in particular is/was quite awful about not realizing a provisioned VM had moved to a new host.
Also, the default/tutorial code tends to be not very flexible. The game changer for me was using the built-in functions for decoding YAML from a config file (like yamldecode(file("config.yml"))) in a locals block. You can then specify your desired infrastructure in YAML and (if you write your Terraform code correctly) blow out hundreds of VMs, policies, firewall rules, DNS records, etc. with a single manifest. I’ve also used the local_file resource with a Terraform template file to dynamically create an Ansible inventory file based on what’s deployed.
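A rough sketch of that manifest-driven pattern (the file names, the Proxmox resource arguments, and the VM attributes here are illustrative assumptions, not my actual code):

```hcl
# Decode a YAML manifest once, in a locals block.
# config.yml might look like:
#   vms:
#     web01: { cores: 2, memory: 4096 }
#     db01:  { cores: 4, memory: 8192 }
locals {
  config = yamldecode(file("${path.module}/config.yml"))
}

# One resource block fans out over every VM in the manifest.
resource "proxmox_vm_qemu" "vm" {
  for_each = local.config.vms

  name   = each.key
  cores  = each.value.cores
  memory = each.value.memory
}

# Render an Ansible inventory from whatever got deployed.
resource "local_file" "inventory" {
  filename = "${path.module}/inventory.ini"
  content = templatefile("${path.module}/inventory.tftpl", {
    hosts = keys(local.config.vms)
  })
}
```

Adding a VM then becomes a one-line YAML change instead of another copy-pasted resource block.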
I’m not sure if the person I replied to was thinking about this movie in particular, but it certainly came to mind when I posted that gif:
Also of note - if you’re using Docker (and Linux), make sure the user and group IDs match across everything to eliminate any permissions issues.
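As a quick illustrative sketch (the service name, image, and paths are just examples), a compose file can pin the container to the same UID:GID that owns the files on the host:

```yaml
# docker-compose.yml sketch; replace 1000:1000 with your `id -u`/`id -g`
services:
  media:
    image: jellyfin/jellyfin
    # Run as the host user/group that owns the mounted directory,
    # so files created by the container stay writable on the host
    user: "1000:1000"
    volumes:
      - /srv/media:/media
```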
Not really, but I can give you my reasons for doing so. Know that you’ll need some shared storage (NFS, CIFS, etc) to take full advantage of the cluster.
I hope that helps give some reasons for doing a cluster, and apologies for not replying immediately. I’m happy to share more about my homelab/answer other questions about my setup.
I dunno, Trump showed the world you can completely give up decades of hard won soft power in only two months, maybe China will think it’s cool?
/S
I had some luck cobbling together a HiFiBerry with some speakers. It’s not the same thing, but it does show up like a streamable audio device.
It’s not super cheap though.
Those are beasts! My homelab has three of them in a Proxmox cluster. I love that for not a ton of extra money you can throw in a PCIe expansion slot and the power consumption for all three is less than my second hand Dell Tower server.
I only use voice to interact with my phone when I’m driving and want to set a destination, or if I’m at home and can’t find my phone.
Somehow I doubt that AI will help tell me my phone fell off the nightstand and is halfway under the bed.
Sorry, I wasn’t clear - I use PowerDNS so that I can more easily deploy services that can be resolved by my internal networks (deployed via Kubernetes or Terraform). In my case, the secondary PowerDNS server does regular zone transfers from the primary in order to ensure it has a copy of all A, PTR, CNAME, etc records.
But PowerDNS (and all DNS servers really), can either be authoritative resolvers or recursors. In my case, the PDNS servers are authoritative for my homelab zone/domain and they perform recursive lookups (with caching) for non-authoritative domains like google.com, infosec.pub, etc. By pointing my PDNS servers to PiHole for recursive lookups, I ensure that I have ad blocking while still allowing for my automation to handle the homelab records.
This is overkill.
I have a dedicated Raspberry Pi for Pi-hole, then two VMs running PowerDNS in primary/secondary mode. The PDNS servers use the Pi-hole as their primary recursive lookup, followed by some other Internet privacy DNS server that I can’t recall right now.
If I need to do maintenance on the Pi-hole, PowerDNS can fall back to the Internet DNS server. If I need to do updates on the PowerDNS cluster, I can do it one node at a time to reduce the outage window.
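For anyone curious, the forwarding half of a layout like that might look something like this in a PowerDNS Recursor config (a sketch only - the zone name, ports, and addresses are made-up examples):

```
# recursor.conf sketch; zone name and IPs are placeholders
# Queries for the homelab zone go to the local authoritative server
forward-zones=home.lab=127.0.0.1:5300
# Everything else goes to Pi-hole first, then a public fallback resolver
forward-zones-recurse=.=192.168.1.10;9.9.9.9
```

The semicolon-separated list is what gives you the fallback behavior during Pi-hole maintenance.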
EDIT: I should have phrased the first sentence as “My setup is overkill” rather than “This is overkill” - the OP is asking a very valid question, and the vague phrasing of my post’s first sentence could be taken multiple ways.
I put my Plex media server to work running Ollama - it has a GPU for transcoding that’s not awful for simple LLMs.
And make sure those beans have a recent roast date. A lot of coffee bags sold in grocery stores near me don’t even post their roast date.
If you like coffee from a local shop, see if they’ll sell you a bag.