• 0 Posts
  • 72 Comments
Joined 2 years ago
Cake day: July 1st, 2023



  • What would you suggest they sell on their Android store that would encourage users to install a whole new store and then buy what they want?

    Steam already has a store on Android; you just can’t play games there, because most games on Steam either already exist on the native Google Play store or aren’t compatible with mobile architectures like arm64. Most mobiles, unlike an ARM laptop, have no x86/amd64 emulator, and that’s what those games are compiled for by their developers.

    So what’s left?




  • biscuitswalrus@aussie.zone to Programmer Humor@programming.dev · Safe passwords · edited 21 days ago

    Enterprise applications are often developed by the most “quick, ship this feature” kind of developers in the world. Unless the client is paying for the development, a quick look at the SQL table often shows unsalted passwords sitting right there.
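    For anyone wondering what “salted” means in practice, here’s a rough sketch using just Python’s standard library (the function names and iteration count are illustrative, not a recommendation for any particular product):

    ```python
    import hashlib
    import hmac
    import secrets

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, digest) for storage; the raw password is never stored."""
        salt = secrets.token_bytes(16)  # unique random salt per user
        digest = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), salt,
            600_000,  # iteration count; tune for your own hardware
        )
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
        return hmac.compare_digest(candidate, digest)  # constant-time compare
    ```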

    I’ve seen this in construction, medical, recruitment and other industries.

    Until the law requires code auditing for how PII is handled and maintained, it’s mostly a “you’re fine until you get breached” approach. Even bodies like the ACSC (Australian Cyber Security Centre) have limited guidelines, practically worthless; at most they suggest having MFA for web-facing services. Most cyber security insurers require something, but it’s also practically self-reported, with no proof. So if someone gets breached because someone left everyone’s passwords in a table, largely unguarded, the world becomes a worse place and the list of usernames and passwords on haveibeenpwned grows.
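    On the haveibeenpwned side, here’s a rough sketch of how a service can check passwords against known breaches without ever sending the password anywhere, using HIBP’s public k-anonymity range API (the endpoint and response format are HIBP’s documented ones; the function itself is just an illustration):

    ```python
    import hashlib
    import urllib.request

    def is_pwned(password: str) -> bool:
        """Check a password against haveibeenpwned's Pwned Passwords range API."""
        sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        # Only the first 5 hex chars of the hash ever leave the machine.
        url = f"https://api.pwnedpasswords.com/range/{prefix}"
        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode()
        # Each response line is "<hash-suffix>:<breach-count>".
        return any(line.split(":")[0] == suffix for line in body.splitlines())
    ```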

    Edit: if a client pays, and therefore has control to require things like code auditing and security auditing, as well as SAML etc., then it’s something else. But in the construction industry, say, I’ve seen the same garbage-tier software used at 12 different companies, warts and all. The developer is semi-local to Australia, ignoring the offshore developers…




  • I’m far from an expert, sorry, but my experience has been so far so good (literally wizard-configured in Proxmox, set and forget), even through the loss of a single disk. Performance for VM disks was great.

    I can’t see why regular files would be any different.

    I have 3 disks, one on each host, with Ceph keeping 2 copies (tolerant to 1 disk loss) distributed across them. That’s practically what I think you’re after.
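    If it helps to picture it, here’s a toy sketch of the 2-copies-across-3-hosts idea (purely illustrative of the replication factor; Ceph’s real placement uses CRUSH, and the host names are made up):

    ```python
    import hashlib

    HOSTS = ["host-a", "host-b", "host-c"]  # hypothetical: one disk/OSD per host
    REPLICAS = 2                            # two copies: survives one disk loss

    def place(obj: str) -> list[str]:
        """Deterministically pick 2 of the 3 hosts to hold copies of obj."""
        start = int(hashlib.md5(obj.encode()).hexdigest(), 16) % len(HOSTS)
        return [HOSTS[(start + i) % len(HOSTS)] for i in range(REPLICAS)]

    print(place("vm-101-disk-0"))  # e.g. ['host-b', 'host-c']
    ```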

    I’m not sure about seeing the file system while all the hosts are offline, but if you’ve got any one system with a valid copy online you should be able to see it. I do. But my emphasis is generally on getting the host back online.

    I’m not 100% sure what you’re trying to do, but a mix of Ceph as remote storage plus something like Syncthing on an endpoint to send stuff to it might work? Syncthing might just work without Ceph.

    I also run ZFS on an 8-disk NAS that’s my primary storage, with shares for my Docker containers to send stuff to and a media server to get it off. That’s just TrueNAS SCALE, so it handles data similarly. ZFS is also very good, but until SCALE came out it wasn’t really possible to have the “add a compute node to expand your storage pool” model, which is how I want my VM hosts. Scaling ZFS out looks way harder than Ceph.

    Not sure if any of that is helpful for your case, but I recommend trying something if you’ve got spare hardware: see how it goes on dummy data, then blow it away and try something else. See how it acts when you take a machine offline. When you know what you want, do a final blow-away and implement it the way you’ve learned works best.


  • 3x Intel NUC 6th gen i5 (2 cores), 32GB RAM. Proxmox cluster with Ceph.

    I just ignored the limitation and tried with a single 32GB SODIMM once (out of a laptop) and it worked fine, but I went back to 2x 16GB DIMMs since the real limit was still the 2 cores of CPU. Lol.

    I’ve been running that cluster for 7 or so years now, since I bought them new.

    My point is that you can run off shit-tier hardware, since three nodes gives you redundancy and enough performance. I’ve run entire proofs of concept for clients off them: dual domain controllers, plus RD Gateway, broker and session hosts with FSLogix etc., back when MS had only just bought that tech. Meanwhile my home “arr” stack just plugs along in Docker containers. Even my OPNsense router is virtual, running on them. Just get a proper managed switch and take the internet in on a VLAN to the guest VM on a separate virtual NIC.

    Point is, it’s still capable today.







  • It’s solving a real problem in a niche case. Someone called it gimmicky, but it’s actually just a good tool currently produced by an unknown quantity. Hopefully it’ll get sorted, or someone else will take up the reins and create an alternative that works perfectly for all my different ISOs.

    For the average home punter, maybe even up to home-lab enthusiast, it’s probably not saving much time. For me it’s on my keyring, and I use it to reload Proxmox hosts, Nutanix hosts and individual Ubuntu VMs running ROS Noetic, not to mention reimaging test devices. Probably a thrice-weekly thing.

    So yeah, cumulatively it’s saving me a lot of time, and it trivialises the whole process.

    If this were a spanner I’d just go Sidchrome or Kincrome instead of my Stanley. But it’s a bit niche, so I don’t know what else allows for such simple multi-ISO boot. Always open to options.


  • I think you probably don’t realise that it’s standards and certifications you hate. No IT person wants yet another system generating more calls and complexity, but here’s ISO, or a cyber insurance policy, or NIST, or the ACSC asking for minimums with checklists, and a cyber review answering them with controls.

    Crazy that there’s so little understanding of why it’s there that you just think it’s the “IT guy” wanting those things.