Just a basic programmer living in California

  • 3 Posts
  • 72 Comments
Joined 1 year ago
Cake day: February 23rd, 2024


  • It comes down to what can be done or pre-generated at build or publish time versus what must be done at runtime (such as when a viewer accesses a post). Stuff that must be done at runtime is stuff you don’t have the necessary information for at publish time. For example, you can’t pre-generate a comments section because you don’t know what the comments will be before a post is published.

    For stuff like email digests and social media posts I might set up a CI/CD system (likely using GitHub Actions) that publishes static content and does those other tasks at the same time. Or if I want email digests delivered on a set schedule instead of at publish time, I might set up a scheduled workflow in the same CI/CD system. Either way you can have automation that is associated with your website without being directly integrated with your web server.
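
    For illustration, a minimal sketch of that kind of workflow file (the script paths and the schedule are hypothetical):

        on:
          schedule:
            - cron: "0 14 * * 1"   # weekly digest, Mondays 14:00 UTC
          push:
            branches: [main]       # also run at publish time
        jobs:
          publish-and-notify:
            runs-on: ubuntu-latest
            steps:
              - uses: actions/checkout@v4
              - run: ./scripts/build-site.sh    # hypothetical site build
              - run: ./scripts/send-digest.sh   # hypothetical digest sender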

    As you suggest, some stuff that must be done at runtime can be done with frontend JavaScript. That’s how I implement comments on my static site: I have JavaScript that fetches a Mastodon thread that I set up for the purpose, and displays replies under the post.
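
    A rough sketch of how that can work, assuming a hypothetical instance URL, status ID, and container element (the real Mastodon endpoint is GET /api/v1/statuses/:id/context):

        // Hypothetical instance and thread ID for illustration
        const INSTANCE = "https://mastodon.example";
        const STATUS_ID = "1234567890";

        async function loadComments() {
          const res = await fetch(`${INSTANCE}/api/v1/statuses/${STATUS_ID}/context`);
          const { descendants } = await res.json(); // replies to the status
          const container = document.getElementById("comments");
          for (const reply of descendants) {
            const el = document.createElement("div");
            el.innerHTML = reply.content; // Mastodon serves sanitized HTML
            container.appendChild(el);
          }
        }

        loadComments();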

    I don’t exactly follow your first and fourth requirements so it’s hard for me to comment more specifically. Transforming information from CSVs to HTML sounds like something that could naturally be done at build time if you have the CSVs at build time. But I’m not clear if that’s the case in your situation.


  • This is a big reason for me. Also because if anything breaks - even if my system becomes unbootable - I can select the previous generation from the boot menu, and everything is back to working.

    It’s very empowering, the combination of knowing that I won’t irrevocably break things, and that I won’t build up cruft from old packages and hand-edited config files. It’s given me confidence to tinker more than I did in other distros.




  • It seems to me that you’re asking about two different things: zero-knowledge authentication, and public key authentication. I think you’d have a much easier time using public key auth. And tbh I don’t know anything about the zero-knowledge stuff. I don’t know what reading resources to point to, so I’ll try to provide a little clarifying background instead.

    The simplest way to authenticate a user if you have their public key is probably to require every request to be signed with that key. The server gets the request, verifies the signature, and that’s it - that’s an authenticated request. Although adding a nonce to the signed content would be a good idea if replay attacks might be a problem.
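
    As a sketch of the idea in Node.js (Ed25519 via the built-in crypto module; key distribution and nonce bookkeeping are left out):

        const crypto = require("node:crypto");

        // For the demo, generate a key pair; in reality the server would
        // already have the user's public key on file.
        const { publicKey, privateKey } = crypto.generateKeyPairSync("ed25519");

        // Client: sign the request body together with a fresh nonce.
        function signRequest(body) {
          const nonce = crypto.randomUUID(); // defeats replay of captured requests
          const payload = Buffer.from(JSON.stringify({ body, nonce }));
          const signature = crypto.sign(null, payload, privateKey); // Ed25519
          return { payload, signature };
        }

        // Server: verify against the public key for the account. A real
        // server would also reject nonces it has already seen.
        function verifyRequest(payload, signature) {
          return crypto.verify(null, payload, publicKey, signature);
        }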

    If you want to be properly standards-compliant you want a standard “envelope” for signed requests. Personally I would use the multipart/signed MIME type since that is a ready-made, standardized format that is about as simple as it gets.

    You mentioned JSON Web Tokens (JWTs), which are a similar idea. That’s a format that you might think you could use for signing requests - it’s sort of another quasi-standardized envelope format for signed data. But the wrinkle is that JWTs aren’t used to sign arbitrary data. The data is expected to be a set of “claims”. A JWT is a JSON header, JSON claims, and a signature, all three of which are serialized with base64 and concatenated. Usually you would put a JWT in the Authorization header of an HTTP request like this:

    Authorization: Bearer $jwt
    

    Then the server verifies the JWT signature, inspects the “claims”, and decides whether the request is authorized based on whether it has the right claims. JWTs make sense if you want an authentication token that is separate from the request body. They are more complicated than multipart/signed content since the purpose was to standardize a narrow use case while also supporting all of the features that the stakeholders wanted.
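
    For example, issuing and checking a JWT might look like this with the jose library (an assumption - any JWT library is similar; the claims and secret are made up):

        // assumes an ES module, for top-level await
        import { SignJWT, jwtVerify } from "jose";

        const secret = new TextEncoder().encode("demo-secret-do-not-use");

        // Issue: header + claims, signed and base64url-encoded into one token
        const jwt = await new SignJWT({ role: "admin" })  // the "claims"
          .setProtectedHeader({ alg: "HS256" })           // the header
          .setSubject("user-123")
          .setExpirationTime("2h")
          .sign(secret);

        // Verify the signature, then decide authorization from the claims
        const { payload } = await jwtVerify(jwt, secret);
        const authorized = payload.role === "admin";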

    Another commenter suggested Diffie-Hellman key exchange, which I think is not a bad idea as a third alternative if you want to establish sessions. Diffie-Hellman is used in every HTTPS connection to establish a session key. In HTTPS the session key is used for symmetric encryption of all subsequent traffic over that connection. But the session key doesn’t have to be an encryption key - you could use the key exchange to establish a session password, and use that temporary password to authenticate all requests in that session. I do know of an intro video for Diffie-Hellman: https://youtu.be/Ex_ObHVftDg
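
    To make the key agreement concrete, here is a toy illustration using Node’s built-in ECDH (the elliptic-curve variant of the same idea; the curve choice is arbitrary):

        const crypto = require("node:crypto");

        // Each party generates a key pair on an agreed curve
        const alice = crypto.createECDH("prime256v1");
        const bob = crypto.createECDH("prime256v1");
        const alicePub = alice.generateKeys();
        const bobPub = bob.generateKeys();

        // Combining your own private key with the other side's public key
        // yields the same shared secret on both ends
        const aliceSecret = alice.computeSecret(bobPub);
        const bobSecret = bob.computeSecret(alicePub);
        console.log(aliceSecret.equals(bobSecret)); // true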

    The first two options I suggested require the server to have user public keys for each account. The Diffie-Hellman option also requires users to have the server’s public key available. An advantage is that Diffie-Hellman authenticates both parties to each other so users know they can trust the server. But if your server uses https you’ll get server authentication anyway during the connection key exchange. And the Diffie-Hellman session password needs an encrypted connection to be secure. The JWT option would probably also need an encrypted connection.


  • hallettj@leminal.space to Linux@lemmy.ml · How do you backup? · 1 month ago

    My conclusion after researching this a while ago is that the good options are Borg and Restic. Both give you incremental backups with cheap point-in-time snapshots. They are quite similar to each other, and I don’t know of a compelling reason to pick one over the other.
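
    To give a flavor of the workflow, here is roughly what it looks like with restic (borg is analogous; the paths are placeholders):

        restic init --repo /srv/backups/repo         # one-time repository setup
        restic -r /srv/backups/repo backup ~/data    # incremental backup run
        restic -r /srv/backups/repo snapshots        # list point-in-time snapshots
        restic -r /srv/backups/repo restore latest --target /tmp/restore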



  • hallettj@leminal.space to Linux@lemmy.ml · SWAY desktop · edited · 1 month ago

    Are you using swayidle? It’s supposed to automatically keep the screen on when there is full-screen video playing. It’s the same in GNOME: you generally don’t need caffeine if a full-screen video is going.

    How are you playing videos? Maybe the player doesn’t correctly implement the idle inhibit protocol. Or if you’re using sway bindings to make the window fullscreen instead of using the app’s own fullscreen mode then maybe the player doesn’t know it’s fullscreen, and doesn’t set up the idle inhibit.
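
    If that turns out to be the problem, one blunt workaround in your sway config is to inhibit idle for any fullscreen window, whatever the player reports (a sketch; tighten the criteria as needed):

        # keep the screen awake whenever any window is fullscreen
        for_window [all] inhibit_idle fullscreen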

    If you do want manual idle-inhibit control and you use Waybar, it has an idle inhibitor module that mimics caffeine. If you don’t use Waybar there is a little Python script you can run; kill it when you want to stop inhibiting idle. Actually, wib looks like a better option.


  • This seems like a restatement of X. We still don’t understand Y. I’m especially confused about:

    • Why are SHA-256 and friends ok, but IPFS CIDs are not? They have basically the same functionality.
    • Do you need a distributed network, or is a single server ok?

    There was some hint that maybe you’re concerned about reproducibility for CIDs? If you fix the block size, hash algorithm, and content codec you’ll get consistent results. As it happens, SHA-256 also processes data in 64-byte chunks internally.

    Anyway, Wikipedia has a list of content-addressable store implementations. A couple that stand out to me are git and git-annex.
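
    For instance, git’s object store is content-addressable right from the command line (the hash below is whatever ID the first command prints):

        echo 'hello' | git hash-object -w --stdin   # store content, print its hash
        git cat-file -p <printed-hash>              # retrieve content by that hash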


  • I’ve mainly worked as an employee, so I don’t have as much experience with freelance gigs. But nearly every job I’ve had in 18 years has come through networking. Organizing and speaking at programming meetups opened a lot of doors for me: it got me a lot of attention while giving me a chance to present myself as an expert.

    Eventually I’d worked with enough people that whenever I was looking for work, I found I knew people who had moved to new companies that were hiring.


  • I’m gonna take a couple of stabs in the dark.

    According to this Stack Overflow answer, using tee can prevent the prompt from redrawing, which makes it appear that a script has not terminated. The answerer’s workaround is to put a very short sleep command after the tee command.

    If this is what happened to you maybe the reason the script works in bash but not in zsh is because you have different prompts configured in those two shells.

    Another idea is to replace tee with sponge from moreutils. The difference is that sponge waits for the end of stdin before it starts writing, which can avoid problems in some situations.
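
    Concretely, the two workarounds look something like this (the commands, file names, and the 0.1s delay are arbitrary):

        # workaround 1: give the terminal a beat to flush tee's output
        some_command | tee output.log
        sleep 0.1

        # workaround 2: sponge buffers all of stdin before writing, which
        # also makes it safe to read and write the same file in one pipeline
        grep -v 'DEBUG' app.log | sponge app.log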


  • hallettj@leminal.space to Linux@lemmy.ml · Plug-and-play development environment · 2 months ago

    Oh yeah, and Nix has the advantage that you don’t need containers. If you want to run a graphical app in a container it might be tricky to access the window manager on the host system. Maybe that’s why you were setting up i3? Yeah, containers are great and flexible, but they also have a variety of downsides so Nix is better ;)
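
    As a minimal sketch of the no-container approach, a flake.nix dev shell might look like this (the package list is just an example); enter it with nix develop:

        {
          inputs.nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
          outputs = { self, nixpkgs }:
            let pkgs = nixpkgs.legacyPackages.x86_64-linux;
            in {
              devShells.x86_64-linux.default = pkgs.mkShell {
                packages = [ pkgs.nodejs pkgs.python3 ]; # project tooling here
              };
            };
        }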



  • I’ve been using nushell as my shell for a long while. Completions are not as polished as zsh’s - both the published completions for each program and the UX for accepting completions. But you get some nice things in exchange.

    I LOVE using nushell for scripting! CLI option parsing and autocompletions are nicely built into the function syntax. You don’t have to use the shell for this: you can write standalone scripts, and I do that sometimes. But if you don’t use it as your shell you don’t get the automatic completions.

    Circling back to my first point, writing your own completions is very easy if you don’t like the options that are out there. You write a function with the same name as the program you want completions for, use the built-in completions feature, and it’s done.
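
    For example, a custom command with its own completions might look like this (the command and the completion values are made up):

        # completion source: any function that returns a list of candidates
        def "nu-complete deploy-envs" [] {
          ["dev" "staging" "prod"]
        }

        # `env` tab-completes via the function above; flags are parsed for free
        def deploy [
          env: string@"nu-complete deploy-envs"
          --dry-run (-n)
        ] {
          if $dry_run {
            print $"would deploy to ($env)"
          } else {
            print $"deploying to ($env)"
          }
        }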





  • It would be great if there were a way to translate x86 binaries for ARM without emulation. Has Valve found some way to do that? From a bit of searching I see they’ve been testing games on ARM, and that testing involves a version of Proton/Wine that runs on ARM. But it looks to me like they’re testing with ARM binaries for those games?

    I’m as enthusiastic as anyone about more Linux usage, and I agree that Linux support for ARM is a good selling point. But the reason Linux works so well on ARM is that we use all this open-source software that anyone can compile for ARM. I don’t think it’s honest to point to closed-source software that we can’t recompile, and imply that it will work better on Linux because other software runs natively on ARM on Linux.