• 0 Posts
  • 10 Comments
Joined 2 years ago
Cake day: June 9th, 2023

  • Color is mostly a biological sensation. In low light, humans lose color discrimination because vision shifts from the cones to the rods. Objects reflect the same wavelengths either way; there just isn’t enough light for our cones to respond. Does this mean color fades in low light? It depends on the physiology of the perceiver.

    Humans have three types of cone, peak-sensitive to roughly red, green, and blue light. Dogs have only two, peaking near yellow and blue, which means they can’t distinguish certain wavelengths. To dogs and red-green colorblind humans, red and green look the same because both colors activate their receptors almost identically. Color isn’t just a property of light; it’s a biological perceptual experience.
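    To make that concrete, here’s a rough Python sketch of how red and green collapse together for a two-cone observer. It uses the Hunt-Pointer-Estevez RGB-to-LMS matrix and a Viénot-style deuteranope projection; the coefficients are approximate and purely illustrative.

    ```python
    import numpy as np

    # Linear RGB -> LMS cone-response matrix (Hunt-Pointer-Estevez values as used
    # in the Viénot dichromacy-simulation literature; approximate, illustration only).
    RGB2LMS = np.array([
        [17.8824,   43.5161,  4.11935],
        [3.45565,   27.1554,  3.86714],
        [0.0299566, 0.184309, 1.46709],
    ])

    # Deuteranope projection: the missing M-cone response is reconstructed from
    # L and S, roughly what a two-cone (dog-like or red-green colorblind) system sees.
    DEUTERANOPE = np.array([
        [1.0,      0.0, 0.0],
        [0.494207, 0.0, 1.24827],
        [0.0,      0.0, 1.0],
    ])

    def dichromat_response(rgb):
        """Cone response of a two-cone observer, normalized to remove brightness."""
        lms = DEUTERANOPE @ (RGB2LMS @ np.asarray(rgb, dtype=float))
        return lms / lms.sum()

    print(dichromat_response([1.0, 0.0, 0.0]))  # pure red   -> ~[0.668, 0.331, 0.001]
    print(dichromat_response([0.0, 1.0, 0.0]))  # pure green -> ~[0.665, 0.332, 0.003]
    ```

    After normalizing away brightness, pure red and pure green land on nearly the same cone-response ratios: without a third cone type, the two colors differ mainly in brightness, not in hue.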


  • ianonavy@lemmy.world to Selfhosted@lemmy.world · What is Docker?

    A signature only tells you where something came from, not whether it’s safe. Saying APT is more secure than Docker just because it checks signatures is like saying a mysterious package from a stranger is safer because it includes a signed postcard and matches the delivery company’s database. You still have to trust both the sender and the delivery company. Sure, it’s important to reject signatures you don’t recognize—but the bigger question is: who do you trust?

    APT trusts its keyring. Docker pulls images over HTTPS, and TLS already ensures you’re talking to the right registry. If you trust the registry and the image source, that’s often enough. If you don’t, tools like Cosign let you verify signatures. Pulling random images is just as risky as adding sketchy PPAs or running curl | bash, unless, again, you trust the source. I certainly trust Debian and Ubuntu more than Docker the company, but “no signature = insecure” misses the point.
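    For example, a minimal sketch of signature verification with Cosign, assuming it’s installed and using a made-up image name and key path:

    ```python
    import subprocess

    # Hypothetical image reference and public key path; substitute your own.
    image = "registry.example.com/myapp:1.0"

    result = subprocess.run(
        ["cosign", "verify", "--key", "cosign.pub", image],
        capture_output=True,
        text=True,
    )
    if result.returncode == 0:
        print("signature verified against the supplied public key")
    else:
        print("verification failed:", result.stderr.strip())
    ```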

    Pointing out supply chain risks is good. But calling Docker “insecure” without nuance shuts down discussion and doesn’t help anyone think more critically about safer practices.





  • Adding onto what TheMrDrProf said: basically, Let’s Encrypt just wants to know you actually control the domain you’re requesting the certificate for. With HTTP challenges, your domain has to resolve to a working HTTP server. With DNS challenges, you need API access to your DNS provider so that Certbot can set a temporary record that proves ownership.
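    Under the hood, the HTTP challenge (HTTP-01) is just Let’s Encrypt fetching a token from /.well-known/acme-challenge/ on your domain. Here’s a minimal sketch of the responder side, with made-up token values (real clients like Certbot or NPM handle this for you):

    ```python
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Tokens handed out by the ACME server during an order, mapped to their key
    # authorizations (token + "." + account-key thumbprint). Values are made up.
    CHALLENGES = {
        "EXAMPLE_TOKEN": "EXAMPLE_TOKEN.EXAMPLE_ACCOUNT_THUMBPRINT",
    }

    class AcmeChallengeHandler(BaseHTTPRequestHandler):
        PREFIX = "/.well-known/acme-challenge/"

        def do_GET(self):
            token = self.path[len(self.PREFIX):] if self.path.startswith(self.PREFIX) else None
            if token in CHALLENGES:
                body = CHALLENGES[token].encode()
                self.send_response(200)
                self.send_header("Content-Type", "text/plain")
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    # Let's Encrypt must be able to reach this on port 80 at the domain being validated.
    HTTPServer(("", 80), AcmeChallengeHandler).serve_forever()
    ```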

    If you’re using NPM to manage your certs then, as TheMrDrProf said, as long as the HTTP request from Let’s Encrypt can make it to your NPM through the VPS proxy, you should be able to pass the challenge and get a certificate. The IP address the domain resolves to doesn’t really matter as long as the request makes it all the way to the challenge HTTP server, which in this case is NPM.

    In NPM, you should see a “Use a DNS Challenge” option. If you use that and your DNS provider is supported (if not, I recommend Cloudflare), then your VPS proxy doesn’t even need to be working in order to renew certificates. This has a few advantages, such as being able to shut off unencrypted traffic on port 80 completely.
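    For a sense of what the DNS challenge automates behind the scenes, here’s a rough sketch of publishing the required TXT record via the Cloudflare API (zone ID, API token, and domain are placeholders):

    ```python
    import requests

    # Hypothetical credentials; substitute your own zone ID and scoped API token.
    ZONE_ID = "your-zone-id"
    API_TOKEN = "your-cloudflare-api-token"

    def publish_acme_txt(domain: str, validation_token: str) -> None:
        """Create the temporary TXT record an ACME DNS-01 challenge checks for."""
        resp = requests.post(
            f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/dns_records",
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            json={
                "type": "TXT",
                "name": f"_acme-challenge.{domain}",
                "content": validation_token,
                "ttl": 120,  # short TTL; the record is removed after validation
            },
            timeout=10,
        )
        resp.raise_for_status()

    publish_acme_txt("example.com", "TOKEN_FROM_ACME_SERVER")
    ```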


    1. The certificate and private key need to be on your home server since that’s where the TLS is decrypted.
    2. You should be able to tunnel TLS traffic through WireGuard, so no port forwarding is needed.
    3. You’d probably want to move Nginx Proxy Manager to your home server as an ingress gateway (you can keep all the config and TLS certificates). Then your VPS no longer needs that complexity; something like HAProxy, vanilla Nginx, or Traefik would suffice. It seems NPM has an open issue to add support for TLS passthrough, but in my opinion it’s simpler to just have your VPS forward all traffic to one port on your home server.

    For added security, you can make sure the proxy on the VPS only routes traffic for the correct domain using SNI. That way, if someone hits your IP randomly, traffic only reaches your home server when the correct domain name was requested as well.
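    In production you’d do that with HAProxy or Nginx’s stream module, but here’s a rough Python sketch of the idea: peek at the TLS ClientHello, read the SNI hostname, and only forward matching connections (hostname and backend address are made up):

    ```python
    import socket
    import threading

    EXPECTED_SNI = "home.example.com"   # hypothetical domain
    BACKEND = ("10.0.0.2", 443)         # home server's WireGuard address (hypothetical)

    def extract_sni(hello: bytes):
        """Walk a TLS ClientHello to the server_name extension; return the hostname."""
        try:
            i = 5 + 4 + 2 + 32                              # record + handshake headers, version, random
            i += 1 + hello[i]                               # session ID
            i += 2 + int.from_bytes(hello[i:i + 2], "big")  # cipher suites
            i += 1 + hello[i]                               # compression methods
            i += 2                                          # total extensions length
            while i + 4 <= len(hello):
                ext_type = int.from_bytes(hello[i:i + 2], "big")
                ext_len = int.from_bytes(hello[i + 2:i + 4], "big")
                if ext_type == 0:                           # server_name extension
                    name_len = int.from_bytes(hello[i + 7:i + 9], "big")
                    return hello[i + 9:i + 9 + name_len].decode()
                i += 4 + ext_len
        except (IndexError, UnicodeDecodeError):
            pass
        return None

    def pump(src: socket.socket, dst: socket.socket):
        """Copy bytes one way until either side closes."""
        try:
            while data := src.recv(65536):
                dst.sendall(data)
        except OSError:
            pass
        finally:
            src.close()
            dst.close()

    def serve(listen_port: int = 443):
        with socket.create_server(("", listen_port)) as server:
            while True:
                client, _ = server.accept()
                # Peek so the bytes stay queued for the backend; assumes the whole
                # ClientHello arrives in the first segment (fine for a sketch).
                hello = client.recv(4096, socket.MSG_PEEK)
                if extract_sni(hello) != EXPECTED_SNI:
                    client.close()                          # wrong or missing SNI: drop
                    continue
                upstream = socket.create_connection(BACKEND)
                threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
                threading.Thread(target=pump, args=(upstream, client), daemon=True).start()

    if __name__ == "__main__":
        serve()
    ```

    Note that the traffic stays encrypted end to end: the proxy only reads the plaintext SNI field in the handshake and never decrypts anything.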

    What you’re doing makes sense to me. Good luck!




  • For rule 2, I would suggest two changes:

    1. Rename “blacklist” to “blocklist” in the spirit of inclusivity
    2. Focus on reliability and accuracy rather than political bias

    My guess is that the purpose of rule #2 is to prevent opinion pieces and misinformation from being published as “news”. If the goal is to limit opinion articles presented as “news”, then perhaps the rule should instead clarify A) whether opinion pieces are allowed (and how that is defined) and B) if they are allowed, whether they should be marked as such.

    If the goal of rule #2 is to achieve some sort of “political neutrality”, I would challenge whether that should be a goal at all. This community has an inherent political bias that manifests in which articles people share and how they upvote or downvote. I don’t think removing sources on the basis of political affiliation per se minimizes harm; I strongly prefer a focus on removing posts that contain verifiable inaccuracies. Of course, it will ultimately be up to the moderation team to decide what constitutes misinformation (there is bias there too), but I hope that making accuracy the explicit goal will lead them to consider their own biases more carefully when exercising their moderation power.

    Edit: typo