• 0 Posts
  • 158 Comments
Joined 1 year ago
Cake day: June 30th, 2023



  • vMix popularity exploded during the pandemic. A lot of conferences became a blend of Teams/Zoom/Google Meet and vMix.

    It might be hardware based, like a multi-M/E video mixer (Blackmagic make cheap ones), or more of a screen manager (like a Barco E2 or Analog Way LiveCore). But unless there are specific production requirements, vMix is much more likely: it’s (now) proven, and much cheaper!

    OBS can absolutely do it, and there is other open-source software that can too.
    I’ve seen people bastardise Resolume into something that looks decent.
    There are also online studio systems where everything you do is virtualised. StreamYard used to be like this, until it was bought by Hopin (I think it was Hopin).


  • You can run a reverse proxy on the VPS and use SNI routing (the requested domain is sent in clear text in the TLS handshake, so the proxy can route on it without decrypting anything), then use Proxy Protocol to attach the real source IP to the TCP connection.
    This way, you don’t have to terminate HTTPS on the VPS, and you can load balance between a couple of WireGuard peers for redundancy (or direct them to different reverse proxies, or whatever).
    On your home servers, you will need an additional frontend that accepts Proxy Protocol from the VPS (Proxy Protocol prepends a header that isn’t standard HTTP/S, so a reverse proxy that isn’t expecting it will drop the connection as unknown/broken/etc).
    This way, your home reverse proxy knows the original client IP and can attach it to the decrypted HTTP requests as X-Forwarded-For. Or you can do ACLs based on the original client IP. Or whatever.

    I haven’t found a firewall that pays attention to Proxy Protocol TCP headers, but I haven’t found that to really be an issue. I don’t really have a use case.
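    A minimal sketch of the VPS side in nginx (assuming the stream and ssl_preread modules are available; all hostnames and upstream addresses here are made-up examples):

    ```nginx
    # VPS: route by SNI without terminating TLS, and pass the real
    # client IP downstream via Proxy Protocol.
    stream {
        map $ssl_preread_server_name $backend {
            photos.example.com  10.0.0.2:443;   # WireGuard peer 1
            media.example.com   10.0.0.3:443;   # WireGuard peer 2
            default             10.0.0.2:443;
        }

        server {
            listen 443;
            ssl_preread on;        # read the SNI from the ClientHello
            proxy_pass $backend;
            proxy_protocol on;     # prepend the Proxy Protocol header
        }
    }
    ```

    On the home side, the nginx listener then needs `proxy_protocol` on its `listen` directive, plus `set_real_ip_from`/`real_ip_header proxy_protocol;` so the recovered client IP ends up in X-Forwarded-For.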







  • towerful@programming.dev to Games@lemmy.world · The N64 · 1 month ago

    Any older disc-based console also required a memory card.
    Pretty sure the N64 controller was the first to have an analogue joystick.
    I think a lot of the quirks of the N64 were because they were essentially first drafts. A lot of firsts, a lot of ground-breaking tech.
    Nobody knew what they were doing at that time, so nothing was wrong.


  • It’s not a workaround.
    In the old days, if you had 2 services that were hard-coded to use the same network port, you needed virtualisation or a separate server, and you had to make sure the networking for those was correct.

    Network ports allow multiple services to share the same network adapter, as a port is like a “sub” address.
    Docker being able to remap host network ports to container ports is a huge feature.
    If a container doesn’t need to be accessed from outside the Docker network, you don’t need to expose the port at all.

    The only way to have multiple services on the same port is to use either a load balancer (for multiple instances of the same service) or an application-aware reverse proxy (like nginx, HAProxy, Caddy etc. for web things; I’m sure there are application-aware reverse proxies for other protocols).
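    A quick Compose sketch of that remapping (service names and images are just examples): two containers both listen on port 80 internally but are published on different host ports, and a third is reachable only inside the Docker network.

    ```yaml
    # docker-compose.yml (illustrative): both web services use container
    # port 80; Docker remaps them to different host ports.
    services:
      site-a:
        image: nginx:alpine
        ports:
          - "8080:80"   # host 8080 -> container 80
      site-b:
        image: nginx:alpine
        ports:
          - "8081:80"   # host 8081 -> container 80
      internal-api:
        image: nginx:alpine
        # no "ports:" entry: reachable as internal-api:80 from the other
        # containers on the default network, but not from the host
    ```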



  • How the Linux kernel “made it” and is still free and open source is - imo - one of the pinnacles of humanity.
    It’s inspired so much other software to adopt the same philosophy, and modern humanity/science/society stands on those shoulders.

    I think science has missed that boat.
    Or rather, that pinnacle came before the tools to support such an open-source atmosphere/community existed… so it didn’t miss the boat, it swam before the boat was built.






  • If they are on the same subnet, why are they going via the router? Surely the NIC/OS will know the destination is a local address within its subnet and will send to it directly, as opposed to not knowing where to send the packet and letting the router (default gateway) deal with it.

    I’m assuming you are using a standard 24-bit subnet mask, because you haven’t provided anything that indicates otherwise, and the issue you present would be indicative of a local link being used where that is possible.
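    The on-link decision the OS makes can be sketched with Python’s ipaddress module (addresses here are made up): if the destination falls inside the interface’s subnet, the packet is sent directly; otherwise it goes to the default gateway.

    ```python
    import ipaddress

    # Example interface config: 192.168.1.10 with a 24-bit mask.
    iface = ipaddress.ip_interface("192.168.1.10/24")

    def next_hop(dest: str) -> str:
        """Return 'direct' if dest is on-link, else 'gateway'."""
        if ipaddress.ip_address(dest) in iface.network:
            return "direct"   # same subnet: ARP for it, send directly
        return "gateway"      # different subnet: hand off to the router

    print(next_hop("192.168.1.42"))  # direct
    print(next_hop("10.0.0.5"))      # gateway
    ```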


  • Holy shit.
    Like, I can see how a brief of “thrust stage with 3 screens, entrances between the screens, and access to the screens for manual pointing” might lead to this stage design. The cutouts at the sides might be justifiable for camera access.
    But that cutout triangle in the middle would be a pain to engineer and a health & safety nightmare to justify. So the triangle cutout is absolutely deliberate, and will have been discussed in depth. The only reason to keep it is because someone knew.

    Edit:
    Also, considering their logo is star shaped, I’m surprised the centre thrust isn’t more star shaped. It seems odd to go from 5 points and many sides to 4 points and 4 sides, especially considering they are fine with the engineering and H&S justification of a triangle cutout mid-stage.


  • So, is public accessibility actually required?
    Does it need to be exposed to the public internet?

    Why not use WireGuard (or another VPN)? Even easier is Tailscale.
    If you are hand-selecting users (i.e. it doesn’t actually need to be publicly accessible), then a VPN is the most secure option; just run a reverse proxy behind it for convenience & certs.
    Or set up client certificate authentication, so only users who install a certificate issued by you can connect to the service (dunno how that works for 3rd-party Immich apps, though).

    Like I asked, what is your actual threat model?
    What are your requirements?
    Is public accessibility actually required?
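    For the VPN route, a minimal WireGuard client config sketch (keys, addresses, and the endpoint are all placeholders): the phone or laptop tunnels home and talks to the service over the private address, so nothing is exposed to the public internet.

    ```ini
    # Client (phone/laptop) config sketch -- every value is a placeholder.
    [Interface]
    PrivateKey = <client-private-key>
    Address    = 10.8.0.2/32

    [Peer]
    PublicKey  = <home-server-public-key>
    Endpoint   = vpn.example.com:51820
    AllowedIPs = 10.8.0.0/24        # only route the VPN subnet, not all traffic
    PersistentKeepalive = 25        # keep NAT mappings alive
    ```

    The Immich app then points at the server’s tunnel address (e.g. http://10.8.0.1:2283) instead of a public hostname.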