Every time, I just put docker-compose.yml in one user's (my user's) home directory and call it a day.

But what about a service with multiple admins, or one whose load is split horizontally across machines?

  • witten@lemmy.world · 10 points · 1 year ago

    I’m not sure I understand the question. By “data” do you mean “configuration”? If you’ve got multiple devs working on a project (or even if you don’t), IMO your Docker Compose or Podman configuration should be in source control. That will allow multiple devs to all collaborate on the config, do code reviews, etc. Then, you can use whatever your deployment method is to effect those changes on your server(s)… manually run Ansible, automatically run CI-triggered deployment, whatever.
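    As a sketch of that last deployment step, an Ansible play using the community.docker collection might look like the following — the host group, paths, and stack name are illustrative assumptions, not anything from this thread:

    ```yaml
    # playbook.yml — hypothetical example; assumes the community.docker
    # collection is installed on the control node
    - hosts: docker_hosts
      become: true
      tasks:
        - name: Copy the compose project from the repo checkout
          ansible.builtin.copy:
            src: stacks/myapp/
            dest: /opt/stacks/myapp/

        - name: (Re)deploy the stack with docker compose
          community.docker.docker_compose_v2:
            project_src: /opt/stacks/myapp
            state: present
    ```

    The same playbook can be run by hand or from a CI job, which is what makes the source-controlled config useful for multiple admins.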

  • ghulican@lemmy.ml · 4 points · edited · 1 year ago

    Env variables get saved to 1Password (self hosted alternative would be Infisical) with a project for each container.

    Docker compose files get synced up to my GitHub account.

    I have been using the new “include” attribute to split up each container into its own docker compose file.

    Usually I organize by service type:

    • media
      • sonarr
      • radarr
    • downloaders
      • sab
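
    The include attribute mentioned above (added in Docker Compose v2.20) stitches per-service files together from a top-level file; the file names here are just illustrative:

    ```yaml
    # docker-compose.yml — top-level file pulling in per-category stacks
    include:
      - media/compose.yaml        # e.g. sonarr, radarr
      - downloaders/compose.yaml  # e.g. sab
    ```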

    Not sure if that answers the question…

  • Eager Eagle@lemmy.world · 4 points · edited · 1 year ago

    I had Portainer set up, but it was clunky and the web UI added little value.

    Now I just have a local git repo with a directory for each compose stack and run docker compose commands as needed. The repo holds all the yaml and config files I care to keep track of. Env variables live in gitignored .env files, with matching .env.example files in version control. I keep the sensitive info in my password manager in case I have to recreate a .env from its example counterpart.
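    A minimal sketch of that .env / .env.example split, with a made-up variable name — Compose automatically substitutes values from a .env file sitting next to the compose file:

    ```yaml
    # compose.yaml — ${DB_PASSWORD} is read from the gitignored .env;
    # a committed .env.example lists the same keys with placeholder values
    services:
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: ${DB_PASSWORD}
    ```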

    To handle volumes, I avoid docker-managed volumes at all costs in favor of cleaner bind mounts. This way the data for each stack always lives alongside the corresponding configuration files. If I care about keeping the data, it’s either version controlled (when mostly text) or backed up with kopia (when mostly binary).
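    That preference might look like this in a stack's compose file (the image name and paths are made up):

    ```yaml
    services:
      app:
        image: ghcr.io/example/app:latest  # hypothetical image
        volumes:
          - ./data:/var/lib/app  # bind mount: data sits next to the compose file
          # vs. a docker-managed named volume, which would put the data
          # under /var/lib/docker/volumes instead:
          # - app-data:/var/lib/app
    ```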

    • retrodaredevil@lemmy.world · 2 points · 1 year ago

      I do something similar, but I avoid gitignore at all costs because any secret data should be readable only by root. Plus, any data that is not version controlled goes in a common directory, so all I have to do is back up that directory and I’m good. It makes moving between machines easy if I ever need to do that.
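      A rough illustration of the permissions idea, using a hypothetical temp path (a real setup would chown the file to root as well):

      ```shell
      # Sketch: keep a secrets file readable only by its owner
      mkdir -p /tmp/stacks-demo/common
      printf 'DB_PASSWORD=changeme\n' > /tmp/stacks-demo/common/.env
      chmod 600 /tmp/stacks-demo/common/.env     # owner read/write only
      stat -c '%a' /tmp/stacks-demo/common/.env  # prints 600
      ```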

  • Toribor@corndog.social · 3 points · edited · 1 year ago

    I’ve been slowly moving all my containers from compose to pure Ansible instead. It also makes it easier to manage creating config files, setting permissions, cycling containers after updating files, etc.

    I still have a few things in compose though and I use Ansible to copy updates to the target server. Secrets are encrypted with Ansible vault.
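    One such task might be sketched like this, assuming the community.docker collection and a vault-encrypted variable (the names are all hypothetical):

    ```yaml
    # tasks/app.yml — container managed directly by Ansible, no compose file
    - name: Deploy app container
      community.docker.docker_container:
        name: app
        image: ghcr.io/example/app:latest
        restart_policy: unless-stopped
        env:
          DB_PASSWORD: "{{ vault_db_password }}"  # from an ansible-vault encrypted vars file
    ```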

  • skadden@ctrlaltelite.xyz · 2 points · 1 year ago

    I host forgejo internally and use that to sync changes. .env and data directories are in .gitignore (they get backed up via a separate process).

    All the files are part of my docker group, so anyone in it can read everything. Restarting services is handled by systemd unit files (so sudo systemctl stop/start/restart); any user that needs to manipulate containers would have the appropriate sudo access.
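    A unit file for that pattern might look roughly like this — the service name, paths, and binary location are assumptions, not taken from the comment:

    ```ini
    # /etc/systemd/system/myservice-stack.service (hypothetical name and paths)
    [Unit]
    Description=myservice compose stack
    Requires=docker.service
    After=docker.service

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    WorkingDirectory=/opt/stacks/myservice
    ExecStart=/usr/bin/docker compose up -d
    ExecStop=/usr/bin/docker compose down

    [Install]
    WantedBy=multi-user.target
    ```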

    It’s only me who does all this though; I set it up this way for funsies.

  • Lodra@programming.dev · 1 point · 1 year ago

    Well I’m also not entirely sure what you’re looking for. But here’s my guess 😅

    None of this stuff should run under the account of a human user. Without docker/compose, I would suggest creating one account for each service and deploying each to its own directory with its own permissions. With docker compose, just deploy them all together and run everything under a single service account. Probably name it “docker”. When an admin needs access, they sudo su - docker and then do stuff.
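    The service-account setup sketched above could be bootstrapped with something like this (the account name comes from the comment; the stacks path is made up — these commands need root and aren't meant to be run blindly):

    ```shell
    # Create a dedicated service account that owns the compose files
    sudo useradd --system --create-home --shell /bin/bash docker
    sudo mkdir -p /home/docker/stacks
    sudo chown -R docker:docker /home/docker/stacks

    # Admins switch to the service account to manage the stacks
    sudo su - docker
    ```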