Sir Garbagetruck

@drq

Yes, there likely is; however, I am not sure I can answer this, because the question is missing the topology - which you LIKELY defined in your docker-compose file.

Unless you DIDN'T and this is "across multiple machines running the same docker-compose item" - again, not sure of the topology.

So essentially your question is "what is the best practice for syncing data between backend services." Apply those practices to your idea.

Docker is, you know, just one way of writing your service topology and infrastructure as code.

(I might suggest using a distributed filesystem, maybe gluster or something like that. However, I haven't had to do this recently despite having a project that really needed it, because that portion of the project was overengineered and we solved the issue by NOT OVERENGINEERING IT, replacing 6 unsynced backends with a single central data store)

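A minimal sketch of what the gluster suggestion could look like on the compose side, assuming a replicated GlusterFS volume is already mounted at /mnt/gluster on every host (the service name, image, and paths here are hypothetical):

```yaml
# docker-compose.yml -- hypothetical service that keeps its shared state
# on a GlusterFS mount which already exists at /mnt/gluster on each host.
services:
  app:
    image: example/app:latest   # hypothetical image
    volumes:
      # Bind-mount a directory on the replicated gluster volume;
      # every host running this same compose file sees the same data.
      - /mnt/gluster/appdata:/var/lib/app/data
```
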
Dr. Quadragon ❌

@Truck
> across multiple machines running the same docker-compose item

Yes, exactly. I deploy the same docker-compose file across multiple machines for HA and need to share some data between them.

Sir Garbagetruck

@drq

Yeah. So basically, a backend filesystem, absolutely NOT frontend, that is shared between docker hosts.

Sort of outside the realm of docker/docker compose and in the realm of ansible/salt/puppet/chef; maybe this is where that 'terraform' thing comes in with some folks.

Me, I'd look at it overall and design the system to be robust first, then worry about automating it, but... maybe someone who is more 'automation first' should chip in with an idea.

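For the automation angle mentioned above, a hedged sketch of an Ansible play that gives every docker host the same gluster mount before the compose stack starts (the inventory group, server, and volume name are assumptions):

```yaml
# site.yml -- hypothetical play: mount the same replicated GlusterFS
# volume on every docker host before starting the compose stack.
- hosts: docker_hosts            # assumed inventory group
  become: true
  tasks:
    - name: Mount the shared GlusterFS volume
      ansible.posix.mount:
        src: gluster1:/shared    # assumed gluster server:volume
        path: /mnt/gluster
        fstype: glusterfs
        opts: defaults,_netdev
        state: mounted
```
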
Dr. Quadragon ❌

@Truck The system is shit.

I just need those synced so it doesn't work only 1/n of the time, where n is the number of nodes.

Also, this data is not important - I'm okay with losing it; it just needs to be synced up.
