Of course this is containerized, so we just need to know the Docker commands and the cost of hosting it.
Maybe we should make an inventory of “our” resources (I think a lot of people here use Hetzner, so it might be interesting to see how much “cloud” could be turned into colocation, for example).
If we are to automate, we need to establish some “protocol” beyond the technology itself.
If we converge on Docker, we already have a common base for disseminating the containers. But I see the challenge in a few other questions:
- Which addresses should these containers probe?
- Do they scrape some entries in librehosters.json?
- Do they scrape all addresses published there, or only a subset (meaning some orchestration decides that a given address is to be probed by, say, 3 other containers; think of replicas)?
- Can all the information to be scraped be public, or do we need an internal trusted chain?
I think that Bob has to register his list of endpoints, probably as an array in a JSON file.
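For instance, the registered list could be as flat as a JSON array of scrape targets (hostnames and schema here are purely hypothetical):

```json
[
  "status.bob.example.org:443",
  "mail.bob.example.org:9100"
]
```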
And Alice would have to watch that endpoint to re-render a Prometheus configuration and restart Prometheus.
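A minimal sketch of Alice's side, assuming Bob publishes a flat JSON array of targets at a hypothetical URL. Instead of rewriting the main config and restarting, this writes a `file_sd` target file, which Prometheus re-reads on its own:

```python
import json
from urllib.request import urlopen

BOB_REGISTRY = "https://bob.example.org/endpoints.json"  # hypothetical URL
TARGETS_FILE = "/etc/prometheus/file_sd/bob.json"        # referenced by a file_sd_config

def render_file_sd(endpoints, source="bob"):
    """Turn a plain list of host:port strings into Prometheus
    file_sd target groups, labelling where they came from."""
    return [{"targets": endpoints, "labels": {"librehoster": source}}]

def sync():
    # Alice polls Bob's published endpoint list...
    with urlopen(BOB_REGISTRY) as resp:
        endpoints = json.load(resp)
    # ...and re-renders the target file. Prometheus watches file_sd
    # files for changes, so no restart or reload is needed.
    with open(TARGETS_FILE, "w") as f:
        json.dump(render_file_sd(endpoints), f, indent=2)
```

The `sync()` call could run from cron; the rendering itself is just a pure function over Bob's list.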
Or, when Bob changes his endpoint list, he could send a webhook to Alice (which is better for the environment than constant polling).
In a k8s context, we could also imagine that Alice hosts Prometheus in a k8s cluster. The list of services to monitor is then just a CRD, and Bob can modify that CRD himself.
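Assuming Alice runs the Prometheus Operator, its ServiceMonitor CRD is a concrete example of this: Bob declares what to scrape, and Prometheus picks it up automatically (names and labels below are hypothetical):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: bob-endpoints          # hypothetical, managed by Bob
  labels:
    team: librehosters         # matched by Alice's Prometheus instance
spec:
  selector:
    matchLabels:
      app: bob-service         # hypothetical label on Bob's Service
  endpoints:
    - port: metrics
      interval: 30s
```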