Lessons from launching billions of Docker containers

The Iron.io Platform is an enterprise job-processing system for building powerful, job-based, asynchronous software. Simply put, developers write jobs in any language using familiar tools like Docker, then trigger the code to run via Iron.io’s REST API, webhooks, or the built-in scheduler. Whether a job runs once or millions of times per minute, the work is distributed across clusters of “workers” that can be deployed to any public or private cloud, with each worker running in a Docker container.
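To make the trigger path concrete, here is a minimal sketch of queuing a task through a REST API of this kind. The endpoint path, project ID, field names, and token are illustrative assumptions, not a verbatim copy of the IronWorker API:

```python
import json

# Hypothetical API base; a real deployment would use its own host.
API_BASE = "https://worker.example.iron.io/2"

def build_task_request(project_id, code_name, payload, token):
    """Build the URL, headers, and body for queuing one worker task.

    All names here (projects/{id}/tasks, code_name, payload) are
    assumptions for illustration.
    """
    url = f"{API_BASE}/projects/{project_id}/tasks"
    headers = {
        "Authorization": f"OAuth {token}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "tasks": [{"code_name": code_name, "payload": json.dumps(payload)}]
    })
    return url, headers, body

url, headers, body = build_task_request(
    "p123", "image-resizer", {"src": "s3://bucket/img.png"}, "TOKEN"
)
# An HTTP client (e.g. requests.post(url, headers=headers, data=body))
# would submit the task; a webhook or the scheduler would hit the same
# endpoint on the developer's behalf.
```

The same request shape serves all three trigger paths, which is what lets one queue of tasks feed the whole worker fleet.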

At Iron.io we use Docker both to serve our internal infrastructure needs and to execute customers’ workloads on our platform. For example, our IronWorker product maintains more than 15 stacks of Docker images in block storage, each providing a language and library environment for running code. IronWorker customers draw on only the libraries they need to write their code, then upload that code to Iron.io’s S3 file storage. From there, our message queuing service merges the base Docker image with the user’s code in a new container, runs the process, and destroys the container when it finishes.
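The merge-run-destroy lifecycle described above can be sketched as follows. This is illustrative code, not Iron.io’s implementation: a temporary directory stands in for the container filesystem, where a real worker would instead do something like `docker run --rm` against the base image with the user’s code mounted in:

```python
import shutil
import subprocess
import tempfile
from pathlib import Path

def run_job(base_env: dict, user_code: str) -> str:
    """Simulate one worker run: fresh sandbox, inject user code,
    execute, then tear everything down.

    base_env stands in for the environment a base image would provide.
    """
    sandbox = Path(tempfile.mkdtemp(prefix="worker-"))  # "new container"
    try:
        script = sandbox / "task.py"
        script.write_text(user_code)                    # merge in user code
        result = subprocess.run(
            ["python3", str(script)],
            env={**base_env, "PATH": "/usr/bin:/bin"},  # base-image environment
            capture_output=True, text=True, check=True,
        )
        return result.stdout                            # job output
    finally:
        shutil.rmtree(sandbox)                          # destroy the "container"

out = run_job({}, "print(2 + 2)")
```

Destroying the sandbox after every run is the point: each job starts from a clean environment, so one customer’s state can never leak into the next job.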


from InfoWorld Cloud Computing http://ift.tt/2ceUGEC
