Local vs Cloud
A pipeline listed at conducto.com/app runs in one of two modes. If it's in local mode, its commands run in containers on the machine that launched it. If it's in cloud mode, Conducto runs them in the cloud. The pipeline's mode determines where its resources (CPU, memory, network access, etc.) come from.
Local mode is free to anybody with a Conducto account, but cloud mode requires some extra steps. This page will describe the two and show you how to enable cloud mode for your Conducto org.
This section will explain how local mode works. Then we'll look at some interesting ways to use local pipelines.
To launch a pipeline in local mode, call its definition with
python pipeline.py --local
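Under the hood, `--local` is an ordinary command-line switch on the pipeline script. As a rough, stdlib-only illustration of how such a mode switch might be parsed (this is a sketch, not Conducto's actual implementation — its real CLI handles this for you):

```python
import argparse

def parse_mode(argv):
    """Parse a --local/--cloud mode switch, defaulting to local.

    Illustrative only: Conducto's real CLI does this itself.
    """
    parser = argparse.ArgumentParser()
    group = parser.add_mutually_exclusive_group()
    group.add_argument("--local", dest="mode", action="store_const",
                       const="local", help="run containers on this machine")
    group.add_argument("--cloud", dest="mode", action="store_const",
                       const="cloud", help="run containers in the cloud")
    parser.set_defaults(mode="local")
    return parser.parse_args(argv).mode

print(parse_mode(["--local"]))   # -> local
print(parse_mode(["--cloud"]))   # -> cloud
```

The two flags are mutually exclusive, so a script can't be asked to run in both modes at once.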
New pipelines in any mode get a random id (foo-bar, for instance).
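The ids are two short words joined by a hyphen. A hypothetical sketch of generating ids in that shape (the word lists below are made up; Conducto's actual generator and vocabulary may differ):

```python
import random

# Made-up word lists for illustration; the real id vocabulary is Conducto's own.
ADJECTIVES = ["foo", "brave", "calm", "witty"]
NOUNS = ["bar", "otter", "maple", "comet"]

def random_pipeline_id(rng=random):
    """Return a readable id like 'foo-bar'."""
    return f"{rng.choice(ADJECTIVES)}-{rng.choice(NOUNS)}"

print(random_pipeline_id())
```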
For local pipelines, Conducto uses the local Docker daemon to start a manager container with a name like conducto_manager_foo-bar. It connects with the Conducto web app and waits for instructions.
When you run the pipeline from a browser, we instruct the manager to run your commands in worker containers.
You can watch this happen by running docker ps while your pipeline runs.
$ docker ps
NAMES
conducto_manager_foo-bar    <- per-pipeline bridge to conducto.com
conducto_worker_foo-bar…    <- running one of your commands
If you time it right, you'll catch a worker in action.
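Conducto's container names encode the pipeline id, so they're easy to pick apart. A small sketch of classifying names collected from `docker ps --format '{{.Names}}'`, assuming the `conducto_<role>_<id>` pattern shown above (the worker-name suffix here is hypothetical):

```python
# Sample names in the shape shown above; the worker suffix is invented
# for illustration.  In practice you'd collect names with:
#   docker ps --format '{{.Names}}'
SAMPLE_NAMES = [
    "conducto_manager_foo-bar",
    "conducto_worker_foo-bar_1a2b",
]

def classify(name):
    """Split a conducto container name into (role, pipeline_id).

    Assumes the conducto_<role>_<id> pattern; returns None otherwise.
    """
    parts = name.split("_", 2)
    if len(parts) < 3 or parts[0] != "conducto":
        return None
    role, rest = parts[1], parts[2]
    pipeline_id = rest.split("_")[0]  # drop any per-worker suffix
    return role, pipeline_id

for name in SAMPLE_NAMES:
    print(classify(name))
```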
If you sleep a local pipeline, the manager container stops.
(Stopped containers aren't gone forever, they're just "turned off". You can see them with docker ps -a.)
The manager holds the runtime data that it collected while it ran your pipeline. This includes how long your commands took to complete, their return codes, and anything that they wrote to stdout/stderr.
If the container is stopped, it can't serve runtime data to the Conducto web app--so you'll need to wake it if you want to examine past pipeline runs. This makes it easy to discover what happened after the fact, and it means that we don't need to store your runtime data.
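To make "runtime data" concrete, here's a stdlib-only sketch of the kind of per-command record a manager might keep. The field names and structure are hypothetical, not Conducto's actual schema:

```python
import subprocess
import time
from dataclasses import dataclass

@dataclass
class RunRecord:
    """Hypothetical per-command record; not Conducto's actual schema."""
    command: str
    duration_s: float
    returncode: int
    stdout: str
    stderr: str

def run_and_record(command):
    """Run a shell command and capture the data a manager would serve."""
    start = time.monotonic()
    proc = subprocess.run(command, shell=True, capture_output=True, text=True)
    return RunRecord(
        command=command,
        duration_s=time.monotonic() - start,
        returncode=proc.returncode,
        stdout=proc.stdout,
        stderr=proc.stderr,
    )

rec = run_and_record("echo hello")
print(rec.returncode, rec.stdout.strip())
```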
From the Pipelines tab, you can see which of your machines is ready to run pipelines by selecting the computer icon on the right.
Since managers are a per-pipeline thing, this indicator is actually driven by whether that machine is running an agent container. Agents are similar to managers, but they're not pipeline-specific. They're documented in more detail in Agents.
To see the machine that launched a local pipeline, hover over the icon to the left of that pipeline's entry.
The first entry tells you where its containers will run.
The second entry gives the host that will be used for filesystem-dependent features.
In local mode, these will always be the same machine.
The filled-in circle indicates that the host is connected and ready for action.
Once it's launched, you can use conducto.com/app to control a pipeline from anywhere. And if your pipeline is in local mode, you can do it for free.
We think that this opens some interesting doors. Let's point out just two:
Typical CI/CD pipelines are triggered by events in source control. So you usually need to commit a change before your CI can tell you that it had a problem. When this happens, the CI usually starts from the beginning--even if the relevant tests happen much later.
Conducto can short-circuit this delay by letting you manually run parts of your CI in a targeted way--even against uncommitted changes. Live debug makes this especially easy because you don't even need wait for an image to build before running your new changes in a CI context.
And if the pipeline is local, it's free, so you can iterate as often as you like without thinking about billing or quotas.
You can configure Conducto to listen for cloud events (say, new pull requests on GitHub) and respond to them with local pipelines (that perform CI/CD tasks). If you can dedicate a local machine to the task, you can stop paying for cloud-hosted CI/CD.
If you later decide you want your CI/CD in the cloud after all, we support that too.
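One way to picture the event-driven setup above: a small HTTP listener that reacts to a webhook by launching a local pipeline. This is a bare stdlib sketch with a simplified, hypothetical payload shape; Conducto's integrations handle this wiring for you.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def launch_local_pipeline(repo, commit):
    """Placeholder: here you'd invoke something like
    `python pipeline.py --local` for the given commit."""
    print(f"would launch local pipeline for {repo}@{commit}")

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the webhook payload (shape loosely modeled on a push
        # event, but simplified for illustration).
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        launch_local_pipeline(payload.get("repository", "?"),
                              payload.get("after", "?"))
        self.send_response(202)
        self.end_headers()

    def log_message(self, *args):  # keep the sketch quiet
        pass

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), WebhookHandler).serve_forever()
```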
Sometimes, local mode is not convenient. We'll show you how to enable cloud mode, and what to expect from it. Then we'll look at an example of how it can allow your pipeline to scale beyond what local resources are likely to be available.
To enable cloud mode, access your org's settings.
On the billing tab, you'll see Conducto's pricing information. Keep in mind that cloud pipelines don't incur CPU or Memory costs except while they are running. So the total monthly cost will depend on how often you expect to run cloud pipelines, and on the resource dependencies that you define for them.
If this looks good, then click the button at the bottom. It will take you to a page that collects a billing address. From there, you'll head to Stripe so that you can add a payment method. Then you can head back to Conducto.
Once billing is set up through Stripe, you can enable cloud mode from the billing tab.
After doing this, all of your org's users will be able to launch pipelines in cloud mode. Hovering over the icon for such a pipeline will show you that AWS resources are used when it runs.
When you launch a cloud pipeline from your computer, it will still need to mount parts of the local filesystem to support the following features:
If your pipeline doesn't need those, then you don't need to worry about which machines are associated with your cloud pipelines. Otherwise, the features above will use the filesystem of the machine that launched it--much like local pipelines do. Access in these cases is provided by an agent.
When a Conducto integration launches a pipeline in cloud mode, none of your machines are in the loop. We say that these pipelines are rootless because no part of the local filesystem is mounted in the container.
For rootless pipelines, the features listed in the above section won't be available. Since they don't depend on any particular filesystem, you can be sure they're not influenced by some detail that's not included in source control. This makes rootless cloud pipelines ideal to serve as a single source of truth for a team (which is what you typically want for CI/CD).
In Controlling a Pipeline we used a local pipeline to compare strategies for using parallelism to speed up a job with five steps. We found that five parallel containers are faster than completing five tasks sequentially in a single container. But what if we want to ask: How much faster would 50 parallel containers be?
This is a job for cloud mode, which is great when you want more muscle than you have locally.
To explore this question with 50 tasks instead of 5, check out the other pipeline definition in the compression race example:
It accepts a parameter which determines how many parallel nodes to use. Unless you have 50 processor cores at hand, you'll want to run it in cloud mode:
python scale.py race 50 --cloud
This will launch a pipeline with 50 parallel nodes whose commands will run unconstrained by your local hardware.
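The intuition scales the same way it did with five tasks. As a toy stand-in for the compression race (sleeps instead of real compression work), here's a stdlib sketch comparing sequential and parallel execution of N identical tasks:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def task(_):
    """Stand-in for one unit of work (e.g. compressing one chunk)."""
    time.sleep(0.05)

def timed(fn):
    """Return how long fn() takes, in seconds."""
    start = time.monotonic()
    fn()
    return time.monotonic() - start

N = 10  # toy stand-in for the 50 nodes in scale.py
sequential = timed(lambda: [task(i) for i in range(N)])
with ThreadPoolExecutor(max_workers=N) as pool:
    parallel = timed(lambda: list(pool.map(task, range(N))))

print(f"sequential: {sequential:.2f}s  parallel: {parallel:.2f}s")
```

Locally, N is capped by your hardware; in cloud mode each node gets its own container, so the same shape of speedup holds at 50 nodes and beyond.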
If your task can split into many parallel pieces, then taking a similar approach with a cloud pipeline might help you get it done in record time.
If you have a local CPU with cycles to spare, then pipelines launched in local mode will put them to use. Once they're launched, conducto.com/app serves as dashboard and control panel for these pipelines. This leverages your local hardware, so it doesn't cost us much; we make it available free of charge.
To run your pipelines in the cloud, we'll need a way to bill you for that usage. Once that's set up, you can dial up as much muscle as you need.