Advanced

What's Your Pipeline For?

Pipelines organize units of work into a DAG; in Conducto's case, a tree. In a typical pipeline, that work has one of these goals:

  • building, testing, and deploying applications (CI/CD)
  • collecting, transforming, and learning from data (Data Science)

Conducto's way of organizing work probably has applications beyond CI/CD and Data Science. For instance, imagine a kitchen robot that makes brownies for you. If progress stopped because your pantry was out of cocoa powder, a good way to see what went wrong would be to navigate the recipe's failed pipeline. After providing the missing ingredient, you could reset the failed node and the robot would pick up where it left off.
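
To make that concrete, here is a minimal sketch of such a recipe as a Conducto tree, written with the standard conducto Python API (co.Serial, co.Parallel, co.Exec, co.main). The node names and echo commands are invented for illustration; a real robot would run real commands.

# A minimal sketch of a Conducto tree for the brownie recipe.
# All commands are placeholders.
import conducto as co

def brownies() -> co.Serial:
    with co.Serial(image="bash:5") as recipe:
        with co.Parallel(name="gather_ingredients"):
            co.Exec("echo fetch flour", name="flour")
            # A real check here could fail and stop the tree.
            co.Exec("echo fetch cocoa powder", name="cocoa")
        co.Exec("echo mix and bake", name="bake")
    return recipe

if __name__ == "__main__":
    co.main(default=brownies)

If the cocoa node failed, you would restock the pantry, reset that node in the app, and the rest of the tree would resume from there.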

If you're using Conducto for something novel, like controlling a robot, the articles in Basics will explain the building blocks so that you can start combining them in creative ways.

On the other hand, the Advanced section is all about helping you use Conducto in ways that we've anticipated. For now, that's mostly CI/CD and Data Science.

CI/CD

However your code looks, a few things probably need to happen between saving your changes and putting them in front of users. A CI/CD pipeline performs those steps. Nodes in such a pipeline might do some of these things:

  • lint/style check
  • build the code
  • run unit tests
  • run integration tests
  • sign the built artifact
  • generate docs and release notes
  • deploy the artifact to production

Depending on your application, you might need to repeat those steps for many services. There are a variety of ways to arrange your Conducto pipeline so that it automates all of this while keeping problems easy to find; one such arrangement is sketched below.
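
Here is a minimal sketch, assuming a handful of services that each need the same lint/build/test treatment: each service gets its own Serial subtree, the subtrees run in Parallel, and a single deploy step follows. The service names, image, and echo commands are placeholders for your own.

# A sketch of one possible CI/CD layout: per-service Serial subtrees
# running in Parallel, followed by a deploy step.
import conducto as co

SERVICES = ["api", "worker", "frontend"]  # placeholders for your services

def cicd() -> co.Serial:
    with co.Serial(image="python:3.9") as root:
        with co.Parallel(name="services"):
            for svc in SERVICES:
                with co.Serial(name=svc):
                    co.Exec(f"echo lint {svc}", name="lint")
                    co.Exec(f"echo build {svc}", name="build")
                    co.Exec(f"echo test {svc}", name="test")
        co.Exec("echo deploy the artifacts", name="deploy")
    return root

if __name__ == "__main__":
    co.main(default=cicd)

With this shape, a failing test in one service shows up as a single red node under that service, and resetting just that subtree reruns only the affected steps.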

Data Science

"Data Science" can mean a wide variety of things, but a data science pipeline typically looks something like this:

  • get the data
  • ensure its integrity
  • analyze it somehow
  • provide helpful results

Conducto can help ensure that your data science pipeline doesn't waste time repeating steps it has already completed, and can parallelize parts of your workflow for quicker execution. A sketch of one such layout follows.
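
As an illustration only, here is how those four steps might map onto a tree, with the analysis step fanned out in Parallel. Every command below is a placeholder.

# A sketch of a data science tree following the steps above.
import conducto as co

def data_pipeline() -> co.Serial:
    with co.Serial(image="python:3.9") as root:
        co.Exec("echo download the raw data", name="get")
        co.Exec("echo check schema and row counts", name="integrity")
        with co.Parallel(name="analyze"):
            co.Exec("echo fit model A", name="model_a")
            co.Exec("echo fit model B", name="model_b")
        co.Exec("echo render the report", name="results")
    return root

if __name__ == "__main__":
    co.main(default=data_pipeline)

Because each step is its own node, recovering from a failure under "analyze" doesn't require rerunning "get" or "integrity", and the two models fit at the same time.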

Integrations

We provide integrations that you can enable for your Conducto org. Each integration allows Conducto to communicate with some other platform (like GitHub or Slack).

This is useful if you want to:

  • create or rerun pipelines based on an event outside of Conducto.
  • keep other software informed about what's up with your Conducto pipelines.

Repo Config

If you need to tell Conducto something about a repo that you control, place a file called conducto.cfg at the root of that repo. This file might contain details specific to an integration.
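
The sections and keys depend on which integration you're configuring, so check that integration's article for the real names. Treat the following as a purely hypothetical sketch of where the file lives and its general shape, not as real key names.

# conducto.cfg, committed at the root of your repo.
# The section and key below are hypothetical placeholders; the integration
# you enable defines the real names it reads from this file.
[some_integration]
some_setting = some_value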

Caching

Conducto nodes run their commands in containers, and changes to a container's filesystem don't usually outlive the container. Some tools like to stash files in the filesystem for later use, which creates a problem.

If Conducto calls such a tool in one container, its stashed files won't be available in a later container. The solution is to use Conducto's data stores to cache these files between calls, as sketched below.
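
Here is a minimal sketch of that pattern, assuming the co.data.pipeline store and its put/get/exists methods. The tool name, cache path, and archive handling are placeholders; check the data stores article for the exact API before copying this.

# Cache a tool's stashed files in the pipeline data store so a later
# container can restore them. All names and paths are placeholders.
import subprocess
import conducto as co

CACHE_NAME = "some-tool-cache.tar.gz"   # key in the pipeline data store
CACHE_DIR = "/root/.cache/some_tool"    # wherever the tool stashes files

def run_with_cache() -> None:
    # Restore files stashed by an earlier container, if any exist.
    if co.data.pipeline.exists(CACHE_NAME):
        co.data.pipeline.get(CACHE_NAME, "/tmp/cache.tar.gz")
        subprocess.run(["tar", "xzf", "/tmp/cache.tar.gz", "-C", "/"], check=True)

    # Run the tool; it reads and writes CACHE_DIR as it normally would.
    subprocess.run(["some_tool", "--do-work"], check=True)

    # Save the stash so a later container's node can pick it up.
    subprocess.run(["tar", "czf", "/tmp/cache.tar.gz", CACHE_DIR], check=True)
    co.data.pipeline.put(CACHE_NAME, "/tmp/cache.tar.gz")

if __name__ == "__main__":
    run_with_cache()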

Still Want More?

Sorry, that's all for now. But keep an eye on this page; we'll be adding to it in the near future. In the meantime, if you've got a task in mind for Conducto and you're not sure how to approach it, let us know. We're interested to hear about how you want to use Conducto.

If Conducto is the right tool for the job, you'll find that we're pretty excited about finding ways to make that job easier.

Chat with us for a live demo right now!
(If we're awake 😴)
