Pipeline Management

Creating and managing pipelines

After you start a pipeline, you will be redirected to the pipeline details view.

pipelineDetailsView

This page allows you to manage your pipeline's job graph.

As soon as you start editing, you enter edit mode. A black "Changes pending" bar appears at the top, as in the following screen:

createNewPipeline

When you create a new pipeline, you start directly in edit mode.

Pipeline job graph

The pipeline job graph is a visual representation of your pipeline. It is composed of the following elements:

  • Sources
  • Filters
  • Parser
  • Data warehouse
  • Log reducer
  • Sinks

Sources

Sources are the entry points of your pipeline: they are where your logs come from. You can have multiple sources in a pipeline.

A source can be an integration like Datadog.

If you don't have any integrations at this point, you will see the following screen:

createIntegrationFromSource

Click on the create integration button to create a new integration.

createDatadogIntegration

Once you have created your integration, you can select it as a source for your pipeline by clicking on the add source button.

pipelineAddSource

Once added, you can see the source in the pipeline job graph. For Datadog Agents, you will need to add the ingestion URL to your agent configuration.

pipelineSources
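
For example, a minimal sketch of the relevant datadog.yaml settings might look like the following. The placeholder URL is an assumption for illustration; copy the actual ingestion URL shown in your Grepr source details:

```yaml
# datadog.yaml — a minimal sketch, not a complete Agent configuration.
# Replace the placeholder with the ingestion URL from your Grepr source details.
logs_enabled: true
logs_config:
  use_http: true
  # Overrides the endpoint the Agent ships logs to, as host:port.
  logs_dd_url: "<YOUR_GREPR_INGESTION_URL>:443"
```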

Filters

Filters let you drop logs that you don't want to keep. You can have multiple filters in a pipeline.

There are three places where you can add a filter, depending on where in the pipeline you want to drop logs:

  • After the source
  • After the parser
  • After the log reducer

To add a filter, click on the add filter button.

pipelineAddFilter

You will then see an input where you can enter your query. This is a Datadog-syntax query for the logs you would like to keep; everything else is dropped.
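
For example, the following query (the service name and statuses are illustrative) keeps only warning and error logs from a payments service; any other log reaching this filter is dropped:

```
service:payments status:(error OR warn)
```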

Parser

Grepr allows you to add parsing rules to your logs using Grok patterns. To add a parser, click on the add parser button.

pipelineAddParser

You will then see an input where you can put your parsing rules.
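
For example, a rule like the sketch below (the field names are illustrative) uses standard Grok patterns to extract a timestamp, log level, and message from a line such as 2024-05-01T12:00:00Z ERROR connection refused:

```
%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level} %{GREEDYDATA:message}
```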

Data warehouse

The data warehouse is where your raw logs will be stored before being reduced. To add a data warehouse, click on the add data warehouse button. This will allow you to select a data warehouse integration.

pipelineAddDataWarehouse

Log reducer

The log reducer is used to reduce the volume of logs that are stored.

pipelineLogReducer

In the log reducer, you can choose how to group the logs and set the aggregation threshold at which log reduction begins.

pipelineLogReducerSettings
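
As a rough illustration of what reduction means (the message format and count below are assumptions for the example, not Grepr's actual output):

```
# Before reduction: many similar raw messages in one group
ERROR connection refused to db-1
ERROR connection refused to db-2
ERROR connection refused to db-3

# After the group crosses the aggregation threshold, duplicates are
# summarized into a single aggregated record
ERROR connection refused to <host>   (aggregated from 100 similar messages)
```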

Sinks

Sinks are the exit points of your pipeline: they are where your reduced logs are sent.

You can choose your sink by clicking on the add sink button.

pipelineAddSink

You can also specify additional tags that you want added to your logs. By default, we add processor:grepr and pipeline:{YOUR_PIPELINE_NAME}.

Once you have added all the elements to your pipeline, you can click on the save button to save your pipeline!