Using Integrations via the APIs
Connection details are stored in the payload field of the integration object. The payload is a JSON object containing the connection details specific to the integration type, along with any other configuration required to use the integration. On reads, the payload may contain additional read-only properties.
An integration can be created through the UI or the API and then referenced in a job configuration. Integrations can also be updated and deleted.
An example integration object is as follows:
{
"name": "datadog-integration",
"type": "datadog",
"version": "1",
"payload": {
"site": "datadoghq.com",
"filter_query": "service:my-service",
"additional_tags": "env:prod",
"app_key": "app_key",
"api_key": "api_key"
}
}
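For illustration, an integration like the one above can be created with a single HTTP POST. In the Python sketch below, the base URL, the /integrations path, and the bearer-token header are assumptions made for the example, not the documented Grepr API surface; substitute the endpoints and credentials from the API reference. The keys are omitted from the payload because they are set through separate APIs, as described in the Datadog section below.
import requests

# Placeholder values for illustration only; replace with the real Grepr API
# base URL, endpoint path, and credentials from the API reference.
GREPR_API_BASE = "https://api.grepr.example"
HEADERS = {"Authorization": "Bearer <your-token>"}

integration = {
    "name": "datadog-integration",
    "type": "datadog",
    "version": "1",
    "payload": {
        "site": "datadoghq.com",
        "filter_query": "service:my-service",
        "additional_tags": "env:prod",
    },
}

# Hypothetical endpoint path; the response is assumed to echo the created
# integration, including its id.
resp = requests.post(f"{GREPR_API_BASE}/integrations", json=integration, headers=HEADERS)
resp.raise_for_status()
integration_id = resp.json()["id"]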
Below are more details on each type of integration.
Datadog
A Datadog integration lets you connect to your Datadog account to source data from and sink data to Datadog. The connection requires a site URL, an API key, and an optional app key. You can also provide a filter query to limit the data read from Datadog when the integration is used to pull data, as well as additional tags to add when writing data to Datadog. To set up a Datadog integration using the APIs, first create the integration with the write payload, then create the API key and the app key (if required) using their dedicated APIs. The read payload contains the masked API key and app key, and the keys can be rotated through the API after creation.
An example write payload for a Datadog integration is as follows:
{
"site": "datadoghq.com",
"filter_query": "service:my-service",
"additional_tags": "env:prod"
}
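Continuing the flow described above, the API key and the optional app key are attached after the integration is created, and can later be rotated. The Python sketch below is illustrative only: the /api-key and /app-key paths and request bodies are assumptions, not documented routes; only the order of operations is taken from the text above.
import requests

GREPR_API_BASE = "https://api.grepr.example"        # placeholder base URL
HEADERS = {"Authorization": "Bearer <your-token>"}  # placeholder auth
integration_id = "<id-returned-on-creation>"

# Attach the Datadog API key; the path and body shape are assumptions.
requests.put(
    f"{GREPR_API_BASE}/integrations/{integration_id}/api-key",
    json={"api_key": "<your-datadog-api-key>"},
    headers=HEADERS,
).raise_for_status()

# Optionally attach the app key the same way; repeating the call with a new
# value is how a rotation would look under these assumptions.
requests.put(
    f"{GREPR_API_BASE}/integrations/{integration_id}/app-key",
    json={"app_key": "<your-datadog-app-key>"},
    headers=HEADERS,
).raise_for_status()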
When an integration is configured with an app key, Grepr scans the alerts in Datadog to pull any log queries currently in use. You can then tell Grepr to skip aggregation for these queries by adding them to the Log Reducer configuration. This helps ensure that adopting Grepr requires minimal changes to your existing Datadog setup. These exceptions appear in the Datadog integration read payload, for example:
{
"site": "datadoghq.com",
"filter_query": "service:my-service",
"additional_tags": "env:prod",
"app_key": "app_key",
"api_key": "api_key",
"logReducerExceptions": [
{"query": "tags:my-tag1"},
{"query": "tags:my-tag2"}
]
}
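To inspect these exceptions programmatically, read the integration back and look at its payload. The sketch below reuses the same placeholder base URL and path as the earlier examples; only the payload shape shown above is taken from the documentation.
import requests

GREPR_API_BASE = "https://api.grepr.example"        # placeholder base URL
HEADERS = {"Authorization": "Bearer <your-token>"}  # placeholder auth
integration_id = "<id-of-the-datadog-integration>"

# Hypothetical read endpoint; the payload shape matches the example above.
resp = requests.get(f"{GREPR_API_BASE}/integrations/{integration_id}", headers=HEADERS)
resp.raise_for_status()
for exception in resp.json()["payload"].get("logReducerExceptions", []):
    print(exception["query"])   # e.g. "tags:my-tag1"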
Data Warehouse
The Data Warehouse integration lets you connect to a Grepr-hosted data warehouse for sourcing and sinking data. No configuration is needed other than a name.
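Because only a name is needed, the create request is minimal. In the sketch below, the endpoint, the "dataWarehouse" type string, and the empty payload are assumptions for illustration; check the API reference for the exact values.
import requests

# Placeholder endpoint, auth, and type string; only "a name is all that is
# needed" comes from the documentation above.
requests.post(
    "https://api.grepr.example/integrations",
    json={"name": "my-data-warehouse", "type": "dataWarehouse", "version": "1", "payload": {}},
    headers={"Authorization": "Bearer <your-token>"},
).raise_for_status()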
S3 Data Warehouse
The S3 Data Warehouse integration lets you connect to your own S3 bucket for sourcing and sinking data. It requires the name of the S3 bucket to use for the data warehouse and, optionally, the region to connect to (the default is us-east-1). The payload for this integration is as follows:
{
"bucketName": "bucket-name",
"region": "us-east-1"
}
There are two ways to set up an S3 Data Warehouse integration:
- Using CloudFormation, which automatically creates all required resources.
- By creating the resources manually.
CloudFormation (Recommended)
Follow these steps to create an S3 Data Warehouse integration using CloudFormation; a Python sketch of the flow follows the list.
- Get the CloudFormation setup information using the Grepr S3 Data Warehouse API.
- Create the CloudFormation stack using the setup information.
- Create the integration with the bucket name from the setup information.
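In the sketch below, the boto3 create_stack call and waiter are standard AWS APIs, while the Grepr endpoints, the field names in the setup information (templateUrl, bucketName), and the s3DataWarehouse type string are assumptions made for illustration.
import boto3
import requests

GREPR_API_BASE = "https://api.grepr.example"        # placeholder base URL
HEADERS = {"Authorization": "Bearer <your-token>"}  # placeholder auth

# 1. Get the CloudFormation setup information (hypothetical path and fields).
setup = requests.get(f"{GREPR_API_BASE}/integrations/s3-data-warehouse/setup",
                     headers=HEADERS).json()

# 2. Create the CloudFormation stack from the setup information.
cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(
    StackName="grepr-s3-data-warehouse",
    TemplateURL=setup["templateUrl"],              # assumed field name
    Capabilities=["CAPABILITY_NAMED_IAM"],         # needed if the template creates IAM roles
)
cfn.get_waiter("stack_create_complete").wait(StackName="grepr-s3-data-warehouse")

# 3. Create the integration with the bucket name from the setup information.
requests.post(
    f"{GREPR_API_BASE}/integrations",              # hypothetical path
    json={
        "name": "s3-data-warehouse",
        "type": "s3DataWarehouse",                 # assumed type string
        "version": "1",
        "payload": {"bucketName": setup["bucketName"], "region": "us-east-1"},
    },
    headers=HEADERS,
).raise_for_status()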
Manual
Follow these steps to create an S3 Data Warehouse integration manually; a boto3 sketch of the bucket setup follows the list.
- Get the setup information from the Grepr S3 Data Warehouse API, providing the bucket name.
- Create an AWS S3 bucket with the following resource policy attached, making sure to replace {YOUR_BUCKET_NAME} with the bucket name from the setup information and {YOUR_ORG_NAME} with your organization name (e.g., if the URL you are using is https://name.app.grepr.io/, your org name is name):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::992382778380:role/customer-role-{YOUR_ORG_NAME}"
},
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": "arn:aws:s3:::{YOUR_BUCKET_NAME}"
},
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::992382778380:role/customer-role-{YOUR_ORG_NAME}"
},
"Action": [
"s3:DeleteObjectTagging",
"s3:PutObject",
"s3:GetObject",
"s3:PutObjectTagging",
"s3:DeleteObject"
],
"Resource": "arn:aws:s3:::{YOUR_BUCKET_NAME}/*"
}
]
}
- Create the integration with the bucket name from the setup information.
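The bucket-creation step can be scripted with boto3 as sketched below; create_bucket and put_bucket_policy are standard AWS calls, and the policy document mirrors the one above with the placeholders substituted. Creating the integration itself still goes through the Grepr API, as in the earlier examples.
import json
import boto3

bucket = "{YOUR_BUCKET_NAME}"   # bucket name from the setup information
org = "{YOUR_ORG_NAME}"         # e.g. "name" for https://name.app.grepr.io/
region = "us-east-1"

s3 = boto3.client("s3", region_name=region)

# Create the bucket; us-east-1 must omit the LocationConstraint.
if region == "us-east-1":
    s3.create_bucket(Bucket=bucket)
else:
    s3.create_bucket(Bucket=bucket,
                     CreateBucketConfiguration={"LocationConstraint": region})

# Attach the resource policy shown above, with the placeholders filled in.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::992382778380:role/customer-role-{org}"},
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": f"arn:aws:s3:::{bucket}",
        },
        {
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::992382778380:role/customer-role-{org}"},
            "Action": ["s3:DeleteObjectTagging", "s3:PutObject", "s3:GetObject",
                       "s3:PutObjectTagging", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{bucket}/*",
        },
    ],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))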