Terraform is a popular infrastructure-as-code software tool built by HashiCorp. You can use it to provision all kinds of infrastructure and services, including New Relic resources such as alerts. In this guide, you'll learn how to set up New Relic alerts with Terraform; to set up other New Relic resources, review the New Relic Terraform provider documentation.
Tip
Simplify your workflow by bringing the Terraform documentation right into your IDE with New Relic's CodeStream IDE extension. Add templates for any New Relic resource type with just a click.
Install CodeStream for VS Code, Visual Studio or any JetBrains IDE, and then look for the wrench icon at the top of the CodeStream pane.
You'll start by provisioning an alert policy, four alert conditions, and a notification channel. The four alert conditions are based on the four golden signals of monitoring introduced in Google’s Site Reliability Engineering book:
- Latency: The amount of time it takes your application to service a request.
- Traffic: The amount of requests your system receives.
- Errors: The rate of requests that fail.
- Saturation: The stress on resources to meet the demands of your application.
Before you begin
To use this guide, you should have some basic knowledge of both New Relic and Terraform. If you haven't deployed a New Relic open source agent yet, install New Relic for your application. Also, install the Terraform CLI.
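If you want a quick sanity check that the Terraform CLI is installed and on your PATH before proceeding, you can print its version (the exact output varies by release):

```bash
# Confirm the Terraform CLI is installed and available
terraform -version
```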
Bootstrap Terraform and the New Relic provider
Start by initializing a working directory and creating a Terraform configuration file:
```bash
mkdir terraform-project && cd terraform-project
touch main.tf
```
Next, instruct Terraform to install and use the New Relic provider by setting the `terraform` and `required_providers` blocks in main.tf:
```hcl
terraform {
  # Require Terraform version 1.0 (recommended)
  required_version = "~> 1.0"

  # Require the latest 2.x version of the New Relic provider
  required_providers {
    newrelic = {
      source  = "newrelic/newrelic"
      version = "~> 2.0"
    }
  }
}
```
In this code block, you're requiring Terraform version 1.x and the latest 2.x version of the New Relic provider. Using the right version constraints for your setup provides better stability in your Terraform runs.
Now that you've set your Terraform and New Relic provider versions, you need to configure the New Relic provider.
Configure the New Relic provider
With the `terraform` block all set, configure the New Relic provider with the following items:
- Your New Relic account ID.
- Your New Relic user key. Most user keys begin with the prefix `NRAK-`.
- Your New Relic region. Your region is `US` if your New Relic URL is one.newrelic.com, and `EU` if your URL is at one.eu.newrelic.com.

In main.tf, set those values on the provider:
provider "newrelic" {account_id = 12345 # Your New Relic account IDapi_key = "NRAK-***" # Your New Relic user keyregion = "US" # US or EU (defaults to US)}By setting these values on the New Relic provider, you're configuring that provider to make changes on behalf of your account through New Relic APIs.
Tip
You can also configure the New Relic provider using environment variables. This is a useful way to set default values for your provider configuration.
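For example, the provider can read its settings from environment variables instead of hard-coded values in main.tf, which also helps keep your user key out of source control. A minimal sketch (check the provider documentation for the full list of supported variables):

```bash
# Set provider configuration via environment variables instead of main.tf
export NEW_RELIC_ACCOUNT_ID=12345
export NEW_RELIC_API_KEY="NRAK-***"
export NEW_RELIC_REGION="US"
```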
For more information about configuring the New Relic provider, see the official provider documentation.
With your New Relic provider configured, initialize Terraform:
```bash
terraform init
```

When Terraform finishes installing and registering the New Relic provider, you'll receive a success message and some actionable next steps, such as running `terraform plan`. Before you can run `terraform plan`, however, you need to create your resources.
Create a New Relic alert policy with the golden signal alerts
With the New Relic provider configured and initialized, you can define an alerting strategy for your application.
Since you're targeting a specific application, use a `newrelic_entity` data source to fetch the application information from New Relic so you can reference that data elsewhere in the configuration:
data "newrelic_entity" "example_app" { name = "Your App Name" # Must be an exact match to your application name in New Relic domain = "APM" # or BROWSER, INFRA, MOBILE, SYNTH, depending on your entity's domain type = "APPLICATION"}
Next, create a `newrelic_alert_policy`. Give the policy a dynamic name based on your application's name. This helps specify the scope of the policy:
resource "newrelic_alert_policy" "golden_signal_policy" { name = "Golden Signals - ${data.newrelic_entity.example_app.name}"}
At this point, you should be able to test your configuration with a dry run:
```bash
terraform plan
```
You should see output that displays Terraform's execution plan. The plan contains the actions Terraform performs when you run `terraform apply`:
```
# Example output
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # newrelic_alert_policy.golden_signal_policy will be created
  + resource "newrelic_alert_policy" "golden_signal_policy" {
      + account_id          = (known after apply)
      + id                  = (known after apply)
      + incident_preference = "PER_POLICY"
      + name                = "Golden Signals - Your App Name"
    }

Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
```
In this case, the plan shows you that Terraform will create a new alert policy when you run `terraform apply`. After verifying the details, execute the plan to provision the alert policy resource in your New Relic account:
```bash
terraform apply
```
Every time you apply changes, Terraform asks you to confirm the actions you've told it to run. Type "yes".
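If you're running Terraform non-interactively, for example in a script, the CLI's `-auto-approve` flag skips this confirmation prompt; use it with care:

```bash
# Apply without the interactive "yes" confirmation (use with care)
terraform apply -auto-approve
```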
While it's running, Terraform sends logs to your console:
```
# Example output of `terraform apply`
newrelic_alert_policy.golden_signal_policy: Creating...
newrelic_alert_policy.golden_signal_policy: Creation complete after 1s [id=111222333]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
```
Log in to New Relic and navigate to Alert Policies to confirm that Terraform created your new policy.
As you move through the next steps of creating alert conditions, you can run `terraform apply` after configuring each resource. Refresh your alert policy webpage to see the new resources.
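If you'd rather provision one resource at a time as you follow along, Terraform's `-target` flag scopes an apply to a single resource; for example:

```bash
# Apply only the alert policy, ignoring other planned changes
terraform apply -target=newrelic_alert_policy.golden_signal_policy
```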
Provision alert conditions based on the four golden signals
Next, add alert conditions for your application based on the four golden signals: latency, traffic, errors, and saturation. Apply these alert conditions to the alert policy you created in the previous step.
Latency
Most folks want to avoid slow response times. You can create a `newrelic_alert_condition` that triggers if the overall response time of your application rises above five seconds for five minutes:
```hcl
# Response time
resource "newrelic_alert_condition" "response_time_web" {
  policy_id       = newrelic_alert_policy.golden_signal_policy.id
  name            = "High Response Time (Web) - ${data.newrelic_entity.example_app.name}"
  type            = "apm_app_metric"
  entities        = [data.newrelic_entity.example_app.application_id]
  metric          = "response_time_web"
  runbook_url     = "https://www.example.com"
  condition_scope = "application"

  term {
    duration      = 5
    operator      = "above"
    priority      = "critical"
    threshold     = "5"
    time_function = "all"
  }
}
```
Note that you're linking this alert condition to the previously configured alert policy with `policy_id`.
The `newrelic_alert_condition` resource is deprecated, so you'll want to use NRQL alert conditions going forward. The following configuration creates a NRQL alert condition that performs the same function as the one above:
```hcl
# Response time - Create Alert Condition
resource "newrelic_nrql_alert_condition" "response_time_alert" {
  policy_id                    = newrelic_alert_policy.golden_signal_policy.id
  type                         = "static"
  name                         = "Response Time - ${data.newrelic_entity.example_app.name}"
  description                  = "High Transaction Response Time"
  runbook_url                  = "https://www.example.com"
  enabled                      = true
  violation_time_limit_seconds = 3600

  nrql {
    query = "SELECT filter(average(newrelic.timeslice.value), WHERE metricTimesliceName = 'HttpDispatcher') OR 0 FROM Metric WHERE appId IN (${data.newrelic_entity.example_app.application_id}) AND metricTimesliceName IN ('HttpDispatcher', 'Agent/MetricsReported/count')"
  }

  critical {
    operator              = "above"
    threshold             = 5
    threshold_duration    = 300
    threshold_occurrences = "ALL"
  }
}
```
Traffic
Traffic represents how much demand is placed on your system at any given moment. Throughput is a metric that measures how much traffic goes to your application. Create a `newrelic_alert_condition` that triggers if the overall throughput of your application falls below five requests per minute for five minutes:
```hcl
# Low throughput
resource "newrelic_alert_condition" "throughput_web" {
  policy_id       = newrelic_alert_policy.golden_signal_policy.id
  name            = "Low Throughput (Web)"
  type            = "apm_app_metric"
  entities        = [data.newrelic_entity.example_app.application_id]
  metric          = "throughput_web"
  condition_scope = "application"

  # Define a critical alert threshold that will
  # trigger after 5 minutes below 5 requests per minute.
  term {
    priority      = "critical"
    duration      = 5
    operator      = "below"
    threshold     = "5"
    time_function = "all"
  }
}
```
This type of alert is useful when you expect a constant baseline of traffic throughout the day — a drop off in traffic can indicate a problem.
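As with the latency condition, `newrelic_alert_condition` is deprecated, so you may prefer a NRQL-based version of this check. Here's a minimal sketch; the query below is an assumption that counts your application's Transaction events, so adjust it to match how your app reports traffic:

```hcl
# Low throughput - NRQL-based sketch (the query is illustrative; verify it against your data)
resource "newrelic_nrql_alert_condition" "throughput_alert" {
  policy_id                    = newrelic_alert_policy.golden_signal_policy.id
  type                         = "static"
  name                         = "Low Throughput (Web) - ${data.newrelic_entity.example_app.name}"
  enabled                      = true
  violation_time_limit_seconds = 3600

  nrql {
    query = "SELECT count(*) FROM Transaction WHERE appId = ${data.newrelic_entity.example_app.application_id}"
  }

  critical {
    operator              = "below"
    threshold             = 5
    threshold_duration    = 300
    threshold_occurrences = "ALL"
  }
}
```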
Errors
If your application's error rate spikes, you need to know about it. Create a `newrelic_alert_condition` that triggers if your application's error rate rises above 5% for five minutes:
```hcl
# Error percentage
resource "newrelic_alert_condition" "error_percentage" {
  policy_id       = newrelic_alert_policy.golden_signal_policy.id
  name            = "High Error Percentage"
  type            = "apm_app_metric"
  entities        = [data.newrelic_entity.example_app.application_id]
  metric          = "error_percentage"
  runbook_url     = "https://www.example.com"
  condition_scope = "application"

  # Define a critical alert threshold that will
  # trigger after 5 minutes above a 5% error rate.
  term {
    priority      = "critical"
    duration      = 5
    operator      = "above"
    threshold     = "5"
    time_function = "all"
  }
}
```
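Again, if you want a NRQL-based equivalent instead of the deprecated resource, a sketch along these lines should work; the `percentage(count(*), WHERE error IS true)` query is an assumption, so validate it against your Transaction data first:

```hcl
# Error percentage - NRQL-based sketch (query is illustrative)
resource "newrelic_nrql_alert_condition" "error_percentage_alert" {
  policy_id                    = newrelic_alert_policy.golden_signal_policy.id
  type                         = "static"
  name                         = "High Error Percentage - ${data.newrelic_entity.example_app.name}"
  enabled                      = true
  violation_time_limit_seconds = 3600

  nrql {
    query = "SELECT percentage(count(*), WHERE error IS true) FROM Transaction WHERE appId = ${data.newrelic_entity.example_app.application_id}"
  }

  critical {
    operator              = "above"
    threshold             = 5
    threshold_duration    = 300
    threshold_occurrences = "ALL"
  }
}
```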
Saturation
Saturation represents how "full" your service is and can take many forms, such as CPU time, memory allocation, or queue depth. In this example, assume you already have a New Relic infrastructure agent installed on the hosts serving your application, and you want to configure an alert for when CPU utilization spikes above a certain threshold:
```hcl
# High CPU usage
resource "newrelic_infra_alert_condition" "high_cpu" {
  policy_id   = newrelic_alert_policy.golden_signal_policy.id
  name        = "High CPU usage"
  type        = "infra_metric"
  event       = "SystemSample"
  select      = "cpuPercent"
  comparison  = "above"
  runbook_url = "https://www.example.com"
  where       = "(`applicationId` = '${data.newrelic_entity.example_app.application_id}')"

  # Define a critical alert threshold that will
  # trigger after 5 minutes above 90% CPU utilization.
  critical {
    duration      = 5
    value         = 90
    time_function = "all"
  }
}
```
For the infrastructure alert, you created a `newrelic_infra_alert_condition` that triggers if the aggregate CPU usage on these hosts rises above 90% for five minutes.
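If you'd rather standardize on NRQL conditions here too, here's a minimal sketch of the same check, built on the `SystemSample` event and `cpuPercent` attribute used above. The `WHERE` clause mirrors the infra condition's filter and may need adjusting for how your hosts are tagged:

```hcl
# High CPU usage - NRQL-based sketch (filter is illustrative)
resource "newrelic_nrql_alert_condition" "high_cpu_alert" {
  policy_id                    = newrelic_alert_policy.golden_signal_policy.id
  type                         = "static"
  name                         = "High CPU usage - ${data.newrelic_entity.example_app.name}"
  enabled                      = true
  violation_time_limit_seconds = 3600

  nrql {
    query = "SELECT average(cpuPercent) FROM SystemSample WHERE applicationId = ${data.newrelic_entity.example_app.application_id}"
  }

  critical {
    operator              = "above"
    threshold             = 90
    threshold_duration    = 300
    threshold_occurrences = "ALL"
  }
}
```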
Get notified when an alert triggers
Now that you've configured some important alert conditions, add a notification destination and a notification channel to your alert policy to ensure the proper folks get notified when an alert triggers. To do so, use a `newrelic_notification_destination` and a `newrelic_notification_channel`.
To begin, create an email notification destination to configure your recipients list, which can be a specific person or a team. This will be used when creating a notification channel:
resource "newrelic_notification_destination" "team_email_destination" { name = "email-example" type = "EMAIL"
property { key = "email" value = "team.member1@email.com,team.member2@email.com,team.member3@email.com" }}
If you want to specify multiple emails, use a comma-delimited list of emails.
Then, create an email notification channel template to send alert notifications to your email. Associate the channel with the destination ID:
resource "newrelic_notification_channel" "team_email_channel" { name = "email-example" type = "EMAIL" destination_id = newrelic_notification_destination.team_email_destination.id product = "IINT"
property { key = "subject" value = "New Subject" }}
Last but not least, to apply the notification channel to your alert policy, create a `newrelic_workflow`:
resource "newrelic_workflow" "team_workflow" { name = "workflow-example" enrichments_enabled = true destinations_enabled = true workflow_enabled = true muting_rules_handling = "NOTIFY_ALL_ISSUES"
enrichments { nrql { name = "Log" configurations { query = "SELECT count(*) FROM Metric" } } }
issues_filter { name = "filter-example" type = "FILTER"
predicate { attribute = "accumulations.sources" operator = "EXACTLY_MATCHES" values = [ "newrelic" ] } }
destination { channel_id = newrelic_notification_channel.team_email_channel.id }}
A `newrelic_workflow` links the notification channel you just created to your alerts.
To finalize your notifications configuration, run `terraform apply` one last time to make sure all of your configured resources are up to date.
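If you want to double-check which resources Terraform now manages, you can list everything in its state:

```bash
# List all resources tracked in Terraform state
terraform state list
```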
Get notified when an alert triggers (deprecated)
Important
Alert channels are deprecated and won't be supported in future versions.
Now that you've configured some important alert conditions, add a notification channel to your alert policy to ensure the proper folks get notified when an alert triggers. To do so, use a `newrelic_alert_channel`.
To begin, create an email notification channel to send alert notifications to your email. Use this when you want to notify a specific person or team when alerts are triggered:
resource "newrelic_alert_channel" "team_email" { name = "example" type = "email"
config { recipients = "yourawesometeam@example.com" include_json_attachment = "1" }}
If you want to specify multiple `recipients`, use a comma-delimited list of emails.
Last but not least, to apply the notification channel to your alert policy, create a `newrelic_alert_policy_channel`:
resource "newrelic_alert_policy_channel" "golden_signals" { policy_id = newrelic_alert_policy.golden_signal_policy.id channel_ids = [newrelic_alert_channel.team_email.id]}
A `newrelic_alert_policy_channel` links the notification channel you just created to your alert policy.
To finalize your golden signal alerts configuration, run `terraform apply` one last time to make sure all of your configured resources are up to date.
Extra Credit
`newrelic_alert_channel` supports several types of notification channels, including email, Slack, and PagerDuty. So, if you want to explore this more, try creating an alert channel for a second channel type, such as Slack:
```hcl
# Slack notification channel
resource "newrelic_alert_channel" "slack_notification" {
  name = "slack-example"
  type = "slack"

  config {
    # Use the URL provided in your New Relic Slack integration
    url     = "https://hooks.slack.com/services/XXXXXXX/XXXXXXX/XXXXXXXXXX"
    channel = "your-slack-channel-for-alerts"
  }
}
```
Before you `apply` this change, you need to add the New Relic Slack App to your Slack account and select a Slack channel to send the notification. With this new alert channel, triggered alerts send notifications to the Slack channel of your choice.
Conclusion
As your team evaluates the alerting system you've put in place, you may find that you need to tweak configuration values, such as the alert threshold and duration. If you manage your Terraform project in a remote repository, you can submit a pull request so your team can review these changes alongside the rest of your code contributions.
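One way to make those tweaks easy to review is to lift tunable values into Terraform variables. For example, a hypothetical `response_time_threshold` variable could replace the hard-coded threshold in the latency condition, so a pull request changing it shows exactly one meaningful diff:

```hcl
# Hypothetical variable for a tunable alert threshold
variable "response_time_threshold" {
  description = "Critical response time threshold, in seconds"
  type        = number
  default     = 5
}

# Then reference it in the condition, e.g.:
#   threshold = var.response_time_threshold
```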
Tip
You may also want to consider automating this process in your CI/CD pipeline. See Terraform's recommended practices guide to learn more about its suggested workflow and how to evolve your provisioning practices.
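As a rough sketch, a CI job for this project might run the standard non-interactive checks before any apply; the exact steps will vary with your pipeline:

```bash
# Typical CI validation steps for a Terraform project
terraform fmt -check        # fail if files aren't canonically formatted
terraform init -input=false # initialize without interactive prompts
terraform validate          # check configuration syntax and internal consistency
terraform plan -input=false # preview changes non-interactively
```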