Hi guys!
First of all, thank you for your massive amount of work: Terraform is improving every day.
I'm using Terraform to provision DynamoDB tables. Currently, read_capacity and write_capacity are required arguments, so you have to specify initial values for the read and write capacity:
resource "aws_dynamodb_table" "accounts" {
name = "foo-staging-accounts"
read_capacity = 1
write_capacity = 1
hash_key = "ACC"
attribute {
name = "ACC"
type = "S"
}
attribute {
name = "FBID"
type = "S"
}
global_secondary_index {
name = "fbid-index"
read_capacity = 1
write_capacity = 1
hash_key = "FBID"
projection_type = "ALL"
}
}
The problem is that I'm using a tool called Dynamic DynamoDB to automatically adjust the provisioned capacities based on the actual consumed capacity. But whenever I plan or apply changes with Terraform, it tries to reset the capacities to the values in my .tf files. With the example above, it will always try to set the read and write capacities back to 1 (for the global secondary index too), even if Dynamic DynamoDB raised them because of a traffic increase.
I would love to solve this by adding a new argument to the aws_dynamodb_table resource: something like update_capacities (or perhaps two separate arguments, update_read_capacity and update_write_capacity). If set to false, Terraform would not try to update the capacities once the table has already been created. If the table does not exist yet, Terraform would create it as usual, setting the initial capacities accordingly. A rough sketch of what this could look like follows below.
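To illustrate the idea, here is a sketch of how the hypothetical update_capacities argument could be used; this argument does not exist today, and the name is only a placeholder for the proposal:

```hcl
resource "aws_dynamodb_table" "accounts" {
  name           = "foo-staging-accounts"
  read_capacity  = 1   # only used when the table is first created
  write_capacity = 1   # only used when the table is first created
  hash_key       = "ACC"

  # Hypothetical new argument: skip capacity updates on existing tables
  update_capacities = false

  attribute {
    name = "ACC"
    type = "S"
  }
}
```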
What do you think, guys? Do you have a better idea? How would you solve this issue without touching Terraform code?
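For reference, one workaround I can think of that avoids touching Terraform code would be the lifecycle ignore_changes meta-argument, assuming it applies to these attributes; a minimal sketch (the exact syntax may vary between Terraform versions):

```hcl
resource "aws_dynamodb_table" "accounts" {
  name           = "foo-staging-accounts"
  read_capacity  = 1
  write_capacity = 1
  hash_key       = "ACC"

  attribute {
    name = "ACC"
    type = "S"
  }

  lifecycle {
    # Ignore capacity changes made outside Terraform (e.g. by Dynamic DynamoDB)
    ignore_changes = ["read_capacity", "write_capacity"]
  }
}
```

I'm not sure whether this would also cover the capacities inside global_secondary_index blocks, though, which is part of why a dedicated argument still seems attractive to me.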