
[ISSUE] Issue with renaming databricks_catalog resource.  #3818

@mattthaber

Description


Configuration

Before running the code, someone manually created a catalog from a Delta Share, named "prod2_share". The goal here is to (1) import that catalog into Terraform, (2) rename it, and (3) create a workspace binding for it.

resource "databricks_catalog" "prod2_share" {
  name           = "prod2"
  share_name     = "prod-share"
  isolation_mode = "ISOLATED"
  provider_name  = "aws:us-east-1:xxxxx-xxxx-xxx-xxx-xxxxxxxxx"
}

import {
  provider = databricks.workspace

  to = databricks_catalog.prod2_share
  id = "prod2_share"
}

resource "databricks_catalog_workspace_binding" "prod2_shared" {
  provider = databricks.workspace

  securable_name = databricks_catalog.prod2_share.name
  workspace_id   = "XXXXXX"
}

The plan from the above configuration showed:

  # databricks_catalog.prod2_share will be updated in-place
  # (imported from "prod2_share")
  ~ resource "databricks_catalog" "prod2_share" {
      + force_destroy  = false
        id             = "prod2_share"
        isolation_mode = "ISOLATED"
        metastore_id   = "xxxxxxx-xxx-xxx-xxxx-xxxxxx"
      ~ name           = "prod2_share" -> "prod2"
        owner          = "xxxxxxx-xxxx-xxxx-xxxx-xxxx"
        provider_name  = "aws:us-east-1:xxxxx-xxxx-xxx-xxx-xxxxxxxxx"
        share_name     = "prod-share"
    }

  # databricks_catalog_workspace_binding.prod2_shared will be created
  + resource "databricks_catalog_workspace_binding" "prod2_shared" {
      + binding_type   = "BINDING_TYPE_READ_WRITE"
      + id             = (known after apply)
      + securable_name = "prod2"
      + securable_type = "catalog"
      + workspace_id   = xxxxxxxxx
    }

Expected Behavior

  • On terraform apply, the plan above is applied as intended: the catalog is renamed, the binding is created, and all is good.

Actual Behavior

  • The apply output shows that the provider thinks it made the modification:
databricks_catalog.prod2_share[0]: Modifying... [id=prod2_share]
databricks_catalog.prod2_share[0]: Modifications complete after 0s [id=prod2_share]
  • Then Terraform throws this error:
╷
│ Error: Provider produced inconsistent final plan
│ 
│ When expanding the plan for
│ databricks_catalog_workspace_binding.prod2_shared[0] to include new values
│ learned so far during apply, provider
│ "registry.terraform.io/databricks/databricks" produced an invalid new value
│ for .securable_name: was cty.StringVal("prod2"), but now
│ cty.StringVal("prod2_share").
│ 
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.

  • Afterwards I checked whether the imported catalog had actually been renamed, and it was NOT. I assume this is the root issue here: the rename did nothing.
  • Obviously I should have imported first and then done follow-up PRs for the rename and the workspace binding (see the sketch below), but I'm not sure that would fix it, considering the catalog wasn't renamed even though the provider thought it was. This also left a bad Terraform state, which now records a catalog name that the catalog does not actually have.
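
For reference, here is a minimal sketch of that split approach, reusing the same placeholder names and values as the configuration above. This is only how I would expect the two-phase flow to look, not a confirmed workaround.

# Apply 1 (first PR): import only, keeping the catalog's existing name so the
# imported state matches the real object.
import {
  provider = databricks.workspace

  to = databricks_catalog.prod2_share
  id = "prod2_share"
}

resource "databricks_catalog" "prod2_share" {
  name           = "prod2_share"
  share_name     = "prod-share"
  isolation_mode = "ISOLATED"
  provider_name  = "aws:us-east-1:xxxxx-xxxx-xxx-xxx-xxxxxxxxx"
}

# Apply 2 (follow-up PR): remove the import block, change name to "prod2",
# and only then add the databricks_catalog_workspace_binding that references
# databricks_catalog.prod2_share.name.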

Steps to Reproduce

  • In a single apply, import a Delta Share catalog, rename it, and create a workspace binding for it (a condensed sketch follows below).
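
Condensed reproduction sketch, with all account-specific names and values replaced by placeholders; it assumes an existing catalog named "example_share" that was created from a Delta Share.

import {
  to = databricks_catalog.example
  id = "example_share"
}

resource "databricks_catalog" "example" {
  name           = "example_renamed"   # rename in the same apply as the import
  share_name     = "example-share"
  provider_name  = "<sharing-provider>"
  isolation_mode = "ISOLATED"
}

resource "databricks_catalog_workspace_binding" "example" {
  securable_name = databricks_catalog.example.name
  workspace_id   = "<workspace-id>"
}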

Terraform and provider versions

Databricks provider 1.49

Debug Output

Important Factoids

Would you like to implement a fix?
