# [Doc] Add deprecation notice to `databricks_dbfs_file` and `databricks_mount` (#4876)
## Changes
- Add deprecation notice to `databricks_dbfs_file` and `databricks_mount`
- Resolves #4865
## Tests
- [x] relevant change in `docs/` folder
- [x] has entry in `NEXT_CHANGELOG.md` file
---------
Co-authored-by: Alex Ott <[email protected]>
### NEXT_CHANGELOG.md

* Document `environment` block in `databricks_pipeline` ([#4878](https://github.com/databricks/terraform-provider-databricks/pull/4878)).
* Updated documentation for `databricks_disable_legacy_dbfs_setting` resource ([#4870](https://github.com/databricks/terraform-provider-databricks/pull/4870)).
* Add deprecation notice to `databricks_dbfs_file` and `databricks_mount` ([#4876](https://github.com/databricks/terraform-provider-databricks/pull/4876)).
### docs/resources/cluster.md (+6 −27)
````diff
@@ -156,27 +156,27 @@ To install libraries, one must specify each library in a separate configuration
 -> Please consider using [databricks_library](library.md) resource for a more flexible setup.
 
-Installing JAR artifacts on a cluster. Location can be anything, that is DBFS or mounted object store (s3, adls, ...)
+Installing JAR artifacts on a cluster. Location can be a workspace file, Unity Catalog volume or cloud object storage location (s3, ADLS, ...)
 
 ```hcl
 library {
-  jar = "dbfs:/FileStore/app-0.0.1.jar"
+  jar = "/Volumes/catalog/schema/volume/app-0.0.1.jar"
 }
 ```
 
-Installing Python EGG artifacts. Location can be anything, that is DBFS or mounted object store (s3, adls, ...)
+Installing Python EGG artifacts (Deprecated)
 
 ```hcl
 library {
   egg = "dbfs:/FileStore/foo.egg"
 }
 ```
 
-Installing Python Wheel artifacts. Location can be anything, that is DBFS or mounted object store (s3, adls, ...)
+Installing Python Wheel artifacts. Location can be a workspace file, Unity Catalog volume or cloud object storage location (s3, ADLS, ...)
 
 ```hcl
 library {
-  whl = "dbfs:/FileStore/baz.whl"
+  whl = "/Volumes/catalog/schema/volume/baz.whl"
 }
 ```
````
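For orientation, these `library` blocks live inside a `databricks_cluster` resource. A minimal sketch of a cluster installing a wheel from a Unity Catalog volume — the cluster name and the catalog/schema/volume path are placeholders, not from this PR — might look like:

```hcl
data "databricks_node_type" "smallest" {
  local_disk = true
}

data "databricks_spark_version" "latest_lts" {
  long_term_support = true
}

resource "databricks_cluster" "shared" {
  cluster_name            = "libs-demo" # placeholder name
  spark_version           = data.databricks_spark_version.latest_lts.id
  node_type_id            = data.databricks_node_type.smallest.id
  num_workers             = 1
  autotermination_minutes = 20

  library {
    # Unity Catalog volume path replaces the deprecated dbfs:/FileStore location
    whl = "/Volumes/catalog/schema/volume/baz.whl"
  }
}
```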
````diff
@@ -223,16 +223,6 @@ library {
 
 ### cluster_log_conf
 
-Example of pushing all cluster logs to DBFS:
-
-```hcl
-cluster_log_conf {
-  dbfs {
-    destination = "dbfs:/cluster-logs"
-  }
-}
-```
-
 Example of pushing all cluster logs to S3:
 
 ```hcl
````
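The S3 example itself is truncated in this view; as a hedged sketch of the pattern that remains (bucket name and region are placeholders):

```hcl
cluster_log_conf {
  s3 {
    # Cloud object storage replaces the removed dbfs:/cluster-logs example
    destination = "s3://acmecorp-logs/cluster-logs"
    region      = "us-west-2"
  }
}
```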
```diff
@@ -268,7 +258,7 @@ There are a few more advanced attributes for S3 log delivery:
 To run a particular init script on all clusters within the same workspace, both automated/job and interactive/all-purpose cluster types, please consider the [databricks_global_init_script](global_init_script.md) resource.
 
-It is possible to specify up to 10 different cluster-scoped init scripts per cluster. Init scripts support DBFS, cloud storage locations, and workspace files.
+It is possible to specify up to 10 different cluster-scoped init scripts per cluster. Init scripts support volumes, cloud storage locations, and workspace files.
 
 Example of using a Databricks workspace file as init script:
```
````diff
@@ -290,16 +280,6 @@ init_scripts {
 }
 ```
 
-Example of taking init script from DBFS (deprecated):
-
-```hcl
-init_scripts {
-  dbfs {
-    destination = "dbfs:/init-scripts/install-elk.sh"
-  }
-}
-```
-
 Example of taking init script from S3:
 
 ```hcl
````
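With the DBFS example removed, the volume-based equivalent — a sketch only, with a placeholder volume path — would be:

```hcl
init_scripts {
  volumes {
    # Unity Catalog volume path replaces the deprecated dbfs:/ destination
    destination = "/Volumes/catalog/schema/volume/install-elk.sh"
  }
}
```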
```diff
@@ -573,7 +553,6 @@ The following resources are often used in the same context:
 * [databricks_instance_profile](instance_profile.md) to manage AWS EC2 instance profiles that users can launch [databricks_cluster](cluster.md) and access data, like [databricks_mount](mount.md).
 * [databricks_job](job.md) to manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code in a [databricks_cluster](cluster.md).
 * [databricks_library](library.md) to install a [library](https://docs.databricks.com/libraries/index.html) on [databricks_cluster](cluster.md).
-* [databricks_mount](mount.md) to [mount your cloud storage](https://docs.databricks.com/data/databricks-file-system.html#mount-object-storage-to-dbfs) on `dbfs:/mnt/name`.
 * [databricks_node_type](../data-sources/node_type.md) data to get the smallest node type for [databricks_cluster](cluster.md) that fits search criteria, like amount of RAM or number of cores.
 * [databricks_pipeline](pipeline.md) to deploy [Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt).
 * [databricks_spark_version](../data-sources/spark_version.md) data to get [Databricks Runtime (DBR)](https://docs.databricks.com/runtime/dbr.html) version that could be used for `spark_version` parameter in [databricks_cluster](cluster.md) and other resources.
```
### docs/resources/dbfs_file.md (+2 −0)
```diff
@@ -3,6 +3,8 @@ subcategory: "Storage"
 ---
 # databricks_dbfs_file Resource
 
+-> Please switch to [databricks_file](file.md) or [databricks_workspace_file](workspace_file.md) to manage files. Databricks recommends against storing any production data or sensitive information in the DBFS root.
+
 This is a resource that lets you manage relatively small files on [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html). The best use cases are libraries for [databricks_cluster](cluster.md) or [databricks_job](job.md). You can also use [databricks_dbfs_file](../data-sources/dbfs_file.md) and [databricks_dbfs_file_paths](../data-sources/dbfs_file_paths.md) data sources.
 
 -> This resource can only be used with a workspace-level provider!
```
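To make the recommended migration concrete, here is a hedged sketch of `databricks_file` uploading into a Unity Catalog volume — the local path and volume path are placeholders, not from this PR:

```hcl
resource "databricks_file" "app_config" {
  # Uploads a local file into a Unity Catalog volume instead of DBFS
  source = "${path.module}/files/app-config.json"
  path   = "/Volumes/catalog/schema/volume/app-config.json"
}
```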
### docs/resources/global_init_script.md (+1 −2)
```diff
@@ -77,6 +77,5 @@ The following resources are often used in the same context:
 * [End to end workspace management](../guides/workspace-management.md) guide.
 * [databricks_cluster](cluster.md) to create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).
 * [databricks_cluster_policy](cluster_policy.md) to create a [databricks_cluster](cluster.md) policy, which limits the ability to create clusters based on a set of rules.
-* [databricks_dbfs_file](dbfs_file.md) to manage relatively small files on [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).
+* [databricks_workspace_file](workspace_file.md) to manage small files in Databricks workspace.
 * [databricks_ip_access_list](ip_access_list.md) to allow access from [predefined IP ranges](https://docs.databricks.com/security/network/ip-access-list.html).
-* [databricks_mount](mount.md) to [mount your cloud storage](https://docs.databricks.com/data/databricks-file-system.html#mount-object-storage-to-dbfs) on `dbfs:/mnt/name`.
```
### docs/resources/job.md (+1 −5)
```diff
@@ -258,7 +258,7 @@ The `power_bi_task` triggers a Power BI semantic model update.
 
 #### spark_python_task Configuration Block
 
-* `python_file` - (Required) The URI of the Python file to be executed. [databricks_dbfs_file](dbfs_file.md#path), cloud file URIs (e.g. `s3:/`, `abfss:/`, `gs:/`), workspace paths and remote repository are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with `/`. For files stored in a remote repository, the path must be relative. This field is required.
+* `python_file` - (Required) The URI of the Python file to be executed. Cloud file URIs (e.g. `s3:/`, `abfss:/`, `gs:/`), workspace paths and remote repository are supported. For Python files stored in the Databricks workspace, the path must be absolute and begin with `/`. For files stored in a remote repository, the path must be relative. This field is required.
 * `source` - (Optional) Location type of the Python file. When set to `WORKSPACE` or not specified, the file will be retrieved from the local Databricks workspace or cloud location (if the python_file has a URI format). When set to `GIT`, the Python file will be retrieved from a Git repository defined in `git_source`.
   * `WORKSPACE`: The Python file is located in a Databricks workspace or at a cloud filesystem URI.
   * `GIT`: The Python file is located in a remote Git repository.
```
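As a hedged illustration of the updated guidance — the job name, workspace path, and cluster sizing are placeholders, and the node-type/Spark-version data sources from the cluster sketch above are assumed:

```hcl
resource "databricks_job" "etl" {
  name = "nightly-etl" # placeholder

  task {
    task_key = "main"

    spark_python_task {
      # Absolute workspace path; the dbfs_file reference is no longer documented
      python_file = "/Workspace/Shared/etl/main.py"
    }

    new_cluster {
      num_workers   = 1
      spark_version = data.databricks_spark_version.latest_lts.id
      node_type_id  = data.databricks_node_type.smallest.id
    }
  }
}
```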
```diff
@@ -632,12 +632,8 @@ The following resources are often used in the same context:
 * [databricks_cluster](cluster.md) to create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).
 * [databricks_cluster_policy](cluster_policy.md) to create a [databricks_cluster](cluster.md) policy, which limits the ability to create clusters based on a set of rules.
 * [databricks_current_user](../data-sources/current_user.md) data to retrieve information about [databricks_user](user.md) or [databricks_service_principal](service_principal.md), that is calling Databricks REST API.
-* [databricks_dbfs_file](../data-sources/dbfs_file.md) data to get file content from [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).
-* [databricks_dbfs_file_paths](../data-sources/dbfs_file_paths.md) data to get list of file names from [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).
-* [databricks_dbfs_file](dbfs_file.md) to manage relatively small files on [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).
 * [databricks_global_init_script](global_init_script.md) to manage [global init scripts](https://docs.databricks.com/clusters/init-scripts.html#global-init-scripts), which are run on all [databricks_cluster](cluster.md#init_scripts) and [databricks_job](job.md#new_cluster).
 * [databricks_instance_pool](instance_pool.md) to manage [instance pools](https://docs.databricks.com/clusters/instance-pools/index.html) to reduce [cluster](cluster.md) start and auto-scaling times by maintaining a set of idle, ready-to-use instances.
-* [databricks_instance_profile](instance_profile.md) to manage AWS EC2 instance profiles that users can launch [databricks_cluster](cluster.md) and access data, like [databricks_mount](mount.md).
 * [databricks_jobs](../data-sources/jobs.md) data to get all jobs and their names from a workspace.
 * [databricks_library](library.md) to install a [library](https://docs.databricks.com/libraries/index.html) on [databricks_cluster](cluster.md).
 * [databricks_node_type](../data-sources/node_type.md) data to get the smallest node type for [databricks_cluster](cluster.md) that fits search criteria, like amount of RAM or number of cores.
```
### docs/resources/library.md (+8 −14)
```diff
@@ -10,6 +10,7 @@ Installs a [library](https://docs.databricks.com/libraries/index.html) on [datab
 -> `databricks_library` resource would always start the associated cluster if it's not running, so make sure to have auto-termination configured. It's not possible to atomically change the version of the same library without cluster restart. Libraries are fully removed from the cluster only after restart.
 
 ## Plugin Framework Migration
+
 The library resource has been migrated from sdkv2 to plugin framework. If you encounter any problem with this resource and suspect it is due to the migration, you can fallback to sdkv2 by setting the environment variable in the following way `export USE_SDK_V2_RESOURCES="databricks_library"`.
```
Installing Python libraries listed in the `requirements.txt` file. Only Workspace paths and Unity Catalog Volumes paths are supported. Requires a cluster with DBR 15.0+.
!> Importing this resource is not currently supported.
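A sketch of the `requirements.txt` support mentioned above, assuming the `requirements` attribute of the `library` block and a placeholder workspace path (requires DBR 15.0+):

```hcl
library {
  # Workspace or Unity Catalog Volumes path only; assumes DBR 15.0+
  requirements = "/Workspace/Shared/libs/requirements.txt"
}
```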
```diff
@@ -141,11 +139,7 @@ The following resources are often used in the same context:
 * [databricks_clusters](../data-sources/clusters.md) data to retrieve a list of [databricks_cluster](cluster.md) ids.
 * [databricks_cluster](cluster.md) to create [Databricks Clusters](https://docs.databricks.com/clusters/index.html).
 * [databricks_cluster_policy](cluster_policy.md) to create a [databricks_cluster](cluster.md) policy, which limits the ability to create clusters based on a set of rules.
-* [databricks_dbfs_file](../data-sources/dbfs_file.md) data to get file content from [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).
-* [databricks_dbfs_file_paths](../data-sources/dbfs_file_paths.md) data to get list of file names from [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).
-* [databricks_dbfs_file](dbfs_file.md) to manage relatively small files on [Databricks File System (DBFS)](https://docs.databricks.com/data/databricks-file-system.html).
 * [databricks_global_init_script](global_init_script.md) to manage [global init scripts](https://docs.databricks.com/clusters/init-scripts.html#global-init-scripts), which are run on all [databricks_cluster](cluster.md#init_scripts) and [databricks_job](job.md#new_cluster).
 * [databricks_job](job.md) to manage [Databricks Jobs](https://docs.databricks.com/jobs.html) to run non-interactive code in a [databricks_cluster](cluster.md).
-* [databricks_mount](mount.md) to [mount your cloud storage](https://docs.databricks.com/data/databricks-file-system.html#mount-object-storage-to-dbfs) on `dbfs:/mnt/name`.
 * [databricks_pipeline](pipeline.md) to deploy [Lakeflow Declarative Pipelines](https://docs.databricks.com/aws/en/dlt).
 * [databricks_repo](repo.md) to manage [Databricks Repos](https://docs.databricks.com/repos.html).
```
### docs/resources/mlflow_experiment.md

```diff
 * `name` - (Required) Name of MLflow experiment. It must be an absolute path within the Databricks workspace, e.g. `/Users/<some-username>/my-experiment`. For more information about changes to experiment naming conventions, see [mlflow docs](https://docs.databricks.com/applications/mlflow/experiments.html#experiment-migration).
-* `artifact_location` - Path to dbfs:/ or s3:// artifact location of the MLflow experiment.
+* `artifact_location` - Path to artifact location of the MLflow experiment.
```
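A hedged sketch of the experiment resource with an explicit artifact location — the user name and bucket are placeholders:

```hcl
resource "databricks_mlflow_experiment" "this" {
  name              = "/Users/[email protected]/my-experiment"
  # Cloud storage location for run artifacts (placeholder bucket)
  artifact_location = "s3://acmecorp-mlflow/experiments"
  description       = "Example experiment"
}
```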
### docs/resources/mount.md (+3 −1)
```diff
@@ -3,14 +3,16 @@ subcategory: "Storage"
 ---
 # databricks_mount Resource
 
-This resource will [mount your cloud storage](https://docs.databricks.com/data/databricks-file-system.html#mount-object-storage-to-dbfs) on `dbfs:/mnt/name`. Right now it supports mounting AWS S3, Azure (Blob Storage, ADLS Gen1 & Gen2), Google Cloud Storage. It is important to understand that this will start up the [cluster](cluster.md) if the cluster is terminated. The read and refresh terraform command will require a cluster and may take some time to validate the mount.
+-> Please switch to [databricks_volume](volume.md). DBFS mounts are deprecated.
 
 -> This resource can only be used with a workspace-level provider!
 
 -> When `cluster_id` is not specified, it will create the smallest possible cluster in the default availability zone with name equal to or starting with `terraform-mount` for the shortest possible amount of time. To avoid mount failure due to potentially quota or capacity issues with the default cluster, we recommend specifying a cluster to use for mounting.
 
 -> CRUD operations on a databricks mount require a running cluster. Due to limitations of terraform and the databricks mounts APIs, if the cluster the mount was most recently created / updated using no longer exists AND the mount is destroyed as a part of a terraform apply, we mark it as deleted without cleaning it up from the workspace.
 
+This resource will [mount your cloud storage](https://docs.databricks.com/data/databricks-file-system.html#mount-object-storage-to-dbfs) on `dbfs:/mnt/name`. Right now it supports mounting AWS S3, Azure (Blob Storage, ADLS Gen1 & Gen2), Google Cloud Storage. It is important to understand that this will start up the [cluster](cluster.md) if the cluster is terminated. The read and refresh terraform command will require a cluster and may take some time to validate the mount.
+
 This resource provides two ways of mounting a storage account:
 
 1. Use a storage-specific configuration block - this could be used for the most cases, as it will fill most of the necessary details. Currently we support following configuration blocks:
```
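To ground the deprecation notice, a hedged `databricks_volume` sketch — catalog, schema, and storage location are placeholders, and an existing Unity Catalog external location covering that path is assumed:

```hcl
resource "databricks_volume" "landing" {
  name         = "landing"
  catalog_name = "main"
  schema_name  = "raw"
  volume_type  = "EXTERNAL"
  # Path must sit under an existing Unity Catalog external location
  storage_location = "s3://acmecorp-landing/raw"
}
```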
### docs/resources/permissions.md

```diff
+[Alert V2](https://docs.databricks.com/sql/user/security/access-control/alert-acl.html), which is the new version of SQL alerts, has 4 possible permission levels: `CAN_READ`, `CAN_RUN`, `CAN_EDIT`, and `CAN_MANAGE`.
 
 [SQL alerts](https://docs.databricks.com/sql/user/security/access-control/alert-acl.html) have three possible permissions: `CAN_VIEW`, `CAN_RUN` and `CAN_MANAGE`:
```
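For context, a hedged `databricks_permissions` sketch for a classic SQL alert — the alert resource and group name are placeholders, and the exact argument name for Alert V2 objects is not shown in this diff:

```hcl
resource "databricks_permissions" "alert_usage" {
  # Assumes an existing databricks_sql_alert resource named "this"
  sql_alert_id = databricks_sql_alert.this.id

  access_control {
    group_name       = "data-analysts" # placeholder group
    permission_level = "CAN_RUN"
  }
}
```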