Schema projects are automatically created and managed by Immuta. They group all the data sources of the schema, and when new data sources are created, manually or with schema monitoring, they are automatically added to the schema project. They work as a tool to organize all the data sources within a schema, which is particularly helpful with schema monitoring enabled.
Schema projects are created when tables are registered as data sources in Immuta. The user creating the data source does not need the CREATE_PROJECT permission for the project to be created automatically, because data sources cannot be added to the project by its owner; instead, new data sources are added and managed by Immuta. The user can manage Subscription policies for schema projects, but they cannot apply Data policies or purposes to them.
The schema project settings, such as connection information and schema monitoring options, are edited from the project overview tab:
Schema Project Connection Details: Editing these details will update them for all the data sources within the schema project.
Data Source Naming Convention: When schema monitoring is enabled, new data sources are automatically detected and added to the schema project. Updating the naming convention changes how these newly detected data sources are named by Immuta (see the sketch after this list).
Schema Detection Owner: When schema monitoring is enabled, a user is assigned as the owner of any data source that Immuta detects and creates.
Disable or delete your schema project: Deleting the project will delete all of the data sources within it as well.
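To make the naming convention setting more concrete, here is a minimal sketch of how a template could be expanded into data source names when new tables are detected. The placeholder syntax and the helper function are assumptions for illustration only, not Immuta's actual template format.

```python
# Illustration only: expanding a hypothetical naming-convention template for
# tables newly detected in a monitored schema.
def apply_naming_convention(template: str, schema: str, table: str) -> str:
    """Fill a hypothetical template with the detected schema and table names."""
    return template.replace("<schema>", schema).replace("<tablename>", table)

# A schema project for the "analytics" schema detects two new tables.
template = "<schema> <tablename>"  # hypothetical convention configured on the project
for table in ["orders", "customers"]:
    print(apply_naming_convention(template, "analytics", table))
# analytics orders
# analytics customers
```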
Schema monitoring allows organizations to monitor their data environments. When it is enabled, Immuta monitors the organization's servers to detect when new tables or columns are created or deleted and automatically registers (or disables) those tables in Immuta. These newly updated data sources will then have any global policies and tags that are set in Immuta applied to them. The Immuta data dictionary will be updated with any column changes, and the Immuta environment will stay in sync with the organization's data environment. This automated process helps organizations stay compliant without the need to manually keep data sources up to date.
Schema monitoring is enabled while creating or editing a data source and only registers new tables and columns within known schemas; it does not register new schemas. After schema monitoring has been enabled, data owners or governors can edit the naming convention for newly detected data sources and the schema detection owner from the schema project page.
See the Register a data source guides for instructions on enabling schema monitoring or Manage schema monitoring for instructions on editing the schema monitoring settings.
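The registration guides linked above cover the supported workflows; purely as a sketch, the snippet below shows the general shape of an API-driven registration with schema monitoring turned on. The endpoint path, header values, and payload fields (including the schemaMonitoring flag) are assumptions, not the documented Immuta API.

```python
# Hypothetical sketch: registering a schema with schema monitoring enabled.
# Endpoint, fields, and flag names are assumptions for illustration only.
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder
HEADERS = {"Authorization": "Bearer <api-key>", "Content-Type": "application/json"}

payload = {
    "connectionKey": "snowflake-analytics",  # hypothetical connection identifier
    "schema": "ANALYTICS",                   # known schema whose tables should be monitored
    "schemaMonitoring": True,                # hypothetical flag enabling monitoring for this schema
}

response = requests.post(f"{IMMUTA_URL}/api/v2/data", headers=HEADERS, json=payload)
response.raise_for_status()
```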
Column detection is a part of schema monitoring, but can also be enabled on its own to detect the column changes of a select group of tables. Column detection monitors when columns are added or removed from a table and when column types are changed and updates those changes in the appropriate Immuta data source's data dictionary.
See one of the Register a data source guides for instructions on enabling column detection.
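Conceptually, column detection amounts to diffing the columns reported by the remote platform against the data source's data dictionary. The sketch below illustrates that idea only; it is not Immuta's implementation, and the function and sample columns are made up.

```python
# Conceptual sketch of column detection: classify differences between the data
# dictionary and the columns currently reported by the remote platform.
def diff_columns(dictionary: dict, remote: dict) -> dict:
    """Both arguments map column name -> data type."""
    return {
        "added": [c for c in remote if c not in dictionary],
        "removed": [c for c in dictionary if c not in remote],
        "type_changed": [c for c in remote if c in dictionary and remote[c] != dictionary[c]],
    }

changes = diff_columns(
    dictionary={"id": "INTEGER", "email": "VARCHAR"},
    remote={"id": "BIGINT", "email": "VARCHAR", "ssn": "VARCHAR"},
)
print(changes)  # {'added': ['ssn'], 'removed': [], 'type_changed': ['id']}
```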
When new data sources and columns are detected and added to Immuta, or when column types have changed, they will always automatically be tagged with the New tag. This allows governors to use the seeded New Column Added global policy to mask columns with the New tag, since they could contain sensitive data. The New Column Added global policy is staged (inactive) by default. See the Clone, activate, or stage a global policy guide to activate this seeded global policy if you want any columns with the New tag to be automatically masked.
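As an illustration of why this matters, a masking policy that targets the New tag effectively returns NULL for tagged columns until a data owner validates them. The sketch below is a toy example with made-up column names and tags, not how Immuta enforces the policy.

```python
# Toy illustration of the effect of masking columns tagged New: tagged columns
# are projected as NULL until a data owner validates them.
columns = {"id": [], "email": [], "ssn": ["New"]}  # column name -> tags (made up)

select_list = ", ".join(
    f"NULL AS {name}" if "New" in tags else name
    for name, tags in columns.items()
)
print(f"SELECT {select_list} FROM analytics.customers")
# SELECT id, email, NULL AS ssn FROM analytics.customers
```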
When schema monitoring is enabled and there is an active policy that targets the New tag, Immuta sends validation requests to data owners for the following changes made in the remote data platform:
Column added: Immuta applies the New tag to the column that has been added and sends a request to the data owner to validate whether the new column contains sensitive data. Once the data owner confirms they have validated the content of the column, Immuta removes the New tag from it and, as a result, any policy that targets the New column tag no longer applies.
Column data type changed: Immuta applies the New tag to the column whose data type has changed and sends a request to the data owner to validate whether the column contains sensitive data. Once the data owner confirms they have validated the content of the column, Immuta removes the New tag from it and, as a result, any policy that targets the New column tag no longer applies.
Column deleted: Immuta deletes the column from the data source's data dictionary in Immuta. Then, Immuta sends a request to the data owner to validate the deleted column.
Data source created: Immuta applies the New tag to the newly created data source and sends a request to the data owner to validate whether the new data source contains sensitive data. Once the data owner confirms they have validated the content of the data source, Immuta removes the New tag from it and, as a result, any policy that targets the New data source tag no longer applies.
For instructions on how to view and manage your assigned tasks in the Immuta UI, see the Manage data source requests guide. To view and manage your assigned tasks via the Immuta API, see the Manage data source requests section of the API documentation.
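For teams that prefer automation, handling these validation tasks through the API might look roughly like the sketch below. The endpoint paths and response fields are assumptions, not documented Immuta endpoints; use the Manage data source requests API documentation for the actual calls.

```python
# Hypothetical sketch of processing validation requests via the API. The paths
# and fields are assumptions for illustration only.
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder
HEADERS = {"Authorization": "Bearer <api-key>"}

# List pending validation tasks assigned to the calling data owner (hypothetical path).
tasks = requests.get(f"{IMMUTA_URL}/api/tasks", headers=HEADERS, params={"state": "pending"}).json()

for task in tasks:
    # After reviewing the column or data source, confirm the validation so that
    # Immuta removes the New tag and the associated policy no longer applies.
    requests.post(f"{IMMUTA_URL}/api/tasks/{task['id']}/validate", headers=HEADERS)
```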
An Immuta user registers a data source with schema monitoring enabled.
Every 24 hours, at 12:30 a.m. UTC by default, Immuta checks the servers for any changes to tables and columns.
If Immuta finds a change, it will update the appropriate Immuta data source or column, as sketched in the example after this list:
If Immuta finds a new table, then Immuta creates an Immuta data source for that table and tags it New.
If Immuta finds a table has been deleted, then Immuta disables that table's data source.
If Immuta finds a previously deleted table has been re-created, then Immuta restores that table's data source and tags it New.
If Immuta finds that the backing object type of a data source has been changed (for example, from a TABLE to a VIEW) in Snowflake or Databricks Unity Catalog, Immuta will reapply existing policies on the data source. Note that because of policy limitations on Unity Catalog views, changing a Databricks Unity Catalog object type from a table to a view could result in some types of data policies being removed. See the Databricks Unity Catalog integration reference guide for a list of data policies that are not supported for views.
If Immuta finds a new column within a table, then Immuta adds that column to the data dictionary and tags it New.
If Immuta finds a column has been deleted, then Immuta deletes that column from the data dictionary.
If Immuta finds a column type has changed, then Immuta updates the column type in the data dictionary and tags it New.
Active policies that target the New data source or column tag will be applied until a data owner validates the changes.
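The table-level decisions in the list above can be summarized with a short sketch. This is a conceptual model only, not Immuta's implementation; the function and sample tables are invented for illustration.

```python
# Conceptual sketch of the table-level reconciliation described above.
def reconcile_tables(registered: dict, remote: set) -> list:
    """registered maps table name -> currently enabled; remote is the set of tables found."""
    actions = [f"create data source for {t} and tag it New" for t in remote - set(registered)]
    for table, enabled in registered.items():
        if table not in remote and enabled:
            actions.append(f"disable data source for {table}")
        elif table in remote and not enabled:
            actions.append(f"restore data source for {table} and tag it New")
    return actions

print(reconcile_tables(
    registered={"orders": True, "legacy": True, "archive": False},
    remote={"orders", "archive", "payments"},
))
# ['create data source for payments and tag it New',
#  'disable data source for legacy',
#  'restore data source for archive and tag it New']
```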
To run schema monitoring or column detection manually, see the Run schema monitoring and column detection jobs page.
The default schedule for schema monitoring to run is every 24 hours. Some organizations may need to schedule it to run more often; however, this needs careful consideration as it can impact performance and compute costs.
Manually trigger schema monitoring (filtered down to the database) after your dbt or other transform workflows run. For more information, see the dbt and transform workflow for limited policy downtime guide.
When manually triggering schema monitoring, specify a table or database for maximum performance efficiency and to reduce data or policy downtime; an example of a scoped trigger is sketched after this list. For more information on triggering schema monitoring, see the Manually run schema monitoring guide.
If you are manually managing data tags, activate the "New Column Added" global policy to protect newly found and potentially sensitive data. This policy sets all columns with the New tag to NULL until a data owner reviews and validates their content. Using this workflow protects your data and prevents data leaks from newly added columns. This recommendation is unnecessary for users leveraging sensitive data discovery (SDD) or using an external data catalog.
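As a sketch of the scoped trigger mentioned above, a post-transform step might call the API along these lines. The endpoint path and payload fields are assumptions; follow the Manually run schema monitoring guide for the supported request.

```python
# Hypothetical sketch: trigger schema monitoring for a single database after a
# dbt or other transform job finishes. Endpoint and payload are assumptions.
import requests

IMMUTA_URL = "https://your-immuta.example.com"  # placeholder
HEADERS = {"Authorization": "Bearer <api-key>", "Content-Type": "application/json"}

payload = {"database": "ANALYTICS"}  # scope the run to one database to limit compute and downtime

response = requests.post(f"{IMMUTA_URL}/dataSource/detectRemoteChanges", headers=HEADERS, json=payload)
response.raise_for_status()
```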