

POV Data Setup Guide

We’ve provided data that will allow you to complete the walkthroughs. Of course, you would use your own data with Immuta in production, but since we are going to walk through very specific use cases, it’s easier if everyone works from the same data.

While this page is long, you will only need to worry about your specific data warehouses/compute.

1 - Download the Data

Databricks Workspaces

A Databricks workspace (your Databricks URL) can be configured to use traditional notebooks or SQL Analytics. Select one of these options from the menu in the top left corner of the Databricks console.

Select one of the tabs below to download the script to generate fake data for your specific warehouse.

Databricks Notebooks (Data Science and Engineering or Machine Learning Notebooks)

Download this Databricks Notebook.

Databricks SQL (SQL Workspace)

Download this SQL script.

Snowflake

Download this SQL script.

Redshift

Download this SQL script.

Synapse

Download these resources:

Starburst (Trino)

Download these datasets:

2 - Load the Data

This part loads the data you downloaded in the first step into your data warehouse.

Assumptions: Part 2 assumes you have a user with permission to create databases/schemas/tables in your warehouse/compute (and potentially write files to cloud storage).

Databricks Notebooks

Databricks and SQL Analytics Imports

If you’ve already done the import using SQL Analytics and SQL Analytics shares the same workspace with Databricks Notebooks, you will not have to do it again in Databricks because they share a metastore.

  1. Before importing and running the notebook, ensure you are either logged in as a Databricks admin or running it on a cluster that is NOT Immuta-enabled.
  2. Go to your workspace and click the down arrow next to your username.
  3. Select Import.
  4. Import the notebook you downloaded in step 1.
  5. Run all cells in the notebook, which will create both tables (a quick spot check follows below).

    • For simplicity, the data is being stored in DBFS; however, we do not recommend this in real deployments, and you should instead store your real data in your cloud-provided object store (S3, ADLS, Google Storage).
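
If you’d like to confirm the notebook worked, a quick spot check from a %sql notebook cell (or the SQL editor) looks like the following. The immuta_pov database name assumes the notebook’s defaults; adjust it if you changed anything.

    -- Optional spot check; the database name assumes the notebook's default of immuta_pov.
    SHOW TABLES IN immuta_pov;
    SELECT COUNT(*) FROM immuta_pov.immuta_fake_hr_data;
    SELECT COUNT(*) FROM immuta_pov.immuta_fake_credit_card_transactions;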

Databricks SQL

Databricks and SQL Analytics Imports

Note: if you’ve already done the import using Databricks notebooks and that workspace is shared with SQL Analytics, you will not have to do it again here because they share a metastore.

  1. Before importing and running the script, ensure you are logged in as a user who can create databases.
  2. Select SQL from the upper left menu in Databricks.
  3. Click Create → Query.
  4. Copy the contents of the SQL script you downloaded from step 1 and paste that script into the SQL area.
  5. Run the script.

    • For simplicity, the data is being stored in DBFS; however, we do not recommend this in real deployments, and you should instead store your real data in your cloud-provided object store (S3, ADLS, Google Storage).

Snowflake

  1. Open up a worksheet in Snowflake using a user that has CREATE DATABASE and CREATE SCHEMA permission. Alternatively, you can save the data in a pre-existing database or schema by editing the provided SQL script.
  2. To the right of your schema selection in the worksheet, click the ... menu to find the Load Script option.
  3. Load the script downloaded from step 1.
  4. Optional: Edit the database and schema if desired at the top of the script.
  5. Check the All Queries button next to the Run button.
  6. Ensure you have a warehouse selected, and then click the Run button to execute the script. (There should be 11 commands it plans to run.)

Both tables should be created and populated.
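
To verify, you can run a quick spot check in the same worksheet. The database and schema names below assume the script’s defaults (the pov_data schema name in particular is an assumption); adjust them to whatever the script you ran actually created.

    -- Optional spot check; adjust the database/schema names if you edited the script.
    USE DATABASE IMMUTA_POV;
    SELECT COUNT(*) FROM POV_DATA.IMMUTA_FAKE_HR_DATA;
    SELECT COUNT(*) FROM POV_DATA.IMMUTA_FAKE_CREDIT_CARD_TRANSACTIONS;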

Redshift

Redshift RA3 Instance Type

You must use a Redshift RA3 instance type because Immuta requires cross-database views, which are only supported in Redshift RA3 instance types.

Unfortunately, there is no standard query editor for Redshift, so creating the POV tables in Redshift is a bit less automated.

  1. Connect to your Redshift instance using your query editor of choice.
  2. Create a new database called immuta_pov using the command CREATE DATABASE immuta_pov; optionally, you can connect to a pre-existing database and just load these tables in there.
  3. After creating the database, you will need to disconnect from Redshift and connect to the new database you created (skip this if you are using a pre-existing database).
  4. Upload the script you downloaded from step 1 above. If your query editor does not support uploading a SQL script, you can simply open that file in a text editor to copy the commands and paste them in the editor.
  5. Run the script.
  6. Both tables should be created and populated.
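
As a quick check - and to see what the RA3 cross-database requirement is about - you can reference these tables from any database on the cluster using three-part names, which is what Immuta's secure views will rely on. The pov_data schema name below is illustrative; use whatever schema the script actually creates.

    -- Works from any database on an RA3 cluster; three-part names cross databases.
    -- The schema name shown is illustrative.
    SELECT COUNT(*) FROM immuta_pov.pov_data.immuta_fake_hr_data;
    SELECT COUNT(*) FROM immuta_pov.pov_data.immuta_fake_credit_card_transactions;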

Synapse

Synapse Analytics Dedicated SQL Pools

Immuta supports Synapse Analytics dedicated SQL pools only.

Creating the data in Synapse is potentially a four-step process. First you will need to upload the data to a storage account, then create a Synapse Workspace (if you don’t have one in mind to use), then create a dedicated SQL pool, and finally point Synapse to that stored data.

1 - Upload the Data to an Azure Storage Account

  1. Log in to the Azure Portal.
  2. Select or create a storage account.
  3. If selecting an existing storage account and you already have a Synapse Workspace you plan to use, make sure the storage container(s) are attached to that Synapse workspace.
  4. The selected or created storage account MUST have Data Lake Storage Gen2 Hierarchical namespace enabled. Note this has to be enabled at creation time and cannot be changed after creation.
  5. The setting Enable hierarchical namespace is found under advanced settings when creating the storage account.
  6. Click Containers.
  7. Select or create a new container to store the data.
  8. Upload both files from step 1 to the container by clicking the upload button.

2 - Create a Synapse Workspace (you can skip this step if you already have one you plan to use)

  1. Go to Azure Synapse Analytics (still logged in to Azure Portal).
  2. Create a Synapse workspace.
  3. Select a resource group.
  4. Provide a workspace name.
  5. Select a region.
  6. For the account name, use the storage account from the steps above, remembering it MUST have Data Lake Storage Gen2 Hierarchical namespace enabled.
  7. Select the container you created in the above steps.
  8. Make sure Assign myself the Storage Blob Data Contributor role on the Data Lake Storage Gen2 account to interactively query it in the workspace is checked.
  9. Go to the security section.
  10. Enter your administrator username/password (save these credentials).
  11. Review and create.
  12. Once the Synapse Workspace is created, there should be a Workspace web URL (this links to Synapse Studio) available on the overview page; go there.

3 - Create a Dedicated Pool

With Synapse, a dedicated pool is essentially a database. So we want to create a database for this POV data.

  1. On the Azure portal home page click Azure Synapse Analytics.
  2. Click the Synapse workspace you created above and then click + New dedicated SQL pool (Immuta only works on Synapse dedicated pools).
  3. Next, enter immuta_pov as the name of your dedicated SQL pool.
  4. Choose an appropriate performance level. For testing purposes (other than performance testing), we recommend the lowest performance level to avoid high costs if the instance is left running.
  5. Once that information is chosen, click Review + Create and then Create.

4 - Point Synapse to the Stored Data

  1. From Synapse Studio (this is the Workspace web URL you were given when the Synapse Workspace was completed) click the Data menu on the left.
    1. Click on the Workspace tab.
    2. Expand databases and you should see the dedicated pool you created above. Sometimes, even if the dedicated pool has been deployed, it takes time to see it in Synapse Studio. Wait some time and refresh the browser.
    3. Once the dedicated pool is there, hover over it, and click the Actions button.
    4. Select New SQL script.
      1. Select Empty script.
      2. Paste the contents of the script you downloaded in Part 1 into the script window.
      3. Run it.
  2. From that same Synapse Studio window, click the Integrate menu on the left.
    1. Click the + button to Add a new resource.
    2. Select Copy Data tool.
    3. Leave it as Built-in copy task with Run once now, and then click Next >.
    4. For Source type select Azure Data Lake Storage Gen 2.
    5. For connection, you should see your workspace; select it.
    6. For File or folder, click the browse button and select the container where you placed the data.
    7. Dive into that container (double click the folder if in a folder) and select the immuta_fake_hr_data_tsv.txt file.
    8. Uncheck recursive and click Next >.
    9. For File format, select Text format.
    10. For Column delimiter, leave the default, Tab (\t).
    11. For Row delimiter, leave the default, Line feed (\n).
    12. Leave First row is header checked.
    13. Click Next >
    14. For Target type, select Azure Synapse dedicated SQL pool.
    15. For Connection, select the dedicated pool you created: immuta_pov.
    16. Under Azure Data Lake Storage Gen2 file, click Use existing table.
    17. Select the pov_data.immuta_fake_hr_data table.
    18. Click Next >.
    19. Leave all the defaults on the column mapping page, and then click Next >.
    20. On the settings page, name the task hr_data_import.
    21. Open the Advanced section.
    22. Uncheck Allow PolyBase.
    23. Uncheck the Edit button under Degree of copy parallelism.
    24. Click Next >.
    25. Review the Summary page and click Next >.
    26. This should run the task and load the data; you can click Finish when it completes.

If you’d like, you can test that it worked by opening a new SQL Script from the data menu and running: SELECT * FROM pov_data.immuta_fake_hr_data.

Repeat these steps for the immuta_fake_credit_card_transactions_tsv.txt file, loading it into the pov_data.immuta_fake_credit_card_transactions table.
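
After both copy tasks complete, you can confirm the loads from a new SQL script:

    -- Optional: confirm both tables are populated.
    SELECT COUNT(*) FROM pov_data.immuta_fake_hr_data;
    SELECT COUNT(*) FROM pov_data.immuta_fake_credit_card_transactions;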

Starburst (Trino)

Since Starburst (Trino) can connect to and query many different systems, it would be impossible for us to list instructions for every single one. To load these tables into Starburst (Trino), you should:

  1. Upload the data from Part 1 to whatever backs your Starburst (Trino) instance. If that is cloud object storage, this would mean loading the files downloaded from Part 1. If it’s a database, you may want to leverage some of the SQL scripts listed for the other databases in Part 1.
  2. Follow the appropriate guide from here.
  3. Create a database: immuta_pov for the schema/tables.
  4. Create a schema: pov_data for the tables.
  5. Load both tables into that schema:
    • immuta_fake_hr_data
    • immuta_fake_credit_card_transactions
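
If the catalog backing your data supports SQL DDL, steps 3 through 5 look roughly like the Trino SQL below. This is a sketch only: the immuta_pov catalog name and the column definitions are placeholders, and how you actually load the TSV contents depends on your connector.

    -- Sketch only: assumes a configured catalog named immuta_pov; columns are hypothetical.
    CREATE SCHEMA IF NOT EXISTS immuta_pov.pov_data;
    CREATE TABLE immuta_pov.pov_data.immuta_fake_hr_data (
        employee_id BIGINT,   -- hypothetical columns; match the downloaded files
        name VARCHAR,
        salary DOUBLE
    );
    -- Repeat for immuta_pov.pov_data.immuta_fake_credit_card_transactions,
    -- then load the TSV contents via your connector of choice.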

3 - Configure the Immuta Integration(s)

Assumptions: Part 3 assumes your user has the following permissions in Immuta (note you should have these by default if you were the initial user on the Immuta installation):

  • APPLICATION_ADMIN: in order to configure the integration
  • USER_ADMIN: in order to create a non-admin user

User Accounts (admin, non-admin)

When the integration is configured, since it allows you to query the data directly from the data warehouse/compute, there needs to be a mapping between Immuta users and data warehouse/compute users. Typically in production this is accomplished through a shared identity manager like LDAP or Okta.

However, for the purposes of this POV, you may prefer to do a simple mapping instead, which we will describe. Taking this a step further, you need two different “levels” of users to really see the power of Immuta. For example, you want an “admin” user that has more permissions to create policies (and avoid policies) and a regular “non-admin” user to see the impact of policies on queries - think of this as your regular downstream analyst.

It’s best to follow these rules of thumb:

  1. When running through the native configurations below, it’s best to use a system account from your data warehouse/compute to configure them (when we ask for credentials in the configuration steps, we are not talking about your actual Immuta login), although you can use your admin account.

  2. You need some kind of Immuta admin account for registering data, building policies, etc. In many cases this should be the user that initially stood up the Immuta instance, but it could be a different user as long as you give them all required permissions. This user should map to your data warehouse/compute admin user. We get more into segmentation of duties later in the walkthroughs.

  3. You need a non-admin user. This may be more difficult if you have an external identity/SSO system where you can’t log in to your data warehouse/compute as a different user. But if possible, you should create a second user for yourself with no special permissions in Immuta and map that user to a user you can log in as on your data warehouse/compute of choice.

Mapping the users

Understanding the rules of thumb above, you will need an admin and a non-admin user in Immuta, and those users need to map to users in your data warehouse/computes of choice. Typically, if the users in both places are identified by email addresses, this all “just works” - they are already mapped. However, if they do not match, you can manually configure the mapping. For example, if you want to map steve@immuta.com to the plain old “steve” username in Synapse, you can do that by following the steps below. Again, this is not necessary if your Immuta usernames exactly match (in spelling) the usernames in your data warehouse/compute (typically email addresses).

  1. Log in to Immuta.
  2. Click the Admin icon in the left sidebar.
  3. Click on the user of interest.
  4. Next to the username on the left, click the three dot menu.
  5. Here you will see the following options: Change Databricks username, change Snowflake username, etc.
  6. Select which data warehouse/compute username you want to map.
  7. Enter the data warehouse/compute username that maps to that Immuta user.
  8. Click Save.

Configuring the Integrations per data warehouse/compute

For Immuta to enforce controls, you must enable what we call integrations. This is done slightly differently for each database/warehouse/compute, and how it works is explained in more detail in our Query Your Data Guide. For now, let’s just get the integrations of interest configured.

Databricks

  1. Log in to Immuta.
  2. Click the App Settings icon in the left sidebar (the wrench).
  3. Under the Configuration menu on the left, click System API Key under HDFS.
  4. Click Generate Key.
  5. Click Save in the bottom left of the Configuration screen, and Confirm when prompted.
  6. Under the Configuration menu on the left, click Native Integrations.
  7. Click the + Add Native Integration button.
  8. Select Databricks Integration.
  9. Enter the Databricks hostname.
  10. For Immuta IAM, there should only be one option, bim. Select it. This is the built-in Immuta identity manager. It’s likely you would hook up a different identity manager in production, like Okta or LDAP, but this is sufficient for POV testing.
  11. Access Model: this one is a pretty big decision, read the descriptions for each and eventually you will need to decide which mode to use. But for the purposes of this POV guide, we assumed the default: Protected until made available by policy.
  12. Select the appropriate Storage Access Type.
  13. Enter the required authentication information based on which Storage Access Type you select.
  14. No Additional Hadoop Configuration is required.
  15. Click Add Native Integration.
  16. This will pop up a message stating that your Databricks integration will not work properly until your cluster policies are configured. Clicking the button in that message allows you to select the cluster policies that are deployed to your Databricks instance. We encourage you to read that table closely, including the detailed notes linked in it, to decide which cluster policies to use.
  17. Once you select the cluster policies you want deployed, click Download Policies. This will allow you to either:
    • Automatically Push Cluster Policies to the Databricks cluster if you provide your Databricks admin token (Immuta will not store it), or
    • Manually Push Cluster Policies yourself, without providing your Databricks admin token.
  18. Please also Download the Benchmarking Suite; you will use that later in the Databricks Performance Test walkthrough.
  19. If you choose to Manually Push Cluster Policies, you will also have to Download Init Script.
  20. Click Download Policies or Apply Policies, depending on which option you selected.
  21. Once adding the integration is successful, click Save in the bottom left of the Configuration screen (also Confirm when warned). This may take a little while to run.

If you took the manual approach, you must deploy those cluster policies manually in Databricks. You should configure Immuta-enabled cluster(s) using the deployed cluster policies.

Congratulations, you have successfully configured the Immuta integration with Databricks. To leverage it, you will need to use a cluster configured with one of the cluster policies created through the above steps.

Databricks SQL

  1. Log in to Immuta.
  2. Click the App Settings icon in the left sidebar (the wrench).
  3. Under the Configuration menu on the left, click Native Integrations.
  4. Click the + Add Native Integration button.
  5. Select Databricks SQL analytics.
  6. Enter the Databricks SQL analytics host.
  7. Enter the Databricks SQL analytics port.
  8. Enter the HTTP Path of the SQL Endpoint that will be used to execute DDL to create views. This is not to be confused with an HTTP Path to a regular Databricks cluster!
  9. Enter the Immuta Database: immuta_pov_secure. This is the database name where all the secure views Immuta creates will be stored. (You’ll learn more about this in the Query Your Data Guide.) That is why we named it immuta_pov_secure (since the original data is in the immuta_pov database), but, remember, it could contain data from multiple different databases if desired, so in production you likely want to name this database something more generic.
  10. Enter any additional required Connection String Options.
  11. Enter your Personal Access Token. Immuta needs this to connect to Databricks in order to create the integration database, configure the necessary procedures and functions, and maintain state between Databricks and Immuta. The Personal Access Token provided here should not have a set expiration and should be tied to an account with the privileges necessary to perform the operations listed above (e.g., an admin user).
  12. Make sure the SQL Endpoint is running (if it isn't, you may get a timeout waiting for it to start when you test the connection), and then click Test Databricks SQL Connection.
  13. Once the connection is successful, click Save in the bottom left of the Configuration screen. This may take a little while to run.

Congratulations, you have successfully configured the Immuta integration with Databricks SQL. Be aware that you can connect multiple Databricks SQL workspaces to Immuta.
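
Once you register the data (Part 4) and a policy grants you access, queries should go through the views Immuta creates in immuta_pov_secure rather than the raw immuta_pov tables. The generated view names are Immuta’s own, so browse the database rather than guessing, for example:

    -- Run in the SQL editor after Part 4 to see the views Immuta generated.
    SHOW TABLES IN immuta_pov_secure;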

Snowflake

  1. Log in to Immuta.
  2. Click the App Settings icon in the left sidebar (the wrench).
  3. Under the Configuration menu on the left, click Native Integrations.
  4. Click the + Add Native Integration button.
  5. Select Snowflake.
  6. Enter the Snowflake host.
  7. Enter the Snowflake port.
  8. Enter the default warehouse. This is the warehouse Immuta uses to compute views, so it does not need to be very big; XS is fine.
  9. Enter the Immuta Database: IMMUTA_POV_SECURE. This is the database name where all the secure schemas and views Immuta creates will be stored (you’ll learn more about this in the Query Your Data Guide). That is why we named it IMMUTA_POV_SECURE (since the original data is in the IMMUTA_POV database), but, remember, it could contain data from multiple different databases if desired, so in production you likely want to name this database something more generic.
  10. For Additional Connection String Options, you may need to specify something here related to proxies depending on how your Snowflake is set up.
  11. You now need to decide whether to do an automated installation. Immuta can automatically install the necessary procedures, functions, and system accounts into your Snowflake account if you provide privileged credentials (described in the next step). These credentials will not be stored or saved by Immuta. However, if you are not comfortable providing these credentials, you can manually run the provided bootstrap script.
  12. Select Automatic or Manual depending on your decision above.

    • Automatic:

      1. Enter the username (when performing an automated installation, the credentials provided must have the ability to both CREATE databases and CREATE, GRANT, REVOKE, and DELETE roles.)
      2. Enter the password.
      3. You can use a key pair if required.
      4. For role, considering this user must be able to both CREATE databases and CREATE, GRANT, REVOKE, and DELETE roles, make sure you enter the appropriate role (a rough sketch of the corresponding grants follows this list).
      5. Click Test Snowflake Connection.
    • Manual:

      1. Download the bootstrap script.
      2. Enter a NEW user. This is the account that will be created; the bootstrap script will then populate it with the appropriate permissions.
      3. Please feel free to inspect the bootstrap script for more details.
      4. Enter a password for that NEW user.
      5. You can use a key pair if required.
      6. Click Test Snowflake Connection.
      7. Once the connection is successful, click Save in the bottom left of the Configuration screen. This may take a little while to run.
      8. Run the bootstrap script in Snowflake.
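
For reference, the privileges described for the setup credentials above correspond roughly to account-level grants like the following. This is a hypothetical sketch using a role named IMMUTA_SETUP and a placeholder user; the bootstrap script and the Immuta documentation are authoritative.

    -- Hypothetical sketch of the privileges the setup credentials need; names are placeholders.
    CREATE ROLE IF NOT EXISTS IMMUTA_SETUP;
    GRANT CREATE DATABASE ON ACCOUNT TO ROLE IMMUTA_SETUP;  -- create the IMMUTA_POV_SECURE database
    GRANT CREATE ROLE ON ACCOUNT TO ROLE IMMUTA_SETUP;      -- create (and therefore own and drop) Immuta's roles
    GRANT MANAGE GRANTS ON ACCOUNT TO ROLE IMMUTA_SETUP;    -- issue and revoke grants on Immuta's behalf
    GRANT ROLE IMMUTA_SETUP TO USER my_setup_user;          -- my_setup_user is a placeholder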

Congratulations, you have successfully configured the Immuta integration with Snowflake. Be aware that you can connect multiple Snowflake instances to Immuta.

Redshift

Redshift RA3 Instance Type

You must use a Redshift RA3 instance type because Immuta requires cross-database views, which are only supported in Redshift RA3 instance types.

  1. Log in to Immuta.
  2. Click the App Settings icon in the left sidebar (the wrench).
  3. Under the Configuration menu on the left, click Native Integrations.
  4. Click the + Add Native Integration button and select Redshift.
  5. Enter the Redshift host.
  6. Enter the Redshift port.
  7. Enter the Immuta Database: immuta_pov_secure. This is the database name where all the secure schemas and views Immuta creates will be stored (you’ll learn more about this in the Query Your Data Guide). That is why we named it immuta_pov_secure (since the original data is in the immuta_pov database), but, remember, it could contain data from multiple different databases if desired, so in production you likely want to name this database something more generic.
  8. You now need to decide whether to do an automated install. Immuta can automatically install the necessary procedures, functions, and system accounts into your Redshift account if you provide privileged credentials. These credentials will not be stored or saved by Immuta. However, if you are not comfortable providing these credentials, you can manually run the provided bootstrap script. Please ensure you enter the username and password that were set in the bootstrap script.
  9. Select Automatic or Manual depending on your decision above.

    • Automatic:

      1. Enter the initial database. This should be a database that already exists; it doesn’t really matter which. Immuta simply needs this because you must include a database when connecting to Redshift.
      2. Enter the username (this must be a user that can create databases, users, and modify grants).
      3. Enter the password.
    • Manual:

      1. Download the bootstrap script.
      2. Enter a NEW user. This is the account that will be created; the bootstrap script will then populate it with the appropriate permissions.
      3. Please feel free to inspect the bootstrap script for more details.
      4. Enter a password for that NEW user.
      5. Click Test Redshift Connection.
      6. Once the connection is successful, click Save in the bottom left of the Configuration screen. This may take a little while to run.
      7. Run the bootstrap scripts in Redshift.

Congratulations, you have successfully configured the Immuta integration with Redshift. Be aware that you can connect multiple Redshift instances to Immuta.

Synapse

  1. Log in to Immuta.
  2. Click the App Settings icon in the left sidebar (the wrench).
  3. Under the Configuration menu on the left, click Native Integrations.
  4. Click the + Add Native Integration button and select Azure Synapse Analytics.
  5. Enter the Synapse Analytics host (this should come from the SQL dedicated pool).
  6. Enter the Synapse Analytics port.
  7. Enter the Immuta Database. This should be a database that already exists; this is where Immuta will create the schemas that contain the secure views that will be generated. In our case, that should be immuta_pov.
  8. Enter the Immuta Schema: pov_data_secure. This is the schema name where all the secure views Immuta creates will be stored (you’ll learn more about this in the Query Your Data Guide). That is why we named it pov_data_secure (since the original data is in the pov_data schema), but, remember, it could contain data from multiple different schemas if desired, so in production you likely want to name this schema something more generic.
  9. Add any additional connection string options.
  10. Since Synapse does not support array/json primitives, Immuta must store user attribute information using delimiters. If you expect any of the delimiter characters to appear in user profile values, update the delimiters accordingly. (It’s likely you don’t.)

You now need to decide if you want to do an automated installation or not. Immuta can automatically install the necessary procedures, functions, and system accounts into your Azure Synapse Analytics account if you provide privileged credentials. These credentials will not be stored or saved by Immuta. However, if you do not feel comfortable providing these credentials, you can manually run the provided bootstrap script (initial database) and bootstrap script.

  1. Select Automatic or Manual depending on your decision above.

    • Automatic:

      1. Enter the username. (We recommend using the system account you created associated to the workspace.)
      2. Enter the password.
    • Manual:

      1. Download both bootstrap scripts.
      2. Enter a NEW user. This is the account that will be created; the bootstrap scripts will then populate it with the appropriate permissions.
      3. Please feel free to inspect the bootstrap scripts for more details.
      4. Enter a password for that NEW user.
      5. Click Test Azure Synapse Analytics Connection.
      6. Once the connection is successful, click Save in the bottom left of the Configuration screen. This may take a little while to run.
      7. Run the bootstrap scripts in Synapse Analytics.

Congratulations, you have successfully configured the Immuta integration with Azure Synapse Analytics. Be aware that you can connect multiple Azure Synapse Analytics instances to Immuta.

Starburst (Trino)

  1. Log in to Immuta.
  2. Click the App Settings icon in the left sidebar (the wrench).
  3. Under the Configuration menu on the left, click Native Integrations.
  4. Click the + Add Native Integration button and select Trino.
  5. To connect a Trino cluster to Immuta, you must install the Immuta Plugin and register an Immuta Catalog. For Starburst clusters, the plugin comes installed, so you just need to register an Immuta Catalog. The catalog configuration needed to connect to this instance of Immuta is displayed in this section of the App Settings page. Copy that information to configure the plugin.
  6. Should you want Starburst (Trino) queries to be audited, you must configure an Immuta Audit Event Listener. The event listener configuration needed for the audit listener is displayed below the catalog configuration. The catalog name should match the name of the catalog associated with the Immuta connector configuration displayed above. Copy that information to configure the Audit Event Listener.
  7. Click Save in the bottom left of the Configuration screen. This may take a little while to run.
  8. Go to Starburst (Trino) and configure the plugin and audit listener.

Congratulations, you have successfully configured the Immuta integration with Starburst (Trino). Be aware that you can connect multiple Starburst (Trino) instances to Immuta.

4 - Register the Data with Immuta

Assumptions: Part 4 assumes your user has the following permissions in Immuta (note you should have these by default if you were the initial user on the Immuta installation):

  • CREATE_DATA_SOURCE: in order to register the data with Immuta
  • GOVERNANCE: in order to create a custom tag to tag the data tables with

These steps are captured in our first walkthrough under the Scalability & Evolvability theme: Schema monitoring and automatic sensitive data discovery. Please do that walkthrough to register the data and complete your data setup. Make sure you come back here to complete Part 5 below after doing so!

5 - Create a Subscription Policy to Open the Data to Everyone

Assumptions: Part 5 assumes your user has the following permissions in Immuta (note you should have these by default if you were the initial user on the Immuta install):

  • GOVERNANCE: in order to build policy against any table in Immuta OR
  • “Data Owner” of the registered tables from Part 4 without GOVERNANCE permission. (You likely are the Data Owner and have GOVERNANCE permission.)

Only do this part if you created the non-admin user in Part 3 (it is highly recommended you do that). If you did, you must give them access to the data as well. Immuta has what are called subscription policies; these are what control access to tables - you may think of them as table GRANTs.
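
If the GRANT comparison helps, an Allow Anyone policy on these tables is conceptually similar to a blanket grant like the one below. This is purely an analogy, shown in Snowflake syntax with an assumed pov_data schema: Immuta enforces subscription policies itself, and you never run such a statement.

    -- Analogy only; Immuta manages access itself and you do not run this.
    GRANT SELECT ON ALL TABLES IN SCHEMA IMMUTA_POV.POV_DATA TO ROLE PUBLIC;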

To get things going, let’s simply open those tables you created to anyone:

  1. Click the Policy icon in the left sidebar of the Immuta console.
  2. Click the Subscription Policies tab at the top.
  3. Click + Add Subscription Policy.
  4. Name the policy Open Up POV Data.
  5. For How should this policy grant access? select Allow Anyone.
  6. For Where should this policy be applied?, select On Data Sources.
  7. Select tagged for the circumstance (make sure you pick “tagged” and not “with columns tagged”).
  8. Type in Immuta POV for the tag. (Remember, this was the tag you created in the Schema Monitoring and Automatic Sensitive Data Discovery walkthrough under Part 4 above.) Note that if you are a Data Owner of the tables without GOVERNANCE permission, the policy will be automatically limited to the tables you own.
  9. Click Create Policy and then Activate Policy.

That will allow anyone access to those tables you created. We’ll come back to subscription policies later to learn more.

Next Steps

Return to the POV Guide to move on to your next topic.