
Overview

This guide walks you through connecting your Snowflake data warehouse to Permutive so you can import data for audience building and activation. You’ll configure your Snowflake instance with the necessary permissions using a setup script, then create the connection in the Permutive dashboard.
Prerequisites:
  • A Snowflake account with SECURITYADMIN and ACCOUNTADMIN roles
  • Access to run SQL scripts in your Snowflake instance
  • Knowledge of the database, schema, and warehouse you want to connect

Step 1: Set Up Your Snowflake Instance

To connect Permutive to your Snowflake instance, you’ll run a script that creates a dedicated user and role with read-only permissions. Choose between password authentication and key pair authentication.
Password authentication is the most straightforward method.
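If you opt for key pair authentication instead, you attach a public key to the Permutive user after the setup script below has created it. A minimal sketch, assuming you have already generated an RSA key pair outside Snowflake (the key value shown is a truncated placeholder):
-- Register the public key for the Permutive user (placeholder value)
ALTER USER PERMUTIVE_USER SET RSA_PUBLIC_KEY='MIIBIjANBgkqh...';

-- Confirm the key was registered; RSA_PUBLIC_KEY_FP shows its fingerprint
DESCRIBE USER PERMUTIVE_USER;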

Run the Setup Script

Before running the script, replace the following placeholders:
  • <PASSWORD>: The password for PERMUTIVE_USER
  • <WAREHOUSE_NAME>: The warehouse Permutive will use to query data
  • <DATABASE_NAME>: The database you want to import
  • <SCHEMA_NAME>: The schema containing the tables you want to import
begin;

   -- create variables for role / user / password / warehouse / database / schema (object names must be uppercase)
   set role_name = 'PERMUTIVE_ROLE';
   set user_name = 'PERMUTIVE_USER';
   set user_password = '<PASSWORD>';
   set warehouse_name = '<WAREHOUSE_NAME>';
   set database_name = '<DATABASE_NAME>';
   set schema_name = '<SCHEMA_NAME>';

   -- change role to securityadmin for user / role steps
   use role securityadmin;

   -- create role for permutive
   create role if not exists identifier($role_name);

   -- create a user for permutive
   create user if not exists identifier($user_name)
   password = $user_password
   default_role = $role_name
   default_warehouse = $warehouse_name;

   -- grant the role to the permutive user
   grant role identifier($role_name) to user identifier($user_name);

   -- change role to accountadmin to grant permissions
   use role ACCOUNTADMIN;

   -- grant permutive role access to warehouse
   grant usage on warehouse identifier($warehouse_name)
   to role identifier($role_name);

   -- grant permutive access to database
   grant usage on database identifier($database_name)
   to role identifier($role_name);

   use database identifier($database_name);

   -- add a statement like this one for each schema you want to have synced by permutive
   grant usage on schema identifier($schema_name) to role identifier($role_name);

   -- add statements granting select permissions
   grant select on all TABLES in schema identifier($schema_name) to role identifier($role_name);

   -- allow Permutive to see future tables within this Snowflake schema
   grant select on future TABLES in schema identifier($schema_name) to role identifier($role_name);

 commit;
This script will:
  • Create a new role: PERMUTIVE_ROLE
  • Create a new user: PERMUTIVE_USER
  • Grant read access to the specified database and schema
  • Grant read access to all current and future tables within the schema
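To verify the setup, you can inspect the grants Snowflake recorded for the new role and user:
show grants to role PERMUTIVE_ROLE;
show grants to user PERMUTIVE_USER;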

Step 2: Configure Network Policy (Optional)

Skip this section if you’re not using Network Policies to control traffic to your Snowflake instance.
If you use Network Policies, add Permutive’s IP addresses to your allowlist:
34.76.252.197
34.77.88.160
34.77.189.31
34.77.208.157
34.77.250.131
34.79.146.181
35.187.87.199
35.187.167.110
35.195.246.181
35.205.59.235
104.199.7.80
104.199.24.198

Create a User-Based Network Policy

We recommend creating a Network Policy attached to the Permutive user:
CREATE OR REPLACE NETWORK POLICY ALLOW_PERMUTIVE_POLICY ALLOWED_IP_LIST = (
    '34.76.252.197',
    '34.77.88.160',
    '34.77.189.31',
    '34.77.208.157',
    '34.77.250.131',
    '34.79.146.181',
    '35.187.87.199',
    '35.187.167.110',
    '35.195.246.181',
    '35.205.59.235',
    '104.199.7.80',
    '104.199.24.198'
) BLOCKED_IP_LIST = ();
Attach the policy to the Permutive user:
ALTER USER PERMUTIVE_USER
SET NETWORK_POLICY = ALLOW_PERMUTIVE_POLICY;
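To confirm the policy is in place, you can check its definition and the user-level parameter:
DESCRIBE NETWORK POLICY ALLOW_PERMUTIVE_POLICY;
SHOW PARAMETERS LIKE 'NETWORK_POLICY' IN USER PERMUTIVE_USER;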

Step 3: Create the Connection in Permutive

1. Select Snowflake from the Catalog

In the Permutive dashboard, go to Connectivity > Catalog and select Snowflake.
2. Enter Connection Details

Fill in the following fields:
  • Database: The database name from your setup script (use UPPERCASE)
  • Host: Your Snowflake account URL. Find this under Admin > Accounts > View Account Details and copy the “Account/Server URL” (without the https:// prefix)
  • Port: Leave as the default, 443
  • User: The user created by the script (PERMUTIVE_USER)
  • Role: The role created by the script (PERMUTIVE_ROLE)
  • Password: The password from your script (for password authentication only)
  • Warehouse: The Snowflake compute warehouse to use
Snowflake uses UPPERCASE for all database, schema, user, and role names. Ensure you use uppercase values when entering connection details.
3. Save the Connection

Click Save to create the connection. It will appear on your Connections page with a “Processing” status while Permutive validates the credentials.

Step 4: Create an Import

Once your connection is active, you can create imports. Permutive uses incremental updates based on a cursor column to sync data efficiently.
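Conceptually, each sync reads only the rows whose cursor value is greater than the highest value seen in the previous run. The query below is an illustrative sketch with hypothetical table and column names, not Permutive’s exact query:
-- Illustrative only: fetch rows added since the last synced cursor value
SELECT *
FROM MY_SCHEMA.MY_TABLE
WHERE UPDATED_AT > '2024-01-01 00:00:00'::TIMESTAMP_NTZ  -- last synced cursor value
ORDER BY UPDATED_AT;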

Cursor Column Requirements

When creating an import, you’ll need to select a cursor column. This column should be:
  • Monotonically increasing over time
  • Not updated after creation
  • Preferably unique or high-cardinality
  • Not nullable
Good cursor examples:
  • CREATED_AT or UPDATED_AT timestamp columns
Supported cursor data types:
  • TIMESTAMP_TZ, TIMESTAMP_NTZ, TIME, DATE
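Before choosing a cursor, it can help to check a candidate column for NULLs and low cardinality. The table and column names below are hypothetical:
-- non_null_rows should equal total_rows; distinct_values should be close to total_rows
SELECT
    COUNT(*)                   AS total_rows,
    COUNT(UPDATED_AT)          AS non_null_rows,
    COUNT(DISTINCT UPDATED_AT) AS distinct_values
FROM MY_SCHEMA.MY_TABLE;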
1. Navigate to Imports

Go to Connectivity > Imports and click Create Import.
2. Configure the Import

  1. Select Snowflake as the source type
  2. Select your Snowflake connection
  3. Choose the schema and table to import
  4. Select your cursor column
  5. Continue with the standard import configuration
For more details on configuring imports, see Imports.

Supported Data Types

Permutive supports the following Snowflake data types:
  • Numeric: NUMBER, INTEGER, FLOAT, DOUBLE
  • String: VARCHAR, TEXT, STRING
  • Date/Time: DATE, TIME, TIMESTAMP_NTZ, TIMESTAMP_TZ, TIMESTAMP_LTZ
  • Logical: BOOLEAN
Semi-structured data types (OBJECT and ARRAY) are not supported. If your tables contain these types, you must transform the data into a flattened format before Permutive can sync it.

Handling Semi-Structured Data

If your source tables contain OBJECT or ARRAY columns, create a View that flattens the data:
CREATE OR REPLACE VIEW FLATTENED_EVENTS AS
SELECT
    t.USER_ID,
    t.EVENT_TIMESTAMP,
    f.VALUE:item_id::VARCHAR AS ITEM_ID,
    f.VALUE:quantity::NUMBER AS QUANTITY
FROM
    RAW_EVENTS t,
    LATERAL FLATTEN(INPUT => t.ITEM_ARRAY) f;
Configure Permutive to sync from the view instead of the raw table.

Security Best Practices

  • Use a dedicated read-only role and user per environment
  • Limit grants to only the schemas you need
  • Consider multi-factor policies for administrative users

Granting Access to Multiple Schemas

To grant access to additional schemas, repeat the following for each schema, using the role created by the setup script:
grant usage on schema DATABASE_A.SCHEMA_X to role PERMUTIVE_ROLE;
grant select on all tables in schema DATABASE_A.SCHEMA_X to role PERMUTIVE_ROLE;
grant select on future tables in schema DATABASE_A.SCHEMA_X to role PERMUTIVE_ROLE;

Troubleshooting

If you receive authentication errors:
  • Verify the account locator/region in your host URL
  • Double-check the username and password
  • Ensure you’re using UPPERCASE for database, schema, user, and role names
Solution: Re-check your connection details and ensure they match exactly what was created by the setup script.
If you receive permission errors:
  • Confirm USAGE grants on the warehouse, database, and schema
  • Verify SELECT grants on tables/views (including future tables)
Solution: Re-run the permission grants from the setup script or add missing grants manually. After updating permissions, run a schema resync in Permutive to refresh the available tables.
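To see which grants are actually in place on a schema, including grants that will apply to future tables, you can run:
SHOW GRANTS ON SCHEMA <DATABASE_NAME>.<SCHEMA_NAME>;
SHOW FUTURE GRANTS IN SCHEMA <DATABASE_NAME>.<SCHEMA_NAME>;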
If you receive cursor-related errors:
  • Ensure the cursor column exists in the table
  • Verify it’s one of the supported data types
  • Avoid using nullable columns as cursors
Solution: Choose a different cursor column that meets the requirements.
If rows are missing or being re-synced:
  • This can occur if multiple rows share the same cursor value
Solution: If needed, reset the connection state in Permutive and re-run the import. Consider using a higher-cardinality cursor column.
If connections are being blocked:
  • Ensure Permutive’s IP addresses are allowlisted in your Network Policy
Solution: Add Permutive’s IP addresses to your Network Policy (see Step 2).
If the public key fingerprints don’t match:
Solution: Reassign the public key to the user:
ALTER USER PERMUTIVE_USER SET RSA_PUBLIC_KEY='MIIBIjANBgkqh...';

FAQ

Which Snowflake editions are supported?
All standard Snowflake editions are supported.
Can Permutive import from a view instead of a table?
Yes, with USAGE on the schema and SELECT on the view.
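For example, to expose the FLATTENED_EVENTS view from the earlier example to the Permutive role (names are illustrative):
grant usage on schema <DATABASE_NAME>.<SCHEMA_NAME> to role PERMUTIVE_ROLE;
grant select on view <DATABASE_NAME>.<SCHEMA_NAME>.FLATTENED_EVENTS to role PERMUTIVE_ROLE;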
What if my table has no suitable cursor column?
If a table lacks a monotonically increasing column such as a timestamp, it may not be suitable for incremental sync. You would need to add one to the table schema before importing.

Next Steps