
Overview

Permutive Routing makes it easy to load all of your first-party event data from web and mobile into Snowflake. The Permutive Snowflake Router gives you access to your raw event data, allowing you to run your own analysis or integrate with your existing data pipelines. With routing enabled, Permutive writes event data to your Snowflake instance continually throughout the day, in roughly 5-minute or 500MB batches.
Prerequisites:
  • Active Snowflake account with administrative permissions
  • Ability to create databases, schemas, users, roles, and storage integrations
  • Access to a Snowflake warehouse

Running Snowflake SQL

This guide contains Snowflake SQL commands to run on your account. You can execute these by creating a new empty worksheet in Snowflake and running each step from there.
You can run specific parts of the SQL by highlighting the line and clicking Run in the top right. This will execute only the highlighted line.

Setup Steps

Step 1: Create a Database and Schema

Permutive requires full access to a dedicated database and schema, as we may need to re-create some resources during setup. This database and schema will store all events, aliases, and segments from your Permutive account. You can name them anything you like, though we recommend a database called PERMUTIVE and a schema called DATA.
Create the database:
CREATE DATABASE IF NOT EXISTS "PERMUTIVE"
  COMMENT = 'Database used for routing data from Permutive';
Create the schema:
CREATE SCHEMA IF NOT EXISTS "PERMUTIVE"."DATA"
  COMMENT = 'Schema used for routing data from Permutive';
Step 2: Create a New Snowflake User

Create a new user within your Snowflake instance to allow Permutive to write event data. You can name this user anything, though we recommend permutive_routing.
Snowflake Routing requires key-pair authentication. Password authentication is not supported. Permutive will generate a public/private key pair and provide you with the public key to attach to this user.
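As a sketch, creating the user can look like the following (permutive_routing is the recommended name; adjust it if you choose another):
```sql
-- Create a dedicated user for Permutive to write event data
CREATE USER IF NOT EXISTS permutive_routing
  COMMENT = 'User used by Permutive to load data into Snowflake';
```
The public key and default role are attached to this user in the later steps.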
Step 3: Attach the Public Key to the User

Contact Permutive to obtain the public key for your integration. Once you have the key, attach it to the user you created. Follow Snowflake’s key-pair authentication documentation (steps 4-5) to associate the public key with the user.
When setting the public key, remove the header and footer lines (-----BEGIN PUBLIC KEY----- and -----END PUBLIC KEY-----) and concatenate into a single string without line breaks.
Once the public key is attached, Permutive can log into your Snowflake instance to continue the setup process.
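For example, once you have reduced the key to a single-line string, attaching it looks roughly like this (the key value below is a truncated placeholder, not a real key):
```sql
-- Attach Permutive's public key to the user
-- (replace the placeholder with the single-line key body Permutive provides)
ALTER USER permutive_routing
  SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqh...';
```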
Step 4: Create a Role for the User

Permutive requires permissions to create tables, stages, pipes, and a storage integration.
Create the role:
CREATE ROLE permutive_routing_role
   COMMENT = 'Role used by Permutive to load data into Snowflake';
Grant permissions to the role:
GRANT USAGE
  ON DATABASE permutive
  TO ROLE permutive_routing_role;

GRANT USAGE, CREATE TABLE, CREATE STAGE, CREATE PIPE
  ON SCHEMA permutive.data
  TO ROLE permutive_routing_role;

GRANT SELECT
  ON FUTURE TABLES IN SCHEMA permutive.data
  TO ROLE PUBLIC;

GRANT OPERATE
  ON FUTURE PIPES IN SCHEMA permutive.data
  TO ROLE permutive_routing_role;

GRANT CREATE INTEGRATION
  ON ACCOUNT
  TO ROLE permutive_routing_role;

Permissions Reference

Object Type     Object           Permission Required                              Granted to Role
Database        permutive        USAGE                                            permutive_routing_role
Schema          permutive.data   USAGE, CREATE TABLE, CREATE STAGE, CREATE PIPE   permutive_routing_role
Future Tables   permutive.data   SELECT                                           PUBLIC
Future Pipes    permutive.data   OPERATE                                          permutive_routing_role
Account         (account-level)  CREATE INTEGRATION                               permutive_routing_role
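To verify the grants after running the statements above, you can inspect the role directly:
```sql
-- List everything granted to the Permutive role
SHOW GRANTS TO ROLE permutive_routing_role;
```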
Step 5: Grant the Role to the User

Associate the role with the user you created. If you used a different username than permutive_routing, replace it in the SQL below.
GRANT ROLE permutive_routing_role
  TO USER permutive_routing;

ALTER USER permutive_routing
  SET default_role = permutive_routing_role;
Step 6: Grant Access to a Warehouse

Permutive requires access to a warehouse to select copy history data from PERMUTIVE.DATA.INFORMATION_SCHEMA, which allows us to monitor the data going into your system. We only require the smallest warehouse possible, but will use any warehouse size you provide.
Replace <WAREHOUSE_NAME> with the warehouse you want to grant access to. Your warehouse list can be found in Admin > Warehouses in Snowflake.
GRANT USAGE
   ON WAREHOUSE <WAREHOUSE_NAME>
   TO ROLE permutive_routing_role;
Step 7: Notify Permutive

Email [email protected] with the following details:
  • User/Username - The user created for Permutive
  • Role - The role name (e.g., permutive_routing_role)
  • Warehouse - The warehouse name you granted access to
  • Database - The database name (e.g., PERMUTIVE)
  • Schema - The schema name (e.g., DATA)
  • Account URL - Found in Admin > Account in Snowflake. Click the paperclip icon next to your account name to copy it. Format: https://<organization-id>-<account-id>.snowflakecomputing.com
Permutive will complete the storage integration and Snowpipe configuration.

What Happens After Setup

Once routing is active:
  1. Tables are created automatically for events, aliases, and segment metadata
  2. Events stream via Snowpipe in approximately 5-minute or 500MB batches
  3. All event types appear in a single EVENTS table with an EVENTNAME column to distinguish types
  4. Schemas update automatically when you add new event types or properties

Data Included in the Export

Events

All event data is uploaded to a table called EVENTS. All event types are written to this table; use the EVENTNAME column to identify the event type.
Column Name       Column Type
TIME              Timestamp
ORGANIZATIONID    String
PROJECTID         String
VIEWID            String
SESSIONID         String
USERID            String
EVENTID           String
EVENTNAME         String
SEGMENTS          Array
COHORTS           Array
PROPERTIES        Object
Schema Updates: When a new event type is added, it automatically appears in this table under the appropriate EVENTNAME. When the properties of an event are added or changed, the PROPERTIES object automatically reflects those changes.
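As a sketch, once events are flowing you can break down volume by event type using the EVENTNAME column (database, schema, and column names as described above; the 24-hour window is illustrative):
```sql
-- Count events by type over the last 24 hours
SELECT EVENTNAME, COUNT(*) AS EVENT_COUNT
FROM PERMUTIVE.DATA.EVENTS
WHERE TIME >= DATEADD(day, -1, CURRENT_TIMESTAMP())
GROUP BY EVENTNAME
ORDER BY EVENT_COUNT DESC;
```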

Aliases

Alias data is written to a table called SYNC_ALIASES.
Column Name    Column Type
TIME           Timestamp
EVENT_TYPE     String
PERMUTIVE_ID   String
WORKSPACE_ID   String
ID             String
TAG            String
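For example, to look up the external identifiers synced for a single Permutive ID (the filter value is a placeholder):
```sql
-- Find external IDs and their tags for one Permutive user
SELECT TAG, ID, TIME
FROM PERMUTIVE.DATA.SYNC_ALIASES
WHERE PERMUTIVE_ID = '<permutive-user-id>';
```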

Segment Metadata

Segment metadata is written as a snapshot of the latest data, overwritten with each export.
Column Name    Column Type
INSERTED_AT    Timestamp
WORKSPACE_ID   String
NAME           String
TAGS           Array
METADATA       Object
NUMBER         String

Notable Behavior

Reporting Lag

Snowflake provides metadata regarding ingested data from external sources. Permutive leverages Snowpipe to ingest data from our platform into your Snowflake instance, which means this metadata is available in the COPY_HISTORY table. This table provides metadata such as the location data was ingested from, the amount of data ingested, any errors during ingestion, and other statistics.
Snowflake’s documentation states that the COPY_HISTORY table has a lag of up to two hours. This means Permutive’s internal alerting may be delayed by up to two hours. This is a result of Snowflake’s implementation and has not been observed to cause issues in practice.
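If you want to inspect the same ingestion metadata yourself, Snowflake exposes COPY_HISTORY as a table function in each database's INFORMATION_SCHEMA. A sketch, assuming the EVENTS table described below lives in PERMUTIVE.DATA:
```sql
-- Review Snowpipe load history for the EVENTS table over the last day
SELECT FILE_NAME, LAST_LOAD_TIME, ROW_COUNT, ERROR_COUNT
FROM TABLE(PERMUTIVE.INFORMATION_SCHEMA.COPY_HISTORY(
  TABLE_NAME => 'DATA.EVENTS',
  START_TIME => DATEADD(hour, -24, CURRENT_TIMESTAMP())
));
```
Note that results reflect the same up-to-two-hour lag described above.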

Next Steps