Overview
This guide walks you through creating an import from an active connection. Imports pull data from your connected data warehouses or data lakes into Permutive, where it can be used for audience building, targeting, and activation. Each import is configured with a specific data type and column mapping that determines how Permutive processes the incoming data.
Prerequisites:
- An active connection (see guides for BigQuery, Snowflake, Amazon S3, or Google Cloud Storage)
- A table in your source that contains the data you want to import
- Knowledge of which data type best matches your use case (see Import Data Types below)
Import Data Types
Before creating an import, determine which data type matches the data you want to bring into Permutive. Each data type serves a different purpose and requires different column mappings.
| Data Type | Purpose | Use Case |
|---|---|---|
| User Profile | Static user attributes | Demographics, subscription tiers, CRM segments, preference data |
| User Activity | Time-stamped events | Purchase history, content interactions, conversion events |
| User Segments | Segment membership lists | Pre-built audience segments, partner segments, CRM lists |
| Identity | User identity mappings | Linking user IDs across systems, cross-device identity resolution |
| Group | Household or group memberships | Household graphs, account-level groupings, shared identity relationships |
Step 1: Start the Import Wizard
Navigate to Imports
In the Permutive Dashboard, go to Connectivity > Imports and click Create Import.
Enter an Import Name
Provide a descriptive name for your import. Choose a name that identifies the data source and purpose (e.g., “BigQuery - Purchase History”, “Snowflake - CRM Profiles”).
Select the Source Type
Choose the source platform your data is stored in (e.g., Google BigQuery, Snowflake, Amazon S3, Google Cloud Storage).
Step 2: Select Schema and Table
Choose the Schema
Select the schema (dataset or database schema) that contains the table you want to import. The available schemas are discovered from your active connection.
Choose the Table
Select the table you want to import data from. Permutive displays the tables discovered within the selected schema.
If you don’t see an expected table, click Resync with source to refresh the list of available schemas and tables from your source platform. This is useful if tables have been added since the connection was created.
Step 3: Select the Data Type
Choose the data type that matches the structure and purpose of your data. The data type you select determines which columns you’ll need to map in the next step.
- User Profile — For importing static user attributes such as demographics, subscription tiers, or CRM segments. Use this when your data describes properties of individual users that don’t change frequently.
- User Activity — For importing time-stamped event data such as purchase history, content interactions, or conversion events. Use this when your data contains records of actions users have taken, each associated with a timestamp.
- User Segments — For importing pre-built segment membership lists. Use this when your data maps users to segment codes that represent audience groupings.
- Identity — For importing identity mappings that link different user identifiers together. Use this to enrich Permutive’s Identity Graph with cross-system or cross-device identity relationships.
- Group — For importing household or group membership data. Use this to associate users with groups such as households, accounts, or other shared identity structures.
Step 4: Map Your Columns
After selecting your data type, map the columns from your source table to the fields Permutive expects. Each data type has a set of required fields and optional attribute fields.
User Profile
| Field | Required | Description |
|---|---|---|
| User ID | Yes | The column containing unique user identifiers |
| Attributes | No | Additional columns to import as user properties (e.g., age, subscription tier, region) |
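To make the User Profile mapping concrete, here is a minimal sketch of how a source row might be mapped to a User ID plus attribute fields. The column names (`customer_id`, `age_band`, `tier`, `region`) are hypothetical examples, not Permutive requirements:

```python
# Hypothetical source row from a warehouse table used for a User Profile import.
source_row = {"customer_id": "u-1", "age_band": "25-34", "tier": "premium", "region": "EMEA"}

# Illustrative column mapping: one required User ID column, optional attribute columns.
mapping = {"User ID": "customer_id", "Attributes": ["age_band", "tier", "region"]}

# The mapped result: a user identifier plus a dictionary of user properties.
profile = {
    "user_id": source_row[mapping["User ID"]],
    "attributes": {col: source_row[col] for col in mapping["Attributes"]},
}
```

Every attribute column you map becomes a user property available for audience building.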
User Activity
| Field | Required | Description |
|---|---|---|
| User ID | Yes | The column containing unique user identifiers |
| User ID Type | Yes | The type of identifier used (e.g., Permutive User ID, email hash) |
| Cursor | Yes | A monotonically increasing column used for incremental sync (typically a timestamp) |
| Attributes | No | Additional columns to import as event properties (e.g., product category, purchase amount) |
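The Cursor field drives incremental sync: on each run, only rows whose cursor value is greater than the last synced value are picked up, and the highest value seen becomes the new high-water mark. A minimal sketch of that logic, using hypothetical column names (`user_id`, `purchased_at`, `amount`):

```python
from datetime import datetime, timezone

# Hypothetical rows from a source table mapped as a User Activity import.
rows = [
    {"user_id": "u-1", "purchased_at": datetime(2024, 1, 1, tzinfo=timezone.utc), "amount": 19.99},
    {"user_id": "u-2", "purchased_at": datetime(2024, 1, 2, tzinfo=timezone.utc), "amount": 5.00},
    {"user_id": "u-1", "purchased_at": datetime(2024, 1, 3, tzinfo=timezone.utc), "amount": 42.50},
]

def incremental_sync(rows, last_cursor):
    """Return rows newer than the stored cursor, plus the new high-water mark."""
    new_rows = [r for r in rows if last_cursor is None or r["purchased_at"] > last_cursor]
    next_cursor = max((r["purchased_at"] for r in rows), default=last_cursor)
    return new_rows, next_cursor

first_batch, cursor = incremental_sync(rows, None)    # first run: all rows
later_batch, cursor = incremental_sync(rows, cursor)  # later run: only new rows
```

This is why the cursor column must be monotonically increasing: a row whose cursor value is at or below the high-water mark will never be picked up on a later run.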
User Segments
| Field | Required | Description |
|---|---|---|
| User ID | Yes | The column containing unique user identifiers |
| User ID Type | Yes | The type of identifier used |
| Segment | Yes | The column containing segment codes (string or array of strings) |
| Cursor | Yes | A monotonically increasing column used for incremental sync |
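Because the Segment column may contain either a single string or an array of strings, both shapes map to the same idea: a list of segment codes per user. A small sketch, with hypothetical column names and segment codes:

```python
def normalize_segments(value):
    """Accept a single segment code or a list of codes and return a flat list,
    mirroring the string-or-array shape of the Segment column."""
    if isinstance(value, str):
        return [value]
    return list(value)

# Hypothetical source rows; column names and codes are illustrative only.
rows = [
    {"user_id": "u-1", "segments": "sports_fans"},
    {"user_id": "u-2", "segments": ["sports_fans", "crm_gold"]},
]
memberships = {r["user_id"]: normalize_segments(r["segments"]) for r in rows}
```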
Identity and Group
For detailed column mapping and setup instructions for Identity and Group (household) imports, see Importing User Group Memberships.
Step 5: Save the Import
Once your column mappings are configured, click Save to create the import. The import will begin processing and will sync data from your source on its next scheduled run. Imports sync on a daily schedule, pulling new and updated data from your source table into Permutive.
Managing Imports
Viewing Import Status
After creating an import, you can monitor it from the Imports page. Each import displays:
- Name — The name you provided during creation
- Source — The platform and connection used
- Data Type — The type of data being imported
- Status — Whether the import is active, processing, or has encountered errors
- Last Sync — When data was last successfully imported
Resyncing Schema
If your source table has new columns added since the import was created, you can resync the schema to discover them:
- Go to Connectivity > Imports and click Create Import
- Enter a name, select the source, and select the connection
- Click Resync with source to refresh the available schemas, tables, and columns
Schema resync discovers new columns but does not automatically update existing imports. If you need to include new columns, you’ll need to create a new import with the updated column mapping.
Deleting an Import
You can delete an import when it’s no longer needed. Deleting an import has the following effects:
- Data syncing stops immediately — No further data will be pulled from the source table
- Data retention — For non-composable deployments, a 30-day time-to-live (TTL) is applied to the imported data. After 30 days, the data is removed from Permutive. For composable deployments, the data remains in your cloud environment
- Cohort Builder impact — The import is removed from the Cohort Builder as an available data source. Any existing cohorts that reference the deleted import will display a warning indicating the import is no longer active
- Audience evaluation — Cohort expressions that depend on the deleted import will stop evaluating against the imported data
To delete an import:
- Navigate to Connectivity > Imports
- Select the import you want to delete
- Click Delete Import and confirm the action
Troubleshooting
Import fails to process
If your import fails during initial processing:
- Verify the source connection is still active
- Check that the selected table exists and contains data
- Ensure your column mappings match the actual column names and data types in the source table
Missing columns during mapping
If expected columns don’t appear during the column mapping step:
- The schema may not have been refreshed since the columns were added
- The column data type may not be supported
Cursor column not advancing
If your User Activity or other cursor-based import isn’t picking up new records:
- The cursor column may not be monotonically increasing
- Records may have been updated (changing the cursor value) rather than inserted as new rows
- The cursor column may contain null values
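A quick way to spot the first and third problems is to read the cursor column in row order and check it directly. A minimal diagnostic sketch (the function name and return shape are illustrative, not part of Permutive):

```python
def diagnose_cursor(values):
    """Flag the cursor problems listed above: null values and
    non-increasing values. `values` is the cursor column in row order."""
    issues = []
    if any(v is None for v in values):
        issues.append("cursor column contains null values")
    non_null = [v for v in values if v is not None]
    if any(b < a for a, b in zip(non_null, non_null[1:])):
        issues.append("cursor column is not monotonically increasing")
    return issues

print(diagnose_cursor([1, 2, 2, 3]))  # healthy cursor: no issues
print(diagnose_cursor([1, None, 3]))  # nulls detected
print(diagnose_cursor([3, 1, 2]))     # values go backwards
```

Updated-in-place rows are harder to detect this way; compare row counts between syncs if you suspect updates are overwriting cursor values.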
Data not appearing in Cohort Builder
If imported data isn’t available in the Cohort Builder:
- The import may still be processing its first sync
- The column mapping may be incorrect
- The data type selected may not match the structure of your data