Narrative’s architecture is designed around a core principle: data stays in place. Rather than moving data to queries, queries move to data. This approach enables collaboration across organizations while maintaining data residency and security requirements.

The principle: Queries go to data

Traditional data sharing requires copying data between systems. Narrative inverts this model:
| Traditional approach | Narrative approach |
| --- | --- |
| Copy data to a shared location | Leave data in place |
| Queries run against copies | Queries run against source data |
| Data freshness depends on sync frequency | Data is always current |
| Multiple copies increase security surface | Single source reduces exposure |
When you query data through Narrative, the control plane sends query instructions to your data plane. The data plane executes the query locally and returns only the results.
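This round trip can be sketched with a local database standing in for the data plane. This is a minimal illustration of the principle only; `execute_in_data_plane` and the SQLite store are hypothetical stand-ins, not Narrative's implementation:

```python
import sqlite3

def execute_in_data_plane(compiled_sql: str, db: sqlite3.Connection) -> dict:
    """Run the compiled query locally; return only a status payload.

    Result rows are materialized inside the data plane (here, a local
    table). Nothing but metadata flows back to the control plane.
    """
    try:
        db.execute(compiled_sql)
        db.commit()
        return {"status": "complete"}
    except sqlite3.Error as exc:
        return {"status": "error", "message": str(exc)}

# Stand-in data plane: an in-memory SQLite database with source data.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE events (id INTEGER, kind TEXT)")
db.execute("INSERT INTO events VALUES (1, 'click'), (2, 'view')")

# The control plane's instruction materializes results locally.
status = execute_in_data_plane(
    "CREATE TABLE results AS SELECT * FROM events WHERE kind = 'click'", db
)
```

The caller sees only the status dictionary; the `results` table never crosses the boundary.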

What flows where

Understanding what crosses boundaries helps clarify the architecture:

From user to control plane

  • NQL query text
  • Authentication credentials
  • Request metadata

From control plane to data plane

  • Compiled SQL (transpiled for the target database)
  • Job metadata and parameters
  • Execution instructions

From data plane to control plane

  • Status updates and completion signals
  • Error messages and diagnostics
  • Data samples (when requested for viewing results)

What never leaves the data plane

  • Raw source data
  • Intermediate query results
  • Materialized view contents (stored locally)

Query data flow

All queries—including interactive queries run from the Query Editor—follow the same pattern as materialized views. Results are stored as a dataset in the data plane, not returned directly to the user.
User                Control Plane              Data Plane
  │                      │                         │
  │──── NQL Query ──────>│                         │
  │                      │                         │
  │                      │──── Compiled SQL ──────>│
  │                      │                         │
  │                      │                    ┌────┴────┐
  │                      │                    │ Execute │
  │                      │                    │ & Store │
  │                      │                    │(Dataset)│
  │                      │                    └────┬────┘
  │                      │                         │
  │                      │<──── Status ────────────│
  │                      │                         │
Interactive queries are implemented as materialized views with two automatic constraints:
  • A 24-hour retention policy that expires the results automatically
  • A row limit that caps the result size
This architecture ensures that even ad-hoc queries never move full result sets through the control plane—data stays in your data plane.
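The two constraints can be sketched as defaults applied when a view is created interactively. The function and the specific row-limit value are hypothetical (the source states only that a cap exists, not its size):

```python
from datetime import timedelta

def as_materialized_view(nql_query: str, interactive: bool) -> dict:
    """Model an interactive query as a materialized view with
    automatic retention and row-limit constraints."""
    view = {"query": nql_query, "retention": None, "row_limit": None}
    if interactive:
        view["retention"] = timedelta(hours=24)  # results expire automatically
        view["row_limit"] = 10_000               # hypothetical cap on result size
    return view

view = as_materialized_view("SELECT * FROM my_dataset", interactive=True)
```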

Materialized view data flow

Scheduled materialized view refreshes follow the same pattern: the control plane dispatches a refresh job, and results stay in the data plane:
Control Plane              Data Plane
     │                         │
     │──── Refresh Job ───────>│
     │                         │
     │                    ┌────┴────┐
     │                    │ Execute │
     │                    │  Query  │
     │                    └────┬────┘
     │                         │
     │                    ┌────┴────┐
     │                    │  Store  │
     │                    │ Results │
     │                    └────┬────┘
     │                         │
     │<──── Status ────────────│
     │                         │
The control plane receives only a completion status—never the actual data. Results are stored as a dataset within your data plane.

How you view query results

Since query results stay in the data plane, how do you actually see them? Through data sampling.
User                Control Plane              Data Plane
  │                      │                         │
  │── View Results ─────>│                         │
  │                      │                         │
  │                      │── Sample Request ──────>│
  │                      │                         │
  │                      │                    ┌────┴────┐
  │                      │                    │  Read   │
  │                      │                    │ 1000    │
  │                      │                    │  Rows   │
  │                      │                    └────┬────┘
  │                      │                         │
  │                      │<── Sample Data ─────────│
  │                      │                         │
  │<── Display Sample ───│                         │
  │                      │                         │
When you click to view query results in the UI or request results via the API:
  1. Sample job — The control plane requests a sample from the dataset in your data plane
  2. Row retrieval — Up to 1,000 rows are read from the stored results
  3. Sample storage — The sample is stored in the control plane for quick access
  4. Display — The sample appears in the UI or API response
This sample is a preview of your results, not the complete dataset. For full access to query results, use data exports or access the underlying dataset directly.
Samples are one of the few cases where actual data leaves your data plane. For governance implications, see Sample Data. To clear samples programmatically, see Managing Datasets.
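The sampling step amounts to a bounded read over the stored result set. A minimal sketch, with a plain list standing in for the dataset in the data plane:

```python
def sample_dataset(rows: list, limit: int = 1000) -> list:
    """Read up to `limit` rows from stored query results.

    This is one of the few flows where actual data leaves the data
    plane: the sample is cached in the control plane for display.
    """
    return rows[:limit]

# Stand-in for query results stored as a dataset in the data plane.
stored_results = [{"id": i} for i in range(5000)]
sample = sample_dataset(stored_results)
```

The sample is a fixed-size preview; the remaining rows stay in place and are reachable only via export or direct dataset access.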

Cross-data-plane queries

When a query references data in multiple data planes, the control plane coordinates execution:
  1. Query decomposition — The control plane breaks the query into subqueries, one per data plane
  2. Parallel execution — Subqueries execute independently in each data plane
  3. Result coordination — Results are combined as needed, with the control plane coordinating data movement between planes
The specific data movement depends on the query. For joins across data planes, some data may need to move between planes for the join to complete. The control plane optimizes this movement to minimize data transfer.
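The three coordination steps can be sketched as follows, with simple lists standing in for each plane's local data and a filter standing in for the per-plane subquery (all names here are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def run_subquery(plane: dict) -> list:
    # Each data plane evaluates its subquery against local data;
    # only the subquery's results leave the plane.
    return [x for x in plane["data"] if x % 2 == 0]

planes = [
    {"name": "plane-a", "data": [1, 2, 3, 4]},
    {"name": "plane-b", "data": [5, 6, 7, 8]},
]

# Step 2: subqueries execute independently, in parallel.
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(run_subquery, planes))

# Step 3: the control plane coordinates combining the partial results.
combined = sorted(x for part in partials for x in part)
```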

Data collaboration flow

When you share data with a partner organization, they query your data through the same architecture:
  1. Partner submits NQL query referencing your dataset
  2. Control plane verifies their permissions against your access rules
  3. Compiled query is sent to your data plane
  4. Your data plane executes and returns results
  5. Partner receives results through the control plane
Your raw data never leaves your data plane—only the query results authorized by your access rules.
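Step 2, the permission check, gates everything that follows: no query is compiled or dispatched unless the partner's access is verified. A minimal sketch of such a check (the rule structure is hypothetical):

```python
# Hypothetical access rules: which datasets each partner org may query.
ACCESS_RULES = {
    "partner-org": {"allowed_datasets": {"transactions"}},
}

def authorize(org: str, dataset: str) -> bool:
    """Verify a partner's query against the owner's access rules
    before any compiled SQL is sent to the data plane."""
    rules = ACCESS_RULES.get(org)
    return rules is not None and dataset in rules["allowed_datasets"]

granted = authorize("partner-org", "transactions")
denied = authorize("partner-org", "user_profiles")
```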

Data egress via connectors

While the architecture above keeps data within data planes, you often need to deliver data to external destinations for activation. Connectors enable delivery to platforms like advertising DSPs, cloud storage, and data warehouses.

How connectors work

Connectors are pre-built integrations that handle authentication, data formatting, and delivery to external platforms. The data flow for connector-based delivery:
Data Plane                  Connector                External Destination
    │                          │                            │
    │                          │                            │
    │──── Dataset ────────────>│                            │
    │                          │                            │
    │                     ┌────┴────┐                       │
    │                     │ Format  │                       │
    │                     │  Data   │                       │
    │                     └────┬────┘                       │
    │                          │                            │
    │                          │──── Formatted Data ───────>│
    │                          │                            │
    │                          │<──── Confirmation ─────────│
    │                          │                            │
Unlike queries (where only results leave the data plane), connector delivery sends the full dataset to the external destination. This is intentional—connectors exist specifically to move data outside the platform for activation.
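The format-and-deliver steps can be sketched with CSV formatting and an in-memory buffer standing in for the external destination; the `deliver` function and its confirmation payload are illustrative, not a real connector interface:

```python
import csv
import io

def deliver(dataset: list, destination: io.StringIO) -> dict:
    """Format the full dataset and write it to the destination.

    Unlike query execution, this intentionally moves every row
    out of the platform for activation.
    """
    writer = csv.DictWriter(destination, fieldnames=dataset[0].keys())
    writer.writeheader()
    writer.writerows(dataset)  # the full dataset leaves the data plane
    return {"delivered_rows": len(dataset)}

dataset = [
    {"user_id": "u1", "segment": "a"},
    {"user_id": "u2", "segment": "b"},
]
dest = io.StringIO()  # stand-in for cloud storage or a DSP upload
confirmation = deliver(dataset, dest)
```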

Setting up delivery

Connector-based delivery involves two configuration levels:
| Level | What it configures | Example |
| --- | --- | --- |
| Profile | Account-level authentication and settings | AWS credentials, ad account ID |
| Delivery settings | Job-specific parameters | Destination folder, delivery schedule |
Once a connector is configured, delivery options appear automatically when creating materialized views or working with datasets.
Each connector requires specific identifier types to match users on the destination platform. See Connectors Reference for requirements by platform.
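The two configuration levels might look like the following for a cloud-storage connector. All keys and values here are hypothetical placeholders, not Narrative's actual configuration schema:

```python
# Account-level profile: authentication and settings reused across jobs.
profile = {
    "connector": "s3",
    "aws_access_key_id": "AKIA-PLACEHOLDER",      # placeholder credential
    "aws_secret_access_key": "SECRET-PLACEHOLDER",
}

# Job-specific delivery settings attached to one materialized view.
delivery_settings = {
    "destination_folder": "s3://my-bucket/exports/",
    "schedule": "0 6 * * *",  # cron: daily at 06:00
}

# A delivery job combines both levels.
job = {"profile": profile, "delivery": delivery_settings}
```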