The query lifecycle
A query moves through six stages from submission to results.

Stage 1: Query submission
You submit an NQL query to the control plane through:
- The Query Editor in the Narrative I/O interface
- The NQL API for programmatic access
- Materialized view definitions that run on a schedule
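For the API path, a programmatic submission might look like the sketch below. The endpoint path, payload shape, and bearer-token header are illustrative assumptions, not the documented NQL API contract:

```python
import json
import urllib.request

def build_submission_request(query: str, api_token: str,
                             base_url: str = "https://api.example.com") -> urllib.request.Request:
    # Package an NQL query as an HTTP request to a hypothetical
    # control-plane endpoint. Path and payload shape are illustrative;
    # consult the NQL API reference for the real contract.
    body = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/nql/queries",
        data=body,
        headers={"Authorization": f"Bearer {api_token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
```

Sending it would be `urllib.request.urlopen(build_submission_request(...))`; the response would carry an identifier you can use to track the resulting job.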
Stage 2: Compilation and transpilation
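The steps this stage describes (parsing, resolution, permission verification, optimization, transpilation) can be sketched as one pipeline function. The toy rules below are illustrative stand-ins, not Narrative's compiler:

```python
def compile_nql(nql: str, permissions: set, mappings: dict, target: str) -> str:
    # Parsing: reject malformed input (here, just the empty case).
    if not nql.strip():
        raise ValueError("syntax error: empty query")

    # Resolution: translate standardized attribute names to physical
    # column names (a naive stand-in for Rosetta Stone mappings).
    for standard_name, physical_name in mappings.items():
        nql = nql.replace(standard_name, physical_name)

    # Permission verification: every referenced dataset must be allowed.
    # (Toy rule: dataset names are the tokens following FROM.)
    tokens = nql.split()
    datasets = {tokens[i + 1] for i, t in enumerate(tokens) if t.upper() == "FROM"}
    denied = datasets - permissions
    if denied:
        raise PermissionError(f"access denied: {sorted(denied)}")

    # Optimization and transpilation are reduced to a dialect tag here;
    # the real control plane emits native Snowflake SQL or Spark SQL.
    return f"-- target: {target}\n{nql}"
```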
The control plane transforms your NQL into executable SQL through several steps:
- Parsing. The NQL syntax is parsed and validated. Malformed queries fail here with syntax errors.
- Resolution. Dataset references are resolved to their actual locations. Rosetta Stone mappings translate standardized attribute names to physical column names.
- Permission verification. Your access rules are checked for every dataset and field in the query. If you lack permission to access any referenced data, the query fails before execution.
- Optimization. The query planner analyzes the query structure to determine an efficient execution strategy. For large dataset scans, this may include chunking: splitting the query into time-based segments for improved stability and cost efficiency.
- Transpilation. The NQL is converted to native SQL for your target data plane's database engine. A Snowflake data plane receives Snowflake SQL; a Spark data plane receives Spark SQL. See NQL Design Philosophy for details on transpilation.

At the end of this stage, the control plane has a compiled query ready for execution.

Stage 3: Job enqueueing
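The queue used at this stage can be sketched as a thread-safe FIFO of job records. The field names and job types below are illustrative, not the platform's actual job schema:

```python
import queue
from dataclasses import dataclass, field
from itertools import count

_job_ids = count(1)

@dataclass
class Job:
    # Minimal job record: which data plane should run it, what kind of
    # work it is, and the compiled SQL payload.
    data_plane: str
    job_type: str          # e.g. "query_execution"; other types exist
    payload: str
    job_id: int = field(default_factory=lambda: next(_job_ids))

job_queue: "queue.Queue[Job]" = queue.Queue()

def enqueue_query(data_plane: str, compiled_sql: str) -> Job:
    # The control plane wraps the compiled SQL in a job and parks it
    # on the queue until an operator claims it.
    job = Job(data_plane=data_plane, job_type="query_execution", payload=compiled_sql)
    job_queue.put(job)
    return job
```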
The control plane creates a job containing the compiled SQL and adds it to the job queue. The job queue is a coordination mechanism between the control plane and data planes: it holds work waiting for execution. Query execution is one type of job. The job queue also handles other operations like dataset management and system tasks. See Job Types for a complete list.

Stage 4: Operator polling
Each data plane runs an operator component that bridges the control plane and your data infrastructure. The operator:
- Polls the control plane's job queue for work targeting its data plane
- Authenticates to ensure only authorized operators can pick up jobs
- Claims jobs for execution

This pull-based design means:
- Your data plane can be behind a firewall
- The control plane never needs credentials to your database
- You control when and how jobs are executed
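Taken together, the operator's behavior is a pull loop: authenticate, poll, claim, execute, report. A minimal sketch, with every control-plane call injected as a stand-in callable rather than the real operator interface:

```python
import time

def run_operator(data_plane_id, poll_jobs, authenticate, execute, report_status,
                 poll_interval=5.0, max_cycles=None):
    # Pull-based loop: the operator reaches out to the control plane,
    # so the data plane can sit behind a firewall and no database
    # credentials ever leave it. All callables are injected stand-ins.
    token = authenticate(data_plane_id)
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        cycles += 1
        for job in poll_jobs(data_plane_id, token):  # claim work for this plane
            report_status(job, "claimed")
            report_status(job, execute(job))         # runs inside your infrastructure
        if max_cycles is None:
            time.sleep(poll_interval)                # back off between polls
```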
Stage 5: Query execution
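This stage is simply the data plane's engine running the compiled SQL. A sketch using sqlite3 as a stand-in for Snowflake or Spark (the table and rows are invented sample data):

```python
import sqlite3

# Stand-in for the data plane's native engine: sqlite3 here, where a
# real data plane runs Snowflake or Spark. Only SQL enters; only
# results exit.
engine = sqlite3.connect(":memory:")
engine.execute("CREATE TABLE events (user_id INTEGER, country TEXT)")
engine.executemany("INSERT INTO events VALUES (?, ?)",
                   [(1, "US"), (2, "DE"), (3, "US")])

def execute_job(compiled_sql: str):
    # The operator hands compiled SQL to the engine and collects
    # results; the raw table data never leaves this process.
    return engine.execute(compiled_sql).fetchall()
```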
Once the operator picks up a job, it executes the compiled SQL against your data plane's native query engine (Snowflake, Spark, or another supported system). The query runs entirely within your infrastructure. Your data never leaves the data plane; only query instructions enter and results exit.

Stage 6: Result handling
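The interactive-query path described in this stage can be sketched as follows: results land in a data-plane dataset, the control plane sees only a completion status, and sampling returns a capped preview. The structures below are illustrative; only the 1,000-row cap comes from the text:

```python
SAMPLE_LIMIT = 1_000  # preview cap described for data sampling

data_plane_datasets = {}  # results stay here, inside the data plane

def store_results(dataset_id: str, rows: list) -> dict:
    # Persist results as a new dataset in the data plane; the control
    # plane receives only this completion status, never the rows.
    data_plane_datasets[dataset_id] = rows
    return {"dataset_id": dataset_id, "status": "completed", "row_count": len(rows)}

def sample(dataset_id: str, limit: int = SAMPLE_LIMIT) -> list:
    # Data sampling: retrieve at most `limit` rows as a preview.
    return data_plane_datasets[dataset_id][: min(limit, SAMPLE_LIMIT)]
```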
After execution completes, results are stored in the data plane and are never returned directly through the control plane:
- Interactive queries. Results are stored as a new dataset in your data plane. Interactive queries are implemented as materialized views with a 24-hour retention policy and an automatic row limit. The control plane receives only a completion status, not the data itself. You view results through data sampling, which retrieves a preview of up to 1,000 rows.
- Materialized views. Results are written to a dataset in your data plane. The control plane is notified of completion but doesn't see the actual data. Materialized views can have custom retention policies and refresh schedules.
- Data exports. Results are written to a designated destination within your data plane.

Throughout execution, status updates flow back to the control plane so you can monitor progress. See Data Flow for details on how data moves through the platform.

Why this architecture?
The separation between control plane and data plane is deliberate:
- Security. Your data stays in your infrastructure. The control plane coordinates work without accessing raw data. This enables collaboration while maintaining data residency requirements.
- Flexibility. Each organization can use its preferred database system. The control plane transpiles NQL to whatever dialect your data plane requires.
- Scalability. Operators scale with your data plane's capacity. You can run multiple operators for high-throughput workloads or share capacity across fewer operators.
- Reliability. The pull-based job queue provides resilience. If an operator is temporarily unavailable, jobs wait in the queue until capacity returns.

Related content
NQL Design Philosophy
Why NQL is an interpreted language with transpilation
Chunking
How large queries are split into time-based segments
Control Plane
The orchestration layer that compiles queries and coordinates jobs
Data Planes
Where your data lives and queries execute

