Compute pool types
Dedicated
Dedicated compute pools provide isolated resources reserved for your workloads. Your queries don’t compete with other users for processing power, which results in more predictable performance. Use dedicated compute pools when:
- Running production workloads where performance consistency matters
- Processing large or complex queries that need guaranteed resources
- Operating time-sensitive pipelines where latency must stay predictable
Shared
Shared compute pools use pooled resources across multiple users. This is more cost-effective, but your query performance may vary depending on current platform load. Use shared compute pools when:
- Running exploratory queries or ad-hoc analysis
- Developing and testing queries before promoting to production
- Working with smaller datasets where performance variability is acceptable
Snowflake warehouse
On Snowflake-based data planes, each compute pool maps to a Snowflake virtual warehouse. When you register warehouses through the Snowflake Native App, each warehouse becomes a compute pool on your data plane. You can register multiple warehouses to separate workloads—for example, a smaller warehouse for exploratory queries and a larger one for production pipelines. Each Snowflake compute pool has a collaboration policy that controls which companies can use it, and one pool can be designated as the default for the data plane.
Which compute pools are available
The compute pool options available to you depend on your data plane’s underlying provider:

| Provider | Available compute pools | Notes |
|---|---|---|
| Snowflake | Snowflake warehouse | One compute pool per registered warehouse |
| Narrative (shared AWS) | Dedicated, Shared | Choose based on workload requirements |
| Customer AWS | Dedicated, Shared | Choose based on workload requirements |
When to use each type
| Scenario | Recommended pool | Why |
|---|---|---|
| Production data pipelines | Dedicated | Predictable performance, no resource contention |
| Ad-hoc data exploration | Shared | Cost-effective for variable, low-priority workloads |
| Testing queries before production | Shared | Saves dedicated resources for production use |
| Time-sensitive audience builds | Dedicated | Guaranteed resources ensure timely completion |
| Snowflake data planes | Snowflake warehouse | Register one or more warehouses sized for your workload |
How compute pools relate to the SDK
When you execute queries through the TypeScript SDK, the `execution_cluster` parameter maps to the compute pool concept:
`execution_cluster.type` accepts `'dedicated'` or `'shared'`, corresponding directly to the Dedicated and Shared compute pool types.
If you omit `execution_cluster`, the query runs on the data plane’s default compute pool. On Snowflake-based data planes, that default is the warehouse you’ve designated as the default.
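The mapping above can be sketched in TypeScript. This is a minimal illustration, not the SDK's actual client: the `buildQueryOptions` helper and the `QueryOptions` shape are assumptions introduced here; only the `execution_cluster` field and its `'dedicated' | 'shared'` values come from the description above.

```typescript
// Hypothetical sketch of how execution_cluster selects a compute pool.
// Only the execution_cluster shape is taken from the docs; the rest is
// illustrative scaffolding, not the real SDK API.
type ExecutionCluster = { type: 'dedicated' | 'shared' };

interface QueryOptions {
  query: string;
  // Omit execution_cluster to run on the data plane's default compute pool.
  execution_cluster?: ExecutionCluster;
}

function buildQueryOptions(nql: string, cluster?: ExecutionCluster): QueryOptions {
  return cluster ? { query: nql, execution_cluster: cluster } : { query: nql };
}

// Production pipeline: pin the query to the dedicated pool for predictable performance.
const prod = buildQueryOptions('SELECT * FROM my_dataset', { type: 'dedicated' });

// Ad-hoc exploration: leave execution_cluster out and fall back to the default pool.
const adhoc = buildQueryOptions('SELECT COUNT(*) FROM my_dataset');

console.log(prod.execution_cluster?.type); // 'dedicated'
console.log(adhoc.execution_cluster === undefined); // true
```

The optional field mirrors the behavior described above: callers that care about resource isolation set the pool type explicitly, while everyone else inherits the data plane's default.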
Related content
Execution Context
How data plane, compute pool, database, and schema work together
Data Planes
Where your data lives and is processed
Executing NQL Queries
Run queries programmatically with the TypeScript SDK
Migrate to Compute Pools
Transition from a single Snowflake warehouse to compute pools

