A compute pool determines the compute resources allocated to process your queries within a data plane. When you execute a query, the compute pool controls how much processing power is available and whether those resources are shared with other users or dedicated to your workload. Compute pools are one of the four dimensions of your execution context, alongside data plane, database, and schema.

Compute pool types

Dedicated

Dedicated compute pools provide isolated resources reserved for your workloads. Your queries don’t compete with other users for processing power, which results in more predictable performance. Use dedicated compute pools when:
  • Running production workloads where performance consistency matters
  • Processing large or complex queries that need guaranteed resources
  • Operating time-sensitive pipelines where latency must stay predictable

Shared

Shared compute pools use pooled resources across multiple users. This is more cost-effective but means your query performance may vary depending on current platform load. Use shared compute pools when:
  • Running exploratory queries or ad-hoc analysis
  • Developing and testing queries before promoting to production
  • Working with smaller datasets where performance variability is acceptable

Snowflake warehouse

On Snowflake-based data planes, each compute pool maps to a Snowflake virtual warehouse. When you register warehouses through the Snowflake Native App, each warehouse becomes a compute pool on your data plane. You can register multiple warehouses to separate workloads—for example, a smaller warehouse for exploratory queries and a larger one for production pipelines. Each Snowflake compute pool has a collaboration policy that controls which companies can use it, and one pool can be designated as the default for the data plane.
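The warehouse-to-pool mapping can be sketched as a small data model. This is an illustrative sketch, not the real registration API: the SnowflakePool shape, defaultPool, and canUse names are assumptions made for this example.

```typescript
// Illustrative model (assumed names, not the actual Native App API):
// each registered warehouse becomes one compute pool on the data plane.
interface SnowflakePool {
  warehouse: string;          // name of the registered Snowflake warehouse
  isDefault: boolean;         // at most one pool is the data plane's default
  allowedCompanies: string[]; // collaboration policy: who may use this pool
}

// Find the pool designated as the data plane's default, if any.
function defaultPool(pools: SnowflakePool[]): SnowflakePool | undefined {
  return pools.find((p) => p.isDefault);
}

// Check a pool's collaboration policy for a given company.
function canUse(pool: SnowflakePool, companyId: string): boolean {
  return pool.allowedCompanies.includes(companyId);
}
```

Separating a small warehouse for exploration from a larger default for production pipelines would then be two entries in this list, with isDefault set on the production warehouse.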

Which compute pools are available

The compute pool options available to you depend on your data plane’s underlying provider:
  • Snowflake: Snowflake warehouse (one compute pool per registered warehouse)
  • Narrative (shared AWS): Dedicated, Shared (choose based on workload requirements)
  • Customer AWS: Dedicated, Shared (choose based on workload requirements)
You select your compute pool through the context selector in the platform’s top navigation.

When to use each type

  • Production data pipelines: Dedicated (predictable performance, no resource contention)
  • Ad-hoc data exploration: Shared (cost-effective for variable, low-priority workloads)
  • Testing queries before production: Shared (saves dedicated resources for production use)
  • Time-sensitive audience builds: Dedicated (guaranteed resources ensure timely completion)
  • Snowflake data planes: Snowflake warehouse (register one or more warehouses sized for your workload)

How compute pools relate to the SDK

When executing queries through the TypeScript SDK, the execution_cluster parameter selects the compute pool:
const result = await api.executeNql({
  nql: 'SELECT _nio_id, _nio_updated_at FROM company_data."my_dataset" LIMIT 100',
  data_plane_id: null,
  execution_cluster: { type: 'dedicated' }, // run on a dedicated compute pool
});
The execution_cluster.type field accepts 'dedicated' or 'shared', corresponding to the Dedicated and Shared compute pool types. If execution_cluster is omitted, the query runs on the data plane's default compute pool; on Snowflake-based data planes, that is the warehouse you've designated as the default.
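The mapping from workload to execution_cluster value can be captured in a small helper. This is a sketch based on the scenarios above, not part of the SDK: clusterForWorkload and the workload labels are assumed names invented for this example.

```typescript
// The execution_cluster shape the SDK expects; undefined means "omit the
// parameter and use the data plane's default compute pool".
type ExecutionCluster = { type: 'dedicated' | 'shared' } | undefined;

// Hypothetical helper (not part of the SDK): pick a compute pool following
// the recommendations in this document.
function clusterForWorkload(
  workload: 'production' | 'exploratory' | 'default'
): ExecutionCluster {
  switch (workload) {
    case 'production':
      return { type: 'dedicated' }; // predictable performance, no contention
    case 'exploratory':
      return { type: 'shared' }; // cost-effective for ad-hoc analysis
    case 'default':
      return undefined; // fall back to the data plane's default pool
  }
}
```

A call site would then pass the result through, e.g. execution_cluster: clusterForWorkload('production').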

Related pages

  • Execution Context: How data plane, compute pool, database, and schema work together
  • Data Planes: Where your data lives and is processed
  • Executing NQL Queries: Run queries programmatically with the TypeScript SDK
  • Migrate to Compute Pools: Transition from a single Snowflake warehouse to compute pools