This reference documents the UI elements, configuration options, and actions available in the Model Studio interface.
## Overview
Model Studio enables training and fine-tuning of AI models using datasets within Narrative’s platform. It integrates datasets, base models, and compute resources into a streamlined workflow.
Path: My Models → Model Studio
## Base Model module

The Base Model module lets you select the foundation model for fine-tuning.

| Element | Description |
|---|---|
| Select button | Opens the model selection dialog |
| Model name | Displays the currently selected base model |
| Model details | Shows model size and capabilities |
### Available base models

| Model | Description |
|---|---|
| Llama-3.2-1B | Meta’s lightweight 1 billion parameter model |
| Mistral-7b-v0.1 | Mistral AI’s 7 billion parameter model |
Additional base models may be available. Check the model selection dialog for the current list.
## Training Data module

The Training Data module lets you select the dataset to use for fine-tuning.

| Element | Description |
|---|---|
| Select button | Opens the dataset selection dialog |
| Dataset name | Displays the currently selected training dataset |
| Row count | Shows the number of training examples in the dataset |
### Dataset requirements

Datasets must be mapped to a supported attribute and materialized in the corresponding format before use in Model Studio.

| Attribute | Format | Description |
|---|---|---|
| `fine_tuning_conversation` | Conversation structure | Each row contains a structured conversation with system, user, and assistant messages |
Use Prompt Studio to transform datasets into the `fine_tuning_conversation` format.
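A row in the `fine_tuning_conversation` format might look like the following. This is an illustrative sketch of the system/user/assistant structure described above; the exact field names in Narrative's materialized format are not specified here, so treat them as assumptions.

```python
import json

# Illustrative sketch of one training row: a structured conversation
# with system, user, and assistant messages. Field names are
# assumptions, not Narrative's confirmed schema.
conversation_row = {
    "conversation": [
        {"role": "system", "content": "You are a helpful support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings, choose Security, and click Reset Password."},
    ]
}

print(json.dumps(conversation_row, indent=2))
```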
### Accessing training data with NQL

To query conversation data from a prepared dataset:

```sql
SELECT
  d._rosetta_stone.fine_tuning_conversation.conversation
FROM company_data.my_dataset_name d
```
Additional fine-tuning attributes will be supported in future updates.
## Compute module

The Compute module lets you configure the compute resources for training.

| Element | Description |
|---|---|
| Select button | Opens the compute instance selection dialog |
| Instance type | Displays the selected compute configuration |
| GPU configuration | Shows GPU count and type |
### Compute instance selection

Choose an instance based on your training requirements:

| Factor | Consideration |
|---|---|
| Model size | Larger models require more GPU memory |
| Dataset size | Larger datasets benefit from more compute capacity |
| Training time | Higher-tier instances reduce training duration |
Available options include AWS G5 instances with various GPU configurations.
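As a rough aid for the model-size consideration above, the sketch below estimates GPU memory for full fine-tuning using a common rule of thumb (about 16 bytes per parameter for fp16 weights, gradients, and fp32 Adam optimizer state). This heuristic is an assumption, not a Narrative sizing guideline; real requirements also depend on batch size, sequence length, and activation memory.

```python
def estimate_finetune_memory_gb(params_billions: float,
                                bytes_per_param: float = 16.0) -> float:
    """Rough lower bound on GPU memory for full fine-tuning.

    Assumes ~16 bytes/parameter (fp16 weights and gradients plus fp32
    Adam optimizer state); activations add more on top of this.
    """
    return params_billions * 1e9 * bytes_per_param / 1024**3

# Llama-3.2-1B: roughly 15 GB before activations
print(round(estimate_finetune_memory_gb(1.0), 1))
# Mistral-7b-v0.1: over 100 GB, suggesting multiple GPUs or a
# memory-efficient method such as LoRA
print(round(estimate_finetune_memory_gb(7.0), 1))
```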
## Trained Model Details module

The Trained Model Details module captures metadata for the fine-tuned model.

| Element | Description |
|---|---|
| Add button | Opens the metadata configuration dialog |
| Edit button | Modifies existing metadata (after initial configuration) |

Metadata fields:

| Field | Required | Description |
|---|---|---|
| Unique Name | Yes | Identifier for the trained model |
| Description | No | Purpose or use case for the model |
| Tags | No | Keywords for identification and categorization |
| License | No | License under which the model will be shared or used |
## Actions reference

### Configuration actions

| Action | Location | Description | Result |
|---|---|---|---|
| Select base model | Base Model module | Choose foundation model | Model selected for fine-tuning |
| Select training data | Training Data module | Choose prepared dataset | Dataset linked to training job |
| Select compute | Compute module | Choose compute resources | Instance allocated for training |
| Add model details | Trained Model Details module | Configure output metadata | Metadata saved for trained model |
### Training actions

| Action | Location | Description | Result |
|---|---|---|---|
| Train Model | Page toolbar | Initiate training | Training job starts and progress is displayed |
## Training output

Once training completes, the fine-tuned model is available with:

- The configured metadata (name, description, tags, license)
- Full compatibility with the training dataset format
- Readiness for deployment or inference
## Workflow summary

1. Select base model → Choose the foundation model in the Base Model module.
2. Select training data → Choose a prepared dataset in the Training Data module.
3. Configure compute → Select appropriate compute resources.
4. Add metadata → Provide the model name, description, tags, and license.
5. Train model → Click Train Model and monitor progress.
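The workflow above can be sketched as plain data plus a readiness check. The `TrainingConfig` type and its field names are hypothetical, not Narrative's API; the sketch only shows which selections must all be in place before training can start.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical container for the Model Studio selections; field names
# are illustrative, not Narrative's actual API.
@dataclass
class TrainingConfig:
    base_model: Optional[str] = None        # Base Model module
    training_dataset: Optional[str] = None  # Training Data module
    compute_instance: Optional[str] = None  # Compute module
    unique_name: Optional[str] = None       # Trained Model Details (required field)

def ready_to_train(cfg: TrainingConfig) -> bool:
    """All four selections must be made before training can begin."""
    return all([cfg.base_model, cfg.training_dataset,
                cfg.compute_instance, cfg.unique_name])

cfg = TrainingConfig(base_model="Llama-3.2-1B")
print(ready_to_train(cfg))  # False: dataset, compute, and name still missing
cfg.training_dataset = "company_data.my_dataset_name"
cfg.compute_instance = "AWS G5"
cfg.unique_name = "support-assistant-v1"
print(ready_to_train(cfg))  # True: all selections in place
```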