This reference documents the UI elements, configuration options, and actions available in the Model Studio interface.

Overview

Model Studio enables training and fine-tuning of AI models using datasets within Narrative’s platform. It integrates datasets, base models, and compute resources into a streamlined workflow.

Path: My Models → Model Studio

Base Model module

The Base Model module lets you select the foundation model for fine-tuning.
Element | Description
Select button | Opens the model selection dialog
Model name | Displays the currently selected base model
Model details | Shows model size and capabilities

Available base models

Model | Description
Llama-3.2-1B | Meta’s lightweight 1-billion-parameter model
Mistral-7b-v0.1 | Mistral AI’s 7-billion-parameter model
Additional base models may be available. Check the model selection dialog for the current list.

Training Data module

The Training Data module lets you select the dataset to use for fine-tuning.
Element | Description
Select button | Opens the dataset selection dialog
Dataset name | Displays the currently selected training dataset
Row count | Shows the number of training examples in the dataset

Dataset requirements

Datasets must be mapped to a supported attribute and materialized in the corresponding format before use in Model Studio.
Attribute | Format | Description
fine_tuning_conversation | Conversation structure | Each row contains a structured conversation with system, user, and assistant messages
Use Prompt Studio to transform datasets into the fine_tuning_conversation format.
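
The table above describes each row as a structured conversation of system, user, and assistant messages. The Python sketch below illustrates that shape and a simple validity check; the field names (`conversation`, `role`, `content`) follow the common chat-message convention and are assumptions, not Narrative’s published schema.

```python
# Illustrative conversation row in the common chat-message style.
# Field names are assumptions, not Narrative's published schema.
example_row = {
    "conversation": [
        {"role": "system", "content": "You are a helpful support assistant."},
        {"role": "user", "content": "How do I reset my password?"},
        {"role": "assistant", "content": "Open Settings, then choose Reset Password."},
    ]
}

def is_valid_conversation(row):
    """Check that a row holds a system-led sequence with user and
    assistant turns, each carrying non-empty text content."""
    messages = row.get("conversation", [])
    roles = [m.get("role") for m in messages]
    has_text = all(isinstance(m.get("content"), str) and m["content"] for m in messages)
    return roles[:1] == ["system"] and "user" in roles and "assistant" in roles and has_text

print(is_valid_conversation(example_row))
```

A pre-flight check like this can catch malformed rows before a training run, though Prompt Studio handles the actual transformation into the fine_tuning_conversation format.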

Accessing training data with NQL

To query conversation data from a prepared dataset:
SELECT
    d._rosetta_stone.fine_tuning_conversation.conversation
FROM company_data.my_dataset_name d
Additional fine-tuning attributes will be supported in future updates.

Compute module

The Compute module lets you configure the compute resources for training.
Element | Description
Select button | Opens the compute instance selection dialog
Instance type | Displays the selected compute configuration
GPU configuration | Shows GPU count and type

Compute instance selection

Choose an instance based on your training requirements:
Factor | Consideration
Model size | Larger models require more GPU memory
Dataset size | Larger datasets benefit from more compute capacity
Training time | Higher-tier instances reduce training duration
Available instances include AWS G5 instances with various GPU configurations.
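
As a rough rule of thumb (not a Narrative-specific formula), full fine-tuning with mixed precision and an Adam-style optimizer needs on the order of 16 bytes of GPU memory per parameter: about 2 for fp16 weights, 2 for gradients, and 12 for fp32 master weights and optimizer moments, before activation overhead. A back-of-the-envelope estimate:

```python
def estimate_full_finetune_gb(num_params, bytes_per_param=16):
    """Rough GPU-memory floor for full fine-tuning with mixed precision
    and an Adam-style optimizer (~16 B/parameter: weights + gradients
    + fp32 master copy + optimizer moments). Activations add more on top."""
    return num_params * bytes_per_param / 1024**3

# Ballpark figures for the base models listed above (approximate parameter counts):
print(f"Llama-3.2-1B:    ~{estimate_full_finetune_gb(1.24e9):.0f} GB")
print(f"Mistral-7b-v0.1: ~{estimate_full_finetune_gb(7.24e9):.0f} GB")
```

AWS G5 instances carry NVIDIA A10G GPUs with 24 GB of memory each, so by this estimate a 1B-parameter model fits a single GPU for full fine-tuning, while a 7B-parameter model calls for a multi-GPU configuration or a parameter-efficient method.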

Trained Model Details module

The Trained Model Details module captures metadata for the fine-tuned model.
Element | Description
Add button | Opens the metadata configuration dialog
Edit button | Modifies existing metadata (available after initial configuration)

Metadata fields

Field | Required | Description
Unique Name | Yes | Identifier for the trained model
Description | No | Purpose or use case for the model
Tags | No | Keywords for identification and categorization
License | No | License under which the model will be shared or used

Actions reference

Configuration actions

Action | Location | Description | Result
Select base model | Base Model module | Choose foundation model | Model selected for fine-tuning
Select training data | Training Data module | Choose prepared dataset | Dataset linked to training job
Select compute | Compute module | Choose compute resources | Instance allocated for training
Add model details | Trained Model Details module | Configure output metadata | Metadata saved for trained model

Training actions

Action | Location | Description | Result
Train Model | Page toolbar | Initiate training | Training job starts and progress is displayed

Training output

Once training completes, the fine-tuned model is available with:
  • The configured metadata (name, description, tags, license)
  • Full compatibility with the training dataset format
  • Readiness for deployment or inference

Workflow summary

  1. Select base model → Choose the foundation model in the Base Model module
  2. Select training data → Choose a prepared dataset in the Training Data module
  3. Configure compute → Select appropriate compute resources
  4. Add metadata → Provide model name, description, tags, and license
  5. Train model → Click Train Model and monitor progress