# Downloading Local Models
Local models are an optional feature that enables local AI inference for Conversational Search and other AI-powered features in Pieces.
If you prefer to run LLMs on-device instead of using cloud-based AI, you can download local models directly through PiecesOS.
This guide will walk you through the download and verification process for local models across Windows, macOS, and Linux.
## Downloading Through Conversational Search
The easiest way to download and manage local models is directly from Conversational Search:
<Image src="https://storage.googleapis.com/hashnode_product_documentation_assets/core_desktop_meet-pieces_orgs_paid-plans_12.3.6/desktop/conversational-search/using-conversational-search/model_selection_in_desktop_app.png" alt="" align="center" fullwidth="true" />
> Clicking the active model button in Conversational Search to open the model dropdown
<Image src="https://storage.googleapis.com/hashnode_product_documentation_assets/core_desktop_meet-pieces_orgs_paid-plans_12.3.6/desktop/configuration/models/enabling_a_model.png" alt="" align="center" fullwidth="true" />
> Model management interface showing toggle switches to enable or disable models
## Downloading Through IDE Plugins
You can also download local models through any Pieces plugin or extension:
1. Open Conversational Search in your IDE plugin.
2. Click the Active Model or Change Model button.
3. Click Manage Models to access the full model management interface.
4. Browse the list of local models, download the ones you want, and enable them using the toggle switches.
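If you prefer to inspect the model list programmatically, PiecesOS also serves a local REST API. The endpoint shape below is an assumption for illustration only (this guide does not document the API): the sketch simply shows how a models response could be filtered down to the on-device models you have already downloaded.

```python
import json

# Hypothetical shape of a PiecesOS models response -- the field names
# ("iterable", "cloud", "downloaded") are assumptions for illustration.
sample_response = json.loads("""
{
  "iterable": [
    {"name": "Llama-3 8B", "cloud": false, "downloaded": true},
    {"name": "GPT-4o",     "cloud": true,  "downloaded": false},
    {"name": "Phi-3 Mini", "cloud": false, "downloaded": false}
  ]
}
""")

def downloaded_local_models(models: dict) -> list[str]:
    """Return the names of on-device models that are already downloaded."""
    return [
        m["name"]
        for m in models.get("iterable", [])
        if not m.get("cloud", True) and m.get("downloaded", False)
    ]

print(downloaded_local_models(sample_response))  # -> ['Llama-3 8B']
```

A real integration would fetch this JSON from the running PiecesOS instance instead of using an inline sample.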
## Managing Downloaded Models
### Delete Local Models
To free up storage space, you can delete downloaded local models directly from the model management interface:
<Image src="https://storage.googleapis.com/hashnode_product_documentation_assets/core_desktop_meet-pieces_orgs_paid-plans_12.3.6/desktop/configuration/models/delete_local_model.png" alt="" align="center" fullwidth="true" />
> Model management interface showing a downloaded model with trash icon visible for deletion
### Storage Requirements
Local models typically require:
- Small models (2-4 GB): Suitable for quick queries and basic code generation
- Medium models (4-6 GB): Balanced performance for most use cases
- Large models (6-8+ GB): Best performance for complex queries and deep context
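Before downloading, it can help to check whether a model fits on disk. A minimal sketch of that check, using the size tiers above (the 1 GB safety margin is an arbitrary choice, and nothing here is a Pieces API):

```python
import shutil

def size_tier(model_gb: float) -> str:
    """Map a model's download size to the tiers described above."""
    if model_gb < 4:
        return "small"
    if model_gb < 6:
        return "medium"
    return "large"

def fits_on_disk(model_gb: float, path: str = "/") -> bool:
    """True if the model plus a 1 GB safety margin fits in free space."""
    free_gb = shutil.disk_usage(path).free / 1e9
    return free_gb >= model_gb + 1

print(size_tier(3.5))   # -> small
print(size_tier(7.2))   # -> large
```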
## Verify Local Model Integration
Once downloaded, ensure PiecesOS can use your local models:
1. Open the Pieces Quick Menu from your system tray or menu bar.
2. Navigate to ML Processing.
3. Confirm that downloaded models appear under Local AI Models.

If a model doesn't appear, try restarting PiecesOS through the Quick Menu.
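You can also confirm that PiecesOS itself is running before digging into model issues. A minimal sketch, assuming PiecesOS answers HTTP on localhost (the port below is a placeholder; substitute the one your installation uses):

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def piecesos_reachable(base_url: str = "http://localhost:1000",
                       timeout: float = 2.0) -> bool:
    """Return True if anything answers HTTP at base_url within the timeout."""
    try:
        with urlopen(base_url, timeout=timeout):
            return True
    except HTTPError:
        return True   # the server answered, even if with an error status
    except (URLError, OSError):
        return False  # connection refused, timed out, or no such host

if __name__ == "__main__":
    if piecesos_reachable():
        print("PiecesOS: running")
    else:
        print("PiecesOS: not reachable -- try restarting it from the Quick Menu")
```

If this reports PiecesOS as unreachable, restarting it through the Quick Menu (as described above) is the usual fix.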
## Next Steps
You can read documentation about what local LLMs are currently available and supported by PiecesOS, or learn more about using local vs cloud models.