Models Settings
Manage AI models and model preferences: configure processing modes, set up a local model runtime with Ollama, and control which AI models are available for use in Pieces.
To access Models settings, click your User Profile in the top left, then hover over Settings and select Models.
Models settings showing Model Capabilities, Local Model Runtime, and Model Management sections
Model Capabilities
Configure how Pieces processes your materials using machine learning resources. Choose between Cloud, Local, or Blended processing modes based on your performance and privacy preferences.
Processing Mode
Select Cloud, Local, or Blended processing depending on whether you prioritize model capability, on-device privacy, or a balance of both.
Local Model Runtime
Set up and manage Ollama for local model processing. Ollama allows you to run AI models locally on your device without sending data to the cloud.
Ollama Status
Check if Ollama is installed, activated, and ready to use. Ollama must be installed and running for local model processing to work.
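Behind the scenes, a running Ollama instance serves a local HTTP API (by default on port 11434), which is one way to confirm it is up outside of the settings page. The sketch below is illustrative, not part of Pieces: the `/api/tags` endpoint and default port come from Ollama, while the `ollama_models` helper name is our own.

```python
import json
import urllib.error
import urllib.request

def ollama_models(host: str = "http://localhost:11434"):
    """Return the names of locally pulled models, or None if Ollama
    is not running (connection refused, timeout, or bad response)."""
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=2) as resp:
            return [m["name"] for m in json.load(resp)["models"]]
    except (urllib.error.URLError, OSError, KeyError, ValueError):
        return None

models = ollama_models()
if models is None:
    print("Ollama is not running or not installed.")
else:
    print(f"Ollama is up with {len(models)} local model(s): {models}")
```

If the call returns `None`, install or start Ollama before enabling local processing; from a terminal, `ollama list` performs the equivalent check.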
Installing Ollama
If Ollama is not installed, you can install it directly from the Models settings page.
Model Management
Enable or disable models to control which AI models are available for use in Pieces. Disabled models will not appear in chat or other features. Models that your organization has disabled do not appear in this list at all.
Understanding Model Management
Pieces supports a wide variety of AI models from different providers. By default, the most popular models are enabled, but you can customize which models are available based on your needs and preferences.
Viewing Enabled Models
See how many models are currently enabled and which providers they belong to.
Searching Models
Search for specific models or providers to quickly find what you're looking for.
Enabling or Disabling Models
Control which models are available by enabling or disabling them individually or by provider.
Models settings showing how to enable a model using the toggle switch
Deleting Local Models
Remove local models that you no longer need. This frees up storage space on your device.
Models settings showing how to delete a local model
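Deleting a local model removes its weights from Ollama's local store, which is where the reclaimed disk space comes from. As an under-the-hood illustration (a hypothetical helper, not a Pieces API), Ollama's local API accepts a `DELETE` request at `/api/delete` naming the model to remove:

```python
import json
import urllib.error
import urllib.request

def delete_local_model(name: str, host: str = "http://localhost:11434") -> bool:
    """Ask a local Ollama server to delete a pulled model.
    Returns True on success; False if the server is unreachable
    or the model does not exist."""
    req = urllib.request.Request(
        f"{host}/api/delete",
        data=json.dumps({"model": name}).encode(),
        headers={"Content-Type": "application/json"},
        method="DELETE",
    )
    try:
        with urllib.request.urlopen(req, timeout=5):
            return True
    except (urllib.error.URLError, OSError):
        return False
```

The same effect from a terminal is `ollama rm <model-name>`. Local LLM weights typically occupy several gigabytes each, so removing unused models can free significant space.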
Enabling All Models
Quickly enable all available models at once.
Model Providers
Pieces supports models from multiple providers. Each provider shows how many of its models are enabled:
- OpenAI: GPT models and other OpenAI models
- Anthropic: Claude models
- Google: Gemini and other Google models
- Microsoft: Azure OpenAI and other Microsoft models
- Meta: Llama and other Meta models
- IBM: Watson and other IBM models
Each provider can be expanded to see individual models, and you can enable or disable models individually or toggle the entire provider on or off.
Compatible Models
For a complete list of all compatible models and their capabilities, see the Compatible LLMs documentation. This page provides detailed information about cloud models, local models, and which models work best for different use cases.
Free vs Pro Model Access
Model access depends on your plan. Free users get full access to local models and limited usage of cloud models. Pieces Pro subscribers get unlimited access to premium cloud LLMs.
Free Plan — Models
The free plan includes:
- Local models (Ollama): Full access with no usage limits. Download and run models like Llama, Gemma, Phi, and others entirely on-device.
- Cloud models: Limited usage. Access to select cloud models with usage caps, ideal for getting started and trying AI-assisted development.
- Basic AI features: Snippet management, local storage, and community support.
Plan Comparison
| Feature | Free | Pieces Pro |
|---|---|---|
| Local models (Ollama) | Full access, no limits | Full access, no limits |
| Cloud / premium models | Limited usage | Unlimited access |
| Long-term memory | Standard | Infinite |
| Deep Study reports | — | ✓ |
For pricing and upgrade options, see Pieces Pro.
Next Steps
Now that you understand how to manage models, learn about Long-Term Memory to configure memory preferences, or explore Model Context Protocol (MCP) to integrate Pieces with other tools. For a complete list of available models, see Compatible LLMs.