Available LLMs
We constantly update and configure our plugins and extensions, like the Pieces for Obsidian Plugin, to work with the latest LLMs.
Click here to see all 54+ local and cloud-hosted models available for use with the Pieces for Obsidian Plugin.
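If you prefer to inspect the catalog programmatically rather than in the docs, the sketch below asks the local PiecesOS service for its model list. This is a minimal sketch under stated assumptions only: the base URL, port, `/models` path, and response shape are guesses for illustration, so check the official Pieces OS client SDK for the real endpoint names and types.

```typescript
// Hypothetical sketch: list the models PiecesOS reports as available.
// The base URL, port, and /models path below are assumptions for
// illustration; consult the Pieces OS client SDK for the actual API.
const PIECES_OS_URL = "http://localhost:1000"; // assumed default local port

interface ModelInfo {
  // Assumed response shape; field names are illustrative only.
  id: string;
  name: string;
  cloud: boolean; // true for cloud-hosted, false for on-device
}

async function listAvailableModels(): Promise<ModelInfo[]> {
  const response = await fetch(`${PIECES_OS_URL}/models`);
  if (!response.ok) {
    throw new Error(`PiecesOS returned ${response.status}`);
  }
  const body = await response.json();
  // Assumes the endpoint wraps results in an `iterable` array.
  return body.iterable as ModelInfo[];
}

listAvailableModels()
  .then((models) => models.forEach((m) => console.log(m.name)))
  .catch((err) => console.error("Could not reach PiecesOS:", err));
```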
How To Configure Your LLM Runtime
Switching your LLM in the Pieces for Obsidian Plugin is quick and straightforward.
You can choose the model that best suits your needs, whether you want a large context window for a specific task or series of prompts, or prefer speed over accuracy.
How to change your LLM:
Open the Copilot Chat View
Open the Copilot Chat view by clicking the Pieces icon in the Obsidian sidebar.
Locate the Active Model
Find the Active Model in the bottom-left corner of the view, where the current model (e.g., GPT-4o Mini) is shown.
View the Models
Click Change Model to open the Manage Copilot Runtime modal.
Choose Your Desired Model
Browse the list of local and cloud models, and select your preferred model.
Switching between the cloud and desktop icons lets you browse and select from the available cloud-hosted and local models.
Cloud-hosted models offer access to the latest AI capabilities, while on-device models ensure offline functionality, making Pieces Copilot adaptable to your specific workflow and environment.
Once you choose a new model, the switch is instant; you can keep working with the new model's features without restarting or refreshing anything.
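To make the cloud-versus-on-device trade-off described in these steps concrete, here is a small, self-contained sketch of the selection logic. The model entries, field names, and context-window figures are illustrative assumptions, not data read from Pieces; it only demonstrates the kind of decision you are making when you pick a model in the modal.

```typescript
// Illustrative only: how you might pick a model given the trade-offs
// described above. The entries and fields are made-up examples, not
// the actual catalog exposed by PiecesOS.
interface ModelChoice {
  name: string;
  cloud: boolean;        // cloud-hosted vs on-device
  contextWindow: number; // tokens; illustrative values
}

const models: ModelChoice[] = [
  { name: "GPT-4o Mini", cloud: true, contextWindow: 128_000 },
  { name: "Llama-3 8B (local)", cloud: false, contextWindow: 8_000 },
];

// Prefer an on-device model when offline; otherwise take the largest
// context window among the cloud-hosted options.
function pickModel(offline: boolean): ModelChoice | undefined {
  const candidates = models.filter((m) => (offline ? !m.cloud : m.cloud));
  return candidates.sort((a, b) => b.contextWindow - a.contextWindow)[0];
}

console.log(pickModel(false)?.name); // "GPT-4o Mini"
console.log(pickModel(true)?.name);  // "Llama-3 8B (local)"
```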