Switching LLMs
The Pieces for VS Code Extension currently supports 54 different LLMs, including both cloud-hosted and local models.
How To Configure Your LLM Runtime
Switching LLMs in the Pieces for VS Code Extension is straightforward, giving you the flexibility to choose the model that best suits your needs.
How to change your LLM:
1. Open the Copilot Chat View: Click the Pieces Copilot icon in the sidebar.
2. Locate the Active Model: The current model (e.g., GPT-4o Mini) is displayed in the bottom-left corner of the view.
3. View the Models: Click Change Model to open the Manage Copilot Runtime modal.
4. Choose Your Desired Model: Browse the list of local and cloud models and select your preferred model.
From here, you can browse and select from the available models, including the local and cloud-based models listed in the tables on this page. Cloud-hosted models offer access to the latest AI capabilities, while on-device models ensure offline functionality, making Pieces Copilot adaptable to your specific workflow and environment.
Once you’ve chosen a new model, the switch is instant, and you can continue working seamlessly with the selected model’s capabilities; there is no need to restart or refresh anything.
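For readers curious how an in-editor model picker like this can be wired up, the sketch below uses the standard VS Code extension API to show a quick-pick list of models and apply the selection immediately. It is a minimal, hypothetical illustration under stated assumptions, not the Pieces implementation: the model catalog, the `pieces.changeModel` command ID, and the way the choice is stored are all invented for the example.

```typescript
import * as vscode from 'vscode';

// Hypothetical model catalog; the real extension exposes many more
// cloud-hosted and on-device models than this.
interface ModelOption {
  label: string;        // name shown in the picker, e.g. "GPT-4o Mini"
  description: string;  // "Cloud" or "Local (on-device)"
}

const MODELS: ModelOption[] = [
  { label: 'GPT-4o Mini', description: 'Cloud' },
  { label: 'Claude 3.5 Sonnet', description: 'Cloud' },
  { label: 'Llama 3 8B', description: 'Local (on-device)' },
];

export function activate(context: vscode.ExtensionContext) {
  // Hypothetical command ID used only for this sketch.
  const disposable = vscode.commands.registerCommand('pieces.changeModel', async () => {
    // Show a "Manage Copilot Runtime"-style picker.
    const choice = await vscode.window.showQuickPick(MODELS, {
      placeHolder: 'Select the LLM to use for Copilot chats',
    });
    if (!choice) {
      return; // user dismissed the picker
    }

    // Applying the selection is instant: persist it and update the UI,
    // with no restart or reload required.
    await context.globalState.update('activeModel', choice.label);
    vscode.window.setStatusBarMessage(`Active model: ${choice.label}`, 3000);
  });

  context.subscriptions.push(disposable);
}
```

The quick-pick flow mirrors the Change Model and Manage Copilot Runtime steps described above; in a real extension the selection would be forwarded to the model runtime rather than only stored in extension state.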