LLM Settings

Learn how to switch between local and cloud-hosted LLMs with the Pieces for Visual Studio Code Extension.


Switching LLMs

The Pieces for VS Code Extension currently supports 54 different LLMs, including both cloud-hosted and local models.

How To Configure Your LLM Runtime

Switching your LLM in the Pieces for VS Code Extension is straightforward, giving you the flexibility to choose the model that best suits your needs.

How to change your LLM:

1. Open the Copilot Chat View. Click the Pieces Copilot icon in the sidebar.

2. Locate the Active Model. Find the current model (e.g., GPT-4o Mini) displayed in the bottom-left corner of the view.

3. View the Models. Click Change Model to open the Manage Copilot Runtime modal.

4. Choose Your Desired Model. Browse the list of local and cloud models and select the one you prefer.

From here, you can browse and select from any of the available models, including the local and cloud-hosted models listed in the tables on this page.

Cloud-hosted models offer access to the latest AI capabilities, while on-device models ensure offline functionality, making Pieces Copilot adaptable to your specific workflow and environment.

Once you’ve chosen a new model, the switch takes effect immediately, so you can continue working with the selected model's capabilities without restarting or refreshing anything.
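Conceptually, the model picker described above is a lookup over a roster in which each entry is either on-device or cloud-hosted, and switching simply re-points the active selection. The sketch below is purely illustrative: the `ModelInfo` structure and the local model name are hypothetical and not part of any Pieces API; the extension manages this state internally.

```python
from dataclasses import dataclass

# Hypothetical structure for illustration only; not a Pieces API.
@dataclass
class ModelInfo:
    name: str
    hosted_locally: bool  # True for on-device models, False for cloud-hosted

AVAILABLE_MODELS = [
    ModelInfo("GPT-4o Mini", hosted_locally=False),  # cloud model named in this article
    ModelInfo("Llama 3 8B", hosted_locally=True),    # hypothetical on-device entry
]

def switch_model(name: str) -> ModelInfo:
    """Return the selected model; the switch is instant, with no restart step."""
    for model in AVAILABLE_MODELS:
        if model.name == name:
            return model
    raise ValueError(f"Unknown model: {name}")
```

Because the selection is just a pointer change, no process restart or reload is needed, which matches the instant-switch behavior described above.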
