Generative AI Conversations
If you're unsure how to implement something, stuck on a bug, or need an answer grounded in your workspace, use the Pieces Copilot for context-aware responses that help you move forward.
The Pieces Web Extension offers several levels of conversation functionality, all fully integrated with Pieces.
You can enable the Long-Term Memory Engine (LTM-2) for complete, streamlined context across your workflow, or open a limited-context conversation in the browser view.
Adding Conversation Context
The Pieces Copilot lets you add specific folders or files to the conversation's context window, like files from your current workspace.
You can add individual external items as context to your chat from within the Pieces Copilot view.
This differs from starting a chat using one of the embedded buttons under a code snippet, such as on Stack Overflow (see the image below).
To start a conversation that’s pre-loaded with context, find a code snippet on a website such as Stack Overflow or your favorite code platform and select the Ask Copilot quick action beneath it.
Selecting Your Pieces Copilot Runtime
You can choose between different LLMs directly within the Pieces Web Extension by accessing the sidebar and clicking your preferred model under Active Model (e.g., Claude 3.5 Sonnet).
Options range from lightweight models for simple queries to advanced models for detailed analysis, including local and cloud-based LLMs.
This flexibility lets you customize Pieces Copilot to fit your specific development needs, whether you prioritize speed or accuracy.
Read more about which LLMs are available with the Pieces Web Extension.
Pieces Copilot as a Daily Driver
The Pieces Copilot is a powerful, adaptable tool that grows with you as you use it—so use it!