Pieces Copilot

Describe and chat about your code with Pieces Copilot. You can ask a technical question and Pieces' ML models will generate functional code to use in your projects.

Getting Started With Pieces Copilot#

Navigate to the Copilot & Global Search view from the dropdown, and you will see the Copilot chat box towards the middle of the app window. If you're looking for more information about Global Search, you can find details here.

You can start chatting with the Pieces Copilot using a few different actions:

  1. Paste code into the chat box, and the Copilot will let you know if there are any issues.
    • Try pasting this code snippet:
      function build(){
          console.log("build starting");
          let _count;
          for (let i = 0; i <= 10; i++){
              _count = _count + 1;
              console.log($_count);
          }
          console.log('counted to ten!');
      }
      
      • The Pieces Copilot will identify, explain, and correct a couple of errors.
  2. Ask a technical question, and the Copilot will answer.
    • Here are a few examples to try:
      • How do I create a flexible div with 8px of padding on both sides?
      • Loop over a directory of files and write the first line of their contents to a new text file.
  3. Drag and Drop a screenshot of code, and the Pieces Copilot will extract the code, explain it, and answer questions about it.
    • Pressing Scan Screenshot allows you to select a screenshot using your native file picker.
    • Read about dragging and dropping code into the Pieces Desktop App on the Saving Screenshots Page.

Results from Pieces Copilot#

When the Pieces Copilot returns code, you will see a few quick actions at the bottom of the code block. These actions include:

| Action | What it Does |
| --- | --- |
| Save to Pieces | Saves the snippet to your Pieces repo, so it's available across plugins and extensions. |
| Share | Saves the snippet and generates a shareable link to send to another user. |
| Annotate Code | Adds comments to the code to describe sections, functions, and variables. |
| Find Similar Code Snippets | Searches your existing code snippets for code that is similar to the generated snippet. |
| Tell Me More | Describes the snippet to you in plain text and tells you what the code does. |
| Repair & Tidy | Improves the code by removing repetition, unnecessary loops, or other bad practices. |
| Show Related Links | Shows a list of links related to that code snippet and its topic. |
| Show Related Tags | Shows tags that are relevant to the snippet. |

Set Your Own Copilot Context#

It's often helpful to ask questions specific to your own code, so you may want to give additional information to the Copilot before asking a question. With Pieces Copilot, you can set your context based on a specific set of files or a directory.

To set your context, follow these steps:

  1. Open the Copilot & Global Search view from the dropdown next to your search bar.
  2. At the bottom of the Copilot chat, click "Set your Context" below the chat input.
  3. Select how you want to set your context:
    • Directories: Select one or more directories from the file picker. Pieces recursively uses the files in these folders to answer your questions.
    • Files: Add individual code files to reference when asking your Copilot questions.
    • Code Snippets: Use code snippets that you have already created and saved to Pieces to assist you when asking questions later.
Setting Context for the Pieces Copilot.

The more context you add, the better your Copilot will understand your questions and provide specific responses.

Snippet Specific Copilot#

When viewing a snippet in Gallery or List View, you can launch the Copilot on a snippet that you have saved to Pieces for Developers. To do this, use the Quick Action labeled Launch Copilot above the snippet in List View.

Note that if you are in Gallery View, the action buttons are below the code snippet.

Launching Pieces Copilot on a specific snippet - instead of using it in the Global Search view - gives the bot built-in context about the snippet, so you can ask it specific questions right away. Leveraging the question and answer system, you can ask questions such as:

What does this snippet do?

or

How can I make this snippet better?

You can keep things high-level with simple questions about the code snippet as a whole, or drill into specific parts of the code.

For example, using the JavaScript code snippet above, you could ask:

  • What is the purpose of the "build" function in this code snippet?
  • How does the loop in the build function work?
  • What does the variable "_count" represent in this code snippet?
  • How many times will the loop iterate in the build function?
  • What will be logged to the console when the "build" function finishes counting?
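Some of these questions you can verify directly. For example, with the `i <= 10` condition in the original snippet, the loop body runs 11 times, not 10 - one of the issues the Copilot may flag:

```javascript
// Count how many times a loop with condition i <= 10 executes.
let iterations = 0;
for (let i = 0; i <= 10; i++) {
  iterations++;
}
console.log(iterations); // → 11, since i runs from 0 through 10 inclusive
```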

When you "Launch Copilot" on a snippet, you will see a list of suggested questions to help get you started. Click any of the options to place it in your chat and start the conversation, then ask follow-up questions after the bot responds.

Continuing the Copilot Conversation#

If you are in the midst of a conversation with Pieces Copilot and need to navigate to a different view, reference a separate snippet, or move to another location, don't worry. Conversations with Copilot are saved with the snippet that you had the individual conversation about so that you can get back to where you were when jumping around. You'll notice that once you start a conversation, the Launch Copilot Quick Action pill will change to Resume Copilot.

Available Copilot Runtimes#

Our Copilot also comes with multiple LLM runtimes, both cloud and local, to fulfill whatever requirements your workflow entails.

Cloud Runtimes#

  • GPT 3.5 Turbo, a highly optimized LLM created by OpenAI, designed for quick and accurate responses.
  • GPT 4, OpenAI's most recent LLM, which supports a larger context window than previous models and can complete more complex tasks than its predecessors.

Local Runtimes#

Local LLMs come with hefty hardware requirements; please make sure your system meets them before attempting to load one of these models.

  • CodeLlama, a model trained by Meta AI optimized for generating and conversing about code, available in both a CPU and GPU runtime.
    • These models require 5.6GB of RAM for the CPU runtime or 5.6GB of VRAM for the GPU runtime.
  • LLama2, a model trained by Meta AI optimized for completing general tasks, also available in a CPU and GPU runtime.
    • These models require 5.6GB of RAM for the CPU runtime or 5.6GB of VRAM for the GPU runtime.