Pieces MCP + GitHub Copilot

The Pieces MCP (Model Context Protocol) integration with GitHub Copilot allows you to leverage Pieces Long-Term Memory (LTM) directly within the Visual Studio Code editor, enhancing your coding workflow with seamless contextual information retrieval.



Get Started

Connecting Pieces MCP to GitHub Copilot enhances context-aware coding by linking your current task with past work.

This integration allows Copilot to provide insights like past implementations and peer-reviewed solutions.

You can ask context-rich questions, and Copilot can find answers from your local development history without searching through commits or messages.

Follow the steps below to enable the Pieces MCP integration with GitHub Copilot for smarter, personalized AI assistance.

Prerequisites

There are two primary prerequisites for integrating Pieces with GitHub Copilot as an MCP: an active instance of PiecesOS and the fully enabled Long-Term Memory engine.

1

Install & Run PiecesOS

Make sure that PiecesOS is installed and running. This is required for the MCP server to communicate with your personal repository of workflow data and pass context through to the GitHub Copilot chat agent.

If you do not have PiecesOS, you can download it alongside the Pieces Desktop App or install it standalone here.

2

Enable Long-Term Memory

For the MCP server to interact with your workflow context, you must enable the Long-Term Memory Engine (LTM-2) through the Pieces Desktop App or the PiecesOS Quick Menu in your toolbar.

SSE Endpoint

To use Pieces MCP with GitHub Copilot, you first need the Server-Sent Events (SSE) endpoint from PiecesOS:

http://localhost:39300/model_context_protocol/2024-11-05/sse

Keep in mind that the specific port PiecesOS is running on (e.g., 39300) may vary.

To find the current SSE endpoint for the active instance of PiecesOS (including the current port number), open the PiecesOS Quick Menu and expand the Model Context Protocol (MCP) Servers tab.

There, you can copy the SSE endpoint with one click, which includes the active PiecesOS port number.

You can also do this in the Pieces Desktop App by opening the Settings view and clicking Model Context Protocol (MCP).
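If you want to double-check the endpoint you copied before pasting it into your editor, a small sketch like the following can verify its shape. The helper below is hypothetical (not part of any Pieces tooling) and uses only the Python standard library; the port shown is the common default, and yours may differ.

```python
from urllib.parse import urlparse

def check_sse_endpoint(url: str) -> int:
    """Sanity-check a PiecesOS SSE endpoint URL and return its port."""
    parsed = urlparse(url)
    # PiecesOS serves the MCP endpoint over plain HTTP on localhost.
    assert parsed.scheme == "http", "expected an http:// URL"
    assert parsed.hostname == "localhost", "the endpoint is local to your machine"
    # The MCP endpoint path always ends in /sse.
    assert parsed.path.endswith("/sse"), "expected a path ending in /sse"
    return parsed.port

# 39300 is the common default port; substitute the one from your Quick Menu.
port = check_sse_endpoint("http://localhost:39300/model_context_protocol/2024-11-05/sse")
print(port)  # 39300
```
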

Setting Up GitHub Copilot

You can now use the Pieces MCP with both Visual Studio Code and Visual Studio Code Insiders.

Follow the steps below to get started—or watch the video below for a set-up tutorial and live demo.

via Visual Studio Code UI

Adding the Pieces MCP server through the built-in MCP menu is the easiest setup method and provides the best experience when using the Pieces MCP.

1

Open the Command Palette

Open Visual Studio Code and launch the Command Palette by pressing Cmd+Shift+P on macOS or Ctrl+Shift+P on Windows/Linux.

2

Add a New MCP Server

In the Command Palette, type MCP: Add Server and select the command when it appears.

3

Choose the Server Type

Select HTTP (sse) as the server type when requested.

4

Enter the SSE URL

Paste your SSE URL into the provided field.

For Pieces, use:

http://localhost:39300/model_context_protocol/2024-11-05/sse

Remember to grab the specific SSE URL (with the active PiecesOS port) from either the PiecesOS or Pieces Desktop App MCP menu.

5

Enter an MCP Server Name

When prompted, enter a name for your server, such as `Pieces` or something else easy to remember.

Then, select User Settings to save the MCP server configuration in your VS Code user settings so it can be accessed globally across workspaces, or choose Workspace Settings to use it only in your open project.

6

Save Your Configuration

Save your configuration. Your VS Code settings.json file should now include an entry similar to the example below:

{
  "mcpServers": {
    "Pieces": {
      "url": "http://localhost:39300/model_context_protocol/2024-11-05/sse"
    }
  }
}

As long as the chat is in Agent mode, GitHub Copilot will now see Pieces as an MCP server and automatically utilize the ask_pieces_ltm tool on-query.
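As a quick sanity check, you can parse your settings and confirm the Pieces entry landed where expected. This sketch assumes the mcpServers entry shown above; the JSON is inlined here for illustration, but you could just as easily read it from your settings.json file.

```python
import json

# Example of the entry your settings should now contain.
settings_text = """
{
  "mcpServers": {
    "Pieces": {
      "url": "http://localhost:39300/model_context_protocol/2024-11-05/sse"
    }
  }
}
"""

settings = json.loads(settings_text)
pieces = settings["mcpServers"]["Pieces"]
# The URL should be the SSE endpoint copied from the PiecesOS Quick Menu.
assert pieces["url"].endswith("/sse")
print("Pieces MCP entry looks valid:", pieces["url"])
```
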

via Global MCP Configuration

You can manually add the MCP server to your MCP settings JSON by following the steps below.

1

Open the Visual Studio Code Settings

Click the Settings Icon on the bottom left of your IDE and select Settings from the list.

2

Search for MCP

In the VS Code settings, search for MCP in the search bar at the top of the page. When the MCP section appears, select Edit in settings.json.

3

Add the MCP Server Configuration JSON

If you have no other MCP servers configured, replace the entire file with the PiecesOS MCP server JSON:

{
  "mcpServers": {
    "Pieces": {
      "url": "http://localhost:39300/model_context_protocol/2024-11-05/sse"
    }
  }
}

4

Save the File

Save the configuration.

Your GitHub Copilot chat, as long as it’s in Agent mode, will now see PiecesOS as an MCP.
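If your settings.json already lists other MCP servers, merge the Pieces entry rather than replacing the file. A rough sketch of that merge in Python (the "OtherServer" name is a hypothetical existing entry, not a real server):

```python
import json

# Configuration you might already have (hypothetical existing server).
existing = {
    "mcpServers": {
        "OtherServer": {"url": "http://localhost:4000/sse"}
    }
}

pieces_entry = {
    "Pieces": {"url": "http://localhost:39300/model_context_protocol/2024-11-05/sse"}
}

# Merge: keep existing servers, add (or update) the Pieces entry.
existing.setdefault("mcpServers", {}).update(pieces_entry)

print(json.dumps(existing, indent=2))
```

This keeps any previously configured servers intact, which matters because replacing the whole file would silently disconnect them.
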

Using Pieces MCP Server in GitHub Copilot

Once integrated, you can utilize Pieces LTM directly in Visual Studio Code.

1

Open GitHub Copilot Chat

Launch the GitHub Copilot chat interface in Visual Studio Code by clicking the Copilot icon, or by pressing ⌃⌘I (macOS) or Ctrl+Alt+I (Windows/Linux).

Change the Copilot mode from Ask to Agent.

2

Start Prompting

Enter your prompt, then click the send icon or press return (macOS) or enter (Windows/Linux) to send your query to Copilot.

Do not add the ask_pieces_ltm tool as context to the conversation. If you are running the chat in Agent mode—which is required for the Pieces MCP integration to operate successfully—it will automatically utilize this tool.


Check out this MCP-specific prompting guide if you want to effectively utilize the Long-Term Memory Engine (LTM-2) with your new Pieces MCP server.

Troubleshooting Tips

If you’re experiencing issues integrating Pieces MCP with GitHub Copilot, follow these troubleshooting steps:

  1. Verify PiecesOS Status: Ensure PiecesOS is actively running on your system. MCP integration requires PiecesOS to be operational.

  2. Confirm LTM Engine Activation: Make sure the Long-Term Memory Engine (LTM-2) is enabled in PiecesOS, as this engine aggregates the context necessary for Copilot to retrieve accurate results.

  3. Use Agent Mode in Chat: Copilot must be in Agent mode, not Ask, to access the ask_pieces_ltm tool. Switch to Agent mode to enable full MCP integration, and make sure not to add the ask_pieces_ltm tool as context; rely solely on the Agent chat mode.

  4. Single MCP Instance: Make sure you aren’t running multiple instances of the Pieces MCP server in different IDEs. Several MCP clients consuming the same SSE endpoint on the same port can conflict and cause issues across development environments.

  5. Check MCP Server Status: If you’re encountering messages such as “Sorry, I can’t do this,” your MCP server may not be properly configured or running.

  6. Go to settings.json in Visual Studio Code: Confirm the MCP server status shows "running" (it may say "start" or "pause" otherwise). Restart the server if necessary and inspect terminal outputs for error messages.

  7. Review Configuration Details: Double-check the MCP endpoint URL and the port number in your VS Code MCP configuration to ensure accuracy. You can find the current SSE endpoint URL in the Pieces Desktop App under Settings → Model Context Protocol (MCP), or in the PiecesOS Quick Menu. It is usually formatted as:

http://localhost:{port_number}/model_context_protocol/{version}/sse
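To rule out a dead server, you can also check whether anything is listening on the port from your endpoint URL. A minimal sketch, assuming the default port 39300 (substitute the port from your Quick Menu; the helper below is illustrative, not part of Pieces):

```python
import socket

def is_listening(port: int, host: str = "localhost", timeout: float = 1.0) -> bool:
    """Return True if a TCP server is accepting connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 39300 is the common default PiecesOS port; yours may differ.
if not is_listening(39300):
    print("Nothing is listening on port 39300; check that PiecesOS is running.")
```
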

You're now ready to improve your workflow with powerful context retrieval using Pieces MCP, seamlessly integrated into Visual Studio Code with GitHub Copilot. Happy coding!
