What is PiecesOS?
PiecesOS is a background service that runs on your machine. It orchestrates local data processing, manages on-device machine learning models, and serves as the bridge between your workflow and every Pieces product—including the Pieces Desktop App, MCP integrations, and the CLI.
[Image: PiecesOS bridging all Pieces products and services]
What PiecesOS Does
PiecesOS powers three core capabilities:
Long-Term Memory (LTM-2.7)
The LTM-2.7 Engine continuously captures workflow context—code you copy, screens you view, audio you hear—and stores it locally on your device. This memory powers Timeline, Conversational Search, and MCP integrations with real context from your day.
Local & Cloud AI Models
PiecesOS manages AI model inference for all Pieces products. Run models entirely on-device for privacy, use cloud models (OpenAI, Anthropic, Google) for speed, or choose blended mode to combine the two. Configure your processing mode from the Quick Menu.
MCP Support
The Model Context Protocol (MCP) is an open framework that lets LLMs access your workflow context. PiecesOS serves as the MCP host, connecting tools like Cursor, VS Code, Claude, and ChatGPT to your Long-Term Memory without custom integrations.
[Image: Pieces MCP integration with Cursor showing context-aware documentation changes]
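As a rough sketch, an MCP-capable client is usually pointed at PiecesOS with a small configuration entry like the one below. The server name and the endpoint URL (including the port) are illustrative assumptions here; confirm the actual values in your PiecesOS settings and your client's MCP documentation.

```json
{
  "mcpServers": {
    "Pieces": {
      "url": "http://localhost:39300/model_context_protocol/2024-11-05/sse"
    }
  }
}
```

Once the client reads this configuration, its LLM can query your Long-Term Memory through PiecesOS without any custom integration code.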
Privacy & Local-First Design
All data captured by PiecesOS is stored locally on your device. Nothing leaves your machine unless you explicitly enable cloud features. PiecesOS applies on-device ML to filter out sensitive information and secrets.
Installing PiecesOS
PiecesOS installs automatically with the Pieces Desktop App. If you want to run it standalone (for example, with MCP integrations only), see Manual Installation.