What Are Core Dependencies?
Pieces for Developers products, including the Pieces for Developers Desktop Application, utilize two core dependencies to provide a local, secure, and efficient development experience—PiecesOS and Ollama.
What Are They?
To run any Pieces software, you will need PiecesOS, the backbone of the Pieces Suite. This lightweight application runs in the background on your device and powers the Long-Term Memory (LTM-2) Engine, Pieces Drive, and the Pieces Copilot.
Running local LLMs requires downloading and installing the Ollama wrapper, which powers on-device AI capabilities such as querying Pieces Copilot and the local inference required by the LTM-2 Engine.
- PiecesOS: The backbone of the Pieces suite, managing local memory, AI-driven workflow enhancements, and seamless integrations with your development environment.
- Ollama: A specialized wrapper that enables local AI inference, allowing Pieces Copilot and other features to leverage machine learning models directly on your device.
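For a concrete sense of what the Ollama layer provides, the sketch below queries a locally running model through Ollama's official Python client. This is an illustrative example rather than part of the Pieces API; it assumes Ollama is installed, the `ollama` Python package is available, and a model such as `llama3` has already been pulled.

```python
import ollama

# Send a single chat message to a model running entirely on-device.
# Assumes `ollama pull llama3` has been run beforehand.
response = ollama.chat(
    model="llama3",
    messages=[{"role": "user", "content": "Summarize what a core dependency is."}],
)

# The reply is generated locally; no data leaves your machine.
print(response["message"]["content"])
```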
What Do They Do?
These dependencies, PiecesOS and Ollama, are lightweight services and engines that handle everything from local model management and context storage to advanced local inference for AI-assisted workflows.
PiecesOS is required for all Pieces products, including:
- Pieces for Developers Desktop App
- Plugins & Extensions for JetBrains, VS Code, Sublime Text, JupyterLab, Azure Data Studio, Neovim, Raycast, Obsidian, the Pieces CLI, and more.
Why Do We Need Them?
Pieces for Developers is designed with speed and efficiency in mind, so PiecesOS acts as the central layer shared by all Pieces products, minimizing client-side overhead and duplicated code while remaining secure and highly configurable.
Our focus on security and flexibility is why we've introduced the Ollama wrapper for local large language models. Users can switch to entirely on-device generative AI, and by offloading most operations to the local machine, the user experience benefits from:
- Instant AI-powered assistance without cloud latency.
- 100% local memory storage with full control over data.
- Offline functionality, ensuring a seamless experience even when disconnected from the internet.
- Lightweight, background operation, consuming minimal system resources.
However, installing Ollama is optional: you only need it if you want to use local models, which is especially useful in enterprise settings where strong device security is important.
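If you do install Ollama, you can confirm that it is running and see which models are available locally. The sketch below is a generic check against Ollama's standard local REST endpoint (port 11434 by default); it is not a Pieces-specific API.

```python
import json
import urllib.request

# Ollama serves a local REST API on port 11434 by default.
# The /api/tags endpoint lists the models pulled to this machine.
try:
    with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
        models = json.load(resp).get("models", [])
    print("Ollama is running. Local models:")
    for model in models:
        print(f"  - {model['name']}")
except OSError:
    print("Ollama does not appear to be running on this machine.")
```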
| Dependency | Purpose | Required? |
| --- | --- | --- |
| PiecesOS | Manages memory, developer material storage, and plugin communication. | Yes — required for all Pieces products. |
| Ollama | Enables locally-powered generative AI queries and model execution. | No — but required for local AI inference. |