LTM Context Toggle

Control whether Conversational Search includes Long-Term Memory context in your conversations. When enabled, your chats automatically draw from up to 9 months of captured workflow context.

LTM must be enabled to use the chat feature in the Pieces Desktop App; if LTM is off, chat is unavailable. LTM context is enabled by default in new conversations.

Enabling or Disabling LTM Context

Click your `User Profile` in the top left of the Pieces Desktop App, then hover over `LTM-2.7` in the dropdown menu that appears. To keep LTM active, make sure it is not paused or turned off. To disable it, select a pause duration (15 minutes, 1 hour, 6 hours, 12 hours, or 24 hours) or choose `Turn Off`. While LTM is paused or off, Conversational Search does not include workflow history context.

User profile menu showing LTM-2.7 hover menu with pause and turn off options

You can also set whether LTM context is enabled by default for new chats from the LLM runtime settings gear.

Starting a Conversation from a Timeline Event

Start context-specific chats directly from any Timeline Event. When you start a conversation with a Timeline Event, it opens in Conversational Search with that event's full context pre-loaded and displayed as an information card.

Click any event in the Pieces Timeline to view its summary in the *main panel*, then click `Start Related Chat` in the bottom right of the Timeline Event detail view to open Conversational Search with that event's context loaded.

Timeline Event detail showing Start Related Chat opening Conversational Search with pre-loaded context

You can also use the three-dots menu (⋮) on any event and choose `Chat` to scope a conversation to that item. See Chat from a summary.

Chat Pipelines

When you start a New Chat, you can pick a chat pipeline that shapes how the model uses context:

| Pipeline | Type | Use case |
| --- | --- | --- |
| Generally discuss technical topics | Multipurpose | General technical discussion and mixed modalities. |
| Ask questions about a local code base | Project-oriented comprehension | Optimized when LTM has captured relevant IDE or repo activity. |
| Generate code for a local project | Project-oriented generation | Optimized when recent workflow memories include the project. |
Chat pipeline dropdown with multipurpose, comprehension, and generation options

You can set one of these pipelines as the default when creating new chats.

Viewing the Relevant Summaries Sidebar

After receiving a response, you can see exactly which Timeline Events were used to generate it.

Look for the `Relevant Summaries` button at the bottom of a chat response and click it to open the sidebar on the right side. Each entry shows the Timeline Event's title, description, timestamp, and related applications. Use the sort dropdown in the top right of the sidebar to order entries by `Suggested`, `Recent`, or `Most Viewed`. Click any Timeline Event to view full details about when it was captured and what context it contains.

Relevant Summaries sidebar showing Timeline Events used to generate the response
Learn how to choose and manage AI models in Models.