How to Use
- Open the model selector in the chat input.
- Toggle Use Multiple Models.
- Select two or more models. Each model shows a count badge when it is part of the parallel set.
- Type your message and submit.
Parallel responses require authentication. Anonymous sessions are limited to a single model per request.
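The fan-out described above can be sketched as follows. This is a minimal illustration, not the app's real API: `sendToModel`, `ParallelRequest`, and the model ids are assumptions. It shows one prompt going out to every model in the selected set as separate requests.

```typescript
// Hypothetical sketch: one user message fanned out to each selected model.
// Names (sendToModel, ParallelRequest) are illustrative assumptions.
type ModelId = string;

interface ParallelRequest {
  prompt: string;
  models: ModelId[]; // two or more when "Use Multiple Models" is toggled on
}

async function fanOut(
  req: ParallelRequest,
  sendToModel: (model: ModelId, prompt: string) => Promise<string>
): Promise<Map<ModelId, string>> {
  // Each model is a separate request (and, per the constraints below,
  // billed separately).
  const entries = await Promise.all(
    req.models.map(async (m) => [m, await sendToModel(m, req.prompt)] as const)
  );
  return new Map(entries);
}
```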
Response Cards
Each card in the batch displays:
- The model label
- A streaming indicator while the response is being generated
- One of three states: Generating, Task completed, or Selected
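A card's data can be modeled as below. This is a sketch under assumed names (`ResponseCard`, `describe`); the three states mirror the list above.

```typescript
// Illustrative model of a response card; field and function names are
// assumptions, not the app's actual types.
type CardState = "generating" | "completed" | "selected";

interface ResponseCard {
  modelLabel: string;
  state: CardState;
  streaming: boolean; // true only while the response is being generated
}

// Maps the internal state to the label shown on the card.
function describe(card: ResponseCard): string {
  const labels: Record<CardState, string> = {
    generating: "Generating",
    completed: "Task completed",
    selected: "Selected",
  };
  return `${card.modelLabel}: ${labels[card.state]}`;
}
```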
Clicking a card selects it and calls switchToMessage, which rebuilds the visible thread from that assistant branch down to its most recent leaf. The previously selected branch is preserved, and you can return to it at any time by clicking its card again.
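The branch-rebuild can be sketched as a walk down the message tree: from the selected assistant message, follow each node's most recent child until reaching a leaf. The tree shape and helper name here are assumptions, not the real implementation of switchToMessage.

```typescript
// Minimal sketch of rebuilding the visible thread from a selected branch.
// MessageNode and branchToLeaf are illustrative assumptions.
interface MessageNode {
  id: string;
  children: MessageNode[]; // ordered oldest -> newest
}

function branchToLeaf(start: MessageNode): string[] {
  const path: string[] = [];
  let node: MessageNode | undefined = start;
  while (node) {
    path.push(node.id);
    // The most recent child continues the visible thread.
    node = node.children[node.children.length - 1];
  }
  return path;
}
```

Because only the visible path is rebuilt, sibling branches remain in the tree, which is why the previously selected branch can be restored by clicking its card again.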
Continuing the Conversation
After you select a card, the conversation continues from that branch as a normal single-model thread. Subsequent messages go only to the model of the selected branch. You can always navigate back up and pick a different card before sending a follow-up.
Current Constraints
- Parallel responses are not available when the message includes attachments. Mixed model capabilities across attachment types are not yet supported.
- Each model in the parallel set is billed as a separate request.
Related
- Branching - how the underlying thread tree works
- Multi-Model Support - configuring which models are available