A request from the server to sample an LLM via the client. The client has full discretion over which model to select. The client should also inform the user before beginning sampling, to allow them to inspect the request (human in the loop) and decide whether to approve it.

Optional includeContext?: "none" | "thisServer" | "allServers"
A request to include context from one or more MCP servers (including the caller), to be attached to the prompt. The client MAY ignore this request.
maxTokens: number
The requested maximum number of tokens to sample (to prevent runaway completions). The client MAY choose to sample fewer tokens than the requested maximum.
Optional metadata?: object
Optional metadata to pass through to the LLM provider. The format of this metadata is provider-specific.
Optional modelPreferences?: ModelPreferences
The server's preferences for which model to select. The client MAY ignore these preferences.
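As a sketch of what such preferences can look like, assuming the hint and priority fields the MCP schema defines for ModelPreferences and the SDK's exported type (the hint name is illustrative; the client MAY map it to an equivalent model from another provider):

```typescript
import type { ModelPreferences } from "@modelcontextprotocol/sdk/types.js";

const preferences: ModelPreferences = {
  // Evaluated in order; each hint is treated as a substring of a model
  // name, and the client MAY substitute an equivalent model.
  hints: [{ name: "claude-3-5-sonnet" }],
  // Relative 0-1 weights the client balances when choosing a model.
  costPriority: 0.3,
  speedPriority: 0.2,
  intelligencePriority: 0.8,
};
```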
Optional stopSequences?: string[]

Optional systemPrompt?: string
An optional system prompt the server wants to use for sampling. The client MAY modify or omit this prompt.
Optional temperature?: number
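Putting the fields together, below is a minimal sketch of issuing such a request from a server, assuming the TypeScript SDK's Server.createMessage helper (which sends a sampling/createMessage request to the connected client); the prompt text and metadata values are illustrative, and the messages array is required by the sampling schema alongside maxTokens.

```typescript
import { Server } from "@modelcontextprotocol/sdk/server/index.js";

// Assumes `server` is already connected to a client that declared the
// "sampling" capability.
async function summarizeSession(server: Server) {
  const result = await server.createMessage({
    // The conversation to sample from (required, like maxTokens).
    messages: [
      {
        role: "user",
        content: { type: "text", text: "Summarize the errors in this session." },
      },
    ],
    maxTokens: 500,                               // client MAY sample fewer
    systemPrompt: "You are a concise assistant.", // client MAY modify or omit
    includeContext: "thisServer",                 // client MAY ignore
    temperature: 0.7,
    stopSequences: ["\n\n"],
    metadata: { requestTag: "log-summary" },      // provider-specific passthrough
  });

  // The client replies with the sampled message and reports which model
  // it actually selected.
  return { model: result.model, content: result.content };
}
```

Because the client sits between the server and the LLM, nothing in this call guarantees the requested parameters are honored: the client may rewrite the system prompt, pick a different model, or present the whole request to the user for approval before sampling.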