AI LLM endpoint params Google


type
string
google_params

The type of the AI LLM endpoint params object for Google. This parameter is required.

Value is always google_params

temperature
number

The temperature is used for sampling during response generation, which occurs when top-P and top-K are applied. Temperature controls the degree of randomness in token selection.
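To illustrate what "degree of randomness in token selection" means, here is a minimal sketch of temperature scaling applied to raw model logits before sampling. The function name and logit values are illustrative assumptions, not part of this API:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to sampling probabilities.

    Dividing logits by a low temperature sharpens the distribution
    (selection becomes near-deterministic); a high temperature
    flattens it (selection becomes more random).
    """
    scaled = [logit / temperature for logit in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Same logits, different temperatures: the leading token's share grows
# as temperature drops.
probs_cold = softmax_with_temperature([2.0, 1.0, 0.5], temperature=0.2)
probs_hot = softmax_with_temperature([2.0, 1.0, 0.5], temperature=2.0)
```

At `temperature=0.2` nearly all probability mass lands on the highest-scoring token; at `temperature=2.0` the mass spreads across all three.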

top_k
number
10.12

Top-K changes how the model selects tokens for output. A top-K of 1 means the next selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-K of 3 means that the next token is selected from among the three most probable tokens by using temperature.
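The top-K behavior described above can be sketched as a filter over the token distribution; the function and example vocabulary are assumptions for illustration only:

```python
import heapq

def top_k_filter(token_probs, k):
    """Keep only the k most probable tokens and renormalize.

    With k=1 this reduces to greedy decoding: the single most
    probable token is always selected.
    """
    top = heapq.nlargest(k, token_probs.items(), key=lambda kv: kv[1])
    total = sum(p for _, p in top)
    return {token: p / total for token, p in top}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "dog": 0.05}
greedy = top_k_filter(probs, 1)   # {"the": 1.0} — greedy decoding
top3 = top_k_filter(probs, 3)     # sampling then happens among 3 tokens
```

With `k=3`, the model would then sample among the three surviving tokens according to their renormalized probabilities (shaped by temperature).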

top_p
number
10.12

Top-P changes how the model selects tokens for output. Tokens are selected from the most probable (see top-K) to the least probable until the sum of their probabilities equals the top-P value.
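A minimal sketch of that cumulative-probability cutoff (often called nucleus sampling); the function name and example distribution are illustrative assumptions:

```python
def top_p_filter(token_probs, p):
    """Keep the smallest set of most-probable tokens whose cumulative
    probability reaches p, then renormalize for sampling."""
    ranked = sorted(token_probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for token, prob in ranked:
        kept.append((token, prob))
        cumulative += prob
        if cumulative >= p:
            break  # the cutoff: stop once the sum reaches top-P
    total = sum(prob for _, prob in kept)
    return {token: prob / total for token, prob in kept}

probs = {"the": 0.5, "a": 0.35, "cat": 0.1, "dog": 0.05}
nucleus = top_p_filter(probs, 0.8)  # keeps "the" and "a" (0.5 + 0.35 >= 0.8)
```

Note how top-P adapts the candidate set to the distribution: a peaked distribution keeps few tokens, a flat one keeps many, whereas top-K always keeps exactly k.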

Response Example

{
  "type": "google_params",
  "temperature": 0,
  "top_k": 1,
  "top_p": 1
}
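A params object matching the response example above could be built as a plain dictionary before being sent with a request; this is a sketch of the payload only, not a client for any particular SDK:

```python
# Field values mirror the response example in this reference;
# comments restate the documented constraints.
google_params = {
    "type": "google_params",  # required; value is always "google_params"
    "temperature": 0,         # randomness in token selection
    "top_k": 1,               # 1 = greedy decoding
    "top_p": 1,               # cumulative-probability cutoff
}
```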