GenerationConfig Class

Configuration options for model generation and outputs. Not all parameters are configurable for every model.

Definition

Namespace: GenerativeAI.Types
Assembly: GenerativeAI (in GenerativeAI.dll) Version: 2.0.2+aa51399cad6d90cc71158d589a6268608b3c1893
C#
public class GenerationConfig
Inheritance
Object → GenerationConfig

Constructors

GenerationConfig Initializes a new instance of the GenerationConfig class.
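
The following sketch shows a typical construction using C# object-initializer syntax. The values are illustrative only, and the request or model call that would consume the config is assumed rather than taken from this page.

C#
using GenerativeAI.Types;

// Minimal sketch: cap the response length and keep output fairly deterministic.
var config = new GenerationConfig
{
    Temperature = 0.2f,      // lower values reduce randomness
    MaxOutputTokens = 1024,  // default varies by model
    CandidateCount = 1       // currently the only supported value
};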

Properties

CandidateCount Optional. Number of generated responses to return. Currently, this value can only be set to 1. If unset, this will default to 1.
EnableEnhancedCivicAnswers Optional. Enables enhanced civic answers. It may not be available for all models.
FrequencyPenalty Optional. Frequency penalty applied to the next token's logprobs, multiplied by the number of times each token has been seen in the response so far. A positive penalty discourages the use of tokens in proportion to how often they have already been used: the more often a token appears, the harder it becomes for the model to use it again, which increases the vocabulary of the response. Caution: A *negative* penalty encourages the model to reuse tokens in proportion to how often they have been used. Small negative values will reduce the vocabulary of a response. Larger negative values will cause the model to start repeating a common token until it hits the MaxOutputTokens limit.
Logprobs Optional. Only valid if ResponseLogprobs is True. This sets the number of top logprobs to return at each decoding step in the LogprobsResult.
MaxOutputTokens Optional. The maximum number of tokens to include in a response candidate. Note: The default value varies by model, see the Model.output_token_limit attribute of the Model returned from the getModel function.
PresencePenalty Optional. Presence penalty applied to the next token's logprobs if the token has already been seen in the response. This penalty is binary on/off and does not depend on the number of times the token is used (after the first); use FrequencyPenalty for a penalty that increases with each use. A positive penalty discourages the use of tokens that have already appeared in the response, increasing the vocabulary. A negative penalty encourages the reuse of tokens that have already appeared, decreasing the vocabulary. (See the penalties example after this list.)
ResponseLogprobs Optional. If true, exports the logprobs results in the response. (See the logprobs example after this list.)
ResponseMimeType Optional. MIME type of the generated candidate text. Supported MIME types are: text/plain: (default) Text output. application/json: JSON response in the response candidates. text/x.enum: ENUM as a string response in the response candidates. Refer to the docs for a list of all supported text MIME types.
ResponseModalities Optional. The requested modalities of the response. Represents the set of modalities that the model can return, and should be expected in the response. This is an exact match to the modalities of the response. A model may have multiple combinations of supported modalities. If the requested modalities do not match any of the supported combinations, an error will be returned. An empty list is equivalent to requesting only text.
ResponseSchema Optional. Output schema of the generated candidate text. Schemas must be a subset of the OpenAPI schema and can be objects, primitives, or arrays. If set, a compatible ResponseMimeType must also be set. Compatible MIME types: application/json: Schema for JSON response. Refer to the JSON text generation guide for more details. (See the JSON example after this list.)
Seed Optional. Seed used in decoding. If not set, the request uses a randomly generated seed.
SpeechConfig Optional. The speech generation config.
StopSequences Optional. The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop_sequence. The stop sequence will not be included as part of the response.
Temperature Optional. Controls the randomness of the output. Note: The default value varies by model, see the Model.Temperature attribute of the Model returned from the getModel function. Values can range from 0.0 to 2.0. (See the sampling example after this list.)
TopK Optional. The maximum number of tokens to consider when sampling. Gemini models use Top-p (nucleus) sampling or a combination of Top-k and nucleus sampling. Top-k sampling considers the set of TopK most probable tokens. Models running with nucleus sampling don't allow a TopK setting. Note: The default value varies by model and is specified by the TopK attribute returned from the getModel function. An empty TopK attribute indicates that the model doesn't apply top-k sampling and doesn't allow setting TopK on requests.
TopP Optional. The maximum cumulative probability of tokens to consider when sampling. The model uses combined Top-k and Top-p (nucleus) sampling. Tokens are sorted by their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on cumulative probability. Note: The default value varies by model and is specified by the TopP attribute returned from the getModel function.
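
Examples

The decoding controls above (Temperature, TopP, TopK, Seed, MaxOutputTokens, StopSequences) are plain settable properties, so a configuration is typically a single object initializer. A minimal sketch with illustrative values; StopSequences is assumed here to accept a List<string>.

C#
using System.Collections.Generic;
using GenerativeAI.Types;

// Sampling controls: low temperature plus nucleus sampling,
// with a fixed seed for more reproducible decoding.
var samplingConfig = new GenerationConfig
{
    Temperature = 0.4f,    // 0.0 to 2.0; lower is less random
    TopP = 0.95f,          // nucleus sampling cutoff
    TopK = 40,             // ignored by nucleus-only models
    Seed = 1234,
    MaxOutputTokens = 512,
    StopSequences = new List<string> { "END_OF_ANSWER" }  // up to 5 sequences
};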
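
FrequencyPenalty and PresencePenalty are independent and can be combined: the presence penalty is applied once per already-seen token, while the frequency penalty grows with each reuse. A short sketch with illustrative values:

C#
using GenerativeAI.Types;

// Discourage repetition without forbidding it outright.
var antiRepeatConfig = new GenerationConfig
{
    PresencePenalty = 0.5f,   // one-time penalty once a token has appeared
    FrequencyPenalty = 0.3f   // additional penalty per repeated use
};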
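
For structured output, ResponseSchema must be paired with a compatible ResponseMimeType. The sketch below assumes the library exposes a Schema type whose members (Type, Properties, Required) mirror the OpenAPI subset described above; those member names are an assumption, not confirmed by this page.

C#
using System.Collections.Generic;
using GenerativeAI.Types;

// Ask for JSON that matches a simple object schema.
// NOTE: Schema member names are assumed; check the Schema type's own docs.
var jsonConfig = new GenerationConfig
{
    ResponseMimeType = "application/json",
    ResponseSchema = new Schema
    {
        Type = "object",
        Properties = new Dictionary<string, Schema>
        {
            ["title"] = new Schema { Type = "string" },
            ["year"] = new Schema { Type = "integer" }
        },
        Required = new List<string> { "title" }
    }
};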
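
Logprobs only takes effect when ResponseLogprobs is true. A minimal sketch:

C#
using GenerativeAI.Types;

// Return the top candidate logprobs at each decoding step.
var logprobsConfig = new GenerationConfig
{
    ResponseLogprobs = true,  // must be true for Logprobs to apply
    Logprobs = 5              // number of top logprobs per step
};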

Methods

Equals Determines whether the specified object is equal to the current object. (Inherited from Object)
Finalize Allows an object to try to free resources and perform other cleanup operations before it is reclaimed by garbage collection. (Inherited from Object)
GetHashCode Serves as the default hash function. (Inherited from Object)
GetType Gets the Type of the current instance. (Inherited from Object)
MemberwiseClone Creates a shallow copy of the current Object. (Inherited from Object)
ToString Returns a string that represents the current object. (Inherited from Object)