genai.text.embedding.embedding_service module

pydantic model genai.text.embedding.embedding_service.BaseConfig[source]

Bases: BaseServiceConfig

Config:
  • extra: str = forbid

  • validate_assignment: bool = True

  • validate_default: bool = True

field create_execution_options: CreateExecutionOptions = CreateExecutionOptions(throw_on_error=True, ordered=True, concurrency_limit=None, callback=None)
pydantic model genai.text.embedding.embedding_service.BaseServices[source]

Bases: BaseServiceServices

Config:
  • extra: str = forbid

  • validate_assignment: bool = True

  • validate_default: bool = True

field LimitService: type[LimitService] = <class 'genai.text.embedding.limit.limit_service.LimitService'>
pydantic model genai.text.embedding.embedding_service.CreateExecutionOptions[source]

Bases: BaseModel

field callback: Annotated[Callable[[TextEmbeddingCreateResponse], None] | None, FieldInfo(annotation=NoneType, required=True, description='Callback which is called every time a response arrives.')] = None

Callback which is called every time a response arrives.

field concurrency_limit: Annotated[int | None, FieldInfo(annotation=NoneType, required=True, description="Upper bound for concurrent executions (in case the passed value is higher than the API allows, the API's limit will be used).", metadata=[Ge(ge=1)])] = None

Upper bound for concurrent executions (in case the passed value is higher than the API allows, the API’s limit will be used).

Constraints:
  • ge = 1

field ordered: Annotated[bool, FieldInfo(annotation=NoneType, required=True, description='Items will be yielded in the order they were passed in, although they may be processed on the server in different order.')] = True

Items will be yielded in the order they were passed in, although they may be processed on the server in different order.

field throw_on_error: Annotated[bool, FieldInfo(annotation=NoneType, required=True, description="Flag indicating whether to throw an error if any error occurs during execution (if disabled, 'None' may be returned in case of error).")] = True

Flag indicating whether to throw an error if any error occurs during execution (if disabled, ‘None’ may be returned in case of error).
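Taken together, these fields can be supplied either as a CreateExecutionOptions instance or as an equivalent plain dict. A minimal sketch (the callback and the limit of 4 are illustrative values, not defaults):

```python
# Illustrative execution options; create() accepts a dict like this
# in place of a CreateExecutionOptions instance.
received = []

def on_response(response):
    # Invoked once for every incoming response.
    received.append(response)

execution_options = {
    "throw_on_error": False,  # yield None for failed items instead of raising
    "ordered": True,          # yield results in the order inputs were passed
    "concurrency_limit": 4,   # upper bound; the API's own limit still applies
    "callback": on_response,
}

on_response("dummy-response")  # the SDK would call this once per response
print(len(received))  # 1
```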

class genai.text.embedding.embedding_service.EmbeddingService[source]

Bases: BaseService[BaseConfig, BaseServices]

Config

alias of BaseConfig

Services

alias of BaseServices

__init__(*, api_client, services=None, config=None)[source]
Parameters:
  • api_client – API client instance used to make requests.

  • services (optional) – Override for the default BaseServices.

  • config (optional) – Override for the default BaseConfig.
create(*, model_id, inputs, parameters=None, execution_options=None)[source]

Creates embedding vectors from one or more inputs.

Parameters:
  • model_id (str) – The ID of the model.

  • inputs (str | list[str]) – Text/texts to process. It is recommended not to leave any trailing spaces.

  • parameters (dict | TextEmbeddingParameters | None) – Parameters for embedding.

  • execution_options (dict | CreateExecutionOptions | None) – Optional configuration of how the SDK should behave (error handling, limits, callbacks, …)

Return type:

Generator[TextEmbeddingCreateResponse, None, None]

Example:

from genai import Client, Credentials

client = Client(credentials=Credentials.from_env())

responses = list(
    client.text.embedding.create(
        model_id="sentence-transformers/all-minilm-l6-v2",
        inputs="Write a tagline for an alumni association: Together we",
    )
)
print("Output vector", responses[0].results[0])
Yields:

TextEmbeddingCreateResponse object.
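When throw_on_error is disabled in the execution options, the generator may yield None for failed items, so the consumer should filter them out. A hedged sketch of that pattern (the stand-in responses below only mimic the `results` attribute of TextEmbeddingCreateResponse; real responses come from the client):

```python
from types import SimpleNamespace

def collect_vectors(responses):
    """Extract embedding results, skipping failed (None) responses,
    which may occur when execution_options.throw_on_error is False."""
    vectors = []
    for response in responses:
        if response is None:  # the item failed and no error was raised
            continue
        vectors.extend(response.results)
    return vectors

# Stand-in responses for illustration only:
fake = [SimpleNamespace(results=[[0.1, 0.2]]), None, SimpleNamespace(results=[[0.3, 0.4]])]
print(collect_vectors(fake))  # [[0.1, 0.2], [0.3, 0.4]]
```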
