Changelog
=========

1.2.10
------

🚀 Features:

- Handle retries more robustly
- Added reference documents to the RAGPattern service stream
- Support for ``Utility Agent Tools``

🐛 Bug Fixes:

- Ensure ``verify`` is set correctly for the httpx client
- Support for deployment of AI services generated in experiments with the Chroma vector store
- Fixed ``version`` parameter validation to apply only to CPD
- Fixed an extra space in the ``time_limit`` parameter in ``GenTextParamsMetaNames``

1.2.9
-----

🚀 Features:

- Changed the authentication method in Flight and refactored to allow private RT
- Added support for window search with Elasticsearch

🐛 Bug Fixes:

- Changed the approach to the listing limit for resources
- Fixed the proxies setting for httpx
- Raise an error when adding non-chunked documents to a vector store

1.2.8
-----

🚀 Features:

- Added a ``get_all`` param to ``AutoPipelinesRuns.list()`` and set ``200`` as the default number of listed records

🐛 Bug Fixes:

- Allow inference deployments without providing ``project_id`` or ``space_id``
- ``APIClient.set_token`` now correctly changes the authentication method
- Updated the ``concurrency_limit`` default value to 8
- Improved CP4D version value handling
- Converted regex strings causing ``SyntaxWarning`` to raw strings
- Added missing parameters to the ``TextChatParameters`` dataclass

1.2.7
-----

🚀 Features:

- Added a ``get_all`` param to ``RAGEngine.get_details()``

🐛 Bug Fixes:

- Allow passing ``concurrency_limit`` when initializing ``Embeddings``
- Raise the "AutoAI RAG is not available" error only if the response is 404
- Return ``usage`` metadata when running the ``ModelInference.chat_stream()`` method (see the example below)

1.2.6
-----

🚀 Features:

- Support for reading documents from an NFS folder

1.2.5
-----

🚀 Features:

- Support for LoRA/QLoRA Tuning in watsonx.ai software 5.1.1 and later

🐛 Bug Fixes:

- Replace only the first occurrence in the response text in stream mode
- Added an error to handle "method not allowed" when running AI Services
- Disabled warnings from ``pypdf.PdfReader`` in ``TextLoader``

1.2.4
-----

🚀 Features:

- Added support for generating a token from ``zen-service-broker-secret``
- Added support for AI Services on CPD 5.1.1 and later
- Added an indexing step to the autogenerated inference AI service in the ChromaDB scenario in RAGPattern
- Added support for native async stream methods for text generation and chat

🐛 Bug Fixes:

- Fixed an authentication error during text generation when using ``meta-llama/llama-3-1-8b-instruct``
- Check the file extension against the lowercase filename
- Raise an error when an error event occurs while generating a stream
- Removed the default list length limit for WML resources
- Improved the error message for creating connections with an invalid payload

1.2.2
-----

🚀 Features:

- Added integration with ILAB/wx.ai

🐛 Bug Fixes:

- Import ``pyarrow`` only when ``FlightConnection`` is used
- Close ``httpx.Response`` in the retry mechanism

1.2.1
-----

🚀 Features:

- Introduced support for AI Services within RAGPattern
- Implemented validation of the CPD URL to ensure correctness
- Added support for the new ``status`` object in the get AutoAI RAG run response
- Removed a potential scheme in ``flight_location``

🐛 Bug Fixes:

- Renamed the ``word_to_token_factor`` parameter to ``word_to_token_ratio`` in the prompt builder
- Enhanced ``AutoPipelinesRuns.get_run_details()`` to retrieve the latest run across all endpoints and raise an error when resources are unavailable
- Added a custom timeout for closing persistent connections
- Task Credentials support improvements
- Discover all available assets instead of limiting to 100
- Fixed endpoints for the DataPlatform API
- Disabled the warnings logger for ``LangChainChunker.split_documents()``
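The sketch below illustrates two of the entries above: the ``TextChatParameters`` additions (1.2.8) and the ``usage`` metadata returned while streaming with ``ModelInference.chat_stream()`` (1.2.7). It is a minimal sketch, not a verified recipe: the URL, API key, project ID, model ID, and parameter values are placeholders.

.. code-block:: python

    from ibm_watsonx_ai import APIClient, Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference
    from ibm_watsonx_ai.foundation_models.schema import TextChatParameters

    # Placeholder credentials and project -- substitute real values.
    client = APIClient(
        Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="***"),
        project_id="<project_id>",
    )

    params = TextChatParameters(temperature=0.2, max_tokens=256)
    model = ModelInference(model_id="ibm/granite-13b-chat-v2", api_client=client)

    messages = [{"role": "user", "content": "What does a vector store do?"}]
    for chunk in model.chat_stream(messages=messages, params=params):
        # Streamed chunks are dict-like deltas; usage metadata arrives with the
        # final chunks after the 1.2.7 fix.
        print(chunk)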
1.1.26
------

🐛 Bug Fixes:

- Get the latest run, including both endpoints, and raise an error for empty resources in ``TuneRuns.get_run_details()``
- Added validation of storage details in project/space details for Container ``DataConnection``
- Added ``fs`` as a supported training data references type for models
- Updated the ``cancel_run`` method in ``RAGOptimizer`` to avoid returning cached training details

1.1.25
------

🚀 Features:

- Added cancel job support to text extractions
- Support for Deploy on Demand
- Added the ``word_to_token_factor`` parameter to the RAGPattern constructor
- Added a batch size param to embeddings (see the example below)
- Updated the ``add_documents`` method in ``milvus_adapter``

🐛 Bug Fixes:

- Add the batch size to the dict only if customized
- Added training data references type validation in ``repository.store_model``
- Disabled logging of ``NotS3Connection`` in one case
- Added pagination support to the lookup of datasource types
- Added pagination for datasources

1.1.24
------

🚀 Features:

- Added support for the IBM watsonx.data Milvus connection
- Added a method to get chat model specs with function calling support
- Added support for the ``achat`` method

🐛 Bug Fixes:

- Improved storing AI services content

1.1.23
------

🚀 Features:

- Support for the Sydney watsonx.ai API endpoint
- Added a "Large" option for ``HardwareRequestSizes``

🐛 Bug Fixes:

- Updated connections to enable the AutoAI RAG experiment

1.1.22
------

🚀 Features:

- Adjusted ``ai_service`` support to align with recent changes in the AI services backend
- Added a deprecation warning on the Model class
- Support for writing to ``FSLocation``
- Added get ID by name for spaces

🐛 Bug Fixes:

- Downgraded the ``grpcio`` version requirement to ``>=1.54.3``
- Removed a flag from the RAGPattern constructor
- Updated the ``show`` schema method
- Updated the text extraction delete job
- Fixed an SSL error on FIPS clusters when downloading documents from a COS folder
- Save Milvus's SSL certificate file in a temporary file
- Improved the error message for creating connections with an invalid payload
- Updated the ``set_api_client`` method

1.1.17
------

🚀 Features:

- Added documentation for Fine-Tuning: `Working with TuneExperiment and FineTuner `_

🐛 Bug Fixes:

- Improved error reporting for ``DataConnection.read()`` when attempting to read from a database with the flight service disabled
- Added a generic error message for failed attempts to download datasets due to flight service issues
- Restricted Python version compatibility to versions below 3.13
- Migrated dependencies to LangChain version 0.3.x

1.1.16
------

🚀 Features:

- Introduced support for loading text from Markdown files in the TextLoader class
- Added an enum for rerank models: ``client.foundation_models.RerankModels``
- Enhanced the ``get_details`` method with an optional ``name`` parameter
- Implemented the ``set_api_client`` method in the ModelInference class

🐛 Bug Fixes:

- Added 429 error handling for the ``set.default_project`` method's ``project_id`` component
- Improved error handling during data loading operations
- Fixed an issue where the ``get_id_by_name`` method raised an error when no ID was found for the specified name

1.1.15
------

🚀 Features:

- Introduced support for AI services
- Added support for Rerank functionality: `Rerank Documentation `_
- Added support for Text Extraction results in Markdown format

🐛 Bug Fixes:

- Updated the route for CPD token generation for CPD versions 5.1 and above
- Disabled AutoAI TSAD in the SDK for CPD versions starting from 5.1

1.1.14
------

🚀 Features:

- Added support for the Chat API

🐛 Bug Fixes:

- Automatically add the ``https://`` prefix to the COS connection ``endpoint_url`` if missing
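The batch size parameter for embeddings (1.1.25) and the ``concurrency_limit`` init fix (1.2.7) both concern the ``Embeddings`` entry point. Below is a minimal sketch of how they are typically passed; the model ID, credentials, and numeric values are placeholders, and the exact keyword names should be checked against the release you use.

.. code-block:: python

    from ibm_watsonx_ai import APIClient, Credentials
    from ibm_watsonx_ai.foundation_models import Embeddings

    client = APIClient(
        Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="***"),
        project_id="<project_id>",
    )

    embeddings = Embeddings(
        model_id="ibm/slate-125m-english-rtrvr",  # placeholder embedding model id
        api_client=client,
        batch_size=500,        # batch size parameter added in 1.1.25
        concurrency_limit=8,   # accepted at init since 1.2.7; default updated to 8 in 1.2.8
    )

    vectors = embeddings.embed_documents(["first passage", "second passage"])
    print(len(vectors), len(vectors[0]))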
1.1.13
------

🚀 Features:

- Introduced the ``get_evaluation_results`` method in the ``RAGOptimizer`` class
- Added support for CPD version 5.1

🐛 Bug Fixes:

- Corrected the example value for ``TextExtractionsMetaNames``
- Relaxed version constraints for LangChain packages

1.1.12
------

🚀 Features:

- Added the option to install ``langchain_community`` with the ``[rag]`` extra

🐛 Bug Fixes:

- Fixed an issue with ``DataConnection`` as a container folder
- Updated the ``load()`` method docstring in the documentation
- Added support for platform URLs in credentials

1.1.11
------

🚀 Features:

- Added integration for AutoAI RAG SDK: `Working with AutoAI RAG class and rag_optimizer `_

🐛 Bug Fixes:

- Increased ``batch_size`` reduction in AutoAI RAG experiments

1.1.10
------

🐛 Bug Fixes:

- Added a timeout to ``DocumentsIterableDataset`` iteration
- Included a private development URL in the platform map

1.1.9
-----

🚀 Features:

- Introduced support for Fine-Tuning
- Added support for native asynchronous methods for model inference (see the example below)

1.1.8
-----

🚀 Features:

- Added support for MCSP

🐛 Bug Fixes:

- Fixed an issue with reading JSON files without binary mode, resolving problems with reading Prompt Tuning files

1.1.7
-----

🚀 Features:

- Added a timeout for persistent connections

🐛 Bug Fixes:

- Set a default ``max_sequence_length`` to avoid model limit errors

1.1.6
-----

🚀 Features:

- Added support for ``ConnectionAsset`` without a specified location

🐛 Bug Fixes:

- Resolved an issue where model specification filters were not available in version 4.8
- Fixed a bug related to unsupported connection types
- Aligned logging for improved consistency

1.1.5
-----

🚀 Features:

- Implemented Keep-Alive and persistent connections for tokenization
- Added support for the text extraction endpoint: `Text Extractions Documentation `_

🐛 Bug Fixes:

- Resolved the issue with missing ``python-docx``

1.1.4
-----

🚀 Features:

- Introduced support for the Hardware Request field instead of the Hardware Specification
- Added an asynchronous ``generate`` method
- Implemented support for the ``api_client`` parameter in ``DocumentIterableDataset``

🐛 Bug Fixes:

- Fixed an issue with storing ``TaskCredential``
- Improved de-duplication in the ``add_documents`` method of the vector store to include both the text and its document ID

1.1.3
-----

🚀 Features:

- Introduced support for Bring Your Own Model (BYOM)

🐛 Bug Fixes:

- Set ``input_shape`` during training, addressing issues with dataset training data
- Removed the timeout for reading documents
- Implemented improvements to the Documents Dataset

1.1.2
-----

🐛 Bug Fixes:

- Resolved an issue caused by a breaking change in ``setuptools`` version 71.0.1

1.1.1
-----

🐛 Bug Fixes:

- Added a setter for the ``ServiceInstance.details`` property

1.1.0
-----

🚀 Features:

- Added Keep-Alive functionality for FoundationModels requests
- Introduced support for the private ``wml-fvt`` endpoint
- Added installation support for RAG group packages

🐛 Bug Fixes:

- Improved performance of the ``APIClient`` initialization
- Disabled lifecycle checks when validation is turned off
- Updated Scikit-learn for federated learning support
- Resolved issues with reading data from NFS connections in Git-based projects
- Changed the default behavior to read files on the cloud with Flight Service enabled
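The entries above mention native asynchronous methods for model inference (1.1.9) and an asynchronous ``generate`` method (1.1.4). The sketch below shows how such an async path is typically driven; the ``agenerate`` method name and the response shape are assumptions modeled on the synchronous ``generate`` API, and the model ID and credential values are placeholders.

.. code-block:: python

    import asyncio

    from ibm_watsonx_ai import APIClient, Credentials
    from ibm_watsonx_ai.foundation_models import ModelInference


    async def main() -> None:
        client = APIClient(
            Credentials(url="https://us-south.ml.cloud.ibm.com", api_key="***"),
            project_id="<project_id>",
        )
        model = ModelInference(model_id="ibm/granite-13b-instruct-v2", api_client=client)

        # Assumed async counterpart of generate(); awaits a single completion.
        response = await model.agenerate(prompt="Explain keep-alive connections in one sentence.")
        print(response["results"][0]["generated_text"])


    asyncio.run(main())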
1.0.10
------

🚀 Features:

- Added support for different input modes of Prompt Templates (``PromptTemplate``, ``FreeformPromptTemplate``, ``DetachedPromptTemplate``)

🐛 Bug Fixes:

- Fixed validation for tech preview models
- Fixed setting a project for a Git-based project in the new scenario
- Fixed the filters used for the ``get_model_specs`` method

1.0.9
-----

🚀 Features:

- Added support for tech preview models
- Added additional information in request headers
- Extended VectorStore functionalities via concrete wrappers (for Vector Index notebooks)

1.0.8
-----

🚀 Features:

- Added the ``validate_prompt_variables`` parameter to the generate method in the ``Model`` and ``ModelInference`` classes
- Added ``hardware_spec`` support in the Batch class

🐛 Bug Fixes:

- Fixed the schema for the timeseries-anomaly-prediction prediction type

1.0.6
-----

🐛 Bug Fixes:

- Added a clearer error message when a user passes invalid credentials
- Fixed the "Invalid authorization token" error when initializing the client with the ``USER_ACCESS_TOKEN`` environment variable on a cluster

1.0.5
-----

🚀 Features:

- Added auto-generated Enum classes (``TextModels``, ``EmbeddingModels``, ``PromptTunableModels``) with available models

🐛 Bug Fixes:

- Better filtering of supported runtimes for R scripts
- Fixed downloading model content to a file
- Improved scaling of Prompt Tuning charts
- Provided a clearer error message when a user passes an incorrect cluster URL

1.0.4
-----

🚀 Features:

- Added ``forecast_window`` parameter support for online deployment scoring

1.0.3
-----

🚀 Features:

- Milvus support for the RAG extension
- Autogenerated changelog
- Travis tags the version during the push to production

🐛 Bug Fixes:

- Read data assets as binary when Flight is unavailable
- Fixed the next resource generator type and other internal type issues
- Read tabular datasets with non-unique columns
- Removed deprecation warnings when using ``fm_model_inference``

1.0.2
-----

🚀 Features:

- Added ``get`` and ``get_item`` methods for better compatibility with SDK v0

🐛 Bug Fixes:

- Relaxed package version checking
- Fixed AutoAI initialization without a version in credentials
- Fixed calls to wx endpoints in Git-based projects
- Fixed backward compatibility of the WebService and Batch classes for credentials passed as a dictionary

1.0.1
-----

🐛 Bug Fixes:

- Hotfix for imports

1.0.0
-----

🚀 Features:

- Added the RAGutils module
- Moved getting foundation model specs under the foundation models object
- Credentials as an object + proxies supported (see the example below)
- WCA service support

🐛 Bug Fixes:

- Minor corrections/improvements to the Foundation Models module
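To close, a short sketch tying together the 1.0.x-era surface referenced above: the ``Credentials`` object (1.0.0) and the auto-generated model enums (1.0.5). It assumes proxies are carried on the ``Credentials`` object; the URL, API key, proxy address, and enum member are illustrative placeholders that may differ between releases.

.. code-block:: python

    from ibm_watsonx_ai import APIClient, Credentials

    credentials = Credentials(
        url="https://us-south.ml.cloud.ibm.com",
        api_key="***",
        # Assumed: proxies are passed on the Credentials object (1.0.0).
        proxies={"https": "http://proxy.example.com:8080"},
    )
    client = APIClient(credentials, project_id="<project_id>")

    # Auto-generated enum of available text models (1.0.5); the member name
    # shown here is an example and may vary by release.
    print(client.foundation_models.TextModels.FLAN_UL2)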