AI Tasks Identification
This notebook illustrates how to identify AI tasks based on specific use cases.
Import libraries
In [1]:
from ai_atlas_nexus.blocks.inference import (
    RITSInferenceEngine,
    WMLInferenceEngine,
    OllamaInferenceEngine,
    VLLMInferenceEngine,
)
from ai_atlas_nexus.blocks.inference.params import (
    InferenceEngineCredentials,
    RITSInferenceEngineParams,
    WMLInferenceEngineParams,
    OllamaInferenceEngineParams,
    VLLMInferenceEngineParams,
)
from ai_atlas_nexus.library import AIAtlasNexus
AI Atlas Nexus uses Large Language Models (LLMs) for inference tasks such as AI task identification, and therefore requires access to an LLM.
Available inference engines: WML, Ollama, vLLM, and RITS. Please follow the Inference APIs guide before proceeding.
Note: RITS is intended solely for internal IBM use and requires TUNNELALL VPN for access.
In [ ]:
inference_engine = OllamaInferenceEngine(
    model_name_or_path="llama3:latest",
    credentials=InferenceEngineCredentials(api_url="OLLAMA_API_URL"),
    parameters=OllamaInferenceEngineParams(
        num_predict=1000, temperature=0, repeat_penalty=1, num_ctx=8192
    ),
)

# inference_engine = WMLInferenceEngine(
#     model_name_or_path="ibm/granite-20b-code-instruct",
#     credentials={
#         "api_key": "WML_API_KEY",
#         "api_url": "WML_API_URL",
#         "project_id": "WML_PROJECT_ID",
#     },
#     parameters=WMLInferenceEngineParams(
#         max_new_tokens=1000, decoding_method="greedy", repetition_penalty=1
#     ),
# )

# inference_engine = VLLMInferenceEngine(
#     model_name_or_path="ibm-granite/granite-3.1-8b-instruct",
#     credentials=InferenceEngineCredentials(
#         api_url="VLLM_API_URL", api_key="VLLM_API_KEY"
#     ),
#     parameters=VLLMInferenceEngineParams(max_tokens=1000, temperature=0.7),
# )

# inference_engine = RITSInferenceEngine(
#     model_name_or_path="ibm-granite/granite-3.1-8b-instruct",
#     credentials={
#         "api_key": "RITS_API_KEY",
#         "api_url": "RITS_API_URL",
#     },
#     parameters=RITSInferenceEngineParams(max_tokens=1000, temperature=0.7),
# )
[2025-11-27 14:49:18:12] - INFO - AIAtlasNexus - OLLAMA inference engine will execute requests on the server at http://localhost:11434.
[2025-11-27 14:49:18:43] - INFO - AIAtlasNexus - Created OLLAMA inference engine.
Create an instance of AIAtlasNexus
Note: (Optional) You can pass your own directory via AIAtlasNexus(base_dir=<PATH>) to use custom AI ontologies (see the example below). If omitted, the bundled AI ontologies are used.
In [4]:
ai_atlas_nexus = AIAtlasNexus()
[2025-11-27 14:49:22:47] - INFO - AIAtlasNexus - Created AIAtlasNexus instance. Base_dir: None
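If you maintain your own AI ontologies, point the instance at their directory instead. A minimal sketch of the base_dir option; the path is a placeholder, not a real location:

# Hypothetical directory containing custom AI ontologies -- replace with your own path
custom_nexus = AIAtlasNexus(base_dir="/path/to/your/ontologies")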
AI Tasks Identification API
AIAtlasNexus.identify_ai_tasks_from_usecases()
Params:
- usecases (List[str]): A list of strings, each describing an AI use case.
- inference_engine (InferenceEngine): The LLM inference engine used to identify AI tasks from the use cases.
In [5]:
usecase = "Generate personalized, relevant responses, recommendations, and summaries of claims for customers to support agents to enhance their interactions with customers."
risks = ai_atlas_nexus.identify_ai_tasks_from_usecases(
usecases=[usecase],
inference_engine=inference_engine,
)
risks[0].prediction
usecase = "Generate personalized, relevant responses, recommendations, and summaries of claims for customers to support agents to enhance their interactions with customers."
risks = ai_atlas_nexus.identify_ai_tasks_from_usecases(
usecases=[usecase],
inference_engine=inference_engine,
)
risks[0].prediction
Inferring with OLLAMA: 100%|██████████| 1/1 [00:08<00:00, 8.04s/it]
Out[5]:
['Text Classification', 'Summarization', 'Question Answering', 'Text Generation', 'Translation']
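Because identify_ai_tasks_from_usecases() accepts a list, several use cases can be processed in a single call, with one result returned per input in order. A minimal sketch, assuming the inference engine configured above; the use-case strings are illustrative:

usecases = [
    "Summarize insurance claims into short case notes for support agents.",
    "Answer customer questions about policy coverage in natural language.",
]
results = ai_atlas_nexus.identify_ai_tasks_from_usecases(
    usecases=usecases,
    inference_engine=inference_engine,
)
# One result per use case, in input order
for usecase, result in zip(usecases, results):
    print(usecase, "->", result.prediction)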