AI Tasks Identification
This notebook illustrates how to identify AI tasks based on specific use cases.¶
Import libraries¶
In [5]:
from risk_atlas_nexus.blocks.inference import (
RITSInferenceEngine,
WMLInferenceEngine,
OllamaInferenceEngine,
VLLMInferenceEngine,
)
from risk_atlas_nexus.blocks.inference.params import (
InferenceEngineCredentials,
RITSInferenceEngineParams,
WMLInferenceEngineParams,
OllamaInferenceEngineParams,
VLLMInferenceEngineParams,
)
from risk_atlas_nexus.library import RiskAtlasNexus
Risk Atlas Nexus uses Large Language Models (LLMs) to infer risk dimensions, and therefore requires access to an LLM for inference.¶
Available inference engines: WML, Ollama, vLLM, RITS. Please follow the Inference APIs guide before proceeding.
Note: RITS is intended solely for internal IBM use and requires TUNNELALL VPN for access.
In [ ]:
inference_engine = OllamaInferenceEngine(
model_name_or_path="llama3:latest",
credentials=InferenceEngineCredentials(api_url="OLLAMA_API_URL"),
parameters=OllamaInferenceEngineParams(
num_predict=1000, temperature=0, repeat_penalty=1, num_ctx=8192
),
)
# inference_engine = WMLInferenceEngine(
# model_name_or_path="ibm/granite-20b-code-instruct",
# credentials={
# "api_key": "WML_API_KEY",
# "api_url": "WML_API_URL",
# "project_id": "WML_PROJECT_ID",
# },
# parameters=WMLInferenceEngineParams(
# max_new_tokens=1000, decoding_method="greedy", repetition_penalty=1
# ),
# )
# inference_engine = VLLMInferenceEngine(
# model_name_or_path="ibm-granite/granite-3.1-8b-instruct",
# credentials=InferenceEngineCredentials(
# api_url="VLLM_API_URL", api_key="VLLM_API_KEY"
# ),
# parameters=VLLMInferenceEngineParams(max_tokens=1000, temperature=0.7),
# )
# inference_engine = RITSInferenceEngine(
# model_name_or_path="ibm-granite/granite-3.1-8b-instruct",
# credentials={
# "api_key": "RITS_API_KEY",
# "api_url": "RITS_API_URL",
# },
# parameters=RITSInferenceEngineParams(max_tokens=1000, temperature=0.7),
# )
[2025-05-25 23:10:58:730] - INFO - RiskAtlasNexus - OLLAMA inference engine will execute requests on the server at http://localhost:11434.
[2025-05-25 23:10:58:743] - INFO - RiskAtlasNexus - Created OLLAMA inference engine.
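The cell above passes the endpoint as a literal placeholder string. If you prefer not to hard-code it, here is a minimal sketch of reading it from an environment variable instead (the variable name OLLAMA_API_URL is an assumption, not part of the library):

import os

inference_engine = OllamaInferenceEngine(
    model_name_or_path="llama3:latest",
    # Assumes the Ollama endpoint, e.g. http://localhost:11434, is exported as OLLAMA_API_URL.
    credentials=InferenceEngineCredentials(api_url=os.environ["OLLAMA_API_URL"]),
    parameters=OllamaInferenceEngineParams(
        num_predict=1000, temperature=0, repeat_penalty=1, num_ctx=8192
    ),
)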
Create an instance of RiskAtlasNexus¶
Note (optional): You can point the library at your own directory via RiskAtlasNexus(base_dir=<PATH>) to use custom AI ontologies. If omitted, the bundled AI ontologies are used.
In [7]:
risk_atlas_nexus = RiskAtlasNexus()
[2025-05-25 23:10:58:946] - INFO - RiskAtlasNexus - Created RiskAtlasNexus instance. Base_dir: None
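As noted above, a custom ontology directory can be supplied instead of the bundled one. A minimal sketch (the path below is hypothetical):

# Hypothetical path; base_dir should contain your custom AI ontology files.
risk_atlas_nexus = RiskAtlasNexus(base_dir="/path/to/custom/ontologies")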
AI Tasks Identification API¶
RiskAtlasNexus.identify_ai_tasks_from_usecases()
Params:
- usecases (List[str]): A list of strings describing AI use cases.
- inference_engine (InferenceEngine): An LLM inference engine used to identify AI tasks from the use cases.
In [8]:
usecase = "Generate personalized, relevant responses, recommendations, and summaries of claims for customers to support agents to enhance their interactions with customers."
risks = risk_atlas_nexus.identify_ai_tasks_from_usecases(
usecases=[usecase],
inference_engine=inference_engine,
)
risks[0].prediction
usecase = "Generate personalized, relevant responses, recommendations, and summaries of claims for customers to support agents to enhance their interactions with customers."
risks = risk_atlas_nexus.identify_ai_tasks_from_usecases(
usecases=[usecase],
inference_engine=inference_engine,
)
risks[0].prediction
Inferring with OLLAMA: 100%|██████████| 1/1 [00:23<00:00, 23.60s/it]
Out[8]:
['Text Classification', 'Summarization', 'Question Answering', 'Text Generation', 'Translation']
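Since identify_ai_tasks_from_usecases accepts a list, several use cases can be classified in one call. A minimal sketch (the second use case is illustrative and not from the original notebook; each result is assumed to expose .prediction as shown above):

usecases = [
    usecase,  # the claims-summarization use case defined above
    # Illustrative second use case for demonstration purposes.
    "Answer customer questions about policy coverage using internal documentation.",
]
results = risk_atlas_nexus.identify_ai_tasks_from_usecases(
    usecases=usecases,
    inference_engine=inference_engine,
)
for uc, result in zip(usecases, results):
    print(uc[:60], "->", result.prediction)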