Domain identification
This notebook illustrates how to identify AI domains based on specific use cases.¶
Import libraries¶
In [ ]:
from ai_atlas_nexus.blocks.inference import (
RITSInferenceEngine,
WMLInferenceEngine,
OllamaInferenceEngine,
VLLMInferenceEngine,
)
from ai_atlas_nexus.blocks.inference.params import (
InferenceEngineCredentials,
RITSInferenceEngineParams,
WMLInferenceEngineParams,
OllamaInferenceEngineParams,
VLLMInferenceEngineParams,
)
from ai_atlas_nexus.library import AIAtlasNexus
import os
AI Atlas Nexus uses Large Language Models (LLMs) to infer risk dimensions, so it requires access to an LLM to run inference.¶
Available inference engines: WML, Ollama, vLLM, RITS. Please follow the Inference APIs guide before proceeding.
Note: RITS is intended solely for internal IBM use and requires TUNNELALL VPN for access.
In [ ]:
inference_engine = OllamaInferenceEngine(
model_name_or_path="granite3.3:8b",
credentials=InferenceEngineCredentials(api_url="http://localhost:11434"),
parameters=OllamaInferenceEngineParams(
num_predict=1000, num_ctx=8192, temperature=0
),
)
# inference_engine = WMLInferenceEngine(
# model_name_or_path="ibm/granite-4-h-small",
# credentials={
# "api_key": os.getenv("WML_API_KEY"),
# "api_url": os.getenv("WML_API_URL"),
# "project_id": os.getenv("WML_PROJECT_ID"),
# },
# parameters=WMLInferenceEngineParams(
# max_new_tokens=1000, decoding_method="greedy"
# ),
# )
# inference_engine = VLLMInferenceEngine(
# model_name_or_path="ibm-granite/granite-3.3-8b-instruct",
# credentials=InferenceEngineCredentials(
# api_url=os.getenv("VLLM_API_URL"), api_key=os.getenv("VLLM_API_KEY")
# ),
# parameters=VLLMInferenceEngineParams(max_tokens=1000, temperature=0),
# )
# inference_engine = RITSInferenceEngine(
# model_name_or_path="ibm-granite/granite-3.3-8b-instruct",
# credentials={
# "api_key": os.getenv("RITS_API_KEY"),
# "api_url": os.getenv("RITS_API_URL"),
# },
# parameters=RITSInferenceEngineParams(max_completion_tokens=1000, temperature=0),
# )
=== 22:15:04-INFO ====== Starting Mellea session: backend=OLLAMA, model=granite3.3:8b, context=SimpleContext
[2026-03-18 22:15:04:595] - INFO - AIAtlasNexus - ✓ Created OLLAMA inference engine for model: granite3.3:8b, backend - MELLEA
Create an instance of AIAtlasNexus¶
Note: (Optional) You can specify your own directory via AIAtlasNexus(base_dir=<PATH>) to use custom AI ontologies. If omitted, the system falls back to the bundled AI ontologies. (A custom-directory sketch follows the cell output below.)
In [3]:
ai_atlas_nexus = AIAtlasNexus()
[2026-03-18 22:06:59:62] - INFO - AIAtlasNexus - Created AIAtlasNexus instance. Base_dir: None
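If you maintain your own ontology files, the same constructor can point at them. A minimal sketch, assuming /path/to/my_ontologies is a hypothetical directory that mirrors the layout of the bundled ontologies:

# Hypothetical: load custom AI ontologies from your own directory.
# "/path/to/my_ontologies" is a placeholder path, not a real location.
ai_atlas_nexus_custom = AIAtlasNexus(base_dir="/path/to/my_ontologies")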
AI Domain Identification API - Default backend¶
- Inference is performed directly using the provided inference engine.
AIAtlasNexus.identify_domain_from_usecases()
Params:
- usecases (List[str]): A list of strings describing AI use cases.
- inference_engine (InferenceEngine): An LLM inference engine used to identify AI domains from the use cases.
- verbose (bool, optional): Prints detailed output during the inference process. Defaults to True. (A multi-use-case sketch follows the example below.)
In [14]:
usecase = "Generate personalized, relevant responses, recommendations, and summaries of claims for customers to support agents to enhance their interactions with customers."
domains = ai_atlas_nexus.identify_domain_from_usecases(
usecases=[usecase],
inference_engine=inference_engine,
)
domains[0].prediction
usecase = "Generate personalized, relevant responses, recommendations, and summaries of claims for customers to support agents to enhance their interactions with customers."
domains = ai_atlas_nexus.identify_domain_from_usecases(
usecases=[usecase],
inference_engine=inference_engine,
)
domains[0].prediction
Inferring with OLLAMA, backend - MELLEA: 0%| | 0/1 [00:00<?, ?it/s]
=== 22:15:16-INFO ====== SUCCESS
Inferring with OLLAMA, backend - MELLEA: 100%|██████████| 1/1 [00:06<00:00, 6.20s/it]
Out[14]:
{'answer': 'Customer service/support',
'explanation': "The use case involves generating personalized, relevant responses, recommendations, and summaries of claims for customers to support agents. This directly aligns with the definition of 'Customer Service/Support' AI agents that handle customer inquiries, resolve issues, provide product information, and manage support tickets across channels like chat, email, and phone."}
AI Domain Identification API - Mellea backend using Ollama¶
- Inference is performed using the Mellea backend, which utilizes the specified inference engine.
- The Mellea backend currently supports only the Ollama, WML, and RITS inference engines.
In [15]:
inference_engine = OllamaInferenceEngine(
model_name_or_path="granite3.3:8b",
credentials=InferenceEngineCredentials(api_url="http://localhost:11434"),
parameters=OllamaInferenceEngineParams(
num_predict=1000, num_ctx=8192, temperature=0
),
backend="mellea",
)
=== 22:16:03-INFO ====== Starting Mellea session: backend=OLLAMA, model=granite3.3:8b, context=SimpleContext
[2026-03-18 22:16:03:874] - INFO - AIAtlasNexus - ✓ Created OLLAMA inference engine for model: granite3.3:8b, backend - MELLEA
In [7]:
usecase = "Generate personalized, relevant responses, recommendations, and summaries of claims for customers to support agents to enhance their interactions with customers."
domains = ai_atlas_nexus.identify_domain_from_usecases(
usecases=[usecase],
inference_engine=inference_engine,
)
domains[0].prediction
usecase = "Generate personalized, relevant responses, recommendations, and summaries of claims for customers to support agents to enhance their interactions with customers."
domains = ai_atlas_nexus.identify_domain_from_usecases(
usecases=[usecase],
inference_engine=inference_engine,
)
domains[0].prediction
Inferring with OLLAMA, backend - MELLEA: 0%| | 0/1 [00:00<?, ?it/s]
=== 22:10:10-INFO ====== SUCCESS
Inferring with OLLAMA, backend - MELLEA: 100%|██████████| 1/1 [00:08<00:00, 8.50s/it]
Out[7]:
{'answer': 'Customer service/support',
'explanation': "The use case involves generating personalized, relevant responses, recommendations, and summaries of claims for customers to support agents. This directly aligns with the definition of 'Customer Service/Support' AI agents that handle customer inquiries, resolve issues, provide product information, and manage support tickets across channels like chat, email, and phone."}
AI Domain Identification API - Mellea backend using WML¶
- Inference is performed using the Mellea backend, which utilizes the specified inference engine.
- The Mellea backend currently supports only the Ollama, WML, and RITS inference engines. (A RITS sketch follows the WML example.)
In [ ]:
inference_engine = WMLInferenceEngine(
model_name_or_path="ibm/granite-4-h-small",
credentials={
"api_key": "WML_API_KEY",
"api_url": "WML_API_URL",
"project_id": "WML_PROJECT_ID",
},
parameters=WMLInferenceEngineParams(decoding_method="greedy", repetition_penalty=1),
backend="mellea",
)
=== 22:20:19-INFO ====== Starting Mellea session: backend=WML, model=ibm/granite-4-h-small, context=SimpleContext
[2026-03-18 22:20:19:326] - INFO - AIAtlasNexus - ✓ Created WML inference engine for model: ibm/granite-4-h-small, backend - MELLEA
In [30]:
usecase = "Generate personalized, relevant responses, recommendations, and summaries of claims for customers to support agents to enhance their interactions with customers."
domains = ai_atlas_nexus.identify_domain_from_usecases(
usecases=[usecase],
inference_engine=inference_engine,
)
domains[0].prediction
usecase = "Generate personalized, relevant responses, recommendations, and summaries of claims for customers to support agents to enhance their interactions with customers."
domains = ai_atlas_nexus.identify_domain_from_usecases(
usecases=[usecase],
inference_engine=inference_engine,
)
domains[0].prediction
Inferring with WML, backend - MELLEA: 0%| | 0/1 [00:00<?, ?it/s]
=== 22:20:26-INFO ====== SUCCESS
Inferring with WML, backend - MELLEA: 100%|██████████| 1/1 [00:05<00:00, 5.21s/it]
Out[30]:
{'answer': 'Customer service/support',
'explanation': "This use case involves AI agents handling customer inquiries, resolving issues, providing information about products or services, and managing support tickets across various communication channels such as chat, email, and phone. The primary goal is to enhance customer interactions, which is precisely what the 'Customer Service/Support' AI domain is designed for. This domain specializes in managing the back-and-forth exchanges between customers and service representatives, employing AI to streamline, personalize, and expedite these interactions for better customer satisfaction."}