AI Task Identification
This notebook illustrates how to identify AI tasks for specific use cases.
Import libraries
In [ ]:
from risk_atlas_nexus.blocks.inference import (
RITSInferenceEngine,
WMLInferenceEngine,
OllamaInferenceEngine,
VLLMInferenceEngine,
)
from risk_atlas_nexus.blocks.inference.params import (
InferenceEngineCredentials,
RITSInferenceEngineParams,
WMLInferenceEngineParams,
OllamaInferenceEngineParams,
VLLMInferenceEngineParams,
)
from risk_atlas_nexus.library import RiskAtlasNexus
Risk Atlas Nexus uses Large Language Models (LLMs) to infer risk dimensions, and therefore requires access to an LLM for inference.
Available inference engines: WML, Ollama, vLLM, RITS. Please follow the Inference APIs guide before proceeding.
Note: RITS is intended solely for internal IBM use and requires the TUNNELALL VPN for access.
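The credential values such as WML_API_KEY in the cell below are placeholders. One common pattern is to keep credentials out of the notebook entirely and read them from environment variables. A minimal sketch (the helper name and the environment-variable names are assumptions for this sketch, not part of Risk Atlas Nexus):

```python
import os


def wml_credentials_from_env() -> dict:
    """Read WML credentials from environment variables.

    The variable names used here are assumptions for this sketch;
    adjust them to match however you store your credentials.
    """
    return {
        "api_key": os.environ["WML_API_KEY"],
        "api_url": os.environ["WML_API_URL"],
        "project_id": os.environ["WML_PROJECT_ID"],
    }
```

The resulting dict can be passed as the `credentials` argument in place of the hardcoded placeholder values.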
In [ ]:
inference_engine = WMLInferenceEngine(
model_name_or_path="ibm/granite-20b-code-instruct",
credentials={
"api_key": "WML_API_KEY",
"api_url": "WML_API_URL",
"project_id": "WML_PROJECT_ID",
},
parameters=WMLInferenceEngineParams(
max_new_tokens=1000, decoding_method="greedy", repetition_penalty=1
),
)
# inference_engine = OllamaInferenceEngine(
# model_name_or_path="hf.co/ibm-granite/granite-20b-code-instruct-8k-GGUF",
# credentials=InferenceEngineCredentials(api_url="OLLAMA_API_URL"),
# parameters=OllamaInferenceEngineParams(
# num_predict=1000, num_ctx=8192, temperature=0.7, repeat_penalty=1
# ),
# )
# inference_engine = VLLMInferenceEngine(
# model_name_or_path="ibm-granite/granite-3.1-8b-instruct",
# credentials=InferenceEngineCredentials(
# api_url="VLLM_API_URL", api_key="VLLM_API_KEY"
# ),
# parameters=VLLMInferenceEngineParams(max_tokens=1000, temperature=0.7),
# )
# inference_engine = RITSInferenceEngine(
# model_name_or_path="ibm/granite-20b-code-instruct",
# credentials={
# "api_key": "RITS_API_KEY",
# "api_url": "RITS_API_URL",
# },
# parameters=RITSInferenceEngineParams(max_tokens=1000, temperature=0.7),
# )
Create an instance of RiskAtlasNexus
Note: (Optional) You can specify your own directory in RiskAtlasNexus(base_dir=&lt;PATH&gt;) to use custom AI ontologies. If omitted, the built-in AI ontologies are used.
In [ ]:
risk_atlas_nexus = RiskAtlasNexus()
AI Tasks Identification API
RiskAtlasNexus.identify_ai_tasks_from_usecases()
Params:
- usecases (List[str]): A list of strings describing AI use cases.
- inference_engine (InferenceEngine): An LLM inference engine used to identify AI tasks from the use cases.
In [ ]:
usecase = "Generate personalized, relevant responses, recommendations, and summaries of claims for customers to support agents to enhance their interactions with customers."
# The API identifies AI tasks, so name the result accordingly.
tasks = risk_atlas_nexus.identify_ai_tasks_from_usecases(
    usecases=[usecase],
    inference_engine=inference_engine,
)
tasks
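Since identify_ai_tasks_from_usecases takes a list of use cases, results can be paired back to their inputs. A minimal sketch, assuming the API returns one result per input use case (the result values below are placeholders for illustration, not real library output):

```python
usecases = [
    "Summarize insurance claims for customer support agents.",
    "Recommend next-best actions for support agents.",
]
# Placeholder results standing in for the API's per-use-case output.
results = [
    ["Summarization", "Text Generation"],
    ["Recommendation"],
]

# Pair each use case with the tasks identified for it.
for uc, tasks in zip(usecases, results):
    print(f"{uc}\n  tasks: {', '.join(tasks)}")
```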