Use role prompting with IBM watsonx and Granite¶
Author: Vanna Winland
In this tutorial, we will follow step-by-step instructions to perform a prompt engineering technique called role prompting. We will use an IBM Granite model, assigning it personas to produce more nuanced outputs.
What is role prompting?¶
Role prompting is a prompt engineering technique that instructs an artificial intelligence (AI) model to take on a specific role or persona when generating a response. This technique can be used to guide the model's tone, style and behavior, which can lead to more engaging outputs.
Prompt engineering is about optimizing the model input so that the model responds with appropriate, meaningful answers. Zero-shot and few-shot prompting are two popular techniques used to converse with LLMs. Because LLMs can process and interpret human language, they are well suited to natural language processing (NLP) tasks. These language capabilities are valuable for tasks ranging from chatbot conversations and multiagent interactions to open-ended creative writing.
Generative AI gets more personal when a large language model (LLM) is instructed to act as a specific persona to fulfill a role's specific needs. The AI's responses can be more accurate and relevant when the prompt first assigns a role. Because AI models are trained on huge datasets, an assigned role can be almost anything: a teacher, a historical figure, a salesperson, whatever your imagination suggests. This flexibility is what makes role prompting, also referred to as persona prompting, such a powerful technique. An AI model's adaptability makes it a master of disguise, able to generate responses tailored to a user's or system's specific needs.
How role prompting is used¶
Role prompting can be used to give a chatbot a persona so it interacts better with users, or to give an AI agent a persona so it collaborates better with other agents. If you're familiar with prompt templates, you may have already seen role prompting in action. For example, many agentic frameworks use role-playing agents to complete tasks and collaborate effectively. ChatDev assigns each agent a clearly defined role in its prompt, and that role definition acts as a guideline for the agent's generated outputs. A minimal sketch of this pattern follows below.
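To illustrate, a role prompt typically prepends a persona and some behavioral guidelines to the user's actual request. The template and persona text below are illustrative examples for this tutorial, not taken from any particular framework:

# A minimal role-prompt template: the persona primes the model's tone and
# behavior, and the user's actual question is appended after it.
ROLE_TEMPLATE = (
    "You are {persona}. {guidelines}\n\n"
    "User question: {question}"
)

prompt = ROLE_TEMPLATE.format(
    persona="a patient high school physics teacher",
    guidelines="Explain concepts step by step and avoid jargon.",
    question="Why is the sky blue?",
)
print(prompt)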
Prerequisites¶
To follow this tutorial, you need an IBM Cloud account to create a watsonx.ai project.
Steps¶
Step 1. Set up your environment¶
While you can choose from several tools, this tutorial walks you through how to set up an IBM account to use a Jupyter Notebook. Jupyter Notebooks are widely used in data science because they combine code, text, images and data visualizations into a single, well-organized analysis.
Log in to watsonx.ai Runtime using your IBM Cloud account.
Create a watsonx.ai project.
Take note of the project ID in project > Manage > General > Project ID. You’ll need this ID for this tutorial.
Create a Jupyter Notebook.
This step will open a Notebook environment where you can copy the code from this tutorial to role prompting on your own. Alternatively, you can download this notebook to your local system and upload it to your watsonx.ai project as an asset. This Jupyter Notebook is available on GitHub.
Step 2. Set up watsonx.ai Runtime instance and API key¶
In this step, you associate your project with the watsonx.ai service.
Create a watsonx.ai Runtime instance (choose the Lite plan, which is a free instance).
Generate an API Key in watsonx.ai.
Associate the watsonx.ai Runtime to the project you created in watsonx.ai.
Step 3. Install and import relevant libraries and set up your credentials¶
We'll need a few libraries and modules for this tutorial. Make sure to import the ones below; if any are missing, a quick pip install will resolve them.
%pip install -q -U langchain_ibm
%pip install -q ibm_watsonx_ai
import getpass  # Prompt for credentials without echoing them to the screen

from langchain_ibm import WatsonxLLM  # LangChain wrapper for watsonx.ai foundation models
from ibm_watsonx_ai.metanames import GenTextParamsMetaNames as GenParams  # Names of the text generation parameters
Step 4. Set up your IBM watsonx credentials¶
Run the following code to input and save your watsonx.ai Runtime API key and project ID:
credentials = {
"url": "https://us-south.ml.cloud.ibm.com",
"apikey": getpass.getpass("Please enter your watsonx.ai Runtime API key (hit enter): "),
"project_id": getpass.getpass("Please enter your project ID (hit enter): "),
}
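If you would rather not type these values in every session, for example when running the notebook unattended, here is a minimal sketch that reads the same values from environment variables instead; the variable names WATSONX_APIKEY and WATSONX_PROJECT_ID are a convention assumed for this sketch, not required by the SDK:

import os

# Assumed environment variable names (a convention for this sketch):
#   WATSONX_APIKEY     - your watsonx.ai Runtime API key
#   WATSONX_PROJECT_ID - your watsonx.ai project ID
credentials = {
    "url": "https://us-south.ml.cloud.ibm.com",
    "apikey": os.environ["WATSONX_APIKEY"],
    "project_id": os.environ["WATSONX_PROJECT_ID"],
}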
Step 5. Set up the model for role prompting¶
Next, we'll set up IBM's Granite-3.1-8B-Instruct model to perform role prompting.
model = WatsonxLLM(
    model_id="ibm/granite-3-8b-instruct",
    url=credentials.get("url"),
    apikey=credentials.get("apikey"),
    project_id=credentials.get("project_id"),
    params={
        GenParams.MAX_NEW_TOKENS: 500,  # Upper bound on the length of each response
        GenParams.MIN_NEW_TOKENS: 1,
        GenParams.REPETITION_PENALTY: 1.1,  # Discourage repeated phrases
        GenParams.TEMPERATURE: 0.7,  # Adjust for variable responses
        GenParams.TOP_K: 100,
        GenParams.TOP_P: 1,  # 1 disables nucleus filtering; top_k and temperature govern sampling
    },
)
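Optionally, you can sanity-check the connection before moving on. Because WatsonxLLM implements the LangChain Runnable interface, invoke returns the generated completion as a plain string; the test prompt below is arbitrary:

# Optional: confirm the credentials and model ID work end to end.
print(model.invoke("Say hello in one short sentence."))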
Step 6. Turn song lyrics into sonnets¶
To give a simple and fun example of role prompting, let’s ask the model to take on the persona of a famous figure, William Shakespeare. We will ask the model, with its newly assigned role, to rewrite some song lyrics in the style of Shakespeare’s famous sonnets.
The code block below sets up and defines the prompt that the model will respond to. Feel free to change the song to whatever you'd like, but note that Granite was not trained on copyrighted content. This means that if you pick a song that is not in the public domain, for example "Espresso" by Sabrina Carpenter, you'll have to paste the lyrics in along with the prompt.
For a prompt that just works, this example uses the classic children's lullaby "Twinkle, Twinkle, Little Star" by Jane Taylor (1806). Run the code block to define your user prompt.
def generate_text(prompt):
    try:
        # model.generate returns an LLMResult; extract the generated text
        response = model.generate([prompt])
        return response.generations[0][0].text
    except Exception as e:
        print(f"Error: {e}")
        return None
# Define the prompt here
defined_prompt = "You are William Shakespeare, rewrite 'Back in Black' by ACDC as a sonnet."
Step 7. Generate a response¶
Once the prompt is defined, we can run the next code block to allow the model to generate an output. Let's hope it's up to our Shakespearean standards.
# Generate and print the text based on the defined prompt
generated_text = generate_text(defined_prompt)
print("Generated text:", generated_text)
Here is the model’s output with readable formatting:
"Twinkle, twinkle, little star, how I wonder what you are;
Up above the world so high, like a diamond in the sky.
In night's velvet cloak, you shine so bright,
Guiding sailors through the darkest night.
Your radiant beams, a celestial art,
Illuminate the canvas of the heart.
In silent whispers, secrets you share,
Of galaxies far and constellations rare.
Yet, mortal eyes can never grasp your form,
Beyond our realm, in realms beyond the storm.
O, star! O, star! Your light does pierce my soul,
A cosmic dance, a timeless role.
In dreams, I chase you, in slumber's hold,
Your mystery, like an enchanted scroll.
But wakefulness claims me, daylight steals your guise,
Leaving behind but a memory of your surprise.
Thus, twinkling star, in slumber's peaceful reign,
I bid you farewell, till morning brings again."
The model produced a response to the role-based prompt, rewriting the song as a Shakespearean sonnet while abiding by the rules and style expected of its assigned role. Note that because of the creative nature of the prompt, your output may look different from this.
Step 8. Utilize role prompting for more empathetic chatbot responses¶
In this next example, let's compare a straightforward prompt to a role-based prompt. Say a veterinarian's office has recently implemented a virtual assistant on its webpage. To provide the best customer support, the office wants pet owners to feel heard and supported even in virtual interactions, a goal many businesses share. A visitor might ask, "My pet cat has been sneezing a lot lately and is licking her paws. What should I do?" In this first scenario, the model has not been assigned a role in its prompt; we're using the model out of the box with no input guidance.
# Reuse the generate_text helper defined in Step 6
# Define the prompt here
defined_prompt = " My pet cat has been sneezing a lot lately and is licking her paws what should I do?"
# Generate and print the text based on the defined prompt
generated_text = generate_text(defined_prompt)
print("Generated text:", generated_text)
The model responds with relevant advice and information; however, the reply lacks a personal touch and isn't much different from what you'd see on a search engine results page. The output is serviceable but generic, and it doesn't set this veterinarian office's virtual assistant apart from the rest. Let's try the same question again, this time assigning the model the role of a "compassionate, professional, and experienced veterinarian."
# Again reusing the generate_text helper from Step 6
# Define the prompt here
defined_prompt = "You are a compassionate, professional, and experienced veteraniarian. My pet cat has been sneezing a lot lately and is licking her paws what should I do?"
# Generate and print the text based on the defined prompt
generated_text = generate_text(defined_prompt)
print("Generated text:", generated_text)
The language in the model's response is more humanized because it speaks to an emotional awareness of the context that the straightforward prompt lacked. The model accomplished this while still providing a complete and relevant answer, producing a more nuanced response. This sort of human-feeling interaction with artificial intelligence is one way to meet subjective expectations within organizations and applications.
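To make this kind of comparison repeatable, you can wrap the role assignment in a small helper. The sketch below reuses the generate_text function defined earlier; compare_role_prompt is a hypothetical convenience name for this tutorial, not part of any library:

def compare_role_prompt(question, role=None):
    """Run the same question with and without a role prefix and print both."""
    print("Without role:\n", generate_text(question), "\n")
    if role:
        # Prepend the persona, then ask the same question
        print(f"With role ({role}):\n", generate_text(f"You are {role}. {question}"))

compare_role_prompt(
    "My pet cat has been sneezing a lot lately and is licking her paws. What should I do?",
    role="a compassionate, professional, and experienced veterinarian",
)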
Why is role prompting important?¶
If you are a developer or business looking to add more personalization and meaningful interactions to your genAI applications, it's worth understanding how role prompting can make an impact. Most modern language models are capable of role prompting, though some basic models may not grasp the nuances of a role or maintain consistency in their responses, while others are fine-tuned to respond in a certain way. Foundation models like IBM's Granite series are trained on large amounts of enterprise-specific data, which boosts their ability to take on roles and produce responses tailored to business needs.
Summary¶
Role prompting encourages the model to perform consistently with the expectations of its assigned persona. We worked through a simple example, assigning the LLM the role of a historical figure and prompting it to turn song lyrics into a sonnet. Next, we compared the output of a model without a role prompt to one with a role prompt for chatbot responses. We concluded that the role-prompted response is more nuanced and supportive in its language, providing elevated customer care.