
What is Generative Computing?

A generative program is any computer program that contains calls to an LLM. As we will see throughout the documentation, LLMs can be incorporated into software in a wide variety of ways. Some ways of incorporating LLMs into programs tend to result in robust and performant systems, while others result in software that is brittle and error-prone.

Generative programs are distinguished from classical programs by their use of functions that invoke generative models. These generative calls can produce many different data types: strings, booleans, structured data, code, images/video, and so on. The models and software underlying generative calls can be combined and composed in certain situations and in certain ways (LoRA adapters are one special case). In addition to invoking generative calls, generative programs can call ordinary functions that have no LLM behind them, so that we can, for example, pass the output of a generative function into a DB retrieval system and feed the output of that into another generator. Writing generative programs is difficult because they interleave deterministic and stochastic operations.
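
To make this concrete, here is a minimal sketch of a generative program in Python. All of the helper functions are hypothetical stand-ins: two of them wrap stochastic LLM calls, while the retrieval step is ordinary deterministic code.

from typing import List

def generate_query(question: str) -> str:
    """Stochastic step: in a real program, an LLM call that writes a search query."""
    ...

def retrieve_documents(query: str) -> List[str]:
    """Deterministic step: a classical lookup, e.g. a database or vector-store query."""
    ...

def generate_answer(question: str, documents: List[str]) -> str:
    """Stochastic step: a second LLM call grounded in the retrieved documents."""
    ...

def answer_question(question: str) -> str:
    query = generate_query(question)              # generative call (stochastic)
    documents = retrieve_documents(query)         # classical retrieval (deterministic)
    return generate_answer(question, documents)   # generative call (stochastic)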

If you would like to read more about this, take a look here.

Mellea

Mellea is a library for writing generative programs. Generative programming replaces flaky agents and brittle prompts with structured, maintainable, robust, and efficient AI workflows.

Features

  • A standard library of opinionated prompting patterns.
  • Sampling strategies for inference-time scaling.
  • Clean integration between verifiers and samplers.
  • Batteries-included library of verifiers.
  • Support for efficient checking of specialized requirements using activated LoRAs.
  • Train your own verifiers on proprietary classifier data.
  • Compatible with many inference services and model families. Control cost and quality by easily lifting and shifting workloads between:
    • inference providers
    • model families
    • model sizes
  • Easily integrate the power of LLMs into legacy codebases (mify).
  • Sketch applications by writing specifications and letting mellea fill in the details (generative slots; see the sketch after this list).
  • Get started by decomposing your large, unwieldy prompts into structured and maintainable Mellea problems.
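
To give a flavor of generative slots, here is a hedged sketch. It assumes the @generative decorator exported by the top-level mellea package; check the Mellea documentation for the exact import path and call signature.

import mellea

@mellea.generative
def summarize_meeting(transcript: str) -> str:
    """Summarize the meeting transcript in one short paragraph."""
    ...

m = mellea.start_session(backend_name="ollama", model_id="ibm/granite4:micro-h")
print(summarize_meeting(m, transcript="Alice proposed a new release schedule; Bob raised testing concerns."))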

Let's set up Mellea to work locally

Open up a terminal and run the following uv command from the beeai-workshop/opentech directory of your cloned repo.

  1. Start an interactive Python session:

    uv run --directory mellea python
    
  2. Run a simple Mellea session:

    Run the example code in your Python session.

    import mellea
    
    m = mellea.start_session(backend_name="ollama", model_id="ibm/granite4:micro-h")
    print(m.chat("tell me some fun trivia about IBM and the early history of AI.").content)
    

Simple email examples

Note

The following work should be done in a text editor; there should be a couple installed on your laptop. If you aren't sure, raise your hand and a helper will help you out.

  1. Let's leverage Mellea to do some email generation for us, starting with a simple example:

    import mellea
    m = mellea.start_session(backend_name="ollama", model_id="ibm/granite4:micro-h")
    
    email = m.instruct("Write an email inviting interns to an office party at 3:30pm.")
    print(str(email))
    
  2. As you can see, it outputs a standard email with only a couple of lines of code. Let's expand on this:

    import mellea
    m = mellea.start_session(backend_name="ollama", model_id="ibm/granite4:micro-h")
    
    def write_email(m: mellea.MelleaSession, name: str, notes: str) -> str:
        email = m.instruct(
            f"Write an email to {name} using the notes following: {notes}.",
            # user_variables={"name": name, "notes": notes},  # Use double curly brackets instead of f-string
        )
        return email.value  # str(email) also works.
    
    
    print(
        write_email(
            m,
            "Olivia",
            "Olivia helped the lab over the last few weeks by organizing intern events, advertising the speaker series, and handling issues with snack delivery.",
        )
    )
    

    With this more advanced example, we now have the ability to customize the email so that it is more directed and personalized for the recipient. But this is still just programmatic prompt engineering; let's see where Mellea really shines.
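
    As the commented-out line above hints, Mellea can also do the templating for you: write the prompt with double-curly-bracket placeholders and pass the values through user_variables instead of an f-string. A minimal sketch of that variant (the {{...}} placeholder syntax here mirrors the instruct-validate-repair example later on this page):

    def write_email_templated(m: mellea.MelleaSession, name: str, notes: str) -> str:
        email = m.instruct(
            "Write an email to {{name}} using the notes following: {{notes}}.",
            user_variables={"name": name, "notes": notes},  # fills the {{...}} placeholders
        )
        return email.value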

Simple email with boundaries and requirements

  1. The first step in harnessing the power of Mellea is adding requirements to something like this email. Take a look at this first example:

    import mellea
    m = mellea.start_session(backend_name="ollama", model_id="ibm/granite4:micro-h")
    
    def write_email_with_requirements(
        m: mellea.MelleaSession, name: str, notes: str
    ) -> str:
        email = m.instruct(
            f"Write an email to {name} using the notes following: {notes}.",
            requirements=[
                "The email should have a salutation",
                "Use only lower-case letters",
            ],
            # user_variables={"name": name, "notes": notes},  # Use double curly brackets instead of f-string
        )
        return str(email)
    
    
    print(
        write_email_with_requirements(
            m,
            name="Olivia",
            notes="Olivia helped the lab over the last few weeks by organizing intern events, advertising the speaker series, and handling issues with snack delivery.",
        )
    )
    

    As you can see from this output, you have now forced the Mellea framework to check its own work and produce what you need. Imagine the possibilities: you can start making sure your LLMs generate only the things that you want. Test this by changing "only lower-case" to "only upper-case" and seeing that it follows your instructions, as shown in the fragment below.
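
    For that exercise, only the requirements list changes; the rest of the function stays exactly the same:

    requirements=[
        "The email should have a salutation",
        "Use only upper-case letters",  # changed from "Use only lower-case letters"
    ],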

    Pretty neat, eh? Let's go even deeper.

    Let's create an email with some sampling and have Mellea find the best option for what we are looking for. So far we have added two requirements to the instruction, which are included in the model request, but nothing actually checks whether they are satisfied. To do that, we add a strategy for validating the requirements.

    You might notice that the example above fails, because the name "Olivia" contains an upper-case letter. Remove the "Use only lower-case letters" line and it should pass on the first re-run. This opens up some interesting opportunities: make sure the writing you expect is within the boundaries you set, and Mellea will keep trying until it gets it right.

Instruct-Validate-Repair

The first instruct-validate-repair pattern is as follows:

import mellea
from mellea.stdlib.requirement import req, check, simple_validate
from mellea.stdlib.sampling import RejectionSamplingStrategy

def write_email(m: mellea.MelleaSession, name: str, notes: str) -> str:
    email_candidate = m.instruct(
        f"Write an email to {name} using the notes following: {notes}.",
        requirements=[
            req("The email should have a salutation"),  # == r1
            req(
                "Use only lower-case letters",
                validation_fn=simple_validate(lambda x: x.lower() == x),
            ),  # == r2
            check("Do not mention purple elephants."),  # == r3
        ],
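        # Try up to 5 samples, rejecting candidates that fail requirement validation.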
        strategy=RejectionSamplingStrategy(loop_budget=5),
        user_variables={"name": name, "notes": notes},
        return_sampling_results=True,
    )
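    # With return_sampling_results=True, instruct returns a sampling-result object that
    # records whether any candidate passed and keeps every sampled generation.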
    if email_candidate.success:
        return str(email_candidate.result)
    else:
        return email_candidate.sample_generations[0].value

m = mellea.start_session(backend_name="ollama", model_id="ibm/granite4:micro-h")
print(
    write_email(
        m,
        "Olivia",
        "Olivia helped the lab over the last few weeks by organizing intern events, advertising the speaker series, and handling issues with snack delivery.",
    )
)

Most of this should look familiar by now, but the validation_fn and check should be new.

We create three requirements:

  • The first requirement (r1) will be validated by LLM-as-a-judge on the output of the instruction. This is the default behavior.
  • The second requirement (r2) uses a function that takes the output of a sampling step and returns a boolean indicating whether validation succeeded. While the validation_fn parameter expects a function that runs validation over the full session context, Mellea provides a wrapper for simpler validation functions (simple_validate(fn: Callable[[str], bool])) that take the output string and return a boolean, as seen here; another example appears after this list.
  • The third requirement is a check(). Checks are only used for validation, not for generation. Checks aim to avoid the "do not think about B" effect that often primes models (and humans) to do the opposite and "think" about B.
  • We also demonstrate, in the m = mellea.start_session() call, how you can specify a different Ollama model, in case you want to try something other than Mellea's ibm/granite4:micro default.
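
As a further illustration of programmatic validators, here is a small sketch of another requirement built with simple_validate. The word-count rule itself is an invented example for illustration, not part of the workshop code:

from mellea.stdlib.requirement import req, simple_validate

# A deterministic, programmatic check: no LLM-as-a-judge call is needed here.
keep_it_short = req(
    "Keep the email under 120 words",
    validation_fn=simple_validate(lambda output: len(output.split()) < 120),
)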

Run this in your local instance and you'll see it working, ideally with no purple elephants! :)

Hopefully you feel like you've learned a bunch about AI and about engaging with our open source models through this journey. Never hesitate to give us feedback, and remember: all of this is free, open source, Apache 2 licensed, and designed to work in the Enterprise ecosystem. Thanks for reading and joining us!