# Open Source AI Workshop
You've probably heard how tools like ChatGPT are changing workflows — but when it comes to privacy, security, and control, using public AI tools isn't always an option. In this hands-on workshop, you'll learn how to run your own local, open-source LLMs — no cloud, no cost, and no compromise.
We'll walk through installing and running models with tools like ollama, AnythingLLM, and Continue using familiar environments like VS Code. By the end, you'll have a fully functional local AI assistant, ready to support your work securely and offline.
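As a quick preview of the local setup described above, here is a minimal sketch of downloading and running a model with ollama. This assumes ollama is already installed and its server is running; the `granite3.1-dense:2b` model tag is one example from the ollama registry, and you can substitute whichever model the workshop uses.

```shell
# Assumption: ollama is installed (see https://ollama.com) and running.
# The model tag below is an example; swap in the model your workshop specifies.

# Download a small open model for local use
ollama pull granite3.1-dense:2b

# Start an interactive chat session in the terminal
ollama run granite3.1-dense:2b

# Show the models you have downloaded locally
ollama list
```

Everything happens on your own machine: the model weights are stored locally and prompts never leave your computer.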
The overarching goals of this workshop are as follows:
- Learn about Open Source AI and its general use cases.
- Use an open source LLM that is built in a verifiable and legal way.
- Learn about Prompt Engineering and how to leverage a local LLM in daily tasks.
> **Tip**
>
> Working with AI is all about exploration and hands-on engagement. These labs are designed to give you everything you need to get started — so you can collaborate, experiment, and learn together. Don’t hesitate to ask questions, raise your hand, and connect with other participants.
## Agenda
| Lab | Description |
|---|---|
| Lab 0: Workshop Pre-work | Install prerequisites for the workshop |
| Lab 1: Configuring Open-WebUI | Set up Open-WebUI to start using an LLM locally |
| Lab 2: Chatting with Your Local AI | Get acquainted with your local LLM |
| Lab 3: Prompt Engineering | Learn about prompt engineering techniques |
| Lab 4: Applying What You Learned | Refine your prompting skills |
| Lab 5: Using Open-WebUI for a local RAG | Use Open-WebUI for local retrieval-augmented generation (RAG) |
## Acknowledgments
- These labs are based on the originals here