January 2026 | AAAI-26 | Singapore EXPO
Agentic AI is poised to transform decision-making in Industry 4.0 by enabling autonomous agents to reason over multimodal inputs—such as sensor streams, structured knowledge bases, and unstructured maintenance logs—and to act adaptively under uncertainty. Yet real-world adoption remains challenging due to data fragmentation, integration complexity, limited explainability, and a lack of evaluation workflows.
This hands-on tutorial offers a full lifecycle walkthrough for building trustworthy agentic AI systems in industrial settings. Participants will engage in two interactive labs: (i) resolving data silos in smart manufacturing using an open-source platform, and (ii) benchmarking agent performance, reasoning, and explainability in an enterprise-scale industrial simulation. The labs also demonstrate capabilities such as trace visualization, real-time introspection, and comparative reasoning. The session concludes with best practices for governance, monitoring, and reusable evaluation workflows. Participants will leave with practical skills and modular tools to build explainable, robust, and deployable agentic AI systems for real-world Industry 4.0 applications.
Agentic AI is rapidly becoming a cornerstone of intelligent decision-making in Industry 4.0. These agents must reason across heterogeneous data sources—including sensor time series, structured knowledge graphs, and unstructured logs—while adapting under uncertainty. Despite advances in large language models and multimodal learning, building deployable and trustworthy systems remains difficult due to fragmented data, lack of explainability, and limited evaluation protocols.
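To make the fusion of heterogeneous sources concrete, the toy sketch below combines a numeric sensor reading, a symbolic threshold from a structured knowledge base, and a crude check of an unstructured log line into a single maintenance decision. All names, thresholds, and the decision rule are illustrative assumptions, not the tutorial's actual platform.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """One numeric observation from a sensor time series (hypothetical)."""
    asset_id: str
    vibration_mm_s: float  # RMS vibration velocity

# Toy "structured knowledge base": alarm thresholds per asset class.
# Values are placeholders, not real ISO limits.
KB_THRESHOLDS = {"pump": 7.1, "fan": 4.5}

def recently_serviced(log_line: str) -> bool:
    """Crude keyword check over an unstructured maintenance log entry."""
    text = log_line.lower()
    return "replaced" in text or "serviced" in text

def decide_action(reading: SensorReading, asset_class: str, log_line: str) -> str:
    """Fuse numeric, symbolic, and textual evidence into one decision."""
    limit = KB_THRESHOLDS.get(asset_class, 4.5)  # default is an assumption
    if reading.vibration_mm_s > limit and not recently_serviced(log_line):
        return "schedule_inspection"
    return "continue_monitoring"
```

For example, a pump vibrating at 9.2 mm/s with no recent service note would yield `schedule_inspection`, while the same reading alongside a "replaced bearing" log entry would not.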
This tutorial provides a comprehensive, hands-on walkthrough of the lifecycle of multimodal agentic AI—from design to deployment—featuring lab sessions on data integration and benchmarking. We explore reasoning strategies, evaluation methods, and governance tools that ensure trustworthy and auditable deployments.
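A reusable evaluation workflow of the kind covered in the benchmarking lab can be sketched as a harness that runs each agent over a shared task set, records a per-step trace, and compares accuracy and latency. The function names, metrics, and trace schema here are assumptions for illustration, not the tutorial's benchmark suite.

```python
import time

def run_benchmark(agents, tasks):
    """Run each agent on every task; collect accuracy and a per-task trace.

    agents: mapping of agent name -> callable taking a task input.
    tasks:  list of dicts with "input" and "expected" keys (hypothetical schema).
    """
    results = {}
    for name, agent in agents.items():
        trace, correct = [], 0
        for task in tasks:
            t0 = time.perf_counter()
            answer = agent(task["input"])
            latency = time.perf_counter() - t0
            ok = answer == task["expected"]
            correct += ok
            # Per-step trace supports later visualization and introspection.
            trace.append({"task": task["input"], "answer": answer,
                          "ok": ok, "latency_s": latency})
        results[name] = {"accuracy": correct / len(tasks), "trace": trace}
    return results
```

Because the trace is kept alongside the aggregate score, the same run can feed both a leaderboard-style comparison and a step-by-step audit of where an agent's reasoning diverged.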
| Time | Activity | Presenter(s) |
|---|---|---|
| 10 mins | Introduction and Overview: Multimodal AI agents, use cases in Industry 4.0, objectives. | Amit Sheth, Dhaval Patel |
| 20 mins | Multimodal Agents in Industry 4.0: Overview of architectures (symbolic + neural integration). | Ruwan Wickramarachchi |
| 20 mins | Lab Session 1: Addressing Data Silos and Integration Complexity. | Chathurangi Shyalika |
| 20 mins | Operationalizing and Governing Multimodal Agents: Evaluation and governance techniques. | Dhaval Patel, Saumya Ahuja |
| 20 mins | Lab Session 2: Evaluation Benchmarking at Scale for Industrial Multi-Agent Systems. | Shuxin Lin |
| 15 mins | Q&A and Wrap-up | All Presenters |
Participants should bring a laptop with Python 3 installed. Pre-configured environments and setup instructions will be provided. The tutorial uses:
Ph.D. student at AIISC, University of South Carolina. Research in Deep Learning, Multimodal AI, Neurosymbolic AI, anomaly detection, and event understanding.
AI Engineer Lead, IBM WatsonX ASEAN. Leads Generative AI and Agentic AI projects across APAC. Experienced in LLMs, RAG systems, and enterprise AI deployments.
Researcher at IBM with expertise in AI for Industry 4.0, agent evaluation, multimodal reasoning, and large-scale industrial AI benchmarks.
Research Scientist at Bosch Center for AI. Ph.D. from AIISC, USC. Research in Generative AI, Neurosymbolic AI, knowledge graphs, and multimodal representation learning.
Senior Technical Staff Member, IBM Research. Expert in Data Mining, Machine Learning, Time Series, and industrial AI platforms such as Maximo and AutoAI-TS.
NCR Chair & Professor, AIISC, USC. Fellow of IEEE, AAAI, ACM, AAAS. Research in trustworthy, explainable, and safe neuro-symbolic AI.