Neuro-Symbolic AI

Blog for IBM Neuro-Symbolic AI Workshop 2022 (18-19 Jan 2022)

Event Summary

Neuro-symbolic AI combines knowledge-driven symbolic AI and data-driven machine learning approaches. In this workshop we showed our recent progress on some of the most prominent open issues in today’s AI:

  • Incorporation of complex domain knowledge into learning, including ways to ensure trusted behavior — and vice versa, incorporation of learning to account for incomplete or imperfect knowledge
  • Rigorous expressive reasoning which is ‘soft’ (handles uncertainty) while computationally practical
  • Learning with many fewer examples through the use of knowledge
  • Full explainability by construction, including the reasons the models make their decisions
  • Natural language processing via this approach, achieving state-of-the-art results and handling more complex examples than is possible with today’s default AI.

This workshop included talks from IBM researchers and other academic AI experts. The speakers shared an overview of neuro-symbolic AI technologies, achievements to date, and future direction for the field. The workshop also hosted a panel discussion on the future of AI and the possible role of neuro-symbolic AI approaches.

The workshop consisted of 9 sessions over 2 days. Details of the sessions are given below.


Day 1, Session 1: Introduction

Agenda

  • Opening Words — Alexander Gray (IBM)
  • Motivation and overview — Francesca Rossi (IBM), Murray Campbell (IBM), Lior Horesh (IBM)
  • Invited talk 1: A Short History and Evolution of Neurosymbolic AI — Luis Lamb (Universidade Federal do Rio Grande do Sul)
  • Neuro-symbolic AI overview — Alexander Gray (IBM)
  • General AI and Interactive fiction — Murray Campbell (IBM)

Summary

The opening session introduced the field of neuro-symbolic AI, which combines knowledge-driven symbolic AI and data-driven machine learning approaches. In the invited talk, Professor Luis Lamb (Universidade Federal do Rio Grande do Sul), who has long been an advocate of neuro-symbolic AI, provided a comprehensive introduction to the origins of neuro-symbolic approaches.  While symbolic and neural approaches have followed somewhat separate trajectories, Professor Lamb provided numerous examples of past and current work that demonstrates the benefits of integrating these two approaches.

Alex Gray, the VP of AI Foundations at IBM Research, then gave a high-level overview of the neuro-symbolic research program at IBM. He began by exploring the strengths and weaknesses of neural and symbolic approaches, making the case for uniting the two. Alex then showed, in a preview of the upcoming workshop sessions, how different types of neuro-symbolic AI can fit together in an integrated fashion.

Finally, Murray Campbell, Distinguished Research Scientist at IBM Research, examined the role of benchmarks in AI.  Murray argued that most current benchmarks do not effectively support the long-term goal of developing general AI, and that text-based interactive fiction environments hold promise for developing neuro-symbolic systems that strive for more general intelligence.


Day 1, Session 2: Learnable Reasoning

Agenda

  • Learnable Reasoning — Ndivhuwo Makondo (IBM), Hima Karanam (IBM)
  • Invited talk 2: Theory of real-valued logics — Ron Fagin (IBM)
  • Invited talk 3: Bridging Lukasiewicz logic with Neural Networks: a fruitful link — Antonio di Nola (Università degli Studi di Salerno)

Summary

The Learnable Reasoning session began with Ryan Riegel introducing logical neural networks (LNNs), which are at the core of neuro-symbolic AI research at IBM. LNNs are a new representation and reasoning framework that is simultaneously neural and symbolic. Naweed Khan presented the LNN repository, available at https://github.com/IBM/LNN, and gave an example of how to use LNNs to define predicates and reason over them. Finally, Ndivhuwo Makondo introduced lifted LNNs and HPC scaling of LNNs.

The session covered various aspects of LNNs and their extensions; references to the papers are given below.
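To give a flavor of how a logical neuron works, below is a minimal illustrative sketch in plain Python/NumPy (not the API of the IBM/LNN repository linked above). It implements the weighted Łukasiewicz-style conjunction used in the LNN paper, which returns a truth value in [0, 1] and, being monotone in its inputs, propagates lower and upper truth bounds independently:

```python
import numpy as np

def weighted_and(x, w, beta=1.0):
    """Weighted Lukasiewicz-style conjunction: clamp(beta - sum_i w_i * (1 - x_i))."""
    return float(np.clip(beta - np.sum(w * (1.0 - np.asarray(x))), 0.0, 1.0))

def and_bounds(lower, upper, w, beta=1.0):
    """Propagate truth *bounds* through the conjunction. Monotonicity means
    lower input bounds give the lower output bound, and likewise for upper."""
    return weighted_and(lower, w, beta), weighted_and(upper, w, beta)

# Two antecedents: the first known true, the second uncertain in [0.3, 0.7].
w = np.array([1.0, 1.0])
print(and_bounds([1.0, 0.3], [1.0, 0.7], w))  # -> approximately (0.3, 0.7)
```

Instead of collapsing uncertainty into a single point value, the reasoner maintains such [lower, upper] truth intervals and tightens them as evidence accumulates.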

References

  • Riegel et al. (2020). Logical Neural Networks. https://arxiv.org/abs/2006.13155
  • Fagin, Riegel, and Gray (2020). Foundations of Reasoning with Uncertainty via Real-valued Logic. https://arxiv.org/abs/2008.02429
  • Hang Jiang, Sairam Gurajada, Qiuhao Lu, Sumit Neelam, Lucian Popa, Prithviraj Sen, Yunyao Li, Alexander G. Gray (2021). LNN-EL: A Neuro-Symbolic Approach to Short-text Entity Linking. ACL/IJCNLP (1) 2021: 775-787
  • Prithviraj Sen, Breno W. S. R. de Carvalho, Ryan Riegel, Alexander G. Gray (2021). Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks. CoRR abs/2112.03324
  • Songtao Lu, Naweed Khan, Ismail Yunus Akhalwaya, Ryan Riegel, Lior Horesh, Alexander G. Gray (2021). Training Logical Neural Networks by Primal-Dual Methods for Neuro-Symbolic Reasoning. ICASSP 2021: 5559-5563
  • Pavan Kapanipathi et al. (2021). Leveraging Abstract Meaning Representation for Knowledge Base Question Answering. ACL/IJCNLP (Findings) 2021: 3884-3894
  • Kimura et al. (2020). Reinforcement Learning with External Knowledge by using Logical Neural Networks. KBRL Workshop at IJCAI 2020
  • Lebese et al. (2021). Proof Extraction for Logical Neural Networks. https://openreview.net/pdf?id=Xw3kb6UyA31
  • Crouse et al. (2021). A Deep Reinforcement Learning Approach to First-Order Logic Theorem Proving. AAAI 2021
  • Abdelaziz et al. (2022). Learning to Guide a Saturation-Based Theorem Prover. IEEE TPAMI 2022
  • Qian et al. (2021). Logical Credal Networks. https://arxiv.org/abs/2109.12240

In the invited talks, IBM Fellow Ron Fagin gave a fascinating talk, “Theory of real-valued logics”, comparing real-valued logic to binary and other types of logic. The talk “Bridging Lukasiewicz logic with Neural Networks: a fruitful link” by Prof. Antonio di Nola explained an approach to building a bridge between logical calculus and neural networks.
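For readers new to the logic underlying both talks, the standard Łukasiewicz connectives (stated here for reference; this is textbook material, not content specific to the talks) replace Boolean operations with piecewise-linear functions on the unit interval:

```latex
\begin{aligned}
\neg a          &= 1 - a                 && \text{(negation)}\\
a \otimes b     &= \max(0,\, a + b - 1)  && \text{(strong conjunction, t-norm)}\\
a \oplus b      &= \min(1,\, a + b)      && \text{(strong disjunction)}\\
a \rightarrow b &= \min(1,\, 1 - a + b)  && \text{(implication)}
\end{aligned}
```

Because every formula built from these connectives computes a piecewise-linear function (McNaughton’s theorem), there is a natural correspondence with the piecewise-linear functions computed by ReLU networks, which is one concrete form of the bridge between logical calculus and neural networks.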


Day 1, Session 3: Natural Language Understanding            

Agenda

  • Natural Language Understanding — Pavan Kapanipathi (IBM), Salim Roukos (IBM), Radu Florian (IBM)
  • Invited talk 4: It’s Time for Reasoning — Dan Roth (University of Pennsylvania & Amazon AWS AI)
  • Invited talk 5: System 1 Reasoning with Box Embeddings and System 2 Reasoning from Subgraph Cases — Andrew McCallum (University of Massachusetts Amherst)

Summary

The session presented the details of IBM’s approach to natural language understanding and reasoning. Pavan Kapanipathi presented the knowledge base question answering (KBQA) pipeline, which has produced state-of-the-art results on multiple reasoning-based QA datasets (including LC-QuAD and QALD). The pipeline consists of several components, including abstract meaning representation (AMR) parsing, entity linking, relation linking, and representation/reasoning using logical neural networks (LNNs). Unlike pure deep learning approaches, the whole pipeline is explainable. LNN and logic embeddings have also shown significant improvements on knowledge base completion problems.
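Schematically, the stages compose as follows (a toy, runnable sketch with invented stand-ins for the real AMR parser, linkers, and LNN reasoner; it is meant only to show the shape of the pipeline):

```python
# Toy knowledge base: (head, relation, tail) triples with Wikidata-style IDs.
KB = {("Q90", "capital_of", "Q142")}          # Paris capital_of France
LABELS = {"paris": "Q90", "france": "Q142"}   # toy entity-linking table

def parse_amr(question):
    # Real systems run a neural AMR parser; here we fake a tiny graph.
    return {"predicate": "capital_of", "unknown": "?x", "known": "france"}

def link_entities(amr):
    return {"known": LABELS[amr["known"]]}    # ground mentions to KB entities

def link_relations(amr):
    return amr["predicate"]                   # map AMR edges to KB relations

def reason(relation, entities):
    # The real pipeline hands a first-order query to an LNN reasoner;
    # here we simply scan the triples.
    return [h for (h, r, t) in KB if r == relation and t == entities["known"]]

amr = parse_amr("What is the capital of France?")
print(reason(link_relations(amr), link_entities(amr)))  # -> ['Q90'] (Paris)
```

Because each stage produces an inspectable symbolic artifact (an AMR graph, linked entities and relations, a logical query), a wrong answer can be traced back to the stage that caused it, which is the sense in which the pipeline is explainable.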

The invited talk by Prof. Dan Roth showed how humans use vast amounts of experience and abstraction to understand complex natural language and images. He presented in detail the key challenges faced by NLU researchers and how neuro-symbolic approaches can help address most of the challenges in knowledge and reasoning. The talk by Prof. Andrew McCallum presented his fascinating work on box embeddings. He concluded that NLU built on a combination of neural and symbolic methods is robust, interpretable, and controllable.
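As a rough illustration of the box-embedding idea (a minimal NumPy sketch, not Prof. McCallum’s implementation): each concept is an axis-aligned box, and the conditional probability P(A | B) is approximated by the fraction of B’s volume that overlaps A:

```python
import numpy as np

def box_volume(lo, hi):
    """Volume of an axis-aligned box; zero if any side is degenerate."""
    return float(np.prod(np.maximum(hi - lo, 0.0)))

def conditional_prob(a_lo, a_hi, b_lo, b_hi):
    """P(A | B) ~ vol(A intersect B) / vol(B)."""
    inter_lo, inter_hi = np.maximum(a_lo, b_lo), np.minimum(a_hi, b_hi)
    vol_b = box_volume(b_lo, b_hi)
    return box_volume(inter_lo, inter_hi) / vol_b if vol_b > 0 else 0.0

# The 'dog' box sits entirely inside the 'animal' box, so P(animal | dog) = 1.
animal = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))
dog    = (np.array([0.2, 0.2]), np.array([0.5, 0.5]))
print(conditional_prob(*animal, *dog))  # -> 1.0
```

Trained models replace these hard volumes with smoothed (e.g., Gumbel) versions so that the overlap is differentiable, but the geometric intuition (containment encodes entailment) is the same.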

References

  • Riegel, R., Gray, A., Luus, F., Khan, N., Makondo, N., Akhalwaya, I. Y., … & Srivastava, S. (2020). Logical Neural Networks. arXiv preprint, https://arxiv.org/abs/2006.13155
  • Ravishankar, S., Thai, J., Abdelaziz, I., et al. (2021). A Two-Stage Approach towards Generalization in Knowledge Base Question Answering. arXiv preprint, https://arxiv.org/abs/2111.05825
  • Zhou, J., Naseem, T., Astudillo, R. F., Lee, Y. S., Florian, R., et al. (2021). Structure-aware Fine-tuning of Sequence-to-sequence Transformers for Transition-based AMR Parsing. arXiv preprint, https://arxiv.org/pdf/2110.15534.pdf
  • Lee, Y. S., Astudillo, R. F., Hoang, T. L., Naseem, T., Florian, R., et al. (2021). Maximum Bayes Smatch Ensemble Distillation for AMR Parsing. arXiv preprint, https://arxiv.org/abs/2112.07790
  • Bornea, M., Astudillo, R. F., Naseem, T., et al. (2021). Learning to Transpile AMR into SPARQL. arXiv preprint, https://arxiv.org/abs/2112.07877

Day 1, Session 4: Knowledge Foundations           

Agenda

  • Knowledge Foundation — Rosario Uceda-Sosa (IBM), Maria Chang (IBM), Guilherme Lima (IBM)
  • Invited talk 6: Designing AI-Enabled Systems for Longevity — Deborah L. McGuinness (Rensselaer Polytechnic Institute)
  • Invited talk 7: Positive AI with Social Commonsense Models — Maarten Sap (Allen Institute and CMU)

Summary

This session described IBM efforts to build open and reusable knowledge foundations for downstream natural language understanding and reasoning tasks, such as event prediction, entity disambiguation, and commonsense reasoning. Our aim is to represent symbolic knowledge that can be gleaned not only from existing knowledge sources but also from neural models of information extraction and graphical event models. We approach this problem by building a Universal Logic Knowledge Base (ULKB), which combines a variety of knowledge graphs and ontologies to capture lexical knowledge (VerbNet, WordNet, PropBank), commonsense knowledge (ConceptNet), and world knowledge (Wikidata) in a unified framework. The ULKB is greater than the sum of its parts thanks to query constructs that unify entities across disparate KGs and approximate entity equivalence through linguistic phenomena such as meronymy. Additionally, our Hyperknowledge Graph (HKG) infrastructure supports reified subgraphs called contexts, which pave the way for higher-order reasoning.
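As a toy illustration of the unification idea (the identifiers-to-ULKB mapping below is invented for this sketch; the real ULKB operates over full knowledge graphs with much richer query constructs):

```python
# Map source-specific identifiers from several KGs onto one ULKB entity.
SAME_AS = {
    ("wikidata", "Q144"): "ulkb:Dog",         # Wikidata item for 'dog'
    ("wordnet", "dog.n.01"): "ulkb:Dog",      # WordNet synset
    ("conceptnet", "/c/en/dog"): "ulkb:Dog",  # ConceptNet node
}

def unify(source, local_id):
    """Resolve a source-specific ID to a single ULKB node when known."""
    return SAME_AS.get((source, local_id), f"{source}:{local_id}")

# Facts drawn from different graphs now share one subject node, so a
# single query can mix world knowledge with commonsense knowledge.
facts = [
    (unify("wikidata", "Q144"), "subclass_of", "ulkb:Mammal"),
    (unify("conceptnet", "/c/en/dog"), "capable_of", "ulkb:Bark"),
]
print(facts)
```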

References

  • Ontologies, Reasoning, Hyperknowledge:
    • Rosario Uceda-Sosa, Nandana Mihindukulasooriya, Atul Kumar, Sahil Bansal, and Seema Nagar. 2022. Domain specific ontologies from Linked Open Data (LOD). In 5th Joint International Conference on Data Science & Management of Data (9th ACM IKDD CODS and 27th COMAD) (CODS-COMAD 2022). Association for Computing Machinery, New York, NY, USA, 105–109. DOI: https://doi.org/10.1145/3493700.3493703
    • Guilherme Lima, Marcelo Machado, Rosario Uceda-Sosa, Marcio Moreno (2021). Practical Rule-Based Qualitative Temporal Reasoning for the Semantic Web. 5th International Joint Conference on Rules and Reasoning (RuleML+RR 2021)
    • Guilherme Lima, Rodrigo Costa, Marcio Ferreira Moreno. (2019). An Introduction to Symbolic Artificial Intelligence Applied to Multimedia. arXiv:1911.09606
    • Marcio Ferreira Moreno, Rafael Brandao, Renato Cerqueira. (2017). Extending Hypermedia Conceptual Models to Support Hyperknowledge Specifications. International Journal of Semantic Computing, vol. 11, no. 1, pp. 43-64.
  • Graphical Event Modeling and Causal Knowledge Extraction:
    • Debarun Bhattacharjya, Tian Gao, Dharmashankar Subramanian (2021). Ordinal Historical Dependence in Graphical Event Models with Tree Representations. AAAI 2021
    • Tian Gao, Dharmashankar Subramanian, Debarun Bhattacharjya, Xiao Shou, Nicholas Mattei, Kristen Bennett. (2021). Causal Inference for Event Pairs in Multivariate Point Processes. NeurIPS 2021
    • Oktie Hassanzadeh (2021). Building a Knowledge Graph of Events and Consequences Using Wikidata. ISWC 2021 (Wikidata Workshop)

At the end of the session, Deborah McGuinness (RPI) gave an invited talk titled “Designing AI-Enabled Systems for Longevity”, in which she described a problem-centric approach to building large data- and knowledge-intensive systems. Maarten Sap (AI2) gave an invited talk titled “Positive AI with Social Commonsense Models”, in which he described opportunities and risks in building neuro-symbolic commonsense knowledge graphs, such as COMET.


Day 2, Session 1: Optimal Action      

Agenda

  • Optimal action — Shirin Sohrabi (IBM), Debarun Bhattacharjya (IBM)
  • Invited talk 8: Rich Representations for Rational Robots — Leslie Kaelbling (MIT)
  • Invited talk 9: Building Taskable Reinforcement Learning Agents — Sheila McIlraith (University of Toronto)          

Summary

Decision making is fundamental to many real-world problems, and training an AI system to take optimal actions raises many challenges, including: 1) availability of offline data only, 2) a limited number of interactions with the environment, 3) dynamically changing systems, and 4) handling safety constraints while learning explainable policies. The goal of the “Optimal Action” research at IBM is to build knowledge-enabled decision technologies. We use neuro-symbolic policy representations that can leverage knowledge and reasoning to learn better policies with significantly fewer interactions. The policies are explainable and can be edited by users, which allows co-creation of the models.

In the first hour of the session, the IBM speakers covered a range of topics in knowledge-enabled sequential decision making.

The session also featured two thought-provoking invited talks. Prof. Leslie Kaelbling’s (MIT) talk, “Rich Representations for Rational Robots”, introduced ways to train a robot or AI system to perform varied sequential decision-making tasks in real-world environments with relatively few interactions. She presented a common architecture for intelligent robots, with general representations and reasoning mechanisms that can encode hand-built models and enable learning, as well as neuro-symbolic relational transition models (NSRTs) that enable planning. Prof. Sheila McIlraith’s (University of Toronto) talk, “Building Taskable Reinforcement Learning Agents”, showed a way to address the sample-efficiency problem of reinforcement learning: Linear Temporal Logic (LTL) is used to create reward automata called reward machines, which the QRM (Q-learning for Reward Machines) algorithm then exploits to learn a policy with fewer interactions.
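To make the reward-machine idea concrete, here is a minimal tabular sketch (a simplified encoding written for this post, not the speakers’ code). The machine issues reward only when the LTL-derived subtasks are completed in order, and QRM learns Q-values indexed by the machine’s state:

```python
import collections

# Reward machine for "get coffee (event 'c'), then reach the office ('o')".
# RM states: 0 = start, 1 = carrying coffee, 2 = done (terminal).
RM = {(0, "c"): (1, 0.0), (1, "o"): (2, 1.0)}

def rm_step(u, event):
    """Advance the reward machine on a high-level event; unmatched events
    leave the machine in place with zero reward."""
    return RM.get((u, event), (u, 0.0))

# One Q-value per (RM state, env state, action): the learned policy can
# therefore depend on which subtask the agent is currently pursuing.
Q = collections.defaultdict(float)

def qrm_update(u, s, a, event, s2, actions, alpha=0.1, gamma=0.9):
    """Tabular Q-learning update driven by the reward machine."""
    u2, r = rm_step(u, event)
    best_next = max(Q[(u2, s2, a2)] for a2 in actions)
    Q[(u, s, a)] += alpha * (r + gamma * best_next - Q[(u, s, a)])
    return u2
```

In the full QRM algorithm, every environment transition is additionally replayed against all reward-machine states (counterfactual experience), which is a key source of its sample-efficiency gains.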


Day 2, Session 2: Insight             

Agenda

  • Insight — Renato Cerqueira (IBM), Sanjeeb Dash (IBM)
  • Invited talk 10: What’s new in Learning and Reasoning? — Stephen Muggleton (Imperial College London)
  • Invited talk 11: Building machines that see, learn and think like people — Joshua Tenenbaum (MIT) 

Summary

The neuro-symbolic AI Insight work at IBM aims to create a neuro-symbolic methodology and supporting technologies that improve AI understandability and enable human-AI co-creation in data science, scientific discovery, and decision optimization. Among other topics, Sanjeeb Dash introduced a neuro-symbolic approach to inductive logic programming (ILP) with LNNs. Renato Cerqueira presented a symbiotic interaction framework between AI and users that brings the human into the loop and enables co-creation of models. Renato also presented IBM’s work on hyperknowledge and knowledge-augmented risk assessment (KaRA).

References

  • Prithviraj Sen, Breno W. S. R. de Carvalho, Ryan Riegel, Alexander Gray (2022). Neuro-Symbolic Inductive Logic Programming with Logical Neural Networks. AAAI 2022. https://arxiv.org/abs/2112.03324
  • S. Dash, J. Goncalves (2021). LPRules: Rule Induction in Knowledge Graphs Using Linear Programming. https://arxiv.org/abs/2110.08245
  • Cristina Cornelio, Sanjeeb Dash, Vernon Austel, Tyler Josephson, Joao Goncalves, Kenneth Clarkson, Nimrod Megiddo, Bachir El Khadir, Lior Horesh (2021). AI Descartes: Combining Data and Theory for Derivable Symbolic Discovery. https://arxiv.org/abs/2109.01634
  • R. Chen, S. Dash, T. Gao (2021). Integer Programming for Causal Structure Learning in the Presence of Latent Variables. ICML 2021, PMLR 139:1550-1560
  • R. Yan, A. Julius, M. Chang, A. Fokoue, T. Ma, R. Uceda-Sosa (2021). STONE: Signal Temporal Logic Neural Network for Time Series Classification. 2021 International Conference on Data Mining Workshops (ICDMW), pp. 778-787

The IBM presentations were followed by two invited talks from distinguished speakers. Stephen Muggleton’s talk covered methods for integrating machine learning, reasoning, and ILP. Josh Tenenbaum’s inspiring talk on building machines that learn and think like people explained what we can learn from the way humans perform commonsense reasoning and exhibit broader cognitive abilities; the open question he raised was how to design an architecture that can support varied representations, abstractions, causal principles, and more.


Day 2, Session 3: Learning with less                

Agenda

  • Learning with less — Mark Squillante (IBM), Ken Clarkson (IBM)
  • Invited talk 12: Meta-Learning — Timothy Hospedales (University of Edinburgh)
  • Invited talk 13: Implicit Symbolic Representation and Reasoning in Deep Networks for Vision and Language — Jacob Andreas (MIT)

Summary

Learning with less, and generalization, is a key focus area of neuro-symbolic AI research at IBM. “Less” can be interpreted in various ways; the goals of this research area include: 1) learning with 10x less computational resources, 2) multi-task learning with 10x less data or interactions, 3) compositional generalization for minimal recursion semantics (MRS) with LNNs, and 4) defining a quantitative measure of intelligence across multiple tasks. The talk was divided into several parts:

  • “Learning and Generalization: the Statistical Physics Approach” by Yuhai Tu
  • “Learning to Generate Image-Source-Agnostic Universal Adversarial Perturbations” by Pari Ram
  • “Min-Max Bilevel Optimization for Robust Representation Learning” by Songtao Lu
  • “Compositional Generalization” by Tim Klinger
  • “Multi-armed Bandits with Group Testing” by Shashanka Ubaru
  • “Generalization through Distributionally Robust Learning” by Soumyadip Ghosh
  • “Transfer Learning: Geometric Structures, Minimax Bounds, and Minimax Optimality” by Mark Squillante (to appear, AISTATS 2022)
  • “Capacity and Bias of Learned Geometric Embeddings for Directed Graphs” by Ken Clarkson

In the invited talks for the session, Prof. Timothy Hospedales gave a comprehensive talk on meta-learning, arguing that meta-learning combined with neuro-symbolic methods can outperform other learning approaches. Prof. Jacob Andreas showed how many features and computations of deep models can be well approximated by interpretable symbolic expressions, for both language and images; these expressions can then be used to understand and control black-box neural models.


Day 2, Session 4: Neuro-Symbolic AI Related Advances

Agenda

  • Neuro-symbolic AI related advances — Lior Horesh (IBM)
  • Invited talk 14: Rebooting AI — Gary Marcus (New York University) 
  • Invited talk 15: SynAGI — James Kozloski (IBM)

Summary

Neuro-symbolic AI is a multi-faceted topic that attempts to bridge knowledge, data, learning, and reasoning. This session ventured into some of the inspirational disciplines and theories that can help establish coherent foundations for neuro-symbolic AI, drawing on algebra and geometry, cognitive theories, neuroscience, formal logic, and information theory.

In the first hour of the session, the speakers covered various topics:

  • Matrix and Tensor Algebra – Theory and Algorithms (Shashanka Ubaru)
  • xGraph: Accelerated and Explainable Graph Deep Learning (Tengfei Ma) 
  • Geometry, Data, and Algorithms (Aldo Guzman)
  • Thinking Fast and Slow in AI (Francesca Rossi)
  • Progress in Neuroscience (Mattia Rigotti)
  • Automated/Assisted Discovery of Correct-by-Construction Algorithms: Property-Guided Inductive Synthesis (Vasily Pestun)
  • Math Zero (Vasily Pestun)
  • P vs. NP Problem (Jon Lenchner)
  • Informational Lens: connecting bits, qubits, and neurons (Chai Wah Wu)

The second hour of the session included two intriguing talks offering insights on how to advance AI:

  • Rebooting AI (Gary Marcus) - Prof. Marcus elucidated many of the deficiencies of state-of-the-art AI, such as poor generalizability, lack of common sense, limited incorporation of knowledge, and siloed research thrusts. He proposed that the AI community direct its efforts toward advancing neuro-symbolic AI, as well as bringing in insights from cognitive models, forming real-world models, leveraging compositionality, and figuring out how to incorporate values. Lastly, he connected these desired innovations with the necessity of harnessing such advancements to solve important societal challenges.
  • SynAGI - A Systems Neuroscience Approach to General Intelligence (James Kozloski) - Dr. Kozloski described an effort to synergize neuroscience and AI. He argued that convergence of these disciplines requires coherent definitions of measures of intelligence, combining the design of novel AI with integrative modeling of brain systems, and, lastly, coalescing toward building and demonstrating brain-derived AI architectures. The project’s working hypothesis is that requirements on mechanisms, methods, and behaviors can be implemented and validated in a neuroscience form, and then enable the synthesis of architectures, methods, and environments.

References

  • Tensor-Tensor Products for Optimal Representation and Compression - M. Kilmer, L. Horesh, H. Avron, E. Newman, PNAS, 2021
  • Dynamic Graph Convolutional Networks Using the Tensor M-Product - O. Malik, S. Ubaru, L. Horesh, M. Kilmer, and H. Avron, SDM 2021
  • Sparse graph-based sketching for fast numerical linear algebra - D. Hu, S. Ubaru, A. Gittens, K. Clarkson, L. Horesh, and V. Kalantzis, ICASSP 2021
  • Projection techniques to update the truncated SVD of evolving matrices - V. Kalantzis, G. Kollias, S. Ubaru, A. Nikolakopoulos, L. Horesh, and K. Clarkson,  ICML 2021
  • Analysis of stochastic Lanczos quadrature for spectrum approximation - T. Chen, T. Trogdon, and S. Ubaru,  ICML, 2021 (Long Presentation)
  • Dynamic graph and polynomial chaos based models for contact tracing data analysis and optimal testing prescription - S. Ubaru, L. Horesh, G. Cohen,  JBI, 2021
  • Near-Optimal Algorithms for Linear Algebra in the Current Matrix Multiplication Time - N. Chepurko, K. Clarkson, P. Kacham, D. Woodruff, SODA 2022.
  • Efficient Scaling of Dynamic Graph Neural Networks - V. Chakaravarthy, S. Pandian, S. Raje, Y. Sabharwal, T. Suzumura, and S. Ubaru, Supercomputing (SC21), 2021
  • Ali, M., Berrendorf, M., Galkin, M., Thost, V., Ma, T., Tresp, V., & Lehmann, J. Improving Inductive Link Prediction Using Hyper-relational Facts. In International Semantic Web Conference (pp. 74-92). (2021, October). Springer, Cham.
  • Platt, D.E., Basu, S., Zalloua, P.A., Parida, L., Characterizing redescriptions using persistent homology to isolate genetic pathways contributing to pathogenesis. BMC Syst Biol 10, S10 (2016)
  • Karisani, N., Platt D.E., Basu, S., Parida, L., Inferring COVID-19 Biological Pathways from Clinical Phenotypes via Topological Analysis, International Workshop on Health Intelligence, AAAI 2021
  • Thinking Fast and Slow in AI: the Role of Metacognition, Marianna Bergamaschi Ganapini, Murray Campbell, Francesco Fabiano, Lior Horesh, Jon Lenchner, Andrea Loreggia, Nicholas Mattei, Francesca Rossi, Biplav Srivastava, Kristen Brent Venable, NeurIPS 2021 workshop on the role of Metacognition in AI, https://arxiv.org/abs/2110.01834 (2021)
  • Thinking Fast and Slow in AI, Grady Booch, Francesco Fabiano, Lior Horesh, Kiran Kate, Jon Lenchner, Nick Linck, Andrea Loreggia, Keerthiram Murugesan, Nicholas Mattei, Francesca Rossi, Biplav Srivastava, AAAI 2021 blue sky ideas track, https://arxiv.org/abs/2010.06002 (2020)
  • Preferences and Ethical Priorities: Thinking Fast and Slow in AI - Proc. AAMAS 2019
  • Deep Bandits Show-Off: Simple and Efficient Exploration with Deep Networks, Rong Zhu, Mattia Rigotti, NeurIPS 2021
  • Predictive Learning as a network mechanism for extracting low-dimensional latent space representations, Stefano Recanatesi, Matthew Farrell, Guillaume Lajoie, Sophie Deneve, Mattia Rigotti & Eric Shea-Brown, Nature Comms 2021
  • CertRL: formalizing convergence proofs for value and policy iteration in Coq. Vajjha, K., Shinnar, A., Trager, B., Pestun, V., & Fulton, N., In Proceedings of the 10th ACM SIGPLAN International Conference on Certified Programs and Proofs, (pp. 18-31). 2021 
  • PyCoq open source release, https://github.com/IBM/PyCoq
  • “Multi-Structural Games and Number of Quantifiers”, Fagin, R., Lenchner, J., Regan, K. W., & Vyas, N., Proceedings of the Conference on Logic in Computer Science (LICS) 2021
  • “New Progress with a Forgotten Logical Game”, Jon Lenchner, Conference on Highlights of Logic, Games and Automata, 2021. 
  • First IBM Research Workshop on the Informational Lens (Sept 29 - Oct 2, 2020), https://sites.google.com/view/informational-lens-workshop-1/home (recordings of selected talks: http://ibm.biz/first_informational_lens_workshop)
  • JMM 2022 AMS Special Session on Mathematics Through the Informational Lens, https://www.jointmathematicsmeetings.org/meetings/national/jmm2022/2268_program_ss109.html#title 
  • J. Lenchner, “A Finitist’s Manifesto: Do we need to Reformulate the Foundations of Mathematics?,” arXiv:2009.06485 
  • G. Karunaratne, M. Schmuck, M.L. Gallo, G. Cherubini, L. Benini, A. Sebastian, A. Rahimi, Robust high-dimensional memory-augmented neural networks, Nature Communications, April 2021 (featured among the 50 best articles in Applied Physics and Mathematics)
  • G. Karunaratne, M. L. Gallo, G. Cherubini, L. Benini, A. Rahimi, A. Sebastian, “In-memory hyperdimensional computing”, Nature Electronics, 2020. (Cover article)
  • G. Lan, P. Sartori, S. Neumann, V. Sourjik, Y. Tu, “The Energy-Speed-Accuracy trade-off in sensory adaptation”, Nat. Phys. 2012.

Day 2, Session 5: Closing          

Agenda

  • Neuro-Symbolic AI Toolkit — Naweed Khan (IBM)
  • Panel: The future of (neuro-symbolic) AI — Moderator: Francesca Rossi (IBM); Panelists: Henry Kautz (University of Rochester), Gary Marcus (New York University), Luis Lamb (Universidade Federal do Rio Grande do Sul), Leslie Kaelbling (MIT)
  • Closing remarks — Alexander Gray (IBM)

Summary

The workshop concluded with a session that began with a description of the first version of the IBM neuro-symbolic AI toolkit (see https://ibm.biz/nstoolkit). The current version includes over 40 publicly available code repositories covering various aspects of the neuro-symbolic AI pipeline and its applications, including the Logical Neural Network (LNN) repository. These repositories contain the essential components to construct a complete pipeline for question answering systems (see Session 1), neuro-symbolic AI agents for sequential decision making, and relevant benchmarks.

The session continued with a panel discussion, moderated by Francesca Rossi (IBM), on the future of AI and the possible role of neuro-symbolic AI approaches, which included the following panelists:

  • Henry Kautz (University of Rochester, USA)
  • Gary Marcus (New York University, USA)
  • Luis Lamb (Universidade Federal do Rio Grande do Sul, Brazil)
  • Leslie Kaelbling (MIT, USA)

The panel discussed many aspects of neuro-symbolic AI, including the opportunities for neuro-symbolic AI to contribute to advancing AI’s capabilities, the role of benchmarks vs challenges, the use of large language models, the exploitation of knowledge and ontologies, and the lessons learnt from embodied AI.

Alexander Gray, who leads the IBM research efforts in neuro-symbolic AI, concluded the session and the whole workshop with final remarks that summarized the two-day event and discussed opportunities for collaborative work between IBM and the whole neuro-symbolic AI research community. 


Page last updated: 10 March 2022