
Reasoning in Neurosymbolic AI

Son Tran, Edjard Mota, Artur d'Avila Garcez
School of Information Technology, Deakin University, Melbourne, Australia; Instituto de Computação, Universidade Federal do Amazonas, Manaus, Brazil
arXiv (2025)

📝 Paper Summary

Tags: Neurosymbolic AI · Energy-based Models · Logical Reasoning in Neural Networks
The Logical Boltzmann Machine (LBM) maps propositional logic formulae to Restricted Boltzmann Machines, enabling neural networks to perform sound logical reasoning and learning simultaneously via energy minimization.
Core Problem
Current deep learning systems, particularly Large Language Models (LLMs), suffer from hallucinations, lack of interpretability, and unreliability when handling exceptions or safety constraints.
Why it matters:
  • LLMs (e.g., GPT-4) act as 'black boxes' that cannot offer logical guarantees, making them risky for safety-critical applications like self-driving cars
  • Fixing reliability issues via Reinforcement Learning or post-hoc alignment is costly and data-inefficient
  • Purely neural approaches struggle with 'true generalization' and often fail at simple formal reasoning tasks despite massive scale
Concrete Example: The paper cites OpenAI's o1 system, which uses Chain of Thought (CoT) to improve reasoning. However, CoT relies on synthetic data generation that can be unreliable; a model might solve a task today but fail at an analogous task tomorrow due to simple naming variations, as it lacks a sound underlying reasoning mechanism.
Key Novelty
Logical Boltzmann Machine (LBM)
  • Translates any propositional logic formula into the energy function of a Restricted Boltzmann Machine (RBM) such that minimizing energy corresponds to finding satisfying truth assignments
  • Uses the neural network itself to search for logical models (truth assignments that satisfy the formula) via Gibbs sampling
  • Acts as a neurosymbolic module that can be attached to complex networks (e.g., CNNs) to enforce logical constraints like fairness or safety
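The formula-to-energy translation can be illustrated with a small sketch (a hypothetical example, not the paper's code; the formula, variable names, and constant `EPS` are our choices). A formula in strict DNF (mutually exclusive disjuncts) gets one hidden unit per disjunct, with weight +1 for each positive literal, -1 for each negative literal, and bias `-|positives| + EPS`; minimizing over the hidden units then yields free energy `-EPS` exactly on satisfying assignments and `0` otherwise:

```python
from itertools import product

EPS = 0.5  # 0 < EPS < 1, as in the LBM construction

# Toy formula in strict DNF (disjuncts mutually exclusive):
#   phi = (x AND y) OR (NOT x AND z), variables indexed 0..2
VARS = 3
disjuncts = [({0, 1}, set()),   # x AND y
             ({2}, {0})]        # NOT x AND z

def energy(x):
    """Free energy after minimizing over one binary hidden unit per disjunct:
    E(x) = -sum_j max(0, sum_{i in pos_j} x_i - sum_{i in neg_j} x_i - |pos_j| + EPS)
    """
    e = 0.0
    for pos, neg in disjuncts:
        act = sum(x[i] for i in pos) - sum(x[i] for i in neg) - len(pos) + EPS
        e -= max(0.0, act)   # hidden unit h_j set to 1 only if it lowers the energy
    return e

def satisfies(x):
    """Direct logical check, used to verify the energy correspondence."""
    return any(all(x[i] == 1 for i in pos) and all(x[i] == 0 for i in neg)
               for pos, neg in disjuncts)

for x in product([0, 1], repeat=VARS):
    print(x, energy(x), satisfies(x))
```

On this toy formula every satisfying assignment reaches the global energy minimum `-EPS`, and every non-satisfying assignment sits at energy `0`, which is the equivalence the LBM construction relies on.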
Evaluation Highlights
  • LBM achieves better learning performance in 5 out of 7 datasets compared to purely symbolic, purely neural, and state-of-the-art neurosymbolic systems
  • The system can find all satisfying assignments of a class of logical formulae by searching through a very small percentage of possible truth-value assignments
  • Demonstrates effective solution of connectionist Boolean satisfiability (SAT) and Maximum Satisfiability (MaxSAT) problems
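The Gibbs-sampling search behind the SAT results can be sketched on the same kind of toy energy function (a minimal illustration under our own assumptions; the formula, temperature, and step count are not from the paper). Alternating hidden-given-visible and visible-given-hidden sampling at low temperature concentrates visits on minimum-energy states, i.e. satisfying assignments, without enumerating all 2^n truth assignments:

```python
import math
import random
from itertools import product

random.seed(0)
EPS = 0.5
VARS = 3
# Toy strict-DNF formula: (x AND y) OR (NOT x AND z)  [hypothetical example]
disjuncts = [({0, 1}, set()), ({2}, {0})]

def hidden_act(x, j):
    """Pre-activation of hidden unit j: positive iff disjunct j is satisfied."""
    pos, neg = disjuncts[j]
    return sum(x[i] for i in pos) - sum(x[i] for i in neg) - len(pos) + EPS

def sigmoid(z, temp):
    return 1.0 / (1.0 + math.exp(-z / temp))

def is_sat(x):
    return any(all(x[i] for i in p) and not any(x[i] for i in n)
               for p, n in disjuncts)

def gibbs_search(steps=2000, temp=0.2):
    """Alternate sampling h|x and x|h; collect satisfying assignments visited."""
    x = [random.randint(0, 1) for _ in range(VARS)]
    found = set()
    for _ in range(steps):
        # Sample hidden units given visibles
        h = [1 if random.random() < sigmoid(hidden_act(x, j), temp) else 0
             for j in range(len(disjuncts))]
        # Sample visibles given hiddens (weights are +1/-1 per literal)
        for i in range(VARS):
            drive = sum(h[j] * ((i in disjuncts[j][0]) - (i in disjuncts[j][1]))
                        for j in range(len(disjuncts)))
            x[i] = 1 if random.random() < sigmoid(drive, temp) else 0
        if is_sat(x):
            found.add(tuple(x))
    return found

print(sorted(gibbs_search()))
```

Because the sampler spends most of its time in low-energy states, it typically visits the satisfying assignments after only a small fraction of the 2^n possible steps, which is the behaviour the evaluation highlights.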
Breakthrough Assessment
7/10
Offers a theoretically grounded method for exact reasoning in neural networks. While this summary alone lacks the detailed results to confirm 'breakthrough' status, the proven equivalence between logical satisfiability and energy minimization is a significant step toward reliable AI.