โ† Back to Paper List

Continual Reasoning: Non-Monotonic Reasoning in Neurosymbolic AI using Continual Learning

Sofoklis Kyriakopoulos, Artur S. d'Avila Garcez
City, University of London
arXiv (2023)

๐Ÿ“ Paper Summary

Neurosymbolic AI · Continual Learning · Non-monotonic Reasoning
Continual Reasoning enables Logic Tensor Networks to solve non-monotonic reasoning tasks by splitting rule learning into sequential stages with rehearsal, allowing belief revision without logical contradictions.
Core Problem
Deep learning struggles with non-monotonic reasoning (retracting conclusions given new evidence) because standard training treats all data/rules as simultaneously true, leading to contradictions or averaging artifacts.
Why it matters:
  • Commonsense reasoning requires jumping to conclusions that might later be retracted (e.g., birds fly → penguins don't), which classical logic and standard neural training fail to handle gracefully
  • Previous approaches like Autoepistemic logic are computationally expensive, while standard neural networks lack formalization for explicit belief revision
Concrete Example: In the Penguin Exception Task, a system learns 'Birds fly' and 'Penguins are birds'. Later, it learns 'Penguins don't fly'. Standard monotonic logic fails due to contradiction. A standard neural network might converge to an uninformative 0.5 truth value for 'Penguins fly', failing to fully retract the earlier belief.
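The averaging artifact can be illustrated with a minimal toy sketch (hypothetical, not the paper's actual LTN implementation): a single learnable truth value for 'Penguins fly' is fit by gradient descent against its training targets. Joint training sees both the inherited conclusion (flies = 1.0) and the exception (flies = 0.0) at once and settles at the uninformative 0.5, while sequential training on the later evidence alone retracts the belief.

```python
def train(targets, p=0.5, lr=0.1, steps=1000):
    """Fit one truth value p in [0, 1] to a list of target truth values
    by gradient descent on a squared-error satisfaction loss."""
    for _ in range(steps):
        grad = sum(2 * (p - t) for t in targets)
        p -= lr * grad
    return p

# Monotonic (joint) training: both contradictory targets at once.
p_joint = train([1.0, 0.0])        # converges to the uninformative 0.5

# Non-monotonic (sequential) training: only the newer, specific evidence.
p_seq = train([0.0], p=p_joint)    # belief is retracted toward 0.0
```

The fixed point of the joint update is exactly the mean of the targets, which is why contradictory evidence produces a 0.5 truth value rather than a retraction.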
Key Novelty
Continual Reasoning Paradigm
  • Treats logical rules as a sequence of tasks rather than a single static knowledge base, using Continual Learning techniques to update beliefs over time
  • Implements non-monotonicity via curriculum design: learning general rules first (birds fly), then specific exceptions (penguins don't), utilizing rehearsal to prevent forgetting unrelated facts
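The staged curriculum with rehearsal can be sketched as follows (a hypothetical simplification using per-fact learnable truth values, not the paper's Logic Tensor Network training loop): each stage trains on its own facts plus a rehearsal set of earlier facts, so learning the exception does not erase unrelated beliefs.

```python
def train_stage(beliefs, facts, rehearsal, lr=0.1, steps=2000):
    """Update truth values in `beliefs` toward the targets in `facts`,
    while replaying `rehearsal` facts from earlier stages."""
    data = dict(facts)
    data.update(rehearsal)  # rehearsal: earlier facts trained alongside new ones
    for _ in range(steps):
        for name, target in data.items():
            p = beliefs.get(name, 0.5)           # unknown facts start at 0.5
            beliefs[name] = p - lr * 2 * (p - target)
    return beliefs

beliefs = {}
# Stage 1: general rule ('Birds fly', so penguins inherit flying).
train_stage(beliefs, {"flies(bird)": 1.0, "flies(penguin)": 1.0}, {})
# Stage 2: specific exception, rehearsing the unrelated earlier fact.
train_stage(beliefs, {"flies(penguin)": 0.0}, {"flies(bird)": 1.0})
```

After stage 2, 'flies(penguin)' is retracted toward 0.0 while the rehearsed 'flies(bird)' stays near 1.0, mirroring the general-rule-then-exception ordering described above.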
Evaluation Highlights
  • +28-36% accuracy improvement on the Penguin Exception Task using Task Separation curriculum compared to a Baseline single-stage training
  • Achieves 99.9% satisfiability for 'Penguins do not fly' in the final stage while retaining 99.5% for 'Normal Birds fly', successfully handling the exception
  • Outperforms Baseline curriculum on the Smokers and Friends task, achieving higher satisfiability in 5 out of 9 logical rules using Knowledge Completion curriculum
Breakthrough Assessment
7/10
Novel application of Continual Learning to solve logical non-monotonicity in Neurosymbolic systems. Demonstrates clear improvements on prototypical tasks, though tested primarily on small-scale reasoning problems.