WIAS Preprint No. 3257, (2026)

A tensor network formalism for neuro-symbolic AI



Authors

  • Goeßmann, Alex
  • Schütte, Janina
    ORCID: 0009-0000-9924-3229
  • Fröhlich, Maximilian
  • Eigel, Martin
    ORCID: 0000-0003-2687-4497

2020 Mathematics Subject Classification

  • 68T27 68T30 68T37 15A69 65F99

Keywords

  • Neuro-symbolic AI, tensor networks

DOI

10.20347/WIAS.PREPRINT.3257

Abstract

The unification of neural and symbolic approaches to artificial intelligence remains a central open challenge. In this work, we introduce a tensor network formalism that captures, within tensor decompositions, the sparsity principles originating in the different paradigms. In particular, we describe a basis encoding scheme for functions and model neural decompositions as tensor decompositions. Furthermore, the proposed formalism can be applied to represent logical formulas and probability distributions as structured tensor decompositions. This unified treatment identifies tensor network contractions as a fundamental class of inference and formulates efficiently scaling reasoning algorithms, originating from probability theory and propositional logic, as contraction message passing schemes. The framework enables the definition and training of hybrid logical and probabilistic models, which we call Hybrid Logic Networks. The theoretical concepts are accompanied by the Python library tnreason, which enables the implementation and practical use of the proposed architectures.
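To illustrate the idea of tensor network contraction as inference, the following is a minimal sketch (using plain NumPy rather than the tnreason API, and with hypothetical variable names): each propositional formula over Boolean variables is encoded as a 0/1-valued tensor indexed by the variables, and contracting the shared indices of several such tensors counts the satisfying assignments of their conjunction.

```python
import numpy as np

# Hypothetical illustration (not the tnreason API): encode the formula
# (x AND y) as a Boolean tensor T_and[x, y], and (x OR z) as T_or[x, z].
T_and = np.zeros((2, 2))
for x in (0, 1):
    for y in (0, 1):
        T_and[x, y] = float(x and y)

T_or = np.zeros((2, 2))
for x in (0, 1):
    for z in (0, 1):
        T_or[x, z] = float(x or z)

# Contracting over all indices (x shared between the two tensors) sums
# the products T_and[x, y] * T_or[x, z], i.e. it counts the satisfying
# assignments of (x AND y) AND (x OR z) -- model counting as contraction.
model_count = np.einsum('xy,xz->', T_and, T_or)
print(int(model_count))  # → 2 (x=1, y=1, z in {0, 1})
```

Normalizing such a contraction by the total number of assignments turns the same computation into a probability query under a uniform distribution, which hints at how the formalism treats logical and probabilistic inference uniformly.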
