Differentiable ILP: learning explanatory rules

Introduction

In this blog, we discuss Differentiable ILP: learning explanatory rules. Differentiable Inductive Logic Programming (∂ILP) reimplements Inductive Logic Programming (ILP) in an end-to-end differentiable architecture. It aims to combine the benefits of ILP and neural network-based systems: a data-efficient induction system that learns explicit, human-readable symbolic rules, is robust to noisy and ambiguous data, and does not degrade when applied to unseen test data. The key element of this system is a differentiable implementation of deduction via forward chaining on definite clauses.
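To make this key element concrete, here is a minimal sketch of one differentiable forward-chaining step, written in Python with JAX. It is not the actual ∂ILP implementation: it assumes a toy encoding in which every ground atom has an index and each clause has exactly two body atoms (as in ∂ILP's restricted rule language), and all function names and shapes here are illustrative.

```python
import jax.numpy as jnp

def chaining_step(valuation, body_idx, head_idx):
    """One differentiable forward-chaining step for clauses of the
    form head :- body1, body2 over a fixed set of ground atoms.

    valuation: shape (n_atoms,), a truth degree in [0, 1] per atom.
    body_idx:  shape (k, 2), indices of the two body atoms of each
               ground instance of a clause.
    head_idx:  shape (k,), index of the atom each instance derives.
    """
    # Fuzzy conjunction of the body atoms (product t-norm).
    conj = valuation[body_idx[:, 0]] * valuation[body_idx[:, 1]]
    # Collect derived truth degrees; max keeps the strongest
    # derivation when several instances share a head atom.
    derived = jnp.zeros_like(valuation).at[head_idx].max(conj)
    # Amalgamate with the old valuation via probabilistic sum, so
    # truth degrees grow monotonically, as in forward chaining.
    return valuation + derived - valuation * derived
```

Because every operation here is differentiable, applying such a step for a fixed number of rounds yields conclusions whose truth degrees gradients can flow through.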


An example of Differentiable ILP: learning explanatory rules

Suppose you are playing football. The ball lands at your feet, and you decide to pass it to the open striker. What appears to be a single, simple action in fact requires two distinct types of thought.

You first notice that a football is at your feet. You cannot easily explain how you know there is a ball at your feet; you simply see it. This awareness demands intuitive perceptual thinking. Next, you choose to pass the ball to a particular striker. This decision requires conceptual thinking, and it is backed by a justification: you passed the ball to the striker because he was unmarked.


We find this distinction intriguing because deep learning and symbolic program synthesis, two different approaches to machine learning, correspond to these two ways of thinking. Deep learning emphasizes intuitive, perceptual thinking, while symbolic program synthesis relies on conceptual, rule-based thinking. Each approach has complementary strengths: symbolic systems are much easier to interpret and require less training data than deep learning systems, but they struggle with noisy data; deep learning systems are robust to noisy data, but they are difficult to interpret and require large amounts of data to train.


Although these two styles of thinking are smoothly combined in human cognition, it is much less obvious whether or how this can be accomplished in a single AI system. ∂ILP shows that it is possible for a system to combine intuitive perceptual reasoning with conceptual, interpretable reasoning: it generates rules that are comprehensible and resilient to noise.


How ∂ILP works on an induction task

Consider an induction task in which the system receives a pair of images representing numbers and must output a label (0 or 1) indicating whether the digit in the left image is less than the digit in the right image. Recognizing an image as a representation of a particular digit requires intuitive perceptual thinking, while understanding the less-than relation as a whole requires conceptual thinking. Together, these two types of thinking are needed to solve the task.
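The conceptual half of this task is the kind of program ∂ILP can learn: the classic recursive definition of less-than in terms of the successor relation. Below is a crisp (non-differentiable) Python sketch, assuming digits 0–9 and a background successor relation, that forward-chains those two clauses to a fixed point; the clause syntax in the comments and all names are illustrative.

```python
# Clause 1: lessThan(X, Y) :- succ(X, Y)
# Clause 2: lessThan(X, Y) :- succ(X, Z), lessThan(Z, Y)
succ = {(i, i + 1) for i in range(9)}   # background facts over digits 0..9
less_than = set(succ)                   # clause 1 seeds the relation
changed = True
while changed:                          # forward chaining to a fixed point
    changed = False
    for (x, z) in succ:
        for (z2, y) in list(less_than):
            if z == z2 and (x, y) not in less_than:
                less_than.add((x, y))   # clause 2 fires
                changed = True
assert (2, 7) in less_than and (7, 2) not in less_than
```

In the full task, the perceptual part (recognizing which digit each image shows) feeds into rules like these, which supply the conceptual part.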


A standard deep learning model (such as a convolutional neural network combined with an MLP) can be trained to solve this task successfully if given enough training data. Once trained, it will accurately classify fresh pairs of images it has never seen before. However, it only generalizes appropriately if it has seen many samples of every pair of digits: the model generalizes well to new images of digit pairs that also appeared in its training set, but it cannot generalize symbolically to a pair of digits it has never seen.


∂ILP differs from standard neural networks and from conventional symbolic algorithms because it can generalize symbolically as well as visually. It learns explicit programs that are readable, interpretable, and verifiable. Given a partial set of examples (the desired outputs), ∂ILP generates a program that satisfies both those examples and the requirements supplied to it. It searches through the space of programs using gradient descent: if the program's outputs do not match the desired outputs from the reference data, the system revises the program to match the data more closely.
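The following sketch illustrates that training loop under strong simplifying assumptions: just two candidate clauses compete for the target predicate, a softmax over learned weights mixes the valuations each clause would produce, and gradient descent on a cross-entropy loss against the desired outputs shifts the weights toward the clause that fits the data. All names, shapes, and numbers are illustrative, not ∂ILP's actual API.

```python
import jax
import jax.numpy as jnp

def forward(weights, clause_valuations):
    # Soft clause selection: the predicted valuation is the
    # softmax-weighted mixture of each candidate clause's output.
    return jax.nn.softmax(weights) @ clause_valuations

def loss(weights, clause_valuations, labels):
    pred = jnp.clip(forward(weights, clause_valuations), 1e-6, 1 - 1e-6)
    # Cross-entropy between desired outputs (0/1) and predicted degrees.
    return -jnp.mean(labels * jnp.log(pred) + (1 - labels) * jnp.log(1 - pred))

# Toy data: clause 0 derives roughly the right atoms, clause 1 does not.
clause_valuations = jnp.array([[0.9, 0.1, 0.8],   # outputs under clause 0
                               [0.2, 0.7, 0.3]])  # outputs under clause 1
labels = jnp.array([1.0, 0.0, 1.0])               # desired outputs
weights = jnp.zeros(2)
grad_fn = jax.grad(loss)
for _ in range(200):                              # plain gradient descent
    weights = weights - 0.5 * grad_fn(weights, clause_valuations, labels)
print(jax.nn.softmax(weights))                    # mass settles on clause 0
```

After training, reading off the highest-weighted clause recovers an explicit, human-readable rule, which is what makes ∂ILP's output interpretable and verifiable.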

