If chatbots are to succeed, they need this

Can logic be used to make chatbots intelligent? In the 1960s this was taken for granted. Now we have all but forgotten the logical approach. Is it time for a revival?

By Daoud Clarke, ChatbotTech


Chatbot technology is at an inflection point. The promises made about its potential have yet to come true. The market is predicted to grow 25% year-on-year to reach $1.25 billion by 2025. Yet current attempts to build chatbots are failing.

It is not clear how this apparent contradiction will resolve. Will chatbot technology develop to a point where it’s easy to make chatbots that appear intelligent? Or will people’s initial experiences with chatbots be so bad that they will never become popular?

I believe there is a component missing from current chatbot implementations, without which chatbots will never achieve their full potential. That is the application of logic.

Logic programming

Logic programming’s heyday was the 1970s and ’80s. Languages such as Prolog were going to change AI by making it easy to program with logic. Programming languages like C and Python have the programmer spell out the steps the computer should take. This is called “procedural programming”. The original idea of logic programming was that you would instead tell the computer what was true. Given this database of true facts and rules, logical deduction could be used to answer the user’s queries. This style of programming is called “declarative programming”.

The only declarative programming language that has really taken off and remains popular is SQL, which allows you to query databases. The programmer typically doesn’t specify how the query should be computed; that is left to the query engine.
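The flavour of SQL’s declarative style can be sketched with Python’s built-in sqlite3 module (the employees table and its contents are invented for illustration). The query states what rows we want; the database engine decides how to find them.

```python
import sqlite3

# An in-memory database with an illustrative "employees" table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary INTEGER)")
conn.executemany(
    "INSERT INTO employees VALUES (?, ?, ?)",
    [("Alice", "Engineering", 85000),
     ("Bob", "Sales", 55000),
     ("Carol", "Engineering", 95000)],
)

# Declarative: we say WHAT we want (engineers earning over 80k),
# not HOW to scan, filter, or sort the rows.
rows = conn.execute(
    "SELECT name FROM employees WHERE department = ? AND salary > ? ORDER BY name",
    ("Engineering", 80000),
).fetchall()
print([name for (name,) in rows])  # ['Alice', 'Carol']
```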

Prolog tried to do something similar to SQL, except the database could contain logical statements such as “If A then B”. Logical deduction would be applied to answer users’ queries. Unfortunately, it never really succeeded in its aim, because it tried to combine procedural with declarative programming. The declarative syntax was translated into a procedural program, and to use Prolog effectively you needed to know how the interpreter would do this translation. In many cases, this actually made programming harder, as I discovered when I taught Prolog at university.
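The deduction step itself is simple to sketch. A naive forward-chaining loop in Python (not how a real Prolog interpreter works; Prolog uses backward chaining with unification) repeatedly applies “if all of A then B” rules until nothing new can be derived:

```python
# Facts we assert to be true, and rules of the form "if ALL of body then head".
facts = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until no new facts can be deduced (a fixpoint)."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= known and head not in known:
                known.add(head)
                changed = True
    return known

print(sorted(forward_chain(facts, rules)))
```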

Expert systems and the AI winter

Logic programming’s most visible application was expert systems. An expert system is a set of rules emulating human expertise for a particular application. In the 1980s, there was a lot of enthusiasm for these systems, and a belief that they would revolutionise industry.

A lot of this hype never materialised. This, together with other events, resulted in cuts in funding, along with reduced enthusiasm for AI. This became known as the AI winter.

Since then, Prolog, and logic programming in general, has fallen out of favour.

From Prolog to Datalog

Other logic programming frameworks have since tried to improve on Prolog. Datalog is a purely declarative subset of Prolog that has achieved moderate popularity in certain subfields of AI.

Because Datalog is restricted to a subset of Prolog that is relatively easy to compute, its expressiveness is limited. In particular, it is not a Turing-complete language. This lack of expressiveness meant that Datalog never became a mainstream language, but it remains useful for certain applications.
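The canonical Datalog example computes an ancestor relation from parent facts. Below is a minimal Python sketch of bottom-up evaluation for the two Datalog rules `ancestor(X,Y) :- parent(X,Y)` and `ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z)` (the family facts are invented for illustration). The iteration always terminates, because only finitely many facts can be built from a finite set of constants; this is exactly the kind of guarantee Datalog trades expressiveness for.

```python
# parent facts: (parent, child)
parent = {("abe", "homer"), ("homer", "bart"), ("homer", "lisa")}

def ancestors(parent):
    """Naive bottom-up evaluation of:
       ancestor(X,Y) :- parent(X,Y).
       ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z)."""
    ancestor = set(parent)  # first rule
    changed = True
    while changed:
        changed = False
        for (x, y) in parent:
            for (y2, z) in list(ancestor):
                if y == y2 and (x, z) not in ancestor:
                    ancestor.add((x, z))
                    changed = True
    return ancestor

print(sorted(ancestors(parent)))
# [('abe', 'bart'), ('abe', 'homer'), ('abe', 'lisa'),
#  ('homer', 'bart'), ('homer', 'lisa')]
```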

Automated planning to the rescue

Another area of AI research that has achieved more recent success is automated planning. Programmers describe the world using a language like the Planning Domain Definition Language (PDDL). This is a declarative language that, like Datalog, allows you to specify what is true in the world using logic. Unlike standard logic programming, however, you can also specify the consequences of taking actions. The automated planning system is then responsible for choosing the sequence of actions that maximises the expected reward, which is also specified by the programmer.
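A toy STRIPS-style planner gives the flavour of what a PDDL system does (the coffee-fetching domain below is invented for illustration): each action lists its preconditions, the facts it adds, and the facts it deletes, and the planner searches for an action sequence that reaches the goal.

```python
from collections import deque

# Each action: (name, preconditions, add effects, delete effects).
ACTIONS = [
    ("go_to_kitchen", {"at_desk"}, {"at_kitchen"}, {"at_desk"}),
    ("brew_coffee",   {"at_kitchen"}, {"has_coffee"}, set()),
    ("go_to_desk",    {"at_kitchen"}, {"at_desk"}, {"at_kitchen"}),
]

def plan(initial, goal):
    """Breadth-first search over world states; returns a shortest action sequence."""
    start = frozenset(initial)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:
                new = frozenset((state - delete) | add)
                if new not in seen:
                    seen.add(new)
                    queue.append((new, steps + [name]))
    return None

print(plan({"at_desk"}, {"has_coffee", "at_desk"}))
# ['go_to_kitchen', 'brew_coffee', 'go_to_desk']
```

Real planners replace the blind breadth-first search with heuristics derived automatically from the domain description, but the interface is the same: you declare the world, and the system finds the plan.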

Automated planning is being used with success at NASA on unmanned spacecraft, autonomous rovers, ground communication stations and aerial vehicles.

From Go to theorem provers

Automated theorem proving is another area of artificial intelligence that has remained relevant in recent years. Theorem provers attempt to automatically generate proofs of logical statements, given a set of statements that are taken to be true.
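For propositional logic, the simplest possible “prover” just enumerates truth assignments: a conclusion is entailed by a set of premises if it holds in every assignment that makes all the premises true. Real theorem provers use far more sophisticated search, but a brute-force sketch in Python shows the shape of the problem:

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Check whether `conclusion` holds in every model of `premises`.
    Formulas are represented as Python functions over a dict of truth values."""
    for values in product([False, True], repeat=len(variables)):
        model = dict(zip(variables, values))
        if all(p(model) for p in premises) and not conclusion(model):
            return False  # found a countermodel
    return True

# Premises: A, and A -> B.  Conclusion: B (modus ponens).
premises = [lambda m: m["A"], lambda m: (not m["A"]) or m["B"]]
print(entails(premises, lambda m: m["B"], ["A", "B"]))  # True
```

Enumeration is exponential in the number of variables, which is why the search strategies mentioned below matter so much.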

There has been an exciting development in recent years: researchers are applying the techniques that enabled computers to beat humans at board games to this problem.

The most famous application of AI in recent years was AlphaGo’s win over Lee Sedol at the game of Go. The game is notorious for being infeasible to solve by brute force, with an estimated 2 × 10^170 legal board positions. Two technologies were combined to beat Lee Sedol: deep learning and Monte Carlo tree search.
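To get a feel for the Monte Carlo half of that combination, here is a sketch of flat Monte Carlo search, a simpler ancestor of Monte Carlo tree search, choosing a move in tic-tac-toe rather than Go: each legal move is scored by the fraction of random playouts the mover goes on to win. (Full MCTS grows a search tree and focuses playouts on promising branches; AlphaGo additionally guided the search with deep networks.)

```python
import random

# Tic-tac-toe board: a tuple of 9 cells, each "X", "O", or " ".
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def rollout(board, player):
    """Play uniformly random moves to the end; return the winner (None = draw)."""
    board = list(board)
    while winner(board) is None and " " in board:
        move = random.choice([i for i, c in enumerate(board) if c == " "])
        board[move] = player
        player = "O" if player == "X" else "X"
    return winner(board)

def best_move(board, player, n_playouts=200):
    """Flat Monte Carlo search: score each legal move by the fraction of
    random playouts from it that the mover wins."""
    def score(move):
        after = board[:move] + (player,) + board[move + 1:]
        nxt = "O" if player == "X" else "X"
        return sum(rollout(after, nxt) == player for _ in range(n_playouts)) / n_playouts
    legal = [i for i, c in enumerate(board) if c == " "]
    return max(legal, key=score)

# X to move; playing square 2 wins on the spot, so every playout from it is a win.
board = ("X", "X", " ",
         "O", "O", " ",
         " ", " ", " ")
print(best_move(board, "X"))  # 2
```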

Now a similar approach is being used to tackle automated theorem proving. It’s early days but the results so far are promising.

Logic and chatbots

What is wrong with current chatbot implementations? And how can logic be used to make them better?

Many of the things chatbot creators get wrong are obvious and easily fixed with existing tools. For example, most of the problems addressed at chatbot.fail can be solved with careful design.

At the moment, chatbots are often a long way from being acceptable as a user interface. But if chatbots are to deliver on their promise, they need to deliver an exceptional experience, not an acceptable one. To me, that means displaying the appearance of intelligence.

One way to appear intelligent is to apply logic carefully and judiciously. This was demonstrated in the 1960s with SHRDLU and the “blocks world”. Despite this early success, attempts to generalise the approach to less constrained environments never bore fruit, and it has since fallen out of favour.

Attempts to increase the scope of these systems led to rapidly mounting complexity that the tools of the time could not manage.

The future is in our hands

Times have changed and our methods and tools have improved. Should we revisit the logical approach? And could logic programming make chatbots a success?

It’s up to us to find out. But if we don’t find something to make chatbots more intelligent, it seems unlikely they will be the success we were promised.

Bio: Daoud Clarke is a freelance data scientist and creator of ChatbotTech.io. He earned his PhD at the University of Sussex where he invented a new theory of what meaning is. Since then he has worked as a researcher and software engineer in industry and academia, publishing papers on semantics, sentiment analysis, document classification, and recommendation systems. He also built the recommendation engine for The Times.