Lesson 12 Logic

Knowledge-Based AI — Cognitive Systems

notes
KB-AI
Logic
Author

Oren Bochman

Published

Saturday, February 6, 2016

Preview

In this lesson, we will discuss logic, a formal language that allows us to make assertions about the world in a very precise way.

We learn about logic both because it is an important topic and also because it forms the basis of additional topics such as planning. We’ll start talking about a formal notation for writing sentences in logic.

This formal notation will have things like conjunctions and disjunctions. Then we'll talk about truth tables, a topic that you probably already know. We'll talk about rules of inference like modus ponens and modus tollens. Finally we'll discuss methods for proving theorems by refutation. One of those methods is called resolution theorem proving.

Why do we need formal logic

Let us begin by asking ourselves: why do we need formal logic to design an AI agent? The way formal logic would work in an AI agent is that there will be two parts to the agent.

The first part of the agent will consist of a knowledge base. The knowledge base will contain the agent's knowledge about the world. That knowledge will be represented as sentences in the language of logic.

The second part will consist of an inference engine. The inference engine will apply rules of inference to the knowledge that the agent has. So remember again, two parts: the knowledge base and the rules of inference. Now, there are certain situations in which we want the AI agent to be able to show, to be able to prove, that the answers it derives to any problem are in fact provably correct.

It may want to show this to itself, or to other users. In other situations, we may want the AI agent to generate only provably correct solutions. How can we guarantee that? Well, we need two things. First, we need a complete and correct knowledge base. And second, we need rules of inference that will give guarantees of correctness of the answer. The inference then has two parts to it. The first part is called soundness. The property of soundness means that the rules of inference will derive only those conclusions that are, in fact, valid. The second property is completeness. The property of completeness means that the rules of inference will derive all of the valid conclusions. These are two very important properties to have. If an AI agent can use logical rules of inference on its knowledge base and provide guaranteed soundness and completeness, then that is a very useful thing for an AI agent to have. For this reason, logic has been an important part of AI since the inception of the field. In fact, it continues to be an important part of research and modeling in AI.

In this course, however, we'll discuss logic only to a very limited degree. There are two reasons for this. First, our priorities in this course are a little different. Instead of talking about knowledge in the form of logical sentences, we are much more interested in conceptual knowledge, experiential knowledge, and heuristic knowledge. Second, recall that we said that a logic-based agent has two parts to it: the knowledge base and the rules of inference. Even if the rules of inference are, in fact, guaranteed to be sound and complete, there is a problem of how we construct a correct and complete knowledge base. If the knowledge base of an AI agent is not correct and complete, then it may not give useful answers, even if the rules of inference are sound and complete. Thus, in this course, we'll use logic only to the degree to which it is useful for specifying other methods: methods that use conceptual knowledge or experiential knowledge or heuristic knowledge.

Inferences About Birds

Now we have come across this particular kind of problem earlier. This was a classification hierarchy. Vertebrates can be of different kinds; bird is one kind. Birds, in turn, can be of different kinds; eagle, bluebird, and penguin are three classes of birds. Now imagine we have knowledge like: if an animal has feathers, then it is a bird. When we were discussing this classification hierarchy we had tried to define the concept of bird, and we had said at that time that if an animal has feathers it is a bird, and if an animal lays eggs and it flies then it is a bird. It is sentences like these that we'll try to put in the language of logic.

Exercise Inferences About Foos

Before we represent those sentences in the language of logic, let us consider another example of conceptual knowledge and its relationship to logic. So here is the concept of a foo. We have come across this earlier: there was a brick at the bottom and a brick at the top, and some relationships between these objects. Given this conceptual knowledge about foo, we can ask ourselves: what are the sufficient conditions for something to be a foo? Here are several choices. Please mark all of those choices that together make for sufficient conditions for the concept of foo.

Exercise Inferences About Foos

That's good, David. So what we are finding here is: given conceptual knowledge, how can we translate it into the language of logic?

Predicates

Recall that we said that a logical AI agent will have two parts to it: a knowledge base and rules of inference. We'll come to the rules of inference a little bit later. First let us look at how we can construct a knowledge base in the language of logic. So what we are trying to do now is this: an AI agent has some knowledge about the world, and it is going to express it in the language of logic. In earlier schemes of knowledge representation, we discussed how there were objects and relationships between objects, and any knowledge representation scheme needs to capture both objects and the relationships between those objects. Logic has a particular way of doing it. So consider an object like a bird. This object may serve as an argument to various predicates, where a predicate is a function that maps its object arguments to either true or false. Let us consider an example. Here, bluebird is the object and feathers is the predicate on this object. Feathers is a function that can map either into true or into false: either bluebird has feathers or bluebird doesn't have feathers. In this particular case, Feathers(bluebird) would be true, because bluebirds do have feathers. Now, just like we had bluebird as the object in the previous example, here we have animal as the object with the same predicate. Of course, not all animals have feathers, so this predicate may be true or false depending on the choice of the animal. The next sentence has two predicates. There is still one object, the animal, but there is a predicate feathers and a predicate bird, and the sentence captures a relationship between the two predicates: if Feathers(animal) is true, then Bird(animal) is also true. In other words, if the animal has feathers, then the animal is a bird. In logic we call sentences like this implications; this is an implicative relationship. So in logic, we'll read this as Feathers(animal) implies Bird(animal), or: if the animal has feathers, then it implies that the animal is a bird.
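
To make the idea of a predicate concrete, here is a minimal Python sketch (not from the lesson) that treats predicates as functions mapping an object to true or false; the Animal class and its attributes are made up for illustration.

```python
# A minimal sketch: predicates as functions that map an object to True or False.
# The Animal class and its attributes are illustrative, not from the lesson.
from dataclasses import dataclass

@dataclass
class Animal:
    name: str
    has_feathers: bool

def feathers(x: Animal) -> bool:
    """Predicate Feathers(x): true if x has feathers."""
    return x.has_feathers

def bird(x: Animal) -> bool:
    """Predicate Bird(x), here derived from Feathers(x) => Bird(x)."""
    return feathers(x)

bluebird = Animal("bluebird", has_feathers=True)
print(feathers(bluebird))  # True
print(bird(bluebird))      # True
```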

Conjunctions and Disjunctions

Now, consider another sentence that we have come across earlier: if an animal lays eggs and it flies, then it is a bird. How do we write this in the language of logic, given that there is a conjunction here? This time we have two predicates again: the predicate lays-eggs, coming from here, and the predicate flies, coming from here. And we can denote a conjunction between them, which in the language of logic is often written in this form. Now we can rewrite this sentence in the following form: if the animal lays eggs and the animal flies, then the animal is a bird. Remember again that this semicolon here really is denoting an implication for now. Consider a slightly different sentence. Suppose the sentence was: if an animal lays eggs or it flies, it is a bird. In that case, again, we'll have two predicates, but this time we'll have a disjunction between them, and the sentence would become: if the animal lays eggs or the animal flies, then the animal is a bird. And again, this is an implication. Let us continue with our exercise in which we are learning how to write sentences in the language of logic. Consider the sentence: if an animal flies and is not a bird, so there is a negation here, then it is a bat. How do we write that in logic? I'm still interested in writing the antecedent of this particular sentence. Animal flies is one conjunct here, because there is an and here, and we have the negation symbol for the predicate bird. Now we can write the complete sentence: the animal flies, conjunction, the animal is not a bird, implies the animal is a bat.
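
As a rough illustration (not the lesson's own notation), the antecedents of these three rules can be written directly as Boolean expressions; the function names below are my own and simply return true when the rule's antecedent holds.

```python
# A minimal sketch of the three antecedents: conjunction, disjunction, and
# conjunction with a negation. Each function returns True when the rule fires.
def bird_if_lays_eggs_and_flies(lays_eggs: bool, flies: bool) -> bool:
    # Lays-eggs(animal) AND Flies(animal) => Bird(animal)
    return lays_eggs and flies

def bird_if_lays_eggs_or_flies(lays_eggs: bool, flies: bool) -> bool:
    # Lays-eggs(animal) OR Flies(animal) => Bird(animal)
    return lays_eggs or flies

def bat_if_flies_and_not_bird(flies: bool, bird: bool) -> bool:
    # Flies(animal) AND NOT Bird(animal) => Bat(animal)
    return flies and not bird

print(bat_if_flies_and_not_bird(flies=True, bird=False))  # True
```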

Implies

Now, I have been talking a little about implication. Let's see how we actually write implication in logic. So here is a sentence: if the animal lays eggs and the animal flies, the implication is that the animal is a bird. In logic we write this using the arrow symbol for an implication: if the animal lays eggs and the animal flies, that implies the animal is a bird. So here is the left-hand side of the implication, and here is the right-hand side. The left-hand side of the implication implies the right-hand side.

Notation Equivalency

Generally speaking, you won't have these symbols on your keyboard. You can find them in your character map, and you are welcome to use them if you'd like to. But for the exercises in the rest of this lesson and in the next lesson, feel free to use the symbols given over here. These are the symbols for AND, NOT, OR, and equals that come from Java or Python. So feel free to use these when you are doing the exercises that you'll come across in the rest of this lesson.

Exercise Practicing Formal Logic

So remember, we are still trying to learn how to build a knowledge base in the language of logic. To put it all together, consider four exercises. Here is the first sentence; please put it in the language of logic. Similarly for this sentence, this sentence, and this sentence.

Exercise Practicing Formal Logic

Good, David, that looks right to me. To wrap this part up, let us note that when we defined what a predicate was, we said a predicate like flies can map into true or false. Well, okay, a predicate can map into true or false, but what about complicated sentences like this, which have multiple predicates as well as implications? How do we find out whether the sentence as a whole maps into true or false? That's what we're going to look at next: truth tables.

Truth Tables

So we'll now build truth tables for conjunctions, disjunctions, and negations of sentences, so that we can find the truth of complex sentences stated in logic. Many of you are probably familiar with truth tables, and if you are, you can skip this part and go directly to implication elimination. If you're not familiar with them, please stay with me, but even so I'm going to go through this quite rapidly. So here is the truth table for A or B. If A is true and B is true, then A or B is true. If A is true and B is false, then A or B is still true, because A was true. If A is false and B is true, then A or B is true, because B was true: one of them being true makes the disjunction true. If A is false and B is false, then A or B is false.
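
Here is a minimal sketch of how such a truth table can be generated mechanically; it simply enumerates all assignments of A and B.

```python
# A minimal sketch: the truth table for A or B, enumerating all assignments.
from itertools import product

print(f"{'A':<6}{'B':<6}{'A or B'}")
for a, b in product([True, False], repeat=2):
    print(f"{str(a):<6}{str(b):<6}{a or b}")
```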

Exercise Truth Tables I

Let us try a couple of simple exercises. So here we have A and B, and we want to find the truth value of A or not B. Given these values for A and B, can you please write down the truth values for A or not B? And similarly, for not A and not B.

Exercise Truth Tables I

So for A or not B, I got that whenever A is true, this has to be true, because it's A or not B. When A is false, the negation flips the value of B, so it makes the sentence true when B is false, but keeps it false when B is true. For not A and not B, any time either A or B is true, the sentence is false. So when A is true, this is false; when B is true, this is false. When both are false, this becomes true, because the negations flip the values of both A and B.

Exercise Truth Tables II

Now we can play the same game for ever more complex sentences. So here I again have three predicates: A, B, and C. And here is a more complicated sentence that involves all three of those predicates: (A or B) and (B and not C). We can find the truth values for this particular sentence, given the truth values for the predicates A, B, and C. Why don't you give it a try and write down the values here?

Exercise Truth Tables II

So as you can see, this can become very complicated very quickly. But David did get the truth value of this particular sentence based on the truth values of the predicates inside it. So, in principle, we can now see how to compute the truth value of very, very complicated sentences written in logic.
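
For reference, the same enumeration idea handles the more complicated sentence from this exercise; this is a sketch assuming the sentence was (A or B) and (B and not C), as read off the transcript.

```python
# A minimal sketch: evaluating (A or B) and (B and not C) for every
# assignment of A, B, and C.
from itertools import product

sentence = lambda a, b, c: (a or b) and (b and not c)

for a, b, c in product([True, False], repeat=3):
    print(f"A={a!s:<5} B={b!s:<5} C={c!s:<5} -> {sentence(a, b, c)}")
```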

Exercise Commutative Property

The construction of these truth tables allows us to illustrate certain important properties of logical predicates. To see those properties, let us do an exercise together. So here we have the predicate A and the predicate B, and here we have A and B, and B and A. Please fill in these boxes with the truth values of A and B, and the truth values of B and A.

Exercise Commutative Property

That's good, David. And as you know, this property is called the commutative property. The commutative property says that the truth value for A and B is the same as the truth value for B and A. So whenever I have A and B, I can rewrite it as B and A.
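
A small sketch of how such a property can be checked mechanically: two formulas are equivalent if they agree on every truth assignment.

```python
# A minimal sketch: two formulas are equivalent if they agree on every
# assignment of their variables. Used here to check the commutative property.
from itertools import product

def equivalent(f, g, n_vars):
    return all(f(*vals) == g(*vals)
               for vals in product([True, False], repeat=n_vars))

print(equivalent(lambda a, b: a and b,
                 lambda a, b: b and a, 2))  # True: A and B == B and A
```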

Exercise Distributive Property

Let us try a slightly more complicated exercise. This time, we have three variables: A, B, and C. And here are the combinations of the truth values of A, B, and C. Here on the right are two formulas. The first one is A and (B or C). The second is (A and B) or (A and C). Please write down the truth values for these two formulas.

Exercise Distributive Property

We can also think of this as distributing the predicate and operator outside the parentheses onto both of the terms inside. We take the "A and" and apply it to B, giving A and B; we take the "A and" and apply it to C, giving A and C; and we preserve the operator that was in between B and C as the operator in between the two new parenthesized terms. So if this had been A or (B and C), it would become (A or B) and (A or C), with the or distributed over all the terms here.

Exercise Associative Property

Let us do one more exercise with truth tables to illustrate another property of logical predicates. Again, here are three predicates, and here are two formulas. It should be a simple exercise. Please write down the truth values of the two formulas in these boxes.

Exercise Associative Property

The difference between these formulas and the ones we were doing before lies in the operators. The associative property works when both operators are ors or both are ands. The distributive property worked when there was a mixture of operators.

Exercise De Morgan's Law

One other property of logical predicates that we will very soon see in action is called De Morgan's law. This time there are two predicates, A and B, and here are their truth values. And here are two formulas; remember, this symbol is a negation. Please write down the truth values of these two formulas in these boxes.

Exercise De Morgan's Law

That's good, David. So De Morgan's law says that when we distribute a negation over predicates inside parentheses that are connected with a conjunction, the conjunction becomes a disjunction between the negations of the predicates. The same would have been true if we had a disjunction here: when we distribute the negation, it would have become a conjunction. David, before we go further, let's remember why we are trying to do all of this. Recall that we said at the beginning of the lesson that a logical agent will have a knowledge base, and then formal rules of inference that will apply to the sentences in the knowledge base. The knowledge base itself may be coming from many places. Some sentences in the knowledge base may be bootstrapped into the logical agent; other sentences may be coming from perception. Now, when we try to apply these rules of inference to the sentences of the knowledge base, it is sometimes very useful to rewrite the sentences in different forms, and that is what we are trying to do. These properties will allow us to rewrite the sentences in such a way that we can in fact apply the rules of inference that we will see in a minute.
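
The same kind of exhaustive check works for De Morgan's law, and for the distributive and associative properties from the earlier exercises; a minimal sketch:

```python
# A minimal sketch: checking De Morgan's law and the distributive property
# over all truth assignments.
from itertools import product

def equivalent(f, g, n_vars):
    return all(f(*v) == g(*v) for v in product([True, False], repeat=n_vars))

# De Morgan: not (A and B)  ==  (not A) or (not B)
print(equivalent(lambda a, b: not (a and b),
                 lambda a, b: (not a) or (not b), 2))          # True

# Distributive: A and (B or C)  ==  (A and B) or (A and C)
print(equivalent(lambda a, b, c: a and (b or c),
                 lambda a, b, c: (a and b) or (a and c), 3))   # True
```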

Truth of Implications

So it can be a little bit weird to talk about the truth value of an implication sentence. What we are really saying here is whether or not this implication actually holds true. Let's take three different implications to see this. First, think of the implication feathers implies bird. All birds have feathers and only birds have feathers, so we know that if an animal has feathers, then it is a bird; that implication is true. On the other hand, take the implication if scales then bird. Lots of animals with scales aren't birds, and in fact no animals with scales are birds, so the implication scales implies bird would be false. For our third example, take the implication flight implies bird. If we have a penguin, flight is false, but the penguin is still a bird. So flight can be false and bird can still be true, meaning the implication can still be true here. On the other hand, if we have a cat, flight is false and bird is false, and the implication can still be true. So in this case, if flight is false, we cannot actually make a determination of whether or not the animal is a bird.

Implication Elimination

As we go ahead and start applying rules of inference to sentences in a knowledge base, we'll find it convenient to rewrite the sentences in the knowledge base, and sometimes it will be very useful to rewrite them in a manner that eliminates the implications in a sentence. This is how we can eliminate an implication: if a implies b, then we can rewrite it as not a or b. We know this because the truth value of a implies b is exactly the same as the truth value of not a or b. We can take an example here. Suppose we are given feathers implies bird. Then we can rewrite this as not feathers or bird. Intuitively you can see the truth value of this: either the animal does not have feathers, or it is a bird. In a little bit, we will see that this is an important rewrite rule for doing certain kinds of logical proofs.
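
A quick way to convince yourself of this rewrite rule is to compare the two truth tables; a minimal sketch:

```python
# A minimal sketch: a => b has exactly the same truth table as (not a) or b,
# so an implication can always be eliminated by this rewrite.
from itertools import product

for a, b in product([True, False], repeat=2):
    rewritten = (not a) or b           # implication elimination of a => b
    print(f"a={a!s:<5} b={b!s:<5} (not a) or b = {rewritten}")
```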

Rules of Inference

You may already be familiar with this line of reasoning, because this is another way of phrasing the contrapositive that we see in other areas of logic.

Prove Harry is a bird

Now you can see how we apply these rules of inference to sentences in the knowledge base of a logical agent to prove all kinds of sentences. Imagine that an AI agent begins with the knowledge that if an animal has feathers, it implies that the animal is a bird. Now it comes across Harry, who does have feathers. By modus ponens, the AI agent can therefore conclude that Harry is a bird. This completes the proof of our original goal of proving that Harry is a bird. Now let us suppose that the goal is to prove that Buzz does not have feathers. Once again, imagine an AI agent which begins with the knowledge that if an animal has feathers, it implies that the animal is a bird. The agent comes across Buzz, an animal which is not a bird. Then by modus tollens it can infer that Buzz must not have feathers. This completes the proof of our original goal of proving that Buzz does not have feathers.

Okay, so now we have looked at two ways of proving the truth value of various sentences. The first way was through truth tables: I could have sentences in logic, then write another sentence and ask myself, what is the truth value of this sentence? I could construct a truth table for that sentence, composed of the truth values of all its predicates, some of which might be coming from earlier sentences. The second way in which we can prove the truth values of sentences in logic is by applying rules of inference like modus ponens and modus tollens. This is very powerful, and in fact the power of this logic has been known since before the birth of AI. As computer scientists, however, we'll analyze this power in a slightly different way. Yes, we can use the method of truth tables to construct a truth table for any arbitrary sentence. However, if the sentence is complicated, then the truth table very soon becomes very complex; computationally, that is infeasible for very large sentences. Similarly, yes, we can simply apply modus ponens and modus tollens to find the truth value of many sentences. But if the knowledge base consists of a very large number of sentences, instead of just one or two, then the number of inferences I can draw from those sentences simply by applying modus ponens and modus tollens will be very large. Or, if I had to find the truth value of a single sentence, then the different pathways I could take in order to get to the truth value of that sentence can make for a very large problem space. So while these methods of proving the truth values of sentences in logic have been around for a long time, they are not computationally feasible, at least not for complex tasks, and at least not for agents that have only limited computational resources and from whom we want near real-time performance.
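
Here is a minimal sketch of these two rules of inference applied to the Harry and Buzz examples; sentences are just strings, and a rule is a (premise, conclusion) pair, which is my own illustrative encoding rather than the lesson's.

```python
# A minimal sketch: modus ponens and modus tollens over sentences encoded as
# strings, with a rule represented as a (premise, conclusion) pair.
def modus_ponens(rule, facts):
    """From p => q and p, conclude q (or None if p is not known)."""
    p, q = rule
    return q if p in facts else None

def modus_tollens(rule, facts):
    """From p => q and not q, conclude not p (or None if not q is not known)."""
    p, q = rule
    return f"not {p}" if f"not {q}" in facts else None

print(modus_ponens(("Feathers(Harry)", "Bird(Harry)"),
                   {"Feathers(Harry)"}))          # Bird(Harry)
print(modus_tollens(("Feathers(Buzz)", "Bird(Buzz)"),
                    {"not Bird(Buzz)"}))          # not Feathers(Buzz)
```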

Universal Quantifiers

Before we show you a computationally more feasible way of proving theorems in logic, that is, of proving the truth value of sentences in logic, we should point out that so far we have been using only propositional logic. Propositional logic is sometimes also called zeroth-order logic. The key aspect of propositional logic is that it does not have any variables. So as an example, I may have a sentence that says: if the animal lays eggs and the animal flies, then the animal is a bird. Here I'm talking about a specific animal. Sometimes, though, I might want to talk about animals in general, about any animal, about all animals. In that case, I would want to introduce a variable. So in first-order logic, otherwise known as predicate calculus, I might want to say something like: if x lays eggs and x flies, then x is a bird. This has a form very similar to the form here, except that instead of a specific animal I now have a variable.

But I must also specify the range of the variable, and what I really want to say here is "for all animals." Therefore I'll introduce a quantifier over the variable x. This quantifier is called the universal quantifier, and this is the symbol for it. The sentence now says: for all x, if x lays eggs and x flies, it implies that x is a bird. One thing to note here is that I could have written this universally quantified sentence back in propositional logic, but only by writing lots of sentences: I could have said Lays-eggs(animal-1) and Flies(animal-1) implies Bird(animal-1), Lays-eggs(animal-2) and Flies(animal-2) implies Bird(animal-2), and so on and so forth, for each and every animal that is possible. By writing it with a variable and a universal quantifier, I can reduce all those sentences to just one sentence. So we have introduced variables, and we have talked about one quantifier so far, the universal quantifier, which applies to all values that the variable can take. Sometimes I might want to specify a different range for the variable: not all values the variable can take, but at least some value the variable can take. Consider again this sentence, where the animal is a specific animal. Now look at the second sentence on this screen, which uses the variable y. It says: if y lays eggs and y flies, then it implies that y is a bird. This sentence has a form very similar to the previous one, except for the variable y, and I can specify the range of values that the variable y can take. This time I want to specify not that this sentence is true for all values of y, for all animals, but simply that it is true for some, at least one, animal. In that case I'll use an existential quantifier. Here is the symbol for the existential quantifier; it says that there is at least one animal for which this sentence happens to be true.
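
Over a finite domain, the two quantifiers behave like Python's all() and any(); a minimal sketch with made-up animal data:

```python
# A minimal sketch: universal and existential quantification over a finite
# domain, using all() and any(). The animal data is made up for illustration.
animals = [
    {"name": "bluebird", "lays_eggs": True,  "flies": True,  "bird": True},
    {"name": "penguin",  "lays_eggs": True,  "flies": False, "bird": True},
    {"name": "bat",      "lays_eggs": False, "flies": True,  "bird": False},
]

def implies(p, q):
    return (not p) or q

# For all x: Lays-eggs(x) and Flies(x) => Bird(x)
print(all(implies(x["lays_eggs"] and x["flies"], x["bird"]) for x in animals))  # True

# There exists y: Lays-eggs(y) and Flies(y) and Bird(y)
print(any(x["lays_eggs"] and x["flies"] and x["bird"] for x in animals))        # True
```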

A Simple Proof

Okay, let us set aside predicate calculus and return to propositional logic. Recall that we had found ways of writing sentences in propositional logic, we had found rules of inference, and we could prove theorems: we could find the truth value of new sentences. However, we found that those methods were not computationally very efficient. So AI has developed more efficient methods. One of those methods is called resolution theorem proving. Let us take an example to illustrate how resolution theorem proving works.

Imagine there is a robot working on an assembly line in a factory, and on the assembly line come various widgets. The robot's task is to pick up each widget as it comes along and put it in a truck. However, there are some humans in this factory who play a joke on the robot once in a while: they glue a widget to the assembly line belt, so that when the robot tries to move it, it cannot move it. But the robot is a smart robot, a logical agent, so when it cannot move a box, it uses its logical reasoning to figure out that the box is not liftable. And the moment it knows that the box is not liftable, it lets go of the box and moves on to the next one. Everyone got the story? All right.

So let us suppose that the robot begins with some knowledge in its knowledge base, and this knowledge says: if cannot-move, then it implies not liftable. Now it tries to move the next box on the assembly line, and its percepts tell it that it cannot move the box. It needs to prove that the box is not liftable. Of course this is a trivial example, and I'm sure you can see that you could apply essentially one instance of modus ponens to prove that it's not liftable: if p then q, p, therefore you can infer q. But we'll use this example to show how resolution theorem proving works.

The first step in resolution theorem proving is to convert every sentence into conjunctive normal form. A sentence in conjunctive normal form can take one of three forms: it can be a literal, which is either a positive atom or a negative atom; it can be a disjunction of literals, like can-move or not liftable here; or it can be a conjunction of disjunctions of literals. In this example the third form doesn't occur. So the first thing we must do is take the first sentence, not can-move implies not liftable, and remove the implication, because an implication cannot occur in conjunctive normal form. We use the implication elimination rewrite rule: alpha implies beta becomes not alpha or beta. The negation of not can-move becomes can-move, so the sentence becomes can-move or not liftable. The first sentence is now in conjunctive normal form. We could do the same thing for the second sentence, but the second sentence already is in conjunctive normal form, so we don't have to do anything. Now, the robot wants to prove that the box is not liftable. Resolution theorem proving is proof by refutation. To do proof by refutation, we take the negation of what we want to prove: we wanted to prove not liftable, so we take its negation, which is liftable.

So now we have three sentences. The first sentence came from the bootstrapping of the robot's knowledge base; this is the axiom that the robot assumes to be true, which we have just converted into conjunctive normal form. The second sentence came from its percepts: the robot tried to move the box and could not move it. The third sentence comes from taking the negation of what the robot wants to prove. It wants to prove not liftable, so it takes the negation of that and will show that this leads to a null condition, which we view as a contradiction.

Resolution theorem proving always begins with the literal in the sentence that we want to prove. Here that literal is liftable, and we look for a sentence that contains the negation of liftable. Sentence S1 contains not liftable, which is the negation of liftable, so we pick S1 and not S2. Note how efficient it was to decide which sentence in the knowledge base to go to: the sentence containing the negation of liftable. Now, liftable and not liftable cannot both be true. We know that, and therefore we can eliminate them. This is called resolution: we resolve on liftable and remove both literals from the sentences. In sentence S1, that leaves us with can-move. So now we pick a sentence that contains the negation of the literal can-move. Sentence S2 contains that negation, and since can-move and not can-move cannot both be true, we resolve on them and they get eliminated as well. Now we have reached a null condition. This null condition represents a contradiction, so we can infer that liftable cannot be true; therefore not liftable is true. The robot has proved not liftable. In this case it may appear that resolution theorem proving is more complex than modus ponens. In general it is not; it only appears so here because this example happened to fit the form of modus ponens directly. In general, deciding which sentences to apply modus ponens to, and how to chain those rules of inference together, can quickly become harder than deciding how to apply resolution theorem proving.
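
Here is a minimal sketch of resolution by refutation for this robot example; clauses are sets of literals, with a leading "-" marking negation. This is my own simplified encoding, not the lesson's notation, and it brute-forces all clause pairs rather than using any focus-of-attention strategy.

```python
# A minimal sketch of resolution theorem proving by refutation.
# A clause is a frozenset of literals; "-x" denotes the negation of "x".
def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolve(c1, c2):
    """All clauses obtained by resolving c1 with c2 on one complementary literal."""
    return [(c1 - {lit}) | (c2 - {neg(lit)}) for lit in c1 if neg(lit) in c2]

def refute(clauses):
    """Return True if the empty clause (a contradiction) can be derived."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a is b:
                    continue
                for r in resolve(a, b):
                    if not r:          # empty clause: contradiction reached
                        return True
                    new.add(frozenset(r))
        if new <= clauses:             # nothing new can be derived
            return False
        clauses |= new

kb = [
    frozenset({"can-move", "-liftable"}),  # S1: CNF of (not can-move => not liftable)
    frozenset({"-can-move"}),              # S2: percept, the robot cannot move the box
    frozenset({"liftable"}),               # S3: negation of the goal "not liftable"
]
print(refute(kb))  # True, so "not liftable" is proved
```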

A More Complex Proof

Let us make this example a little bit more complicated, complicated enough that it cannot be proven simply by applying one instance of modus ponens. Imagine that the robot proved to itself that this box is not liftable, and the humans in the factory who were trying to make fun of the robot said to it: well, the reason you cannot move the box may not be that the box is not liftable; it may be that your battery is not working. So now the situation is more complicated: the robot must also check its battery. The robot now begins with slightly different knowledge in its knowledge base. Suppose the knowledge in its knowledge base is: if cannot-move and battery-full, then not liftable. It finds from its percepts, again, that it cannot move the box, so it checks its battery and finds that the battery is full. Two new sentences thus get written into the knowledge base, so the knowledge base now contains three sentences.

As earlier, for resolution theorem proving, the agent must convert all the sentences in its knowledge base into conjunctive normal form. That means each sentence must be a literal, a disjunction of literals, or a conjunction of disjunctions of literals. So we begin by removing the implication from sentence one, because an implication cannot occur in conjunctive normal form. When we remove the implication from the first sentence we get this sentence. The sentence is not yet satisfactory; it is not yet in conjunctive normal form, because it is a disjunction over a conjunction, and what we want are conjunctions of disjunctions. So we apply De Morgan's law and get the following sentence: we simply take the negation inside, which flips the conjunction into a disjunction, and now we have three literals connected with disjunctions. This is in conjunctive normal form, a disjunction of literals. So now we have three sentences in the knowledge base, all three of them in conjunctive normal form, either literals or disjunctions of literals. Recall that the robot wants to prove not liftable. It takes the negation of that, because this is again proof by refutation, so it considers liftable. Now the knowledge base has four sentences, the fourth coming from the negation of what it wants to prove.

Once again the reasoning begins with the literal that we want to prove, in this case liftable, and finds a sentence in which the negation of that literal occurs. So we begin with sentence S4, because that is what we want to prove, and we find a sentence in the knowledge base which contains a literal that is the negation of the literal in S4. We resolve on them, because they both cannot be true, and resolution here simply means that we drop them. Now, in the sentence S1 that is currently under consideration, we have two literals left; we can begin with either one of them. Let us begin with not battery-full. We try to find a sentence which contains the negation of this particular literal. There is such a sentence, S3, so we resolve on battery-full and not battery-full, because they both cannot be true, and we drop them. Now in sentence S1 we are left with just one literal, can-move. We try to find a sentence in the knowledge base which contains the negation of that literal. Here it is, so we resolve on them and drop them. Once we drop them, we have a null condition, which stands for a contradiction.

So we have reached a contradiction; therefore the assumption that the box was liftable cannot be true, therefore not liftable is true, and resolution theorem proving has in this case proved what the robot wanted to prove. One important point to note here is the issue of focus of attention. Often when the problem space is very complex, for example when the number of sentences is really large and the sentences are very complex, it can become really hard for a logical agent to decide what to focus on. But because we have converted everything into conjunctive normal form, and because resolution theorem proving makes use of resolution, at any particular time the logical agent knows exactly what to focus on. You always begin with this literal, you always try to find a sentence which contains its negation, you always resolve on that, and you take the remaining literals in the sentence and proceed forward. This focus of attention, this computational efficiency of resolution theorem proving, arises because these sentences are in a particular form called Horn clauses. A Horn clause is a disjunction that contains at most one positive literal. That is what is happening in S1: it is a disjunction that contains at most one positive literal. This is a negative literal, this is a negative literal, and the fact that the clause contains just one positive literal is a very powerful idea, because that is where the focus of attention comes from.

Exercise Proof I

Let us do an exercise together to make sure that we understand resolution theorem proving. So consider this sentence: if an animal has wings and does not have fur, it is a bird. Write this sentence down in formal logic. You can use the predicates has-wings, has-fur, and bird.

Exercise Proof I

David, what did you write? So, starting at the beginning, has wings becomes the predicate has-wings. We're doing a conjunction, so and; does not have fur, so not has-fur; those two things imply that it's a bird. That's good, David. Now, let us put this in conjunctive normal form, a form that we can use in resolution theorem proving.

Exercise Proof II

So since an implication cannot occur in conjunctive normal form, we must eliminate the implication. Please eliminate the implication from this sentence and rewrite it in this box.

Exercise Proof II

What did you get, David? So this is a little bit more complicated. We know from our earlier formula that if a implies b, then to rewrite it with implication elimination we write not a or b. So "or b" is pretty straightforward: or bird. We take the not of the conjunction over here, giving not (has-wings and not has-fur). Now some of you may have jumped straight to writing this in full conjunctive normal form, but we're going to move on and do that last step next.

Exercise Proof III

So this is not conjunctive normal form, because we have a negation over a conjunction inside the disjunction. What we want are either just disjunctions, or conjunctions of disjunctions. So use De Morgan's law to write this in conjunctive normal form.

Exercise Proof III

What do you have, David? So with De Morgan's law, we take the negation on the outside and apply it individually to the predicates on the inside, and flip the operator. So has-wings becomes not has-wings, not has-fur becomes has-fur, and the and becomes an or.

Exercise Proof IV

So imagine that your robot has this in its knowledge base and goes out into the country, where it finds an animal and knows from its percepts that this animal has wings and does not have fur. So these two additional sentences go into the knowledge base. Now the robot wants to prove that this is a bird. To begin resolution theorem proving, what should the robot write in this box? What should the S4 sentence be?

Exercise Proof IV

What do you think, David? So in resolution theorem proving, we always assume the opposite of what we’re trying to prove. We’re doing proof by refutation. So we’re trying to prove that it is a bird, so we’re going to assume that it’s not a bird. That’s right.

Exercise Proof V

Continuing further: so what part of S1 would we eliminate first?

Exercise Proof V

What do you think David? So in resolution theorem proving, we start with whatever it is we assumed and look for the negation of that in an earlier sentence. Here we find not bird, and bird in S1. So we’re going to resolve on not bird and bird and leave both of those out. That’s good.

Exercise Proof VI

So what shall we do next? What should we resolve on next?

Exercise Proof VI

Out of the four choices, which one did you pick, David? So, I picked the first one, but I think there are actually two correct answers. What we're looking for is something else in S1 that has a negation in another sentence. Not has-wings has a negation in S2, so we could resolve on S2 with the not has-wings portion of S1. Has-fur also has a negation in S3, so we could also resolve on S3 and the has-fur portion of S1. That's right, David. At the end of this, we're left with null, which is a contradiction. Therefore, the assumption that it is not a bird is false; therefore it must be a bird, and the robot has just proved that this must be a bird. Note what we have done: we have mechanized parts of logic. Instead of coming up with large truth tables, and instead of coming up with complex chains of inference based on modus ponens and modus tollens, we have found an algorithm, resolution theorem proving, which is an efficient way of proving sentences and their truth values. This is how it works: take all the sentences in the knowledge base and convert them into conjunctive normal form. Take what you want to prove, negate it, and put that in as a new sentence. Now, starting with this new sentence, find a literal in another sentence on which you can resolve, and keep on doing it until you reach a null condition, which is a contradiction. If you don't find a null condition, if you don't find a contradiction, that means that what you started with cannot be proved.
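
The same step-by-step resolution can be written out for this bird proof; a minimal sketch, using the same made-up clause encoding as before (frozensets of literals, with "-" for negation):

```python
# A minimal sketch: the resolution steps of the bird proof, written out by hand.
def neg(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolve_on(c1, c2, lit):
    """Resolve two clauses on a literal appearing in c1 whose negation is in c2."""
    assert lit in c1 and neg(lit) in c2
    return (c1 - {lit}) | (c2 - {neg(lit)})

S1 = frozenset({"-has-wings", "has-fur", "bird"})  # CNF of the rule
S2 = frozenset({"has-wings"})                      # percept: has wings
S3 = frozenset({"-has-fur"})                       # percept: does not have fur
S4 = frozenset({"-bird"})                          # negation of the goal Bird

step1 = resolve_on(S4, S1, "-bird")          # {'-has-wings', 'has-fur'}
step2 = resolve_on(step1, S2, "-has-wings")  # {'has-fur'}
step3 = resolve_on(step2, S3, "has-fur")     # frozenset(): the null clause
print(step1, step2, step3, sep="\n")         # contradiction, so Bird is proved
```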

Assignment Logic

So how would you use formal logic to develop an agent that can solve Raven's Progressive Matrices? As with production systems, we can approach this at two different levels. First, you could use formal logic to represent the overall algorithm the agent uses to solve any new problem. Or second, the agent could use formal logic to develop its understanding of a new problem that just came in. It could then use those formal rules to derive the transformations that occur within the problem and transfer those transformations onto the new answer. Alternatively, you could also use formal logic to allow your agent to prove why its answer to a particular problem is correct. Then, if the answer is actually incorrect, the agent may have the information necessary to go back, repair its reasoning, and do better next time.

Wrap Up

So today we've talked about formal logic in order to set up a kind of formal language for us to reason with going forward. We started off by talking about formal notation, including conjunctions and disjunctions, so that we can write sentences in formal logic. We used that to then talk about truth tables, and examined some of the properties that we need going forward, like De Morgan's law. Using that, we investigated some of the inferences that we can draw using formal logic. And finally, we looked at proof by refutation, which capitalizes on everything we've talked about so far. Next time, we'll be discussing planning, which leverages the formal logic that we've developed in this lesson; it allows agents to reason more formally about initial and goal states. Interestingly, planning actually has its history in the kinds of proofs we've developed here. Originally, agents would prove that a particular plan would work, and that's why we talk about formal logic before we talk about planning.

The Cognitive Connection

The connection between logic and human cognition is interesting. Logic is a very important school of thought in AI, for several reasons. One reason is that logic provides a very formal and precise way of reasoning. Another reason is that logic provides a formal notation for expressing how intelligent agents reason, whether or not they are using logical reasoning. But does this mean that logic is also the basis of cognition? It certainly appears that humans use logic some of the time. For example, I may have a statement like: if I get a big bonus, I'll take a long vacation. I did get a big bonus; therefore I may infer that I'll take a long vacation. This is clearly logical reasoning. But simply because a behavior appears to be logical does not necessarily imply that we use logic as a fundamental reasoning strategy. I might solve a new problem by analogy to a previously encountered problem; I did not use logic, but my behavior appears to be logical. The logic that we have considered so far is deductive logic, but a lot of human reasoning is inductive or abductive in its character. If you haven't come across abduction so far, we will discuss it in detail later in the class. Deduction has to do with reasoning from causes to effects. Abduction has to do with reasoning from effects to causes. When you go to a doctor, you go with some signs and symptoms; those are the effects. The doctor comes up with a disease category; that is the cause. Abduction is reasoning from data to an explanation, to a disease category for the data. Induction is this: given some relationship between cause and effect for a sample, how do we generalize it to a cause-and-effect relationship for a population? So while human reasoning appears to be inductive and abductive in character much of the time, the logic that we have considered so far is deductive. That is yet another issue that we'll return to later in the class.

Reuse

CC SA BY-NC-ND

Citation

BibTeX citation:
@online{bochman2016,
  author = {Bochman, Oren},
  title = {Lesson 12 {Logic}},
  date = {2016-02-06},
  url = {https://orenbochman.github.io/notes/cognitivie-ai-cs7637/12-logic/12-logic.html},
  langid = {en}
}
For attribution, please cite this work as:
Bochman, Oren. 2016. “Lesson 12 Logic.” February 6, 2016. https://orenbochman.github.io/notes/cognitivie-ai-cs7637/12-logic/12-logic.html.