Artificial Intelligence – Dutch Books
September 10, 2009
In the previous post we talked about probability theory, and about agents having a probability distribution to represent their degrees of belief. We also introduced the axioms K1 and K2, that is:
K1 – If a is always true, then p(a) = 1.
K2 – If a AND b are never true at the same time, then the probability of a OR b being true is p(a) + p(b).
These axioms become more important when we talk about the idea of a ‘Dutch Book’.
Dutch Books – Why Use Probability Distributions?
Let’s consider an artificially intelligent agent that walks into a betting shop. No, this isn’t the start of some bad joke. In this betting shop, the agent will bet on propositions such as ‘Horse A will win the horse race’. The tickets pay out £1 if the proposition is true, and no money if the proposition is not true. But what price should the agent pay for a ticket? This depends upon his degree of belief in the proposition. If the agent’s degree of belief that the horse will win is, for example, 0.6, then he will be willing to pay 60p for the ticket.
But there are two horse races going on tomorrow. Let ‘a’ and ‘b’ be the propositions ‘Horse A will win the first race’ and ‘Horse B will win the second race’. Our agent has been programmed with the following degrees of belief:
p(a) = 0.6
p(b) = 0.5
p(neither a nor b) = 0.25
p(not both a and b) = 0.7
Based on these beliefs, our agent takes out the following four bets:
Bet 1 pays £1 if horse ‘A’ wins the first race. This ticket costs 60p.
Bet 2 pays £1 if horse ‘B’ wins the second race. This ticket costs 50p.
Bet 3 pays £1 if horses ‘A’ and ‘B’ both lose their races. This ticket costs 25p.
Bet 4 pays £1 as long as not both ‘A’ and ‘B’ win their races. This ticket costs 70p.
So our agent has spent a total of 60p + 50p + 25p + 70p = £2.05
But there’s something weird about these bets. Let’s analyse them more closely:
If both horse A and horse B win, then our agent wins bets 1 and 2, but loses bets 3 and 4. He collects £2.
If horse A wins and horse B loses, then our agent wins bets 1 and 4, but loses bets 2 and 3. He collects £2.
If horse A loses and horse B wins, then our agent wins bets 2 and 4, but loses bets 1 and 3. He collects £2.
If both horse A and horse B lose, then our agent wins bets 3 and 4, but loses bets 1 and 2. He collects £2.
In every case, he collects £2. But that can’t be right: he spent £2.05 on the tickets, so no matter what the outcome is, he’ll lose 5p!
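The four-way case analysis above can be checked mechanically. Here’s a minimal Python sketch (amounts in pence) that enumerates the four possible race outcomes and totals the payouts:

```python
from itertools import product

def net_payout(a_wins: bool, b_wins: bool) -> int:
    """Net winnings in pence across all four tickets for one outcome."""
    cost = 60 + 50 + 25 + 70  # 205p spent in total
    payout = 0
    payout += 100 if a_wins else 0                       # Bet 1 pays on 'a'
    payout += 100 if b_wins else 0                       # Bet 2 pays on 'b'
    payout += 100 if (not a_wins and not b_wins) else 0  # Bet 3: neither wins
    payout += 100 if not (a_wins and b_wins) else 0      # Bet 4: not both win
    return payout - cost

for outcome in product([True, False], repeat=2):
    print(outcome, net_payout(*outcome))  # -5p in every case
```

Whatever happens in the two races, the agent is down 5p.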
We call this situation a ‘Dutch Book’. And the reason a Dutch Book could be made against the agent is his initial degrees of belief: they violate the axioms of probability theory, K1 and K2.
In general, if an agent’s degrees of belief violate K1 and K2, then a Dutch Book can be made against it. If an agent’s degrees of belief conform to K1 and K2, then a Dutch Book is impossible. This is why we should represent our degrees of belief as probability distributions.
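To see exactly where the beliefs above go wrong: p(not both a and b) = 0.7 forces p(a and b) = 0.3, so inclusion-exclusion gives p(a or b) = 0.6 + 0.5 − 0.3 = 0.8, while p(neither a nor b) = 0.25 forces p(a or b) = 0.75. Two valid routes to the same quantity disagree, so no genuine probability distribution assigns these values. A quick sketch of the check:

```python
p_a, p_b = 0.6, 0.5
p_neither = 0.25
p_not_both = 0.7

p_a_and_b = 1 - p_not_both                   # forced to be 0.3
p_a_or_b_incl_excl = p_a + p_b - p_a_and_b   # 0.8 by inclusion-exclusion
p_a_or_b_from_neither = 1 - p_neither        # 0.75 from 'neither'

# The two values disagree, so the beliefs are incoherent.
print(p_a_or_b_incl_excl, p_a_or_b_from_neither)
```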
Why Conditionalise Probability Distributions?
Alright, let’s take another example. Let ‘a’ be the proposition that horse A wins the first race, and ‘b’ be the proposition that horse B wins the second race. The ticket now says:
You get a payout of £1 if both horse A and horse B win. You get no payout if horse A loses and horse B wins. You get your money back if horse B loses.
How much would you spend on that ticket? This number is your conditional degree of belief in ‘a’ given ‘b’.
The proposal is that if an agent’s degrees of belief are given by a probability distribution p, then its conditional degree of belief in ‘a’ given ‘b’ should be given by p(a|b), whenever p(b) is not 0.
Now let’s consider the following scenario. ‘a’ and ‘b’ are propositions as before. p is our initial probability distribution, and pa is the probability distribution after the first race but before the second race, given that a has won.
The agent buys the following tickets:
Bet 1: pays £1 if a and b are both true; £0 if a is true but b is false; money back if a is false. This ticket costs £p(b|a).
Bet 2: pays £x if a is true, otherwise nothing. This ticket costs £(x * p(a)).
After the first race, if a has won then the agent has already collected £x from Bet 2, and still holds what remains of Bet 1:
£1 if b wins, otherwise nothing.
The agent then sells this ticket for £pa(b)
Now the payouts for the agent are as follows:
If a has won, then his payoff will be his winnings on bet 2, x(1 – p(a)) [i.e. the £x payout minus the x * p(a) he spent on the ticket], minus the amount he spent on bet 1 (p(b|a)), plus the amount he sold bet 1 for after race 1 (pa(b)). So his total winnings are… x – xp(a) + pa(b) – p(b|a).
If a loses, then he has lost bet 2 and got his money back on bet 1. So his total losses are the amount that he spent on bet 2, which was x * p(a).
If a loses, then his payoff is negative, i.e. he has made a loss, as long as x is greater than 0 (which it will be).
If a wins, then his payoff is going to be negative if:
x – xp(a) + pa(b) – p(b|a) < 0
x (1 – p(a)) + pa(b) – p(b|a) < 0
x (1 – p(a)) < p(b|a) – pa(b)
x < (p(b|a) – pa(b)) / (1 – p(a))
So if p(b|a) is greater than pa(b), and we choose a value of ‘x’ between 0 and (p(b|a) – pa(b)) / (1 – p(a)), then we have made a Dutch Book against this agent, because no matter whether a wins or loses, he will always have a negative payoff.
Notice the numerator: (p(b|a) – pa(b)). If the agent updates by conditionalisation, so that pa(b) = p(b|a), then these two terms cancel and the inequality reads x < 0. So in order to create a Dutch Book, a value of x must be chosen that is both greater than 0 and less than 0. Clearly, this isn’t possible.
In this scenario, a Dutch Book can be avoided by making the agent’s updated degree of belief pa(b) equal to p(b|a). This shows why we need to conditionalise: if we don’t, then we’re going to end up in a Dutch Book situation.
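To make this concrete, here’s a sketch with made-up numbers (all three values are illustrative assumptions, standing in for an agent that fails to conditionalise): p(a) = 0.5, p(b|a) = 0.8, and pa(b) = 0.6. The bound on x works out to (0.8 − 0.6) / (1 − 0.5) = 0.4, so any x between 0 and 0.4 traps the agent:

```python
p_a = 0.5          # prior degree of belief in 'a'            (illustrative)
p_b_given_a = 0.8  # conditional degree of belief in b given a (illustrative)
pa_b = 0.6         # belief in 'b' after a wins, if the agent
                   # fails to conditionalise                   (illustrative)

x = 0.2  # any 0 < x < (p_b_given_a - pa_b) / (1 - p_a) = 0.4 will do

# If 'a' wins: collects £x on bet 2 (net x - x*p(a)),
# paid p(b|a) for bet 1, and sells it back for pa(b).
payoff_a_wins = x - x * p_a + pa_b - p_b_given_a
# If 'a' loses: bet 1 is refunded, and the bet 2 stake is lost.
payoff_a_loses = -x * p_a

print(payoff_a_wins, payoff_a_loses)  # both negative: a Dutch Book
```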
The Monty Hall Paradox
This is a famous paradox which shows that you have to be careful with Bayesian updating.
An agent is a contestant on a TV game show. At the end of the show, the host, Monty Hall, leads you over to three doors. Behind one of the doors is a car, and behind the other two are goats. Clearly, the aim is to pick the door with the car. The doors are labelled A, B, and C.
You are asked to select a door, and the prize you get is behind the door you choose. But after you’ve chosen a door, Monty goes and opens one of the other doors. Behind that door is a goat. Then he gives you the opportunity to either switch your selection or stick with it. The question: Do you stick, or switch?
To answer this, lets consider the three propositions, ‘A’, ‘B’, and ‘C’, which represent the propositions that the car is behind the respective door.
The probability that the car is behind door A and not behind door B is just the probability that the car is behind door A, which is 1/3. The probability that the car is NOT behind door B is 1 – p(B) = 2/3, because it could be behind A or C, each with a 1/3 probability.
So we chose door A to begin with, then Monty went and opened door B. We now have the information that it is NOT B. We can use Bayes Theorem to update our probability distribution as such:
p(A|~B) = p(A AND NOT B) / p(NOT B) = (1/3) / (2/3) = 1/2. So this is telling us that the probability that it is behind door A now is 1/2, and the probability that it is behind door C now is 1/2.
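Written out with exact fractions, this naive update really does give 1/2 (which, as the next paragraph shows, turns out to be the wrong answer):

```python
from fractions import Fraction

# Naive update on the bare proposition 'the car is NOT behind B'.
p_A_and_notB = Fraction(1, 3)  # car behind A implies it is not behind B
p_notB = Fraction(2, 3)

print(p_A_and_notB / p_notB)  # 1/2
```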
But… wait. If we run a simulation, with the computer randomly switching or sticking, then we find that the switchers win about 2/3 of the time, and the stickers win about 1/3 of the time… Why is this?!
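That simulation is easy to reproduce. A minimal sketch, where Monty always opens a door that hides a goat and that the player did not pick:

```python
import random

def play(switch: bool) -> bool:
    """One round of Monty Hall; returns True if the player wins the car."""
    doors = ['A', 'B', 'C']
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Move to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print(sum(play(True) for _ in range(trials)) / trials)   # ~2/3 for switchers
print(sum(play(False) for _ in range(trials)) / trials)  # ~1/3 for stickers
```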
The solution is to get the right representation. Let’s say that K~B is the proposition ‘I KNOW the car is not behind door B’.
So, we start out, and we choose A. Now we can work out p(K~B|A), i.e. the probability that door B will be revealed not to have the car behind it, given that the car IS behind door A. If the car is behind door A, Monty can open either door B or door C, so the probability that we will KNOW the car is not behind door B is 1/2.
If, on the other hand, the car ISN’T behind door A, then it is behind either door B or door C, each equally likely. If it is behind door B, Monty must open door C, so K~B has probability 0; if it is behind door C, Monty must open door B, so K~B has probability 1. Averaging, p(K~B|~A), i.e. the probability that door B will be revealed not to have the car behind it, given that the car is NOT behind door A, is again 1/2.
From these two pieces of data, we can work out, using Bayes Theorem, the value of p(A|K~B), as follows:
p(A|K~B) = p(K~B|A)p(A) / (p(K~B|A)p(A) + p(K~B|~A)(1 – p(A)))
p(A|K~B) = (1/2 * p(A)) / (1/2 * p(A) + 1/2 * (1 – p(A)))
Obviously p(A) will be 1/3, because the car is equally likely to be behind any of the three doors.
p(A|K~B) = (1/2 * 1/3) / (1/2 * 1/3 + 1/2 * 2/3)
p(A|K~B) = (1/6) / (3/6)
p(A|K~B) = 1/3
So the probability of the car being behind door A, given that we know the car is NOT behind door B, is 1/3. Since A, B, and C form a partition, and we know the probability of the car being behind door B is now 0, the probability of the car being behind door C must be 2/3.
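The calculation can be checked with exact fractions, using the values derived above (p(A) = 1/3 and p(K~B|A) = p(K~B|~A) = 1/2):

```python
from fractions import Fraction

p_A = Fraction(1, 3)             # prior: car behind door A
p_K_given_A = Fraction(1, 2)     # Monty opens B half the time if car is at A
p_K_given_notA = Fraction(1, 2)  # averages car-at-B (0) and car-at-C (1)

# Bayes' Theorem, matching the derivation in the text.
posterior_A = (p_K_given_A * p_A) / (
    p_K_given_A * p_A + p_K_given_notA * (1 - p_A))

print(posterior_A)      # 1/3: sticking with A
print(1 - posterior_A)  # 2/3: switching to C, since B is ruled out
```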
If you want to know more about the Monty Hall problem, check out its Wikipedia page. It’s quite cool, and there are some proofs on there a lot more convincing than mine.