GoldBullion (Poker Bot)


GoldBullion is an artificial intelligence engine which plays ring games of Limit Texas Hold'Em using a Bayesian simulation engine. The full source code is provided. Consult the ReadMe for details.

The new version does not include the code for scraping the poker sites. This code is for research purposes only and should not be used in real money poker games. This is an MSVC.NET C++ project! There is no executable included! This is not a poker cheating tool! It requires C++ programming and AI expertise to use.

Disclaimer of Warranties and Liability

You expressly agree that your use of the Gold Bullion poker application is at your own risk. To the fullest extent permissible pursuant to applicable law, CBloom disclaims all warranties, express or implied, including implied warranties of merchantability, fitness for a particular purpose, and non-infringement. The material provided on this website is provided for entertainment purposes only. CBloom does not warrant or make any representation regarding the use or the results of the use of the material found in this application in terms of their correctness, accuracy, reliability, or otherwise. CBloom makes no warranties that your use of this application will not infringe the rights of others, and, as such, assumes no liability.

By using this application, you expressly agree that Cbloom shall not be liable for (a) any claims, liability, direct, indirect, punitive, actual, incidental, consequential, special, exemplary or other damages resulting from your use of this application; (b) defects or other failure of performance in the application; (c) the use of this application in any illegal activities, as defined by applicable law.

Download GoldBullion.zip (2.4 MB) by clicking "I have read and agree with this disclaimer". New version 8-07-08.

Requires cblib

You need STLport 4.5.3 (zip, 1.2 MB). Actually I'm told this isn't necessary; the new MS STL works fine. YMMV.

GoldBullion uses the Poker Eval library. Thanks to them. (the old version I use is included in GB, this is a link to the new pokersource project)

NOTE : I am not supporting any compatibility issues with the poker sites. The sites change often and I'm sure the spies are not compatible with the newest versions. Please do not contact me with any bugs related to poker site or hand record compatibility. If there are bugs in the AI logic, please do report those to me and I'll try to fix them. GoldBullion is no longer being actively developed.


The following is not an introduction to poker AI. This is a collection of tech notes on GoldBullion. Please search the web for poker AI papers to get you started. For example, the Alberta poker research page


The basic Poki-style Limit bot

(AIShared.cpp + PostFlopLoki or PostFlopPoki)

The primary limit hold'em bot in GB is basically a Poki bot. To understand it you should first go read the Poki papers, and also search for "Bayesian Poker" and read some of that.

The Poki papers I believe are intentionally vague about some of the crucial details. The basic Poki bot works like this : when an opponent acts, you update his hole probabilities based on the conditional probability that he would make that action with that hole. This requires that you have an "OpponentModel" function to generate a probability for each opponent action, which you run on all the holes. When it's your turn, you need to estimate the EV of {fold,call,raise} and pick the best one. To do this you need an estimate of the chance your hole will win vs. the various opponent hole probabilities, and you also need an estimate of what will happen in the future after you do each action.
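To make that concrete, here's a minimal sketch of the Bayesian update step (illustrative names only, not the actual GB code; the real version lives in AIShared, discussed below) :

```
// actionLikelihood[h] = P(observed action | opponent holds hole h), produced by
// whatever OpponentModel you use.
void BayesUpdateHoleWeights(float holeWeights[], const float actionLikelihood[], int numHoles)
{
    double total = 0.0;
    for (int h = 0; h < numHoles; h++)
    {
        // Bayes : P(hole | action) is proportional to P(hole) * P(action | hole)
        holeWeights[h] *= actionLikelihood[h];
        total += holeWeights[h];
    }
    // renormalize so the weights stay a probability distribution
    if (total > 0.0)
        for (int h = 0; h < numHoles; h++)
            holeWeights[h] = (float)(holeWeights[h] / total);
}
```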

That sounds simple enough, but there are lots of big black boxes there that are not trivial. The system is very sensitive to having a good OpponentModel. Poki has a strange heuristic OpponentModel that somehow works well for them. You can't afford to have a very complex OpponentModel because you run it on every hole each time an opponent acts. Note also that you have to OpponentModel your *self* too, because when opponents act they need to have a model of what you might be holding. (you only need to run your true self AI on your actual holding).

Another issue is estimating the EV's in the future. To get some idea of the EV if you call, you need the chance your hand is best. But you don't actually want the chance against the current opponent hole probability spectrum - you want the chance *if the hand goes to showdown*. Obviously people will not be taking their weakest hands to showdown. So for example, say first guy raises, action is on you, and there are a few live guys behind you. You need to estimate the EV if you call, to do that you have to guess how often the guys behind you will call or not as well, and what they hold when they do call.

Now, in theory you could run a full tree simulation. What that means is at every branch point you try all possible options and weight by the probability of each. Each time action is on somebody they have 3 options {fold,call,raise}; you can use OpponentModel to generate a chance they do each (with each possible hole). Finally you sum the whole weighted probability tree to generate EV's. In practice this is far far too slow, because you have to do it for each possible hole anyone could have, as well as simulating all possible draw cards. However, for the special case of heads up poker you can in fact do this, and I'll talk more about full simulations later. But basic Poki can't do this.

So for a basic Poki model you simplify the full tree and try to just estimate the probability of a few basic cases. One crucial case is everybody else folds. If I raise, I want to know the chance everyone folds and I just win without showdown; this lets me bluff because it doesn't matter what I'm holding. The other case is to estimate how many callers I get, and what they have if they call, for the cases that I raise or I call. If you're on a street before the river you might also want to make an estimate of what will happen on later streets (this is important, otherwise the bot will try to take hands that are too weak to showdown because it's not accounting for getting charged on later streets).
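As a rough illustration of how those cases combine into an EV estimate, here's a simplified sketch (this is just the shape of the estimate, not the actual GB accounting; all names are hypothetical, and later streets are ignored) :

```
// Simplified EV-of-raise estimate : fold equity plus showdown equity when called.
float EstimateRaiseEV(float pAllFold,          // chance everybody folds to my raise
                      float pWinIfCalled,      // chance my hand wins if it goes to showdown
                      float pot,               // current pot size
                      float bet,               // size of my raise
                      float expectedCallers)   // estimated number of callers when not everyone folds
{
    // case 1 : everyone folds, I take the pot regardless of my hole (the bluffing term)
    float evSteal = pAllFold * pot;

    // case 2 : I get called; win the pot plus the calls, or lose my bet
    float pCalled = 1.0f - pAllFold;
    float potIfCalled = pot + expectedCallers * bet;
    float evShowdown = pCalled * (pWinIfCalled * potIfCalled - (1.0f - pWinIfCalled) * bet);

    return evSteal + evShowdown;
}
```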

In practice I found it hard to tweak all this hacky stuff to behave well, and the Poki bots in GB are not great because of that.

In the GB code, "AIShared" holds the current estimate of Bayesian hole weights for all players. That's a "HoleWeights" object. The function that generates opponent probabilities is "AIShared::OpponentModelBlend", which is actually a blend of a TTH and a Loki bot (blending the opponent probabilities tries to remove overconfidence in any particular model, which can lead to big errors). The primary Bayesian update step is in "UpdateBeforeAction" ; there's also a confidence factor in there which tries to estimate how good we think the opponent model is for the current situation.

PWins and Bucketing

(Mostly HoleRanking.cpp, HoleProbabilityBuckets.cpp, and AIShared.cpp)

One of the things we need is to know the chance of a hole winning against a certain probability spectrum of other holes. This is for your single hole vs. any of theirs, and it's on the current table.

First consider the simple case of one opponent on the river, so there's no draw. You walk over all opponent holes and see if each one beats your hole; if so, add that hole's probability to the weight beating you, if not, add its probability to the weight you win. Then normalize. Okay, we've already made an error here. Some of the holes may tie against you, in which case we should really add up a probability of tying, and that should be returned separately so we can simulate splitting the pot. In practice I don't bother with handling ties and split pots right in the simulation; for ties I put half the weight on win and half on lose, BUT with one important exception : if you hold the absolute nuts, I give you a 100% chance of winning even if there is some chance of a tie. Another important note : some possible opponent holes are blocked by the cards on the table or the hole that I am querying about; those should be skipped. This is basically "ComputeWeightedPWinVs1".
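Here's a sketch of that O(N) river walk (simplified; the hole encoding, the blocked test, and the nuts check are stand-ins for what ComputeWeightedPWinVs1 actually does) :

```
float PWinVs1River_Sketch(int myRank,                    // pokereval rank of my hole + board
                          const int   oppRank[1326],     // rank of each possible opponent hole
                          const float oppWeight[1326],   // Bayesian probability of each opponent hole
                          const bool  oppBlocked[1326],  // conflicts with my hole or the board
                          bool iHoldTheNuts)
{
    double win = 0.0, lose = 0.0;
    for (int h = 0; h < 1326; h++)
    {
        if (oppBlocked[h]) continue;             // skip impossible holes
        float w = oppWeight[h];
        if (oppRank[h] > myRank)      lose += w; // he beats me
        else if (oppRank[h] < myRank) win  += w; // I beat him
        else                                     // tie
        {
            if (iHoldTheNuts) win += w;          // nuts : count ties as wins
            else { win += 0.5 * w; lose += 0.5 * w; }
        }
    }
    double total = win + lose;
    return (total > 0.0) ? (float)(win / total) : 1.0f;
}
```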

There are N opponent holes to walk over (N = 1326), so this query is O(N). We can do it another way that requires an O(N) setup, but then queries are O(1). To do that, we first sort all the holes by win rank (the win rank is just the number that comes out of pokereval). Now to tell if a hole beats another hole, you just see if it's earlier or later in the sorted order. This is the "RankedHole" in HoleRanking. Then we create a cumulative probability sum for the ranked holes for the given opponent. If the opponent has a chance P(H) of holding hole H, the cumulative probability C(H) = Sum of all P(I) for all holes I <= H in the sorted rank. The total probability is then just the last C(). Now the chance that this opponent can beat a given hole H is just C(H) / C(all). That's very fast, but note that we've just made some more approximations; we dropped the fancy stuff we were doing to exclude overlaps and to handle ties. This is "GetWeightedPWinVs1".
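A sketch of the O(1) form, assuming the holes are sorted weakest-first (the names are illustrative, not the exact GetWeightedPWinVs1 signature) :

```
// C[i] = sum of the opponent's hole probabilities over ranked holes 0..i,
// with holes sorted weakest-first. The pwin of the hole at rank index i is
// then just the fraction of opponent weight at or below that rank.
struct RankedCumulative
{
    float C[1326];
};

float PWinFromCumulative(const RankedCumulative & rc, int myHoleRankIndex)
{
    float cAll = rc.C[1325];
    if (cAll <= 0.0f) return 1.0f;
    return rc.C[myHoleRankIndex] / cAll;
}
```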

This O(1) fast query is important, however, when you do opponent modeling. When I run my *own* AI it only needs to run on my actual hole, and I consider how I do against all possible opponent holes, so it's O(N). When I run the opponent AI, I have to model all N possible opponent holes, so if I did the same thing it would be O(N^2). Instead, I use the fixed cumulative probability tables when running opponent AI, so their query is O(1) and the total is still O(N). The sorted order of what hole beats what other hole only needs to be computed when the table cards change, eg. once per street, so that's no problem. The cumulative probability table has to be updated after we do the Bayesian update of opponent hole probabilities, so only once per actual observed action.

To get your PWin vs K opponents, you need the probability that you beat them all, which means multiplying the probabilities. In theory to be correct you should also do exclusions - eg. for each opponent, if they have a given hole it means the other guys can't have that hole. However, if you did this the cost would be O(N^K) since for each hole of any given opponent you would have to do a separate sim for all other opponents. In practice I approximate and ignore the exclusions which means I just sim each opponent separately and then multiply together my probability of winning. This makes it O(N*K). This is "GetWeightedPWinVsN".
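The approximation in code is trivial (a sketch; the real GetWeightedPWinVsN does the per-opponent queries itself) :

```
// Independence approximation for K opponents : multiply the per-opponent win
// chances, ignoring card-removal exclusions between them.
float PWinVsN_Sketch(const float pWinVsEachOpponent[], int numOpponents)
{
    float p = 1.0f;
    for (int i = 0; i < numOpponents; i++)
        p *= pWinVsEachOpponent[i];
    return p;
}
```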

Now we need to handle draws. One way would be to give each hole a chance of winning vs. each other hole, rather than just a binary win/lose. But that would ruin our simple sorted order of holes. Instead, on the flop & turn, we consider all possible future cards that can come off, and generate a simple hole ranking for each case. For each of those, we do things just like we did on the river. So to get pwins with the draw, we just average over all possible draws. To be correct you need to exclude the draw cards from the possible opponent holdings, and exclude your cards from the draw. On the flop there are 47 draw cards, on the turn there are 46, so we multiply our storage & CPU needs by a factor of 47. This is "HoleRankingWithDraw".
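The averaging step itself is simple (a sketch; HoleRankingWithDraw also handles the exclusions mentioned above) :

```
// With a draw, pwin is just the average of the per-draw-card river-style pwins
// (47 cases on the flop, 46 on the turn).
float PWinWithDraw_Sketch(const float pWinGivenDraw[], int numDrawCards)
{
    double sum = 0.0;
    for (int d = 0; d < numDrawCards; d++)
        sum += pWinGivenDraw[d];
    return (float)(sum / numDrawCards);
}
```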

This is all good as long as we're only looking at actions in the past, but if we want to simulate into the future we need something even faster. I want to be able to do things like "if I raise, and he calls - update both of our hole probabilities for that situation, and tell me my Pwin in that case". I can't afford to update the entire hole probability structure for that.

To do that faster I use bucketing. This is "HoleProbabilityBuckets" ; in GB I use 10 buckets. Instead of considering all 1326 possible holes, the holes are divided into 10 groups based on how strong they are (in particular, the division is done by the chance of winning vs. the current average hole probability distribution). For each player I track the probability that they have something in each of the buckets, but without knowing which hole in that bucket they might have. Basically this is a quantized approximate spectrum of how strong of a hand they have. For each bucket, I precompute a HoleRanking that tells me how any hole would do against something in that bucket (the O(1) query). Now, to get a Pwin vs. any probability spectrum, I just look at the chance of having something in each bucket, then use the precomputed pwin vs. that bucket, and make a weighted sum. This is what's happening in the static versions of "GetWeightedPWin" that take a "pRankingsVsBuckets" parameter.
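The bucketed query is then just a weighted sum, something like this (a sketch of the idea, not the exact static GetWeightedPWin code) :

```
const int c_numBuckets = 10;

// bucketProb[b]   : probability the opponent holds something in bucket b
// pWinVsBucket[b] : precomputed pwin of my hole vs. a random hole from bucket b
float PWinVsBuckets_Sketch(const float bucketProb[c_numBuckets],
                           const float pWinVsBucket[c_numBuckets])
{
    float num = 0.0f, den = 0.0f;
    for (int b = 0; b < c_numBuckets; b++)
    {
        num += bucketProb[b] * pWinVsBucket[b];
        den += bucketProb[b];
    }
    return (den > 0.0f) ? num / den : 1.0f;
}
```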

When you simulate what holes people would have in the future if they did certain moves, all you have to do is update the probability they hold something in each bucket. This is only 10 floats so you can easily push/pop it when you simulate to handle the branching tree of possible futures.

In AIShared, the basic call "GetChanceOfWinningAgainstCurrentEstimate" uses buckets and is fast. "GetChanceOfWinningAgainstCurrentEstimateAccurateExpensive" does not use buckets, and I use it only for the player's own AI. The "OpponentPrediction" struct lets you create a possible future scenario and then ask what the chance of winning is in that scenario.

Sim vs. TTH

(this is mostly in PostFlopSim1.cpp)

TTH (Turbo Texas Holdem) is the old way of doing poker bots; it's a simple rule based system, like the board shows this, I have this, there's been one bet, what do I do? Because of that it's very fast. There's an implementation of all the TTH bots called "mabots" out there on the net that I put in GB.

The interesting thing about the TTH bots is that if you use them for your OpponentModel function, they're fast enough that you can actually do a full tree simulation of future action.

When it's my turn to choose {fold,call,raise}, I simply try all 3. Then I let the next opponent take his turn using the TTH OpponentModel, then the next guy goes, etc. When action comes back to me I take another 3-way branch (obviously the "fold" branch is a simple leaf so only 2 branches continue). If it gets to a showdown you simply see who has the best hand. If you finish the street and need to draw a card, you draw all possible cards (a 46 or 47 way branch). You then compute the EV for the original player in each leaf, and he simply picks the best one.

This simulation also must be done over all possible hole cards for the opponents, weighted by the current Bayesian probability of various holes for them. Note that within the simulation we don't do any Bayesian probability updating at all; it's a very simple non-probabilistic concrete simulation of the future.
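Here's a very condensed sketch of the tree walk (this is not the real PostFlopSim1 code; the state type and helpers below are hypothetical placeholders, and the outer loop that weights over possible opponent holes is omitted) :

```
#include <algorithm>

enum Action { FOLD = 0, CALL = 1, RAISE = 2 };

struct SimState { int pot, street, toAct; /* bets, assigned holes, ... (elided) */ };

// hypothetical helpers standing in for the real simulation plumbing :
bool     IsHandOver(const SimState & s);
bool     NeedsDrawCard(const SimState & s);
bool     CardIsLive(const SimState & s, int card);
int      CurrentPlayer(const SimState & s);
SimState WithDraw(const SimState & s, int card);
SimState WithAction(const SimState & s, Action a);
float    LeafEV(const SimState & s, int heroSeat);            // showdown or uncontested pot
float    TTHActionProbability(const SimState & s, Action a);  // the fast TTH opponent model

float SimEV(const SimState & state, int heroSeat)
{
    if (IsHandOver(state))
        return LeafEV(state, heroSeat);

    if (NeedsDrawCard(state))
    {
        // 46- or 47-way branch : average over all live draw cards
        double sum = 0.0; int n = 0;
        for (int card = 0; card < 52; card++)
            if (CardIsLive(state, card)) { sum += SimEV(WithDraw(state, card), heroSeat); n++; }
        return (float)(sum / n);
    }

    if (CurrentPlayer(state) == heroSeat)
    {
        // my turn : try all 3 actions, keep the best EV
        float best = -1e30f;
        for (int a = FOLD; a <= RAISE; a++)
            best = std::max(best, SimEV(WithAction(state, (Action)a), heroSeat));
        return best;
    }

    // opponent's turn : weight the 3 actions by the TTH model's probabilities
    float ev = 0.0f;
    for (int a = FOLD; a <= RAISE; a++)
        ev += TTHActionProbability(state, (Action)a) * SimEV(WithAction(state, (Action)a), heroSeat);
    return ev;
}
```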

This is implemented in "PostFlopSim1". In practice, running this full tree is still a bit too slow (except in one case : heads up on the river, or close to that scenario). But we can limit the time spent without giving up much accuracy.

First of all, very deep traversal does not affect EV very much. Once you've both put in 3 raises on a round it really doesn't matter much whether you put in another one or not. What that means is you don't need to do the {F,C,R} 3-way branch on yourself very many times. In particular, I do it twice if the game has 3 players, and 3 times if the game has 2 players. (I also don't run the sim with more than 3 players - partly this is because the larger the # of players, the easier it is to just make a simple heuristic decision).

The other trick we do is not running all opponent holes. If there are only 2 players (you and one opponent), I just weight on all opponent holes (1326 minus the ones that conflict with your hole or the table). If there are 3 players, that would be (47*46*45*44)/4 = 1,070,190 simulations, which is too many. Instead I pick only 24 representative holes for each opponent. These holes are chosen very carefully. First, the chance of choosing each hole is weighted by its Bayesian probability. Second, they are evenly spaced in hand strength. That guarantees that even a small sampling gives me one each of all the crucial holes. This function is "DrawSortedLinearRandomHoles". 24 holes doesn't sound like much, but after the flop a player's hand range is generally small, and only the exact identity of the very best holes matters; exactly which representative you choose for the very bad holes doesn't really matter.
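Here's one way to do that kind of sampling, as a deterministic stratified sketch (it mirrors the idea behind DrawSortedLinearRandomHoles - probability-weighted picks spread evenly across the strength order - but is not necessarily the exact implementation) :

```
#include <vector>

// holesSortedByStrength : hole indices sorted weakest to strongest
// weight                : Bayesian P(hole), indexed by hole
std::vector<int> PickRepresentativeHoles(const std::vector<int> & holesSortedByStrength,
                                         const std::vector<float> & weight,
                                         int numSamples)
{
    double total = 0.0;
    for (size_t i = 0; i < holesSortedByStrength.size(); i++)
        total += weight[holesSortedByStrength[i]];

    std::vector<int> picks;
    if (total <= 0.0) return picks;

    double step = total / numSamples;
    double nextTarget = 0.5 * step;        // sample at the middle of each probability slice
    double accum = 0.0;
    for (size_t i = 0; i < holesSortedByStrength.size() && (int)picks.size() < numSamples; i++)
    {
        accum += weight[holesSortedByStrength[i]];
        while (accum >= nextTarget && (int)picks.size() < numSamples)
        {
            picks.push_back(holesSortedByStrength[i]);   // heavy holes may be picked more than once
            nextTarget += step;
        }
    }
    return picks;
}
```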

That's it for the sim. There's one other interesting thing I did with TTH. I ran a sim game where I played all the TTH bots against the Poki bots and gathered PokerTracker style statistics on all of that. That way I built a database of what kind of statistics each TTH bot generates. Now, when I'm observing some player, I track statistics on him. I can match those observed stats to the database of stats on the TTH bots, and find the bot which matches his pattern as closely as possible. The assumption is that two players who generate the same stats are using similar logic. You can then use that TTH bot as part of the OpponentModel for that player.

In practice, neither of these worked all that well against real players, because the TTH bots are just so weird and bad. The "Sim" bot is the very best possible bot you can write against a TTH opponent. In fact, the Sim bot is playing 100% perfect poker - assuming it has the right model for the opponent it is making the best possible decision at all times. But the real world opponent is not the TTH bot, and they are a bad fit, because the TTH bots do some very stupid things, and the Sim bot in fact actively seeks to trick them into those stupid things (it goes for bluff-raises on the river a lot because TTH folds too much, it goes for check-raises because TTH continues aggression too much, etc. etc.)

Preflop Pushbot

(this is mostly in Pushbot.cpp)

I wrote an exact simulating bot for preflop push/fold play. There are two key assumptions : 1. The actions people take are limited. If the pot is not opened, I can only open or fold (an open is an initial raise, usually to 3 or 4 BB). If the pot is opened I can only shove or fold. 2. The hand ranges to be considered are only a linear range from AA down to X.

I'm doing a game theoretic solution. That means I'm not just simulating possible moves, I'm simulating possible *strategies*. I assume that every player knows the other players' strategies, and each one chooses the best possible strategy for themselves. This is the game theoretic equilibrium. The players don't know their opponents' actual cards, but they do know what they would do with each hole.

More about the hand ranges : In general, for each player, you would have to consider strategies on every one of the 1326 holes. Since it's preflop, suits don't matter, so there are actually only 169 different holes. A "strategy" consists of a raise or fold for each hole, which means there are 2^169 possible strategies. That's way too many to try them all.

The approximation we make is that we assume that if you raise with hole X, then you also raise with all holes stronger than X. That means a strategy can be labeled just by the weakest hole that you will raise with. I call this your "range" and it's just a number from 0 to 168 (0 means you only play AA, there's no null strategy, you always at least play AA).

The definition of "stronger" for purposes of this range comes from "c_holeRankAllin" in HoleStats, which comes from "c_probabilityWinsRiver_g[3]". Note that this is in fact an approximation - it does not necessarily give you the best possible strategy. One source of error is that we only consider the probability of winning vs. 3 players, when in fact a real strategy should consider various things like how often the hand flops good, how many people are actually in the pot, etc. Another source of error is that you might want to just call sometimes with mediocre hands, and you might want to shove the hands that are too bad to call with (as bluffs). (Note : it's a common misconception that you should bluff your weakest hands; that's not true, in fact you should bluff the best hands which are too weak to call or raise with for value.) I did some measurements and I found that the "strong range approximation" is very good; you lose less than 1% of EV.

A few holes in range order : {AA,KK,AKs,QQ,AKo,JJ,TT,AQs,AQo,99,AJs,AJo,88,ATs,77,KQs,ATo,66,KQo,A9s,KJs,55,A8s,A7s}

So for example, strategy range 6 means you play {AA,KK,AKs,QQ,AKo,JJ,TT}

Now with this approximation I can very easily solve preflop problems. Say for example it folded to the small blind. The small blind is going to open some range A. The BB will then shove with range B and fold the rest. The small blind then calls that shove some of the time with range C, (C <= A). To solve this you just brute force try them all. For all A, try all B, try all C - then each guy makes the best decision for himself. This gives you optimal open and shove ranges. This particular problem is implemented in "FindSBOpenRange".
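A brute-force sketch of that search (in the spirit of FindSBOpenRange; the EV functions here are hypothetical placeholders built on the range-vs-range pwin tables described below) :

```
const int c_numRanges = 169;

// hypothetical : EV for the SB / BB when SB opens range A, BB shoves range B over it,
// and SB calls the shove with range C (C a subset of A)
float EV_SB(int A, int B, int C);
float EV_BB(int A, int B, int C);

void FindSBOpenRange_Sketch(int * outOpen, int * outShove, int * outCall)
{
    float bestEV_A = -1e30f;
    for (int A = 0; A < c_numRanges; A++)                    // SB's open range
    {
        // BB best-responds with a shove range, knowing SB will then pick its best call range
        float bestEV_B = -1e30f; int bestB = 0, bestC_forB = 0;
        for (int B = 0; B < c_numRanges; B++)
        {
            float bestEV_C = -1e30f; int bestC = 0;
            for (int C = 0; C <= A; C++)                     // call range is a subset of the open range
            {
                float ev = EV_SB(A, B, C);
                if (ev > bestEV_C) { bestEV_C = ev; bestC = C; }
            }
            float ev = EV_BB(A, B, bestC);
            if (ev > bestEV_B) { bestEV_B = ev; bestB = B; bestC_forB = bestC; }
        }
        float ev = EV_SB(A, bestB, bestC_forB);
        if (ev > bestEV_A) { bestEV_A = ev; *outOpen = A; *outShove = bestB; *outCall = bestC_forB; }
    }
}
```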

To solve something like 3-player preflop poker (button + blinds) we can just use the same technique and try all strategies and it's fast enough, but N-way you start running into trouble. The number of strategies is 169^N , which gets bad. (in practice you don't actually need to try all 169, you can restrict the range to about 80 strategies that are likely, and you can also binary search to optimize strategies which reduces it further (though this is a bit rough because EV is not a smooth curve)).

To make the N-way case more tractable, we make another approximation. The nice thing is that we're only considering shoving over opens, and after we shove it's very unlikely that a lot of people will get involved in the hand. Note that this only works if we have 15 BB's or more; if we're much shorter than that, the "shove is scary" approximation breaks down. We assume that at most 2 other people will call a shove, so the hand is never more than 3-way all in. Even if I, the original pusher, am not scary, once somebody calls my shove it's very unlikely others will fuck around much.

What I do is I define a range that I call "verygood". It's range = 4, which is 5 holes : {AA,KK,AKs,QQ,AKo}. Anybody who holds those cards will always play them no matter what. If 1 person calls my shove, then everyone else will fold unless they have a hand in "verygood" or they've already put a lot of money in the pot (eg. they were the original opener).

What this means is that instead of considering 2^N possible configurations of call/not-call for all my opponents after I shove, I'm only considering N of them - who calls my shove, and then I add in the chance of anybody else having a "verygood" hand, and they will also call.

To compute the EV's and do the simulation I use precomputed pwin tables for ranges. Note that these are not hole vs. hole tables, they are range vs. range, and include proper counting of hole multiplicity and exclusions (eg. if I know I hold AA - ie. my range is 0 - then there's only 1 other way for him to have AA). This is "pwin_range_vs_range". A small detail : if you look at that table you'll see it's not exactly monotonic; that's a flaw in the simple range strategy approximation. In practice we want the ranges to act monotonic, eg. range R should always be better than range R+1, so we use "pwin_range_vs_range_monotonic".
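One plausible way to enforce that monotonicity on a baked table, as a sketch (this is not necessarily how pwin_range_vs_range_monotonic is actually built) :

```
// Force the pwin of a tighter range to be at least as good as the next looser range,
// walking from the weak end (range numRanges-1) back toward range 0 (AA only).
void MakeMonotonic(float pwinVsFixedOpponentRange[], int numRanges)
{
    for (int r = numRanges - 2; r >= 0; r--)
        if (pwinVsFixedOpponentRange[r] < pwinVsFixedOpponentRange[r + 1])
            pwinVsFixedOpponentRange[r] = pwinVsFixedOpponentRange[r + 1];
}
```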

There's also a 3-way table "pwin_hole_v_ranges" , but storing all 169*169*169 is too big so I only store it for the best 32 ranges and then extrapolate if necessary in "GetPWinThreeway". This is based on the assumption that it is rare to play a 3-way all-in unless all 3 people have a pretty good hand.

You do need to be very careful about hole blocking, because when people are playing tight ranges they block each other a lot. For example say I know someone tight opened, and then I shove a very tight range and someone behind me calls - we have all said we have very good hands, and that makes it very unlikely that anybody else has AA or KK. This is counted in "num_holes_unblocked_by_range".
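Counting the combos that survive card removal is straightforward; here's a sketch in the spirit of num_holes_unblocked_by_range (the hole layout is illustrative) :

```
#include <vector>

struct Hole { int card1, card2; };     // card indices 0..51

int CountUnblockedHoles(const std::vector<Hole> & holesInRange,
                        const std::vector<int> & deadCards)   // board + known/assumed holes
{
    int count = 0;
    for (size_t i = 0; i < holesInRange.size(); i++)
    {
        bool blocked = false;
        for (size_t j = 0; j < deadCards.size(); j++)
        {
            if (holesInRange[i].card1 == deadCards[j] || holesInRange[i].card2 == deadCards[j])
            {
                blocked = true;
                break;
            }
        }
        if (!blocked)
            count++;
    }
    return count;
}
```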

BTW this math is all sort of similar to the Sklansky-Chubukov Rankings. Sklansky-Chubukov tells you where you can open shove profitably vs. one other opponent. I've extended this to the general case of N opponents, arbitrary stacks, previous amounts in the pot, etc.

There's another factor I haven't talked about, which is for people who have already acted (limping or opening or whatever), we need to put them on a hand range. Anyone who hasn't acted yet has range = 168 (all holes), but others obviously have some range which depends on how tight they are, what position they're in, etc. This is a heuristic evil.

The same code is also used to compute what you should open if you assume other players are playing push/fold. Also, it's mostly useful for short stack or tournament scenarios, but it can also be used to compute optimal 3bet and 4bet/shove ranges for full stack NLH ring games.

Other good code bits

PokerHistory.cpp - is a lightweight hand database, way faster than SQL or something. It builds a transaction file and a compiled database so it's robust even if the app crashes; it won't lose records.

HandRecord.cpp - supports parsing hand histories for most of the major sites.

MainWindow/SpyManager has a lot of stuff for managing lots of poker tables, like smart auto activation. Allows playing with a gamepad, auto fish finding, table scanning, auto table opening, etc.

SimpleSpy has lots of nice overlay drawing stuff including hand action summary and last hand popups. It also has some hacky Omaha stuff that should've been factored out.


Charles Bloom [cb][at][cbloom][dot][com]
Send Me Email

