Micmac Manual
[in package MICMAC]
- [system] "micmac"
- Version: 0.0.2
- Description: Micmac is mainly a library of graph search algorithms such as alpha-beta, UCT and beam search, but it also has some MCMC and other slightly unrelated stuff.
- Licence: MIT, see COPYING.
- Author: Gábor Melis <mega@retes.hu>
- Mailto: mega@retes.hu
- Homepage: http://melisgl.github.io/mgl-gpr
- Bug tracker: https://github.com/melisgl/mgl-gpr/issues
- Source control: GIT
- Depends on: alexandria, mgl-pax
1 Introduction
1.1 Overview
MICMAC is a Common Lisp library by Gábor Melis focusing on graph search algorithms.
1.2 Links
The official repository and the HTML documentation for the latest version are linked from the homepage above.
2 Graph Search
[function] alpha-beta state &key (depth 0) alpha beta call-with-action maybe-evaluate-state list-actions record-best
Alpha-beta pruning for two player, zero-sum maximax (like minimax but both players maximize and the score is negated when passed between depths). Return the score of the game `state` from the point of view of the player to move at `depth` and as the second value the list of actions of the principal variation.

`call-with-action` is a function of (`state` `depth` `action` FN). It carries out `action` (returned by `list-actions` or `nil`) to get the state corresponding to `depth` and calls FN with that state. It may destructively modify `state` provided it undoes the damage after FN returns. `call-with-action` is called with `nil` as `action` for the root of the tree, in which case `state` need not be changed. FN returns the same kinds of values as `alpha-beta`; they may be useful for logging.

`maybe-evaluate-state` is a function of (`state` `depth`). If `state` at `depth` is a terminal node, then it returns the score from the point of view of the player to move and, as the second value, a list of actions that lead from `state` to the position that was evaluated. The list of actions is typically empty. If we are not at a terminal node, then `maybe-evaluate-state` returns `nil`.

`list-actions` is a function of (`state` `depth`) and returns a non-empty list of legal candidate moves for non-terminal nodes. Actions are tried in the order `list-actions` returns them: trying stronger moves first makes the pruning more effective. `call-with-action`, `maybe-evaluate-state` and `list-actions` are mandatory.

`record-best`, if non-nil, is a function of (`depth` `score` `actions`). It is called when a new best action is found at `depth`. `actions` is a list of all the actions in the principal variation corresponding to the newly found best score. `record-best` is useful for graceful degradation in case of timeout.

`alpha` and `beta` are typically `nil` (equivalent to -infinity and +infinity), but any real number is allowed if the range of scores can be bounded.

See test/test-alpha-beta.lisp for an example.
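Beyond the test file, here is a hedged sketch (not from the library's test suite) of driving `alpha-beta` for single-pile Nim: the state is a mutable count of stones, a move removes one or two stones, and the player who cannot move loses.

```common-lisp
(defun play-nim (n-stones)
  (let ((state (list n-stones)))
    (micmac:alpha-beta
     state
     ;; Apply ACTION destructively, call FN, then undo the damage as
     ;; the contract of CALL-WITH-ACTION allows. ACTION is NIL at the
     ;; root, in which case the state is left alone.
     :call-with-action
     (lambda (state depth action fn)
       (declare (ignore depth))
       (when action (decf (first state) action))
       (multiple-value-prog1 (funcall fn state)
         (when action (incf (first state) action))))
     ;; A state with no stones left is terminal: the player to move
     ;; cannot move and loses.
     :maybe-evaluate-state
     (lambda (state depth)
       (declare (ignore depth))
       (when (zerop (first state))
         (values -1 ())))
     ;; Legal moves: take 1 stone, or 2 if there are at least 2.
     :list-actions
     (lambda (state depth)
       (declare (ignore depth))
       (if (= 1 (first state)) '(1) '(1 2))))))

;; (play-nim 3) should return -1: three stones lose for the player to
;; move. (play-nim 4) should return 1, with the principal variation
;; starting with the action 1.
```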
[function] beam-search start-nodes &key max-depth (n-solutions 1) (beam-width (length start-nodes)) expand-node-fn expand-beam-fn score-fn upper-bound-fn solutionp-fn (finishedp-fn solutionp-fn)
In a graph, search for the nodes with the best scores using beam search. That is, starting from `start-nodes`, perform a breadth-first search, but at each depth only keep `beam-width` nodes with the best scores. Keep the best `n-solutions` (at most) complete solutions. Discard nodes known to be unable to get into the best `n-solutions` (due to `upper-bound-fn`). Finally, return the solutions and the active nodes (the beam) as adjustable arrays sorted by score in descending order.

`start-nodes` is a sequence of elements of arbitrary type. `score-fn`, `upper-bound-fn`, `solutionp-fn` and `finishedp-fn` are all functions of one argument: the node. `solutionp-fn` checks whether a node represents a complete solution (i.e. some goal is reached). `score-fn` returns a real number that's to be maximized; it's only called for nodes for which `solutionp-fn` returned true. `upper-bound-fn` (if not `nil`) returns a real number that is equal to or greater than the score of all solutions reachable from that node. `finishedp-fn` returns true iff there is nowhere to go from the node.

`expand-node-fn` is also a function of a single node argument. It returns a sequence of nodes 'one step away' from its argument node. `expand-beam-fn` is similar, but it takes a vector of nodes and returns all nodes one step away from any of them. It's enough to provide either `expand-node-fn` or `expand-beam-fn`. The purpose of `expand-beam-fn` is to allow more efficient, batch-like operations.

See test/test-beam-search.lisp for an example.
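As another hedged sketch (again not the library's test code), the call below grows bit strings one bit at a time, looking for length-4 strings with the most 1 bits; the upper bound optimistically assumes every remaining bit can be a 1.

```common-lisp
(micmac:beam-search
 '(())                               ; start from the empty bit string
 :beam-width 2
 :n-solutions 3
 :expand-node-fn (lambda (node)
                   ;; Append a 0 or a 1 (strings are lists of bits).
                   (list (cons 0 node) (cons 1 node)))
 :score-fn (lambda (node) (count 1 node))
 :upper-bound-fn (lambda (node)
                   (+ (count 1 node) (- 4 (length node))))
 :solutionp-fn (lambda (node) (= 4 (length node))))
;; Should return two adjustable arrays sorted by score: the best
;; solutions found (headed by the all-ones string with score 4) and
;; what remains of the beam.
```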
[function] parallel-beam-search start-node-seqs &key max-depth (n-solutions 1) beam-width expand-node-fn expand-beams-fn score-fn upper-bound-fn solutionp-fn (finishedp-fn solutionp-fn)
This is very much like `beam-search`, except it solves a number of instances of the same search problem starting from different sets of nodes. The sole purpose of `parallel-beam-search` is to amortize the cost of `expand-beams-fn` if possible.

`expand-beams-fn` is called with a sequence of beams (i.e. a sequence of sequences of nodes) and it must return another sequence of sequences of nodes. Each element of the returned sequence holds the nodes that are reachable from the nodes in the corresponding element of its argument sequence. `parallel-beam-search` returns a sequence of solution sequences and a sequence of active node sequences.

See test/test-beam-search.lisp for an example.
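Continuing the bit-string sketch above (still hypothetical code), `expand-beams-fn` receives one beam per problem instance and must return the expansions in the same shape. Here the inner mapping just reuses the single-node expansion; a real application would replace it with one batched computation.

```common-lisp
(micmac:parallel-beam-search
 (list '(()) '(()))         ; two instances, each from the empty string
 :beam-width 2
 :expand-beams-fn (lambda (beams)
                    ;; One sequence of expanded nodes per input beam.
                    (map 'list
                         (lambda (beam)
                           (let ((expanded '()))
                             (map nil (lambda (node)
                                        (push (cons 0 node) expanded)
                                        (push (cons 1 node) expanded))
                                  beam)
                             expanded))
                         beams))
 :score-fn (lambda (node) (count 1 node))
 :solutionp-fn (lambda (node) (= 4 (length node))))
```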
2.1 UCT
[in package MICMAC.UCT]
UCT Monte Carlo tree search. This is what makes current Go programs tick. And Hex programs as well, for that matter. This is a cleanup and generalization of code originally created in the course of the Google AI Challenge 2010.
For now, the documentation is just a reference. See
test/test-uct.lisp for an example.
- [class] uct-node
A node in the `uct` tree. Roughly translates to a state in the search space. Note that the state itself is not stored explicitly, but it can be recovered by `replaying' the actions from the starting state or by customizing `make-uct-node`.
- [accessor] edges uct-node
Outgoing edges.
[accessor] average-reward uct-node (:average-reward = 0)
Average reward over random playouts started from below this node. See `update-uct-statistics` and REWARD.
- [class] uct-edge
An edge in the `uct` tree. Represents an action taken from a state. The value of an action is the value of its target state, which is not quite as generic as it could be; feel free to specialize `average-reward` for the edges if that's not the case.
[accessor] from-node uct-edge (:from-node)
The node this edge starts from.
[accessor] to-node uct-edge (= nil)
The node this edge points to if the edge has been visited, or `nil`.
- [function] visited-edges node
- [generic-function] edge-score node edge exploration-bias
[generic-function] select-edge node exploration-bias
Choose an action to take from a state, in other words an edge to follow from `node` in the tree. The default implementation chooses randomly from the yet unvisited edges, or if there are none, moves down the edge with the maximum `edge-score`. If you are thinking of customizing this, for example to make it choose the minimum at odd depths, then you may want to consider specializing REWARD or `update-uct-statistics` instead.
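The manual does not spell out the default `edge-score`; in UCT it is classically a UCB1-style quantity that adds an exploration bonus to the average reward. Purely as an illustration, a custom method might look like the sketch below, where the visit counters are this sketch's own slots, not documented accessors of the library.

```common-lisp
;; Hypothetical subclasses carrying their own visit counters.
(defclass my-node (micmac.uct:uct-node)
  ((visits :initform 1 :accessor visits)))

(defclass my-edge (micmac.uct:uct-edge)
  ((visits :initform 1 :accessor visits)))

;; UCB1: average reward plus a bonus that shrinks as the edge is
;; visited more often relative to its parent node.
(defmethod micmac.uct:edge-score ((node my-node) edge exploration-bias)
  (+ (micmac.uct:average-reward edge)
     (* exploration-bias
        (sqrt (/ (log (visits node))
                 (visits edge))))))
```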
[generic-function] outcome->reward node outcome
Compute the reward for a node in the tree from `outcome` that is the result of a playout. This is called by the default implementation of `update-uct-statistics`. This is where one typically negates depending on the parity of `depth` in two player games.
[generic-function] update-uct-statistics root path outcome
Increment the number of visits and update the average reward in nodes and edges of `path`. By default, edges simply get their visit counter incremented while nodes also get an update to `average-reward` based on what `outcome->reward` returns.
[generic-function] make-uct-node parent edge parent-state
Create a node representing the state that `edge` leads to (from `parent`). Specialize this if you want to keep track of the state, which is not done by default as it can be expensive, especially in light of TAKE-ACTION mutating it. The default implementation simply creates an instance of the class of `parent` so that one can start from a subclass of `uct-node` and be sure that that class is going to be used for nodes below it.
[generic-function] state node parent edge parent-state
Return the state that corresponds to `node`. This is not a straightforward accessor unless `node` is customized to store it. The rest of the parameters are provided so that one can reconstruct the state by taking the action of `edge` in the `parent-state` of `parent`. It's allowed to mutate `parent-state` and return it. This function must be specialized.
[generic-function] list-edges node state
Return a list of edges representing the possible actions from `node` with `state`. This function must be customized.
[generic-function] play-out node state reverse-path
Play a random game from `node` with `state` and return the outcome that's fed into `update-uct-statistics`. The way the random game is played is referred to as the `default policy' and that's what makes or breaks `uct` search. This function must be customized.
[function] uct &key root fresh-root-state exploration-bias max-n-playouts
Starting from the `root` node, search the tree, expanding it by one node for each playout. Finally, return the mutated `root`. `root` may be the root node of any tree; it need not be a single node with no edges. `fresh-root-state` is a function that returns a fresh state corresponding to `root`. This state will be destroyed unless special care is taken in `state`.
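To tie the mandatory generic functions together, here is a hedged sketch (not the library's test code) for single-pile Nim: the state is the number of stones, a move takes one or two stones, and the player who cannot move loses. The `depth` and `action` slots are this sketch's own additions, not part of the documented API, and the reward bookkeeping is only meant to illustrate where the negation goes.

```common-lisp
(defclass nim-node (micmac.uct:uct-node)
  ((depth :initarg :depth :initform 0 :reader node-depth)))

(defclass nim-edge (micmac.uct:uct-edge)
  ((action :initarg :action :reader action)))

;; Track the depth so rewards can be negated per player below.
(defmethod micmac.uct:make-uct-node ((parent nim-node) edge parent-state)
  (make-instance 'nim-node :depth (1+ (node-depth parent))))

;; Reconstruct the state by applying the edge's action. The state is
;; an immutable integer here, so there are no mutation worries.
(defmethod micmac.uct:state ((node nim-node) parent edge parent-state)
  (- parent-state (action edge)))

;; Possible actions: take 1 stone, or 2 if at least 2 remain. No
;; edges for the terminal state.
(defmethod micmac.uct:list-edges ((node nim-node) state)
  (unless (zerop state)
    (loop for take in (if (= state 1) '(1) '(1 2))
          collect (make-instance 'nim-edge :from-node node
                                           :action take))))

;; Default policy: random moves until no stones remain. The outcome
;; is from the first player's point of view: +1 if the player moving
;; at even depths wins, -1 otherwise.
(defmethod micmac.uct:play-out ((node nim-node) state reverse-path)
  (declare (ignore reverse-path))
  (loop for s = state then (- s (1+ (random (min 2 s))))
        for n-moves upfrom 0
        when (zerop s)
          return (let ((mover-won (if (oddp n-moves) 1 -1)))
                   (if (evenp (node-depth node))
                       mover-won
                       (- mover-won)))))

;; Each node scores the outcome from its own player's point of view.
(defmethod micmac.uct:outcome->reward ((node nim-node) outcome)
  (if (evenp (node-depth node)) outcome (- outcome)))

;; Run the search from 5 stones:
;; (micmac.uct:uct :root (make-instance 'nim-node)
;;                 :fresh-root-state (constantly 5)
;;                 :exploration-bias 1.0
;;                 :max-n-playouts 1000)
```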
3 Metropolis Hastings
[in package MICMAC.METROPOLIS-HASTINGS with nicknames MICMAC.MH]
Generic interface for the Metropolis-Hastings algorithm, also Metropolis Coupled MCMC.
References:
http://en.wikipedia.org/wiki/Metropolis–Hastings_algorithm
Markov Chain Monte Carlo and Gibbs Sampling. Lecture Notes for EEB 581, version 26 April 2004, (c) B. Walsh 2004. http://web.mit.edu/~wingated/www/introductions/mcmc-gibbs-intro.pdf
Geyer, C.J. (1991) Markov chain Monte Carlo maximum likelihood
For now, the documentation is just a reference. See
test/test-metropolis-hastings.lisp for an example.
- [class] mc-chain
A simple Markov chain for Metropolis-Hastings. With temperature, it is suitable for `mc3`.
[accessor] temperature mc-chain (:temperature = 1.0d0)
The PROBABILITY-RATIO of samples is raised to the power of 1 / `temperature` before calculating the acceptance probability. This effectively flattens the peaks if `temperature` > 1, which makes it easier for the chain to traverse deep valleys.
[function] jump-to-sample chain jump &key (result-sample (state chain))
From the current state of `chain` make `jump` (from the current distribution of `chain`) and return the sample where we landed. Reuse `result-sample` when possible.
[generic-function] jump-to-sample* chain jump result-sample
This function is called by `jump-to-sample`. It is where `jump-to-sample` behaviour shall be customized.
[generic-function] prepare-jump-distribution chain
Prepare for sampling from the F(X) = Q(SAMPLE->X) distribution. Called by `random-jump`. The around method ensures that nothing is done unless there was a state change.
[generic-function] random-jump chain
Sample a jump from the current distribution of jumps that was computed by `prepare-jump-distribution`.
[generic-function] log-probability-ratio chain sample1 sample2
Return P(`sample1`)/P(`sample2`) in the log domain to avoid overflows; the ratio form is used because it may allow computational shortcuts as opposed to calculating unnormalized probabilities separately.
[generic-function] log-probability-ratio-to-jump-target chain jump target
Return P(`target`)/P(state), also in the log domain, where `jump` is from the current state of `chain` to the `target` sample. This can be specialized for speed. The default implementation just falls back on `log-probability-ratio`.
[generic-function] log-jump-probability-ratio chain jump target
Return Q(TARGET->STATE) / Q(STATE->TARGET) in the log domain, where Q is the jump distribution and `jump` is from the current `state` of `chain` to the `target` sample.
[generic-function] acceptance-probability chain jump candidate
Calculate the acceptance probability of `candidate`, to which `jump` leads from the current `state` of `chain`.
[generic-function] accept-jump chain jump candidate
Called when `chain` accepts `jump` to `candidate`.
[generic-function] reject-jump chain jump candidate
Called when `chain` rejects `jump` to `candidate`. It does nothing by default; it's just a convenience for debugging.
[generic-function] maybe-jump chain jump candidate acceptance-probability
Randomly accept or reject `jump` to `candidate` from the current state of `chain` with `acceptance-probability`.
[generic-function] jump chain
Take a step on the Markov chain. Return a boolean indicating whether the proposed jump was accepted.
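With the necessary methods specialized on a user-defined chain class (none of which is shown here), the sampling loop itself is just repeated calls to `jump`. A hedged sketch, where `copy-sample` is a hypothetical helper and `state` is assumed to read the chain's current sample as in `jump-to-sample`'s default argument above:

```common-lisp
(defun collect-samples (chain n &key (thin 10))
  "Step CHAIN and collect every THINth state, N states in total."
  (loop repeat n
        do (loop repeat thin do (micmac.mh:jump chain))
        collect (copy-sample (micmac.mh:state chain))))
```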
- [class] mc3-chain
High-probability islands separated by low-probability valleys make the chain mix poorly. `mc3-chain` has a number of `hot-chains` that have state probabilities similar to that of the main chain but less jagged. Often it suffices to set the temperatures of the `hot-chains` higher and use the very same base probability distribution.
[generic-function] accept-swap-chain-states mc3 chain1 chain2
Swap the states of `chain1` and `chain2` of `mc3`.
[generic-function] reject-swap-chain-states mc3 chain1 chain2
Called when the swap of states of `chain1` and `chain2` is rejected. It does nothing by default; it's just a convenience for debugging.
[generic-function] maybe-swap-chain-states mc3 chain1 chain2 acceptance-probability
Swap the states of `chain1` and `chain2` of `mc3` with `acceptance-probability`.
[generic-function] jump-between-chains mc3
Choose two chains randomly and swap their states with `mc3` acceptance probability.
[class] enumerating-chain mc-chain
A simple abstract chain subclass that explicitly enumerates the probabilities of the distribution.
- [class] tracing-chain
Mix this in with your chain to have it print a trace of acceptances/rejections.
4 Game Theory
[in package MICMAC.GAME-THEORY]
[function] find-nash-equilibrium payoff &key (n-iterations 100)
Find a Nash equilibrium of a zero-sum game represented by the `payoff` matrix (a 2d matrix or a nested list). `payoff` is from the point of view of the row player: the player who chooses the column wants to minimize, the row player wants to maximize. The first value returned is a vector of unnormalized probabilities assigned to each action of the row player, the second value is the same for the column player, and the third is the expected payoff of the row player in the Nash equilibrium represented by the oddment vectors.
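For instance, matching pennies has a unique equilibrium in which both players mix 50-50 with expected payoff 0, and the iterative approximation should get close to it:

```common-lisp
;; Matching pennies from the row player's point of view: +1 when the
;; two coins match, -1 when they differ.
(micmac.game-theory:find-nash-equilibrium '((1 -1)
                                            (-1 1))
                                          :n-iterations 1000)
;; The two oddment vectors should be roughly uniform (both players
;; play each action with probability about 1/2 after normalization)
;; and the expected payoff of the row player should be close to 0.
```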