Summary of solution methods
1. Pure backtracking
— If the current partial assignment is consistent
— Choose a variable, assign each value from its domain in turn
— Search the resulting sub-tree
2. Forward checking
— Prune values from neighboring variables if they are not supported by the newly assigned one
3. Arc consistency
— Prune similarly for all pairs of values related by a constraint
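A minimal sketch of these methods in Python, under an assumed representation (domains as sets, binary constraints as predicates; none of the names come from the notes). Forward checking is shown explicitly; arc consistency (e.g. AC-3) would repeat the same pruning over all constrained pairs until nothing changes.

```python
# Assumed representation: domains maps each variable to a set of values;
# constraints maps a pair (x, y) to a predicate rel(a, b) that is True
# iff x=a together with y=b is allowed.

def forward_check(var, value, domains, constraints):
    """Prune unsupported values from var's neighbors; None on a wipe-out."""
    new = {v: set(vals) for v, vals in domains.items()}
    new[var] = {value}
    for (x, y), rel in constraints.items():
        if x == var:
            new[y] = {b for b in new[y] if rel(value, b)}
            if not new[y]:
                return None        # a neighbor's domain became empty
        elif y == var:
            new[x] = {a for a in new[x] if rel(a, value)}
            if not new[x]:
                return None
    return new

def backtrack(assignment, domains, constraints):
    if len(assignment) == len(domains):
        return assignment          # every variable assigned consistently
    var = next(v for v in domains if v not in assignment)  # choose a variable
    for value in domains[var]:     # assign each value from its domain in turn
        pruned = forward_check(var, value, domains, constraints)
        if pruned is not None:     # still consistent: search the sub-tree
            result = backtrack({**assignment, var: value}, pruned, constraints)
            if result is not None:
                return result
    return None                    # all values failed: backtrack

# Example: 3-color a triangle (A, B, C pairwise different).
ne = lambda a, b: a != b
doms = {"A": {1, 2, 3}, "B": {1, 2, 3}, "C": {1, 2, 3}}
cons = {("A", "B"): ne, ("B", "C"): ne, ("A", "C"): ne}
print(backtrack({}, doms, cons))   # e.g. {'A': 1, 'B': 2, 'C': 3}
```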
Constraint learning
♦ Learned constraints may be added to the network or kept separately
♦ A separate store of nogoods is the usual arrangement, since learned nogoods tend to be large
— Binary ones may be added to the network and the rest stored separately
— Data structures matter: indexing for rapid inference is important
♦ Every failed branch may add another nogood, so they quickly become too numerous to keep
— Storing them all can require exponential space
♦ Hence common to have a strategy for forgetting them
— e.g. let the longest ones lapse after a while
— or keep just the “tail” and discard a nogood when backtracking leaves the region where it applies
♦ Constraint learning is useful for CSP solvers; it is essential for (clause-learning) SAT solvers
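A sketch of a nogood store along these lines; the interface and limits are assumptions for illustration, not a fixed design:

```python
from collections import defaultdict, deque

class NogoodStore:
    """Illustrative store: short nogoods are indexed permanently (they
    could instead be added to the network); long ones lapse once the
    bounded store fills."""

    def __init__(self, max_long=1000, short_limit=2):
        self.short_limit = short_limit
        self.index = defaultdict(set)       # (var, val) -> short nogoods
        self.long = deque(maxlen=max_long)  # oldest long nogoods lapse

    def add(self, nogood):
        """Record a set of (variable, value) pairs found to be no good."""
        nogood = frozenset(nogood)
        if len(nogood) <= self.short_limit:
            for lit in nogood:              # index for rapid inference
                self.index[lit].add(nogood)
        else:
            self.long.append(nogood)        # bounded: may be forgotten

    def violated(self, assignment):
        """True if the partial assignment contains a recorded nogood."""
        lits = set(assignment.items())
        short = set().union(*(self.index.get(l, set()) for l in lits)) \
                if lits else set()
        return any(ng <= lits for ng in short) or \
               any(ng <= lits for ng in self.long)
```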
Constraint graphs
· vertices are decision variables
· edges are constraints
- The graph contains information about the structure of the problem.
- The dual graph, whose vertices are the constraints and whose edges join two constraints that share a variable, gives yet another view.
- The bipartite graph: variable nodes on one side, constraint nodes on the other, with an edge wherever a variable occurs in a constraint’s scope.
- The constraint graph only shows which decision variables are connected; it is not affected by whether the problem has solutions or not.
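A small sketch computing the three views from constraint scopes alone (the scopes suffice, since the graphs ignore the relations themselves; the example CSP is made up):

```python
from itertools import combinations

scopes = [("A", "B"), ("B", "C"), ("A", "C", "D")]   # illustrative CSP

# Constraint graph: vertices are variables; an edge joins two variables
# that appear together in some constraint.
constraint_graph = {frozenset(p) for s in scopes for p in combinations(s, 2)}

# Dual graph: vertices are constraints; an edge joins two constraints
# that share a variable.
dual_graph = {(i, j) for i, j in combinations(range(len(scopes)), 2)
              if set(scopes[i]) & set(scopes[j])}

# Bipartite graph: an edge joins a variable node to each constraint
# node whose scope contains it.
bipartite_graph = {(v, i) for i, s in enumerate(scopes) for v in s}
```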
Tree-like constraint graphs
- Choose a vertex to be the root of the tree
- Start assigning values at the root
- Don’t assign a value to a variable before its parent in the tree
- Do forward checking at each step
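A sketch of this procedure, assuming the tree is given as a parent-to-children map and each tree edge carries a binary relation (all names illustrative). Since each forward check involves only the edge to a child, failures stay local; if directed arc consistency is established from the leaves up first, the descent is backtrack-free.

```python
def solve_tree(root, children, domains, rel):
    """children: vertex -> list of child vertices;
    rel: (parent, child) edge -> predicate rel(a, b)."""
    assignment = {}

    def assign(var, allowed):
        for value in allowed:              # try each surviving value
            assignment[var] = value
            pruned = []
            for child in children.get(var, ()):
                # Forward checking: keep only child values supported by
                # the parent's value.
                ok = [b for b in domains[child] if rel[(var, child)](value, b)]
                if not ok:
                    break                  # wipe-out: try the next value
                pruned.append((child, ok))
            else:
                if all(assign(c, vals) for c, vals in pruned):
                    return True
            del assignment[var]
        return False

    return assignment if assign(root, domains[root]) else None

# Example: path A - B - C, adjacent values must differ.
children = {"A": ["B"], "B": ["C"]}
doms = {v: {1, 2} for v in "ABC"}
rel = {("A", "B"): lambda a, b: a != b, ("B", "C"): lambda a, b: a != b}
print(solve_tree("A", children, doms, rel))  # e.g. {'A': 1, 'B': 2, 'C': 1}
```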
Symmetry
Value symmetry and variable symmetry are frequently present in CSPs
Example:
Suppose we have a pigeonhole problem: show that it’s impossible to fit 10 pigeons in 9 pigeonholes (without overcrowding)
Answer:
9! backtracks, even with arc consistency; no solution.
But one pigeon looks just like another (to a CSP solver), and one hole looks just like another as well.
A good symmetry-breaker is ∀x∀y((x < y) → (hole(x) < hole(y))); it would be true of exactly one solution if there were just enough holes.
Therefore, the 9! branches are reduced to 1.
The bad part: if a problem has lots of symmetries, we can waste huge amounts of time searching symmetric (and equally empty) sub-spaces, or generating solutions that tell us nothing really new.
The good part: if we explore one of these sub-spaces, we know we can prune all of the others without losing anything essential.
How to use: Note that if solutions are symmetric, partial assignments have (at least) the same symmetries, so early pruning may be possible.
The usual method for removing symmetric sub-problems is to add symmetry-breaking constraints, as in the pigeonhole example above.
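A toy sketch of the pigeonhole search with and without the symmetry-breaker, counting failed calls as backtracks (all names illustrative):

```python
def count_backtracks(n_pigeons, n_holes, break_symmetry=False):
    hole, backtracks = {}, 0

    def place(p):
        nonlocal backtracks
        if p == n_pigeons:
            return True
        for h in range(n_holes):
            if h in hole.values():                  # no overcrowding
                continue
            if break_symmetry and p > 0 and h < hole[p - 1]:
                continue                            # hole(p-1) < hole(p)
            hole[p] = h
            if place(p + 1):
                return True
            del hole[p]
        backtracks += 1                             # dead end: backtrack
        return False

    place(0)
    return backtracks

print(count_backtracks(10, 9))        # huge: explores symmetric sub-spaces
print(count_backtracks(10, 9, True))  # tiny: one representative branch
```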
Tightening CSPs by learning from mistakes
If we found (v1 : b, v2 : r, v3 : b) was no good, remember that combination as a disallowed triple of a ternary (3-ary) constraint.
Never repeat a mistake: don’t backtrack twice for the same reason.
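Continuing the hypothetical NogoodStore sketch from above, recording the mistake once lets the search prune any partial assignment that repeats it:

```python
store = NogoodStore()
store.add([("v1", "b"), ("v2", "r"), ("v3", "b")])   # the disallowed triple

# Any later partial assignment containing the triple is rejected
# before its sub-tree is searched.
partial = {"v1": "b", "v2": "r", "v3": "b", "v4": "g"}
assert store.violated(partial)
```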
Two common ways of defining “better” or “worse” solutions:
- via an objective function: a quantity to be minimized (or maximized)
- via soft constraints: can be violated, but as little as possible
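A brief sketch contrasting the two formulations on a made-up example (the scoring functions are illustrative):

```python
# Via an objective function: keep all constraints hard, minimize a quantity.
def objective(assignment):
    return sum(assignment.values())            # e.g. total cost

# Via soft constraints: each may be violated at a weighted penalty;
# minimize the total penalty, i.e. violate as little as possible.
soft = [(lambda a: a["x"] != a["y"], 3),       # weight-3 preference
        (lambda a: a["x"] < a["z"], 1)]        # weight-1 preference

def penalty(assignment):
    return sum(w for check, w in soft if not check(assignment))

candidates = [{"x": x, "y": y, "z": z}
              for x in range(2) for y in range(2) for z in range(2)]
print(min(candidates, key=objective))          # best by objective
print(min(candidates, key=penalty))            # best by soft constraints
```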