Learning Local Search Heuristics for Boolean Satisfiability
Abstract
GNN (select variable) + local search algorithm.
RL setup: start from an initial assignment X; a GNN with a softmax output acts as the policy function, from which the variable to flip is chosen at each step. Each trajectory receives a reward (1 when an assignment making φ TRUE is found), which is used to train the parameters.
SAT problem: a decision problem, and the first problem proven to be NP-complete.
A propositional logic formula, also called a Boolean expression, is built from variables, the operators AND (conjunction, ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. The formula is satisfiable if it can be made TRUE by assigning suitable truth values (TRUE/FALSE) to its variables.
A clause is like one person's wish list, e.g., x1 ∨ ¬x2.
A literal is a single wish, e.g., x1, ¬x2, ...
A variable x is what carries a truth value (TRUE/FALSE).
Introduction
We focus on stochastic local search (SLS) and propose a learnable algorithm with a variable selection heuristic computed by a graph neural network. Reinforcement learning is used to train a set of solvers, each with a heuristic tailored to a different class of problems.
Background
Boolean formula: n variables, m clauses (a clause is a "wish list": literals joined by ∨).
A CNF formula is the conjunction (∧) of all its clauses.
φ(X): {0,1}^n → {0,1}; the input is an assignment of the n Boolean variables, and the output is the truth value of the formula under that assignment.
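As a concrete reference (my own sketch, not code from the paper), a CNF formula can be stored DIMACS-style as a list of clauses of signed variable indices, and φ(X) evaluated directly:

```python
# Minimal sketch: each clause is a list of signed indices (+i for x_i, -i for ¬x_i).

def evaluate_cnf(clauses, assignment):
    """Return phi(X): True iff every clause contains at least one satisfied literal.

    `assignment` maps each variable index (1..n) to a bool.
    """
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is unsatisfied
    return True

# Example: phi = (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2), X = {x1: True, x2: True} -> True
phi = [[1, -2], [-1, 2]]
print(evaluate_cnf(phi, {1: True, 2: True}))
```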
Local search: start with a random initial candidate solution → iteratively refine it.
At each step, select a variable x and flip it (0→1 or 1→0).
Select Variable:
1) WalkSAT:
randomly selects a clause unsatisfied by the current assignment, then flips the variable in that clause whose flip causes the fewest previously satisfied clauses to become unsatisfied (see the sketch after this list).
2) This work:
employs a graph neural network to select the variable; any variable may be chosen, not only those appearing in unsatisfied clauses.
With some probability, it instead randomly selects a variable from a randomly selected unsatisfied clause.
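The generic SLS loop with WalkSAT-style selection might look like the following sketch (my own illustration on the clause representation above; `noise` is an assumed parameter, not a value from the paper):

```python
import random

def break_count(clauses, assignment, var):
    """Number of currently satisfied clauses that flipping `var` would break."""
    flipped = dict(assignment)
    flipped[var] = not assignment[var]
    return sum(
        1
        for clause in clauses
        if any(assignment[abs(l)] == (l > 0) for l in clause)    # satisfied now
        and not any(flipped[abs(l)] == (l > 0) for l in clause)  # broken after flip
    )

def walksat(clauses, n_vars, max_flips=10_000, noise=0.5):
    assignment = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
    for _ in range(max_flips):
        unsat = [c for c in clauses
                 if not any(assignment[abs(l)] == (l > 0) for l in c)]
        if not unsat:
            return assignment                      # satisfying assignment found
        clause = random.choice(unsat)              # random unsatisfied clause
        if random.random() < noise:                # random-walk step
            var = abs(random.choice(clause))
        else:                                      # greedy step: fewest breaks
            var = min((abs(l) for l in clause),
                      key=lambda v: break_count(clauses, assignment, v))
        assignment[var] = not assignment[var]      # flip the selected variable
    return None                                    # no solution within the budget
```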
GNN: maps each node into a vector space, producing an embedding by iteratively updating each node's representation based on its neighbors.
Model
Graphical representation for CNF formulas:
Factor graph:
an undirected bipartite graph (variable nodes on one side, clause nodes (disjunctions) on the other), plus
two types of edges, for the positive (x) and negative (¬x) polarities.
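One simple way to encode this factor graph (my own sketch, not the paper's implementation) is as two clause-by-variable biadjacency matrices, one per edge polarity:

```python
import numpy as np

def factor_graph(clauses, n_vars):
    """Return (pos, neg): m x n biadjacency matrices for the two edge polarities."""
    n_clauses = len(clauses)
    pos = np.zeros((n_clauses, n_vars), dtype=np.float32)  # edge for literal  x_i
    neg = np.zeros((n_clauses, n_vars), dtype=np.float32)  # edge for literal ¬x_i
    for c, clause in enumerate(clauses):
        for lit in clause:
            (pos if lit > 0 else neg)[c, abs(lit) - 1] = 1.0
    return pos, neg

# (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2): pos = [[1,0],[0,1]], neg = [[0,1],[1,0]]
pos, neg = factor_graph([[1, -2], [-1, 2]], n_vars=2)
```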
Input of the model:
formula φ + an assignment X
Architecture of the GNN → the policy part of the RL agent.
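A minimal message-passing policy over this graph might look like the sketch below (my own simplification; the paper's architecture, update functions, and number of rounds differ in detail):

```python
import torch
import torch.nn as nn

class GNNPolicy(nn.Module):
    """Sketch: messages pass clause<->variable along both polarities, then a
    softmax over per-variable scores gives the distribution pi_theta."""

    def __init__(self, dim=32, rounds=4):
        super().__init__()
        self.rounds = rounds
        self.var_init = nn.Linear(1, dim)          # embeds the current value of x_i
        self.var_update = nn.Linear(3 * dim, dim)
        self.clause_update = nn.Linear(3 * dim, dim)
        self.score = nn.Linear(dim, 1)             # per-variable logit

    def forward(self, pos, neg, assignment):
        # pos, neg: (m, n) biadjacency tensors; assignment: (n,) tensor in {0, 1}
        v = torch.tanh(self.var_init(assignment.unsqueeze(-1).float()))
        c = torch.zeros(pos.shape[0], v.shape[1])
        for _ in range(self.rounds):
            c = torch.tanh(self.clause_update(torch.cat([c, pos @ v, neg @ v], -1)))
            v = torch.tanh(self.var_update(torch.cat([v, pos.t() @ c, neg.t() @ c], -1)))
        return torch.distributions.Categorical(logits=self.score(v).squeeze(-1))

# Usage (pos/neg converted to torch tensors):
#   dist = policy(pos, neg, X); a = dist.sample()  # index of the variable to flip
```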
Training
The MDP is represented as a tuple (S_D, A, T, R, γ); learning a good heuristic is equivalent to finding an optimal policy π that maximizes the accumulated reward R.
We train the parameters of the policy network, i.e., find the best π that picks a suitable action a at each step (meaning: flip the variable X_a).
S_D: the set of states, s = (φ, X).
A: maps each state to its set of available actions, A(s) = {1, ..., n}.
T: S_D × {1, ..., n} → S_D is the transition function mapping a state-action pair to the next state, T(s, a) = T((φ, X), a) = (φ, Flip(X, a)).
R: S_D → {0, 1} is the reward function, R(s) = R((φ, X)) = φ(X); it is 1 only when the assignment satisfies the formula.
γ ∈ (0,1] is the discount factor.
The policy is the function ρθ(φ, X), which returns an action (a variable index) a ∼ πθ(φ, X),
where πθ is the policy network; the parameters θ are learned.
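A REINFORCE-style update for one trajectory under this MDP could look like the following sketch (my own illustration; the paper's exact training procedure, baseline, and hyperparameters such as γ may differ):

```python
import torch

def phi(pos, neg, X):
    """R(s) = phi(X): 1.0 iff every clause has at least one satisfied literal."""
    sat = ((pos * X).sum(-1) + (neg * (1 - X)).sum(-1)) > 0
    return float(sat.all())

def train_on_trajectory(policy, optimizer, pos, neg, X, max_steps=50, gamma=0.5):
    log_probs, rewards = [], []
    for _ in range(max_steps):
        dist = policy(pos, neg, X)                 # pi_theta(phi, X)
        a = dist.sample()                          # action: index of variable to flip
        log_probs.append(dist.log_prob(a))
        X = X.clone()
        X[a] = 1 - X[a]                            # T(s, a) = (phi, Flip(X, a))
        rewards.append(phi(pos, neg, X))
        if rewards[-1] == 1.0:
            break                                  # formula satisfied
    returns, G = [], 0.0                           # discounted returns G_t
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    loss = -torch.stack([lp * g for lp, g in zip(log_probs, returns)]).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```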
Data
A problem distribution D: we sample a formula φ ∼ D and generate multiple trajectories for the same formula using several different initial assignments X.
Curriculum Learning: training is performed on a sequence of problems of increasing difficulty.
The policy learned for easier problems should at least partially generalize to more difficult problems.
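Putting the data and curriculum pieces together, training might be organized roughly as below (my own sketch; `random_ksat`, the level sizes, and the episode counts are hypothetical placeholders, while `policy`, `optimizer`, `factor_graph`, and `train_on_trajectory` come from the earlier sketches):

```python
import torch

EPISODES_PER_LEVEL, RESTARTS_PER_FORMULA = 200, 5

for n_vars in (5, 10, 25, 50):                       # curriculum: easy -> hard
    for _ in range(EPISODES_PER_LEVEL):
        clauses = random_ksat(n_vars=n_vars, k=3)    # phi ~ D (hypothetical generator)
        pos, neg = (torch.tensor(m) for m in factor_graph(clauses, n_vars))
        for _ in range(RESTARTS_PER_FORMULA):        # several initial assignments X
            X = torch.randint(0, 2, (n_vars,)).float()
            train_on_trajectory(policy, optimizer, pos, neg, X)
```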
Experiments
Baseline: WalkSAT.
Possible improvements:
model-based algorithms such as Monte Carlo tree search [12] can provide critical improvements. As a step towards practicality, it is also important to incorporate the suite of heuristics in an algorithm portfolio. In this work, we have achieved promising results for learning heuristics, and we hope that this helps pave the way for automated algorithm design.
Words and Phrases
runs for T iterations
makes it viable to == makes it feasible/practical to
a number of avenues for improvement == many possible directions for improvement
omitted for brevity == left out to keep things concise
implement == achieve, realize, carry out
Solid and dashed edges correspond respectively to ...
concretely == specifically, in concrete terms
opt == select
iteratively refine it == improve it step by step
explicitly == clearly, unambiguously
Lately... In a more recent work...
On another front ~~~ on the other hand
from scratch == starting from nothing / from the very beginning
incorporate A in B == include A within B
generic == common; general
Recently there has been a surge of interest in applying machine learning to combinatorial optimization
(a surge of == a sharp rise in)
manually-designed, requiring significant insight into the problem
insight == deep understanding; manually-designed == designed by hand
abbreviated SAT == shortened to SAT
is referred to (as) == is called
refer to == consult / denote
SAT is the canonical NP-complete problem (canonical == the standard, archetypal example)
a plethora of problems == a great many problems
arising from A == originating from A
algorithms that require fewer, although costlier, steps to arrive at a solution.
(fewer steps, but each step is more expensive)