Guiding High-Performance SAT Solvers with Unsat-Core Predictions
Abstract
The paper adapts NeuroSAT to predict unsat cores (the resulting network is called NeuroCore) and uses these predictions to improve CDCL solvers; the predictions can provide effective guidance to high-performance SAT solvers on real problems.
Code:
https://github.com/dselsam/neurocore-public
Unsat core: in mathematical logic, given an unsatisfiable Boolean propositional formula in conjunctive normal form, a subset of clauses whose conjunction is still unsatisfiable is called an unsatisfiable core of the original formula.
A minimum unsatisfiable core contains the smallest number of the original clauses required to still be unsatisfiable. No practical algorithms for computing the minimum core are known.
An unsatisfiable core is called a minimal unsatisfiable core if every proper subset of it (allowing removal of any arbitrary clause or clauses) is satisfiable. Thus, such a core is a local minimum, though not necessarily a global one. There are several practical methods of computing minimal unsatisfiable cores.[1][2]
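A tiny worked example: in the formula (x1) ∧ (¬x1) ∧ (x1 ∨ x2) ∧ (¬x2), the subset {(x1), (¬x1)} is the minimum unsat core, while {(¬x1), (x1 ∨ x2), (¬x2)} is another minimal core that happens to be larger. The brute-force Python sketch below (illustration only, feasible for toy sizes) finds both:

```python
# Toy illustration (not from the paper): brute-force search for minimal
# unsatisfiable cores. Clauses are tuples of signed ints in DIMACS style,
# e.g. (1, -2) means (x1 OR NOT x2). Only feasible for tiny formulas.
from itertools import combinations, product

def satisfiable(clauses):
    """Check satisfiability by enumerating all assignments."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def minimal_cores(clauses):
    """Yield unsatisfiable subsets whose every proper subset is satisfiable."""
    for k in range(1, len(clauses) + 1):
        for subset in combinations(clauses, k):
            if not satisfiable(subset) and all(
                satisfiable(subset[:i] + subset[i + 1:]) for i in range(k)
            ):
                yield subset

formula = [(1,), (-1,), (1, 2), (-2,)]
print(list(minimal_cores(formula)))
# [((1,), (-1,)), ((-1,), (1, 2), (-2,))]
```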
Introduction
aim: help inform variable branching decisions within high-performance SAT solvers on real problems
- key question: how to make use of the predictions inside a SAT solver.
Our approach is to generate a supervised dataset mapping unsatisfiable problems to the variables in their unsatisfiable cores.
For some problems, the smallest core may include every variable, and of course for satisfiable problems there are no cores at all.
Querying NeuroSAT at every variable branching decision would be far too expensive, so the choice is to supplement existing heuristics rather than try to replace them.
The solver continues to branch on the free variable with the highest activity score.
They refer to this integration strategy as "periodic refocusing".
pipeline:
1. Generate many unsatisfiable problems by extracting subproblems from existing problems.
2. For each problem, generate a DRAT proof and extract the variables that appear in the unsat core (labeling sketched after this list).
3. Train the NeuroSAT-style network (NeuroCore) to map unsat problems to the variables in their cores.
4. The solver periodically queries NeuroCore and uses the network's predictions to reset the variable activity scores.
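For step 2, a DRAT proof checker such as drat-trim can output the core clauses as a DIMACS file; the sketch below (file names and the helper are hypothetical, not the paper's code) turns such a core file into per-variable 0/1 labels for training:

```python
def core_variable_labels(core_path, n_vars):
    """Label each variable 1 if it appears in the core CNF at core_path, else 0."""
    core_vars = set()
    with open(core_path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, comments ("c ...") and the problem header ("p cnf ...").
            if not line or line.startswith(("c", "p")):
                continue
            core_vars.update(abs(int(tok)) for tok in line.split() if tok != "0")
    return [1 if v in core_vars else 0 for v in range(1, n_vars + 1)]

# Hypothetical usage: labels = core_variable_labels("problem.core.cnf", n_vars=250)
```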
Data Generation
Goal: predict which variables will be involved in unsat cores.
Existing datasets are too small, so data is generated from unsatisfiable subproblems of existing problems, yielding many problems with labeled cores (a loose illustration follows below).
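The paper obtains its unsatisfiable subproblems from solver runs on existing benchmarks; as a loose, hypothetical illustration of the idea (not the paper's exact procedure), one could fix a few random literals and keep the restricted problem whenever it turns out to be unsatisfiable, e.g. with the python-sat package:

```python
import random
from pysat.formula import CNF
from pysat.solvers import Solver

def random_unsat_subproblem(path, n_fixed=20, seed=0):
    """Fix a few random literals; return them if the restricted problem is unsat."""
    rng = random.Random(seed)
    cnf = CNF(from_file=path)
    chosen = rng.sample(range(1, cnf.nv + 1), min(n_fixed, cnf.nv))
    assumptions = [v if rng.random() < 0.5 else -v for v in chosen]
    with Solver(bootstrap_with=cnf.clauses) as solver:
        if not solver.solve(assumptions=assumptions):
            return assumptions   # restricting by these literals makes the problem unsat
    return None                  # still satisfiable under this restriction; try again
```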
Background on neural networks.
Basic building block: the MLP (multilayer perceptron), also called a feed-forward or fully-connected network.
An MLP is a stack of affine transformations and activation functions; a common activation is the rectified linear unit (ReLU), which sets all negative values to zero.
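A minimal sketch of that building block (illustrative only, not the paper's code): affine layers with a ReLU after each layer but the last.

```python
import numpy as np

def relu(x):
    # ReLU: sets all negative entries to zero.
    return np.maximum(x, 0.0)

def mlp(x, weights, biases):
    """Affine transformations with a ReLU after each layer but the last."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        x = x @ W + b
        if i < len(weights) - 1:
            x = relu(x)
    return x

# Example with arbitrary sizes: map 8-dimensional inputs to a single score.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(8, 16)), rng.normal(size=(16, 1))]
biases = [np.zeros(16), np.zeros(1)]
print(mlp(rng.normal(size=(4, 8)), weights, biases).shape)  # (4, 1)
```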
NeuroCore architecture.
The goal is not to learn a perfect core predictor, but only to learn a coarse heuristic that broadly assigns higher scores to more important variables.
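NeuroCore is a simplified NeuroSAT, i.e. message passing over the bipartite literal-clause graph. The sketch below is a rough illustration with toy sizes and plain linear updates (the real model uses learned MLPs and different hyperparameters); it only shows the shape of the computation: clauses aggregate from their literals, literals aggregate back together with their negations' embeddings, and each variable finally gets one score.

```python
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_clauses, d = 4, 6, 8        # toy sizes; d = embedding dimension
n_lits = 2 * n_vars                   # literal i and its negation i + n_vars

# A[c, l] = 1 iff literal l occurs in clause c (random toy adjacency here).
A = (rng.random((n_clauses, n_lits)) < 0.3).astype(float)

L = rng.normal(size=(n_lits, d))      # literal embeddings
W_c = rng.normal(size=(d, d))         # clause update weights
W_l = rng.normal(size=(2 * d, d))     # literal update weights
W_v = rng.normal(size=(2 * d,))       # final per-variable scoring weights

def flip(L):
    # Swap each literal's embedding with its negation's embedding.
    return np.concatenate([L[n_vars:], L[:n_vars]], axis=0)

for _ in range(4):                    # a few rounds of message passing
    C = np.maximum(A @ L @ W_c, 0.0)  # clauses gather from their literals
    L = np.maximum(np.concatenate([A.T @ C, flip(L)], axis=1) @ W_l, 0.0)  # literals gather back

# One score per variable from the embeddings of its two literals.
v_hat = np.concatenate([L[:n_vars], L[n_vars:]], axis=1) @ W_v
print(v_hat)                          # higher score = more likely in the core (after training)
```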
Hybrid Solving: Extending CDCL with NeuroCore
Modern SAT solvers are based on the Conflict-Driven Clause Learning (CDCL) algorithm.
NeuroCore then returns a vector v̂ ∈ ℝ^{n_v}, where a higher score for a variable indicates that NeuroCore thinks the corresponding variable is more likely to be in the core.
The decay factor ρ in the EVSIDS activity updates is often rather small, so the activity scores are dominated by recent conflicts.
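A minimal sketch of how periodic refocusing might reset these activity scores, assuming a hypothetical solver interface (current_clauses, set_activity, and rebuild_order_heap are stand-in names, and the softmax scaling is illustrative rather than the paper's exact transformation):

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def periodic_refocus(solver, neurocore_scores, scale=1.0):
    """Query the network and overwrite every variable's activity score."""
    v_hat = neurocore_scores(solver.current_clauses())  # one score per variable
    new_activity = scale * softmax(v_hat)                # peaked distribution over variables
    for var, act in enumerate(new_activity, start=1):
        solver.set_activity(var, act)                    # EVSIDS bumping resumes from here
    solver.rebuild_order_heap()                          # branching now follows the new scores
```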
Word & Phrase
hereafter, afterwards, henceforth, afterward, hence: from this point on
to the extent: to some degree
integrate: to combine into a unified whole
each but the last: every one except the last
specify them by hand
A toy problem : is a problem that doesn't have all the complexity of a real-world engineering problem. It could be a simplified or shallow version of a more difficult and intricate problem or class of problems.
Our results demonstrate that...
wonderful, fantastic, marvelous, intriguing, fantastical, marvellous
practical, functional, pragmatic: useful in practice
tens of thousands of
on-device sharing
settle for less: accept a second-best option
The gains are even greater