Attention, Learn to Solve Routing Problems!

Wouter Kool, Herke van Hoof, Max Welling
University of Amsterdam
Published in ICLR 2019

Motivation

From a high-level perspective, recent years have seen a shift in paradigm from human engineering to machine learning.

Machine learning algorithms have replaced humans as the engineers of algorithms to solve various tasks.

For combinatorial optimization, exact methods and heuristics are the two main solution approaches.

Heuristics are typically expressed in the form of rules, which can be interpreted as policies to make decisions. We believe that these policies can be parameterized using DNNs, and be trained to obtain new and stronger algorithms for many different combinatorial optimization problems.

The objective of this paper is to design better models and better training methods to replace handcrafted heuristics by learning to route.

We propose to use a powerful model based on attention and train this model using REINFORCE with a simple but effective greedy rollout baseline.

The value of the proposed method is not that it outperforms existing human-designed heuristics on specific tasks, but that it scales to many different routing problems with high flexibility.

This is important progress towards the situation where we can learn strong heuristics to solve a wide range of different practical problems for which no good heuristics exist.

Attention Model

The attention model consists of an encoder and a decoder. The encoder learns a representation of each node and of the graph as a whole. The decoder constructs the route one node at a time.

Encoder

  • Input: coordinates of each node.
  • Output: a representation of each node; the representation of the graph is computed as the mean of all node embeddings.
  • Model: graph attention network (a minimal sketch follows this list).
    (Figure from the original paper: structure of the encoder.)
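The following is a minimal PyTorch sketch of the encoder under some simplifying assumptions: `nn.MultiheadAttention` and `LayerNorm` stand in for the paper's own multi-head attention and batch normalization, while the hyperparameters (128-dimensional embeddings, 8 heads, 3 layers, 512-dimensional feed-forward) follow the paper.

```python
import torch
import torch.nn as nn

class AttentionEncoder(nn.Module):
    """Encoder sketch: 2-D node coordinates -> node embeddings + graph embedding.

    Simplifications vs. the paper: nn.MultiheadAttention replaces the custom
    multi-head attention, and LayerNorm replaces batch normalization.
    """

    def __init__(self, embed_dim=128, n_heads=8, n_layers=3, ff_dim=512):
        super().__init__()
        self.init_embed = nn.Linear(2, embed_dim)  # project (x, y) coordinates
        self.attns = nn.ModuleList(
            [nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
             for _ in range(n_layers)])
        self.ffs = nn.ModuleList(
            [nn.Sequential(nn.Linear(embed_dim, ff_dim), nn.ReLU(),
                           nn.Linear(ff_dim, embed_dim))
             for _ in range(n_layers)])
        self.norms = nn.ModuleList(
            [nn.LayerNorm(embed_dim) for _ in range(2 * n_layers)])

    def forward(self, coords):                    # coords: (batch, n_nodes, 2)
        h = self.init_embed(coords)
        for i, (attn, ff) in enumerate(zip(self.attns, self.ffs)):
            a, _ = attn(h, h, h)                  # self-attention over all nodes
            h = self.norms[2 * i](h + a)          # residual connection + norm
            h = self.norms[2 * i + 1](h + ff(h))  # feed-forward sublayer
        return h, h.mean(dim=1)                   # node embeddings, graph embedding
```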

Decoder

  • Input: node embeddings; context: graph embedding + start node embedding + previous node embedding.
  • Output: a sequence of nodes; at each step the node with the highest compatibility is selected and appended to the path.
  • Model: graph attention network. Visited nodes are masked out to ensure the feasibility of the solution (a minimal sketch of one decoding step follows).
    (Figure from the original paper: structure of the decoder.)
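Below is a minimal sketch of one decoding step. `project_context` and `project_keys` are assumed to be learned linear layers (illustrative names, not the paper's code); the tanh clipping of the compatibility with C = 10 follows the paper.

```python
import math
import torch

def decoder_step(node_emb, graph_emb, first_emb, last_emb, visited_mask,
                 project_context, project_keys, clip=10.0):
    """One decoding step (sketch): build the context, score all nodes,
    mask visited ones, and return a distribution over the next node."""
    # Context = graph embedding + first-node embedding + last-node embedding
    context = torch.cat([graph_emb, first_emb, last_emb], dim=-1)
    q = project_context(context)                  # (batch, embed_dim)
    k = project_keys(node_emb)                    # (batch, n_nodes, embed_dim)

    # Compatibility = scaled dot product, clipped with tanh as in the paper
    compat = torch.einsum('bd,bnd->bn', q, k) / math.sqrt(q.size(-1))
    compat = clip * torch.tanh(compat)

    # Mask visited nodes: -inf compatibility yields zero probability
    compat = compat.masked_fill(visited_mask, float('-inf'))
    return torch.softmax(compat, dim=-1)          # (batch, n_nodes)
```

At each step, the next node can be taken greedily as the argmax of this distribution or sampled from it.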

Remark: the graph structure seems to be used only to calculate the compatibility between different nodes, by setting the compatibility to negative infinity for non-adjacent nodes?

REINFORCE With Greedy Rollout Baseline

Motivation

The goal of a baseline is to estimate the difficulty of the instance s. The difficulty of an instance can (on average) be estimated by the performance of an algorithm applied to it.

Method

The baseline is defined as the cost of a solution from a deterministic greedy rollout of the policy defined by the best model so far.

The full training algorithm is shown as a figure in the original paper; a minimal sketch is given below.
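This sketch assumes a model interface `model(batch, decode=...)` that returns the tour cost and the log-probability of the decoded tour (an illustrative interface, not the paper's code). The baseline update uses a one-sided paired t-test with α = 5% as in the paper, though here over a single evaluation batch rather than the paper's large held-out set.

```python
import copy
import torch
from scipy.stats import ttest_rel

def train(model, optimizer, make_batch,
          n_epochs=100, steps_per_epoch=2500, alpha=0.05):
    baseline = copy.deepcopy(model)  # best model so far, used for greedy rollouts
    for epoch in range(n_epochs):
        for _ in range(steps_per_epoch):
            batch = make_batch()
            cost, log_prob = model(batch, decode='sample')
            with torch.no_grad():                  # baseline gets no gradient
                base_cost, _ = baseline(batch, decode='greedy')
            # REINFORCE with baseline: (L(pi) - b(s)) * grad log p(pi | s)
            loss = ((cost - base_cost).detach() * log_prob).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # Replace the baseline if the current model is significantly better.
        eval_batch = make_batch()
        with torch.no_grad():
            c_model, _ = model(eval_batch, decode='greedy')
            c_base, _ = baseline(eval_batch, decode='greedy')
        if c_model.mean() < c_base.mean():
            _, p = ttest_rel(c_model.cpu().numpy(), c_base.cpu().numpy())
            if p / 2 < alpha:                      # one-sided paired t-test
                baseline = copy.deepcopy(model)
```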

Experiments

Experimental Setup

Train:

  • Instance size: n = 20, 50, 100
  • 100 epochs × 2,500 steps per epoch × batch size 512

Test:

  • graph size consistent with training
  • 10,000 test instances
  • greedy decoding + sampling decoding (the best of 1,280 sampled solutions; a best-of-N sketch follows this list)
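A sketch of the best-of-1,280 sampling evaluation, assuming the same illustrative model interface as above and an instance tensor of shape (1, n_nodes, 2):

```python
import torch

def sample_best_of_n(model, instance, n_samples=1280, batch=256):
    """Draw n_samples tours for one instance and keep the cheapest."""
    best = float('inf')
    with torch.no_grad():
        for _ in range(n_samples // batch):
            # Replicate the instance so each row is decoded independently
            costs, _ = model(instance.expand(batch, -1, -1), decode='sample')
            best = min(best, costs.min().item())
    return best
```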

Baseline:

  • State-of-the-art operations research tools
  • Previous work on learning to route

Problems

  • Travelling salesman problem
  • Vehicle routing problem
  • Orienteering problem (OP): maximize the prize collected during a closed-loop trip within a length budget.
  • Prize collecting TSP (PCTSP): each node has not only an associated prize, but also an associated penalty. The goal is to collect at least a minimum total prize, while minimizing the total tour length plus the sum of penalties of unvisited nodes (see the sketch after this list).
  • Stochastic PCTSP (SPCTSP): the expected node prize is known upfront, but the real collected prize only becomes known upon visitation.
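For concreteness, a minimal sketch of the PCTSP objective described above (all names are illustrative):

```python
import torch

def pctsp_objective(tour_length, prize, penalty, visited, min_prize):
    # visited: boolean mask over nodes; feasibility requires enough prize
    assert prize[visited].sum() >= min_prize, "minimum-prize constraint violated"
    # Minimize tour length plus the penalties of all unvisited nodes
    return tour_length + penalty[~visited].sum()
```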

Observation on Experimental Results

  • The gap between learning to route and human-designed heuristics increases as the graph size grows.
  • Sampling decoding performs better than greedy decoding, but requires much more computation time.
  • The experiments did not show whether the proposed method can generalize to larger graphs.

Conclusion

From a high-level perspective,

We believe that our method is a powerful starting point for learning heuristics for other combinatorial optimization problems defined on graphs, if their solutions can be described as sequential decisions.

Specifically, compared to previous work,

by using attention instead of recurrence (LSTMs), the method is invariant to the input order of the nodes, increasing learning efficiency.

For future work,

Scaling to larger problem instances is an important direction for future research. Another challenge is that many problems of practical importance have feasibility constraints that cannot be satisfied by a simple masking procedure.
