There are many variants of the bandit problem. At NIPS 2014 there were eight papers on bandits, which I take as a good sign of how important bandit problems, and online problems in general, have become for researchers. Each paper focuses on a different variant of the bandit problem.
Only one of them is on the theory of bandit convex optimization, a special case of online convex optimization. It is by Elad Hazan, who has two papers at NIPS 2014 and has recently moved to Princeton; he is preparing a book on online convex optimization and serves on the COLT committee. Nowadays researchers mostly focus on new variants and rarely on theoretical analysis, because developing new analysis techniques is rather hard. Usually they borrow the basic tools from Auer, one of which I mentioned in the last article.
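For concreteness, here is a rough sketch of the bandit convex optimization setting in my own notation (not taken from the paper): at each round $t$ the learner picks a point $x_t$ in a convex set $K$, the adversary chooses a convex loss $f_t$, and only the scalar value $f_t(x_t)$ is revealed, with no gradient. The goal is to keep the regret

$$
\text{Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in K} \sum_{t=1}^{T} f_t(x)
$$

sublinear in $T$; the "bandit" part is that the learner never sees $f_t$ itself, only its value at the played point.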
Two of them change the objective, i.e., the definition of regret. One focuses on the extreme reward rather than the accumulated reward, that is, the maximum reward obtained so far, and compares it with the maximum reward of a single arm pulled the same number of times. This variant is useful for detecting network intrusions, which show up as the heaviest load in the whole network. The other focuses on non-stationary rewards. As I explained before for stochastic bandits, the reward of each arm comes from a fixed distribution; this paper instead assumes the reward distributions change slowly over time, which gives the stochastic bandit model more freedom.
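As a rough sketch in my own notation (not the papers'), the extreme-bandit objective replaces the usual sum of rewards by a maximum. Writing $X_{k,t}$ for the reward of arm $k$ at round $t$ and $I_t$ for the arm pulled at round $t$, the learner's performance

$$
\mathbb{E}\Big[\max_{1 \le t \le T} X_{I_t,t}\Big]
$$

is compared against the best single arm's $\max_{1 \le k \le K} \mathbb{E}\big[\max_{1 \le t \le T} X_{k,t}\big]$, i.e., the maximum that arm would have achieved if it had been pulled all $T$ times.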
Two of them change the feedback process, that is, how the reward is obtained from the environment. One gives more observations while the other omits some. The first gives more than just the reward of the pulled arm: there is a graph structure over the arms, and you also observe the reward of every arm that the pulled arm has an edge to. The other gives you no reward information in a round where you switch arms, that is, when the arm of this round differs from that of the last round. In the original setting we only ever observe one reward per round, and from such limited information we still have to infer how to act next, which is honestly quite hard. So when feedback variants are considered, they usually give more information rather than less; the second paper is unusual in limiting the already limited information even further.
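Here is a toy sketch of the graph-feedback idea in Python; the graph, the means, and all the names below are made up for illustration and are not taken from the paper.

```python
import random

# Toy sketch of graph-structured feedback: pulling an arm also reveals the
# rewards of every arm it has an edge to, so one pull can yield several
# observations instead of just one.

n_arms = 4
# edges[i] lists the arms whose rewards become visible when arm i is pulled
edges = {0: [1], 1: [0, 2], 2: [3], 3: []}
means = [0.2, 0.5, 0.7, 0.4]   # unknown true means (assumed for the demo)

def pull(arm):
    """Return the observed rewards: the pulled arm plus its out-neighbours."""
    observed = {}
    for k in [arm] + edges[arm]:
        # Bernoulli reward drawn with the arm's true mean
        observed[k] = 1 if random.random() < means[k] else 0
    return observed

# Example: pulling arm 1 reveals the rewards of arms 1, 0 and 2 in one round.
print(pull(1))
```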
Usually we treat the arms as independent, but one paper studies structured bandits, where the reward of one arm depends on the other arms. In the original stochastic bandit we care about the true mean reward of each arm; in this paper, those means share a common parameter that encodes the dependence. The dependence is not obvious, but the formulation is very general.
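A toy illustration of the shared-parameter idea (the functions below are entirely my own example, not the paper's model): each arm's mean is a known function of one unknown global parameter, so a sample from any arm carries information about all of them.

```python
import math
import random

theta_true = 0.8          # the shared parameter, unknown to the learner
mean_fns = [lambda th: th,            # arm 0's mean as a function of theta
            lambda th: th ** 2,       # arm 1
            lambda th: math.sin(th)]  # arm 2

def pull(arm):
    """Gaussian reward centred at the arm's mean under the true theta."""
    return random.gauss(mean_fns[arm](theta_true), 0.1)

# A single pull of arm 0 already narrows down theta, and hence the means of
# arms 1 and 2 as well -- this is the dependence between arms.
print(pull(0))
```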
One of them is on linear bandits, a common setting that I have discussed only briefly, and it studies best-arm identification instead of accumulated regret. In bandits there are usually three popular objectives: accumulated regret, identifying the best arm within a fixed number of rounds, and identifying the best arm in as few rounds as possible. The three are of equal importance, and each has many papers. The last of the eight papers is on identifying the best super-arm in combinatorial bandits. It is both strange and reasonable that only one paper is on the combinatorial topic: past combinatorial papers mostly generalized old techniques rather than developing new ones, and it is already difficult to study the original bandit, let alone combinatorial bandits.
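To contrast the objectives, here is a minimal sketch of fixed-budget best-arm identification with uniform allocation (deliberately naive, and not the algorithm of the linear-bandit paper): spend the budget evenly across arms, then recommend the empirically best one; the regret accumulated while exploring is simply ignored.

```python
import random

means = [0.3, 0.5, 0.45]   # unknown true means (assumed for the demo)
budget = 300               # fixed number of rounds

counts = [0] * len(means)
sums = [0.0] * len(means)
for t in range(budget):
    arm = t % len(means)                       # uniform round-robin pulls
    reward = 1 if random.random() < means[arm] else 0
    counts[arm] += 1
    sums[arm] += reward

# Recommend the arm with the highest empirical mean after the budget is spent.
estimates = [s / c for s, c in zip(sums, counts)]
best = max(range(len(means)), key=lambda k: estimates[k])
print("recommended arm:", best, "estimates:", estimates)
```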
To further research on this topic, one can find a reasonable variant and develop an efficient algorithm for it, which is the easiest approach, or one can develop new techniques for online convex analysis, which is the hardest.