Support Vector Machine Series (I): SVM in the Linearly Separable Case

Linear Support Vector Machines in the Linearly Separable Case

Problem Description

Assume we have a learning set \mathscr{L}= \{(\mathbf{x}_i,y_i):i=1,2,\cdots,n\}, where \mathbf{x}_i\in \mathbb{R}^r and y_i\in\{-1,+1\}. The binary classification problem is to use \mathscr{L} to construct a function f: \mathbb{R}^r \rightarrow \mathbb{R} so that
\begin{align*} C(\mathbf{x})&= \operatorname{sign}(f(\mathbf{x}))\\ &=\operatorname{sign} (\beta_0 + \mathbf{x}^{\top}{\boldsymbol\beta}) \end{align*} is a classifier.

If \mathscr{L} is linearly separable, then the optimization problem is given by
\begin{align*} \min\limits_{\beta_0,{\boldsymbol\beta}} \ &\dfrac{1}{2}\|{\boldsymbol\beta}\|^2\\ \qquad \qquad s.t. \quad& y_i (\beta_0 + \mathbf{x}_i^{\top} {\boldsymbol\beta})\geq 1 ,\qquad i=1,2,\cdots,n \end{align*}
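As a concrete illustration, this primal problem can be handed directly to an off-the-shelf QP solver. Below is a minimal sketch, assuming the cvxpy package is available; the two-cluster toy data are an illustrative choice, not part of the derivation.

```python
import numpy as np
import cvxpy as cp

# Toy linearly separable data in R^2: two well-separated Gaussian clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, size=(20, 2)),
               rng.normal(+2.0, 0.5, size=(20, 2))])
y = np.array([-1.0] * 20 + [+1.0] * 20)

beta = cp.Variable(2)
beta0 = cp.Variable()

# min (1/2)||beta||^2  s.t.  y_i (beta0 + x_i^T beta) >= 1 for all i
objective = cp.Minimize(0.5 * cp.sum_squares(beta))
constraints = [cp.multiply(y, X @ beta + beta0) >= 1]
cp.Problem(objective, constraints).solve()

print("beta* =", beta.value, " beta0* =", beta0.value)
```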

Primal Problem via Lagrange Multipliers

Introducing Lagrange multipliers, the primal Lagrangian is
\begin{align*} F_P(\beta_0,{\boldsymbol\beta},{\boldsymbol\alpha})&= \frac{1}{2}\|{\boldsymbol\beta}\|^2 + \sum_{i=1}^{n}\alpha_i [1-y_i(\beta_0 + \mathbf{x}_i^{\top} {\boldsymbol\beta})] \end{align*}
where {\boldsymbol\alpha}= \begin{pmatrix} \alpha_1&\cdots&\alpha_n \end{pmatrix}^{\top}\succeq \mathbf{0} collects the Lagrange multipliers. The primal problem is then equivalent to
\begin{align*} \min\limits_{\beta_0,{\boldsymbol\beta}}\max\limits_{{\boldsymbol \alpha}}\ & F_P(\beta_0,{\boldsymbol\beta},{\boldsymbol\alpha}) \\ s.t. \quad& {\boldsymbol\alpha} \succeq \mathbf{0} \end{align*}

Since the objective is convex and the constraints are affine, the Karush–Kuhn–Tucker (KKT) conditions are both necessary and sufficient for optimality:
\begin{align*} \dfrac{\partial F_P(\beta_0, {\boldsymbol\beta},{\boldsymbol\alpha})}{\partial\beta_0} & =- \sum_{i=1}^{n}\alpha_i y_i=0\\ \dfrac{\partial F_P(\beta_0, {\boldsymbol\beta},{\boldsymbol\alpha})}{\partial{\boldsymbol\beta}} & ={\boldsymbol\beta}- \sum_{i=1}^{n}\alpha_i y_i \mathbf{x}_i =\mathbf{0}\\ 1-y_i(\beta_0 + \mathbf{x}_i^{\top} {\boldsymbol\beta})&\leq 0\\ \alpha_i&\geq 0\\ \alpha_i [1-y_i(\beta_0 + \mathbf{x}_i^{\top} {\boldsymbol\beta})]&=0 \end{align*}
The two stationarity conditions yield
\begin{align*} \sum_{i=1}^{n}\alpha_iy_i&=0\\ {\boldsymbol\beta} &= \sum_{i=1}^{n}\alpha_iy_i \mathbf{x}_i \end{align*}
and \beta_0 is implicitly determined by the KKT complementarity condition: for any i with \alpha_i \neq 0, that condition forces y_i(\beta_0 + \mathbf{x}_i^{\top} {\boldsymbol\beta}) = 1, and since y_i^2=1 this gives \beta_0 = y_i - \mathbf{x}_i^{\top} {\boldsymbol\beta}. (It is numerically safer to average the values of \beta_0 obtained from all such i.)

Substituting the stationarity conditions into the primal function yields the dual function
\begin{align*} F_D({\boldsymbol\alpha})&= \min\limits_{\beta_0,{\boldsymbol\beta}}F_P(\beta_0,{\boldsymbol\beta},{\boldsymbol\alpha}) \\ &= \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_jy_iy_j \mathbf{x}_i^{\top}\mathbf{x}_j - \sum_{i=1}^{n} \sum_{j=1}^{n}\alpha_i\alpha_jy_iy_j \mathbf{x}_i^{\top}\mathbf{x}_j+ \sum_{i=1}^{n}\alpha_i\\ &= -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_jy_iy_j \mathbf{x}_i^{\top}\mathbf{x}_j + \sum_{i=1}^{n}\alpha_i\\ &= -\frac{1}{2}{\boldsymbol\alpha}^{\top}\mathbf{H}{\boldsymbol\alpha}+ \mathbf{1}^{\top}_n {\boldsymbol\alpha} \end{align*}
where \mathbf{H}= \begin{pmatrix} \langle y_i \mathbf{x}_i, y_j \mathbf{x}_j\rangle \end{pmatrix}.
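In code, \mathbf{H} can be formed in one line by scaling each row of the data matrix by its label and taking a Gram matrix. A minimal NumPy sketch, reusing X and y from the earlier snippet:

```python
# H_ij = y_i y_j <x_i, x_j>: scale row i of X by y_i, then form the Gram matrix.
Z = y[:, None] * X   # row i is y_i * x_i
H = Z @ Z.T          # n x n, symmetric positive semidefinite
```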

Dual Problem

Since strong duality holds for this convex problem, the primal problem is equivalent to the dual problem
\begin{align*} \max\limits_{{\boldsymbol \alpha}}\min\limits_{\beta_0,{\boldsymbol\beta}}\ &F_P(\beta_0,{\boldsymbol\beta},{\boldsymbol\alpha}) \\ s.t. \quad& {\boldsymbol\alpha} \succeq \mathbf{0} \end{align*}
i.e.
\begin{align*} \max\limits_{{\boldsymbol \alpha}}\ &F_D({\boldsymbol\alpha}) \\ s.t. \quad& {\boldsymbol\alpha} \succeq \mathbf{0}\\ &{\boldsymbol\alpha}^{\top}\mathbf{y}=0 \\ \end{align*}
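Continuing the earlier sketch, the dual can also be solved with cvxpy. Writing {\boldsymbol\alpha}^{\top}\mathbf{H}{\boldsymbol\alpha} = \|Z^{\top}{\boldsymbol\alpha}\|^2 keeps the objective in a form the solver recognizes as concave; Z, X, and y are reused from the snippets above.

```python
n = len(y)
alpha = cp.Variable(n)

# max  -1/2 alpha^T H alpha + 1^T alpha,  with alpha^T H alpha = ||Z^T alpha||^2
objective = cp.Maximize(-0.5 * cp.sum_squares(Z.T @ alpha) + cp.sum(alpha))
constraints = [alpha >= 0, y @ alpha == 0]
cp.Problem(objective, constraints).solve()

alpha_star = alpha.value
```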

If {\boldsymbol\alpha}^* solves this optimization problem, then
{\boldsymbol\beta}^*= \sum_{i=1}^{n} \alpha_i^* y_i \mathbf{x}_i
By the KKT complementarity condition, \alpha_i^* = 0 for every i not on the margin, so
{\boldsymbol\beta}^*= \sum_{i\in SV} \alpha_i^* y_i \mathbf{x}_i
where SV\subset\{1,2,\cdots,n\} is the set of support vectors.

Then, using y_i^2=1, \beta_0^*= \dfrac{1}{|SV|} \sum_{i\in SV} \dfrac{1-y_i \mathbf{x}_i^{\top}{\boldsymbol\beta}^*}{y_i} = \dfrac{1}{|SV|} \sum_{i\in SV} \left(y_i - \mathbf{x}_i^{\top}{\boldsymbol\beta}^*\right)
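With alpha_star from the dual sketch above, {\boldsymbol\beta}^* and \beta_0^* can be recovered in a few lines; the support-vector cutoff 1e-6 is an arbitrary numerical tolerance, not part of the theory.

```python
# Support vectors: indices where alpha_i* is bounded away from zero.
sv = alpha_star > 1e-6

# beta* = sum_{i in SV} alpha_i* y_i x_i
beta_star = (alpha_star[sv] * y[sv]) @ X[sv]

# beta0* averaged over support vectors, using beta0 = y_i - x_i^T beta*.
beta0_star = np.mean(y[sv] - X[sv] @ beta_star)
```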

Since
\begin{align*} \|{\boldsymbol\beta}^*\|^2&= \sum_{i=1}^{n}\alpha_i^*y_i \mathbf{x}_i^{\top} {\boldsymbol\beta}^* \\ &=\sum_{i=1}^{n}\alpha_i^*(1-y_i\beta_0^*)\\ &=\sum_{i=1}^{n}\alpha_i^* - \beta_0^*\sum_{i=1}^{n}\alpha_i^*y_i\\ &=\sum_{i=1}^{n}\alpha_i^* \end{align*}
(the second equality uses the complementarity condition, the last uses \sum_{i=1}^{n}\alpha_i^*y_i=0),
the maximum margin is given by \dfrac{2}{\|{\boldsymbol\beta}^*\|}=\dfrac{2}{\sqrt{\sum_{i=1}^{n}\alpha_i^*}}.
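This identity gives a quick numerical consistency check on the two sketches above:

```python
margin_from_beta  = 2.0 / np.linalg.norm(beta_star)
margin_from_alpha = 2.0 / np.sqrt(alpha_star.sum())
print(margin_from_beta, margin_from_alpha)  # should agree up to solver tolerance
```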
