I once wrote a note on SVM. Unfortunately, I was not familiar with convex optimization at that time. Now it's time to fix it up.
https://www.jianshu.com/p/8fd28df734a0
Convex Set
So what is a convex set?
For example, a disk and a square are convex sets, because these figures have no concave parts anywhere along their boundary. So the definition of a convex set is:
Take any two points in the set; if the line segment between them lies entirely inside the set, we call the set a convex set.
In mathematical language:
If for all x1, x2 ∈ C and all θ ∈ [0, 1] we have θx1 + (1 − θ)x2 ∈ C, then C is a convex set.
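As a quick numerical sanity check (a sketch of my own; the unit disk and the sampled points are not from the note), we can verify that convex combinations of points in a disk stay inside it:

```python
import numpy as np

rng = np.random.default_rng(0)

def in_unit_disk(p):
    # Membership test for the unit disk {p : ||p|| <= 1}, a convex set.
    return np.linalg.norm(p) <= 1.0 + 1e-12

# Sample pairs of points inside the unit disk and check that every
# convex combination theta*x1 + (1-theta)*x2 stays inside the disk.
ok = True
for _ in range(1000):
    x1, x2 = rng.uniform(-1, 1, size=(2, 2))
    if not (in_unit_disk(x1) and in_unit_disk(x2)):
        continue  # keep only pairs that actually lie in the disk
    for theta in np.linspace(0, 1, 11):
        ok &= bool(in_unit_disk(theta * x1 + (1 - theta) * x2))

print(ok)  # every tested segment stayed inside the disk
```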
Convex Function
For example, f(x) = x^2 is a convex function. A convex function has four equivalent definitions:
1) dom f is a convex set, and for any two points on the graph, the line segment (chord) between them lies above the graph: f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y) for all x, y ∈ dom f and θ ∈ [0, 1]. Then f is a convex function.
2) dom f is a convex set, and for every x ∈ dom f and every direction v, the one-dimensional restriction g(t) = f(x + tv) is a convex function; then f is convex.
This is useful for a high-dimensional function: to decide whether such a function is convex, we can cut it into one-dimensional slices, and if every slice is a convex function, the original function is convex too.
3) dom f is convex and f is differentiable, so the gradient of f exists. If f(y) ≥ f(x) + ∇f(x)ᵀ(y − x) for all x, y ∈ dom f, then f is convex.
4) If dom f is convex, f is twice differentiable, and ∇²f(x) ⪰ 0 for all x ∈ dom f (the Hessian matrix is positive semidefinite), then f is convex.
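A small numerical sketch (the example function f(x, y) = x² + 3y² is my own choice, not from the note) that checks definitions 1 and 4:

```python
import numpy as np

def f(p):
    # f(x, y) = x^2 + 3y^2, convex on all of R^2.
    x, y = p
    return x**2 + 3 * y**2

# Definition 4: the Hessian of f is constant here, [[2, 0], [0, 6]],
# and positive semidefinite (all eigenvalues >= 0).
H = np.array([[2.0, 0.0], [0.0, 6.0]])
psd = bool(np.all(np.linalg.eigvalsh(H) >= 0))

# Definition 1: the chord lies above the graph,
# f(theta*p + (1-theta)*q) <= theta*f(p) + (1-theta)*f(q).
rng = np.random.default_rng(1)
chord_ok = True
for _ in range(1000):
    p, q = rng.normal(size=(2, 2))
    theta = rng.uniform()
    chord_ok &= bool(
        f(theta * p + (1 - theta) * q)
        <= theta * f(p) + (1 - theta) * f(q) + 1e-9
    )

print(psd, chord_ok)
```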
Dual Function
Now we have covered all the fundamental knowledge we need.
Consider a problem in the standard form:

minimize f(x)
subject to gᵢ(x) ≤ 0, i = 1, …, m
           hⱼ(x) = 0, j = 1, …, p

f(x) is the function we need to optimize, subject to some inequality constraints and affine equality constraints. f(x) may not be convex or concave, and if it is neither, it may have many local minima or maxima.
So a feasible approach is to transform the problem into one involving a convex or concave function. Actually, the only difference between convex and concave is a negative sign, so in this case it makes no difference which of the two we aim for.
Therefore, we consider a method to obtain a concave function. Recall the conjugate function, f*(y) = sup_{x ∈ dom f} (yᵀx − f(x)); a conjugate function is always convex.
We can prove it.
Recall definition 1 of a convex function; we can use it here.
Proof:
Take two points y1, y2 randomly and θ ∈ [0, 1]. For each fixed x, consider the function g_x(y) = yᵀx − f(x).
Here x is a constant, not a variable, so the expression inside the 'sup' is an affine function of y. As we know, an affine function is convex (definition 1 holds with equality). Then

f*(θy1 + (1 − θ)y2) = sup_x g_x(θy1 + (1 − θ)y2) = sup_x [θ g_x(y1) + (1 − θ) g_x(y2)] ≤ θ sup_x g_x(y1) + (1 − θ) sup_x g_x(y2) = θ f*(y1) + (1 − θ) f*(y2)

It's done: the conjugate function is convex.
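A numerical sketch of the same fact (the example function f(x) = x² is my own choice): approximate f*(y) = sup_x (yx − f(x)) over a dense grid and check that the result satisfies the chord inequality.

```python
import numpy as np

# f(x) = x^2; its conjugate is f*(y) = sup_x (y*x - x^2) = y^2 / 4.
xs = np.linspace(-50, 50, 20001)

def f_star(y):
    # Approximate the supremum over a dense grid of x values.
    return np.max(y * xs - xs**2)

ys = np.linspace(-5, 5, 101)
vals = np.array([f_star(y) for y in ys])

# Chord check (definition 1) at midpoints of the uniform grid:
# convexity of f* means f*((a+b)/2) <= (f*(a) + f*(b)) / 2.
mid_ok = bool(np.all(vals[1:-1] <= (vals[:-2] + vals[2:]) / 2 + 1e-9))

# Also compare against the closed form y^2 / 4.
close = bool(np.allclose(vals, ys**2 / 4, atol=1e-3))
print(mid_ok, close)
```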
However, in this case I will not transform f(x) into a convex function by the conjugate function; I only borrowed the conjugate function to illustrate a proof technique.
First, the Lagrange function:

L(x, λ, ν) = f(x) + Σᵢ λᵢ gᵢ(x) + Σⱼ νⱼ hⱼ(x)

Now define a new function:

g(λ, ν) = inf_x L(x, λ, ν)

Using the same technique as in the conjugate-function proof, we can show this function is concave: for each fixed x, L(x, λ, ν) is affine in (λ, ν), and a pointwise infimum of affine functions is concave. So this function has just one maximum.
We call g(λ, ν) the dual function.
Now we have a way to pass from f to a concave function.
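A sketch with a toy problem of my own (not from the note): minimize f(x) = x² subject to 1 − x ≤ 0. The dual function g(λ) = inf_x (x² + λ(1 − x)) = λ − λ²/4 comes out concave, matching the pointwise-infimum argument.

```python
import numpy as np

# Toy primal problem (my own example): minimize f(x) = x^2
# subject to g(x) = 1 - x <= 0, i.e. x >= 1.  Optimal value p* = 1.
xs = np.linspace(-10, 10, 40001)

def dual(lam):
    # g(lam) = inf_x L(x, lam) with L(x, lam) = x^2 + lam * (1 - x),
    # approximated by a minimum over a dense grid of x values.
    return np.min(xs**2 + lam * (1 - xs))

lams = np.linspace(0, 6, 61)
vals = np.array([dual(l) for l in lams])

# Closed form: the infimum is attained at x = lam/2,
# giving g(lam) = lam - lam^2 / 4.
close = bool(np.allclose(vals, lams - lams**2 / 4, atol=1e-4))

# Concavity check at midpoints of the uniform lambda grid.
concave = bool(np.all(vals[1:-1] >= (vals[:-2] + vals[2:]) / 2 - 1e-9))
print(close, concave)
```

Note that the maximum of g(λ) = λ − λ²/4 is 1, attained at λ = 2, which equals the primal optimum p* = 1 for this problem.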
Dual Problem
Suppose p* is the optimal value of the primal problem and d* is the optimal value of the dual problem. Obviously, p* ≥ d*.
Before we go into the details, we introduce several concepts.
The problem we mentioned at the beginning has two kinds of constraints. We define:
feasible region: {x : gᵢ(x) ≤ 0 for all i, hⱼ(x) = 0 for all j}
domain: D = dom f ∩ (∩ᵢ dom gᵢ) ∩ (∩ⱼ dom hⱼ)
Now we rewrite the problem in terms of the Lagrange function:

p* = min_x max_{λ ⪰ 0, ν} L(x, λ, ν)

This form of the problem is still equal to the primal problem.
Look at the dual function g(λ, ν) = inf_x L(x, λ, ν); there are two differences.
First, in the primal problem the variable x belongs to the feasible region, while the dual function takes its infimum over the domain D of the target function.
Second, the dual function does not maximize over the parameters, and it drops the constraint λ ⪰ 0. If we lose this constraint, the dual function no longer gives a lower bound, so to reach a consensus with the original problem, the dual function is taken with λ ⪰ 0.
Obviously, the domain is larger than the feasible region. Therefore the minimum we find in the primal problem is greater than or equal to the infimum appearing in the dual function; and on top of that, the dual side has not yet maximized over the parameters.
p* ≥ d is not yet the answer we want; we have to approach the optimal value of the primal problem. So we maximize the dual function over its parameters and obtain:

d* = max_{λ ⪰ 0, ν} g(λ, ν)

From now on, the dual problem has appeared.
Dual Problem:
maximize g(λ, ν) subject to λ ⪰ 0
For example, take a linear program:

minimize cᵀx
subject to Ax ⪯ b

First, construct the Lagrange function:

L(x, λ) = cᵀx + λᵀ(Ax − b) = −bᵀλ + (Aᵀλ + c)ᵀx

Second, the dual function: g(λ) = inf_x L(x, λ).
If any element of Aᵀλ + c is greater than zero, send the corresponding component of x to negative infinity, and the infimum reaches negative infinity.
If any element of Aᵀλ + c is less than zero, send the corresponding component of x to positive infinity, and the infimum still reaches negative infinity.
Therefore Aᵀλ + c must be equal to zero; otherwise a bound of negative infinity is useless.
According to the reasoning above, we can write down the concrete form of the dual problem:

maximize −bᵀλ
subject to Aᵀλ + c = 0, λ ⪰ 0

Now we have the dual problem.
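A scalar instance of this LP (the numbers are my own): minimize cᵀx with c = 1 subject to −x ≤ −1, i.e. x ≥ 1. The dual constraint Aᵀλ + c = 0 pins λ down, and the two optimal values come out equal.

```python
import numpy as np

# Primal LP (my own tiny instance): minimize c*x subject to a*x <= b
# with c = 1, a = -1, b = -1, i.e. minimize x subject to x >= 1.
c, a, b = 1.0, -1.0, -1.0
p_star = 1.0  # the primal optimum is attained at x = 1

# Dual: maximize -b*lam subject to a*lam + c = 0 and lam >= 0.
lam = -c / a          # the equality constraint forces lam = 1
assert lam >= 0       # dual feasibility holds for this instance
d_star = -b * lam

# Weak duality p* >= d* holds, and here with equality.
print(p_star, d_star, bool(np.isclose(p_star, d_star)))
```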
KKT
p* ≥ d*, so we can tackle the dual problem to approach the primal problem. But that is not all we want: we want p* itself, not d*. So under what circumstances is p* equal to d*?
(Why don't we solve the primal problem directly? Because we are not sure whether f(x) is convex; in contrast, the dual function is always concave, which is an advantage for us.)
Suppose d* is equal to p*, and x* is the optimal point at which we attain p*.
Primal problem: x* is feasible, so gᵢ(x*) ≤ 0 and hⱼ(x*) = 0.
Apparently, that just means the constraints are satisfied. So we get the chain

f(x*) = d* = g(λ*, ν*) = inf_x L(x, λ*, ν*) ≤ L(x*, λ*, ν*) = f(x*) + Σᵢ λᵢ* gᵢ(x*) + Σⱼ νⱼ* hⱼ(x*) ≤ f(x*)

which forces every inequality in it to hold with equality.
Now leave the primal problem alone and focus on the dual side.
Notice that x* minimizes L(x, λ*, ν*) over x (the first inequality above is tight), so we can use the gradient to characterize it.
Then we get the first condition of KKT:

∇f(x*) + Σᵢ λᵢ* ∇gᵢ(x*) + Σⱼ νⱼ* ∇hⱼ(x*) = 0

We call this stationarity.
That is not enough.
The second inequality being tight gives Σᵢ λᵢ* gᵢ(x*) = 0, and since each term is ≤ 0, we get λᵢ* gᵢ(x*) = 0 for every i; this is complementary slackness.
Adding primal feasibility (gᵢ(x*) ≤ 0, hⱼ(x*) = 0) and dual feasibility (λᵢ* ≥ 0), we get the whole set of KKT conditions.
From now on, we have derived KKT. However, in general it is only a necessary condition.
When the problem is convex, the KKT conditions are both sufficient and necessary.
Proof:
Suppose the problem is convex (f and the gᵢ convex, the hⱼ affine) and (x̃, λ̃, ν̃) satisfies KKT; we prove x̃ is optimal.
Since λ̃ ⪰ 0, L(x, λ̃, ν̃) is convex in x, and stationarity says its gradient vanishes at x̃, so x̃ minimizes it. Hence g(λ̃, ν̃) = L(x̃, λ̃, ν̃) = f(x̃), using complementary slackness and primal feasibility. Then f(x̃) = g(λ̃, ν̃) ≤ d* ≤ p* ≤ f(x̃), so f(x̃) = p*. Thus when the problem is convex, the KKT conditions are sufficient as well as necessary.
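Returning to the toy problem minimize x² subject to 1 − x ≤ 0 (my own example), the four KKT conditions are easy to check numerically at x* = 1, λ* = 2:

```python
# KKT check for the toy problem (my own example):
# minimize f(x) = x^2 subject to g(x) = 1 - x <= 0.
x_star, lam_star = 1.0, 2.0

grad_f = 2 * x_star   # f'(x) = 2x
grad_g = -1.0         # g'(x) = -1
g_val = 1 - x_star    # g(x*) = 0: the constraint is active

stationarity = abs(grad_f + lam_star * grad_g) < 1e-12
primal_feasible = g_val <= 0
dual_feasible = lam_star >= 0
comp_slack = abs(lam_star * g_val) < 1e-12

print(stationarity, primal_feasible, dual_feasible, comp_slack)
```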
SVM
The target function is (1/2)‖w‖², which is obviously convex (its Hessian is the identity matrix, which is positive semidefinite). So we can use KKT directly.
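For a minimal hard-margin instance (my own data, not from the note: x₁ = 1 with y₁ = +1 and x₂ = −1 with y₂ = −1), the primal optimum is w* = 1, b* = 0 with p* = 1/2, the dual optimum is α₁ = α₂ = 1/2, and the two optimal values agree:

```python
import numpy as np

# Tiny hard-margin SVM (my own data): x1 = 1, y1 = +1 and x2 = -1, y2 = -1.
# Primal: minimize (1/2) w^2 subject to y_i (w x_i + b) >= 1.
# By symmetry the optimum is w* = 1, b* = 0, so p* = 1/2.
w, b = 1.0, 0.0
p_star = 0.5 * w**2

# Dual variables: alpha1 = alpha2 = 1/2, consistent with
# w = sum_i alpha_i y_i x_i and sum_i alpha_i y_i = 0.
alpha = np.array([0.5, 0.5])
x = np.array([1.0, -1.0])
y = np.array([1.0, -1.0])
w_from_dual = np.sum(alpha * y * x)
d_star = np.sum(alpha) - 0.5 * w_from_dual**2  # dual objective value

# Both margin constraints are active, and p* = d* (strong duality).
margins = y * (w * x + b)
print(bool(np.isclose(p_star, d_star)), bool(np.all(margins >= 1 - 1e-12)))
```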
Slater Condition
For a convex problem:
If there exists a strictly feasible point x (gᵢ(x) < 0 for all i), then p* = d*, and the KKT conditions necessarily hold at the optimum, so we can use KKT.
Weak Slater Condition
For a convex problem, if the inequality constraints are affine (Ax ⪯ b), then mere feasibility already gives p* = d*, and we can still use KKT. SVM satisfies the weak Slater condition, so p* = d* and using KKT is no problem.
Note that the Slater condition works for convex problems; if the primal problem is not convex, we cannot use Slater.
Conclusion
At first we posed a primal problem, which may be neither convex nor concave. So we transformed it into a dual problem via the dual function, and got p* ≥ d*.
Because of the convenience of the dual problem, we want to solve the dual problem rather than the primal problem. When is p* = d*? That is where the KKT and Slater conditions come in.
KKT focuses on the optimal solution; the Slater condition focuses on whether p* equals d*. But to some extent the two are connected.
If a problem satisfies the Slater condition, its optimal solution satisfies the KKT conditions, because KKT is necessary when p* = d*.
If KKT is necessary and sufficient for a problem, then p* = d*, but the problem may still fail the Slater condition, because a strictly feasible point (gᵢ(x) < 0) or affine inequalities may not exist.
So the converse of the Slater condition is not true.