Multiple Features
1. Gradient Descent
Feature Scaling
We can speed up gradient descent by having each of our input values in roughly the same range. This is because θ descends quickly on small ranges and slowly on large ranges, so it oscillates inefficiently down to the optimum when the features are on very uneven scales.
x := x / s, where s is the range of the feature (max − min)
Mean normalization
x := (x − μ) / s, where μ is the average value of the feature and s is its range (max − min) or its standard deviation
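A minimal sketch of mean normalization with NumPy (the matrix shape, sample values, and helper name are illustrative, not from the original notes):

```python
import numpy as np

def mean_normalize(X):
    # X is an (m, n) matrix: m examples, n features.
    mu = X.mean(axis=0)                  # average value of each feature
    s = X.max(axis=0) - X.min(axis=0)    # range (max - min) of each feature
    return (X - mu) / s, mu, s

# Two features on very different scales: house size (sq. ft.) and bedrooms.
X = np.array([[2104.0, 3.0],
              [1600.0, 3.0],
              [2400.0, 4.0],
              [1416.0, 2.0]])
X_scaled, mu, s = mean_normalize(X)
print(X_scaled)   # both columns now lie in roughly the same small range
```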
Learning Rate
Debugging gradient descent. Make a plot with the number of iterations on the x-axis. Now plot the cost function J(θ) over the number of iterations of gradient descent. If J(θ) ever increases, then you probably need to decrease α.
Automatic convergence test. Declare convergence if J(θ) decreases by less than E in one iteration, where E is some small value such as 10^−3. However, in practice it is difficult to choose this threshold value.
To summarize:
If α is too small: slow convergence.
If α is too large: J(θ) may not decrease on every iteration and thus may not converge.
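A rough sketch of gradient descent that records J(θ) each iteration and applies the automatic convergence test above (the function name, the defaults for α and E, and the assumption that X carries a leading column of ones are all illustrative):

```python
import numpy as np

def gradient_descent(X, y, alpha=0.01, epsilon=1e-3, max_iters=1000):
    # X: (m, n) design matrix with a leading column of ones; y: (m,) targets.
    m, n = X.shape
    theta = np.zeros(n)
    J_history = []
    for _ in range(max_iters):
        error = X @ theta - y
        J = (error @ error) / (2 * m)            # cost J(theta) at the current theta
        # Automatic convergence test: stop once J decreases by less than epsilon.
        # (If J actually increased, alpha is probably too large.)
        if J_history and J_history[-1] - J < epsilon:
            J_history.append(J)
            break
        J_history.append(J)
        theta -= (alpha / m) * (X.T @ error)     # simultaneous update of all theta_j
    return theta, J_history
```

Plotting J_history against the iteration number (e.g. with matplotlib) gives the debugging plot described above: J(θ) should decrease on every iteration if α is small enough.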
Features and Polynomial Regression
Sometimes linear regression may not fit our data well, while polynomial regression, using features such as quadratic, cubic, or square-root terms, can do better.
One important thing to keep in mind: if you choose your features this way, then feature scaling becomes very important.
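A minimal sketch of polynomial features built from a single input, followed by mean normalization (the sample values and variable names are illustrative):

```python
import numpy as np

# From one input x (e.g. house size) build x, x^2 and x^3 as separate features.
x = np.array([1000.0, 1500.0, 2000.0, 2500.0])
X_poly = np.column_stack([x, x**2, x**3])

# x^3 spans a vastly larger range than x, so feature scaling matters here:
mu = X_poly.mean(axis=0)
s = X_poly.max(axis=0) - X_poly.min(axis=0)
X_poly_scaled = (X_poly - mu) / s
print(X_poly_scaled)
```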
2. Normal Equation
The normal equation is another way to minimize J: we take its derivatives with respect to each θj, set them to zero, and solve in closed form, θ = (X'X)^−1 X'y. This allows us to find the optimal θ without iteration.
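A small sketch of solving the normal equation with NumPy (the helper name is illustrative; np.linalg.solve is used rather than forming the inverse explicitly, which is the numerically preferable way to evaluate (X'X)^−1 X'y):

```python
import numpy as np

def normal_equation(X, y):
    # X: (m, n) design matrix with a leading column of ones; y: (m,) targets.
    # Solves (X'X) theta = X'y, i.e. theta = (X'X)^-1 X'y.
    return np.linalg.solve(X.T @ X, X.T @ y)
```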
Compared to gradient descent:
(1) No need to choose α
(2) No need to iterate
(3) O(n^3): need to calculate the inverse of X'X
(4) Slow if n is large
If X'X is noninvertible, there are two common causes:
- Redundant features, where two features are very closely related (i.e. they are linearly dependent).
- Too many features (e.g. m ≤ n). In this case, delete some features or use "regularization" (to be explained in a later lesson).
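A short sketch showing that the pseudoinverse (np.linalg.pinv) still returns a solution when X'X is singular; the sample matrix below is made rank-deficient on purpose by duplicating a column:

```python
import numpy as np

# Third column is exactly 2x the second, so X'X is noninvertible.
X = np.array([[1.0, 2.0,  4.0],
              [1.0, 3.0,  6.0],
              [1.0, 5.0, 10.0]])
y = np.array([5.0, 7.0, 11.0])

theta = np.linalg.pinv(X.T @ X) @ X.T @ y
print(theta)   # a usable (minimum-norm) solution despite the redundant feature
```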