It has been a long time since I last studied this part.
However, with my exams passed one by one, I can now spend more time on it.
When I read about Gradient Descent, I found it really difficult to understand, so I decided to write down some notes to summarize it.
The first part of Machine Learning concerns the Model and the Cost Function.
The first stage is Model Representation ---- linear regression with one variable, and how to represent the hypothesis h. A pair (x(i), y(i)) is called a training example, and the dataset that we'll be using to learn—a list of m training examples (x(i), y(i)); i = 1, ..., m—is called a training set.
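For linear regression with one variable, the hypothesis takes the form h(x) = theta0 + theta1 * x. Here is a minimal sketch in Python; the function names and the tiny training set are just illustrative, not from the course materials:

```python
def hypothesis(theta0, theta1, x):
    """Univariate linear regression hypothesis: h(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

# A toy training set of m = 3 examples (x(i), y(i)), where y = 2x.
training_set = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# With theta0 = 0 and theta1 = 2, the hypothesis fits these examples exactly.
predictions = [hypothesis(0.0, 2.0, x) for x, _ in training_set]
print(predictions)  # [2.0, 4.0, 6.0]
```

Each prediction h(x(i)) can then be compared against the actual output y(i) to judge how well the chosen parameters fit the data.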
The second stage of this part is the Cost Function. The first key point is how to fit the data: choose theta0 and theta1 so that h(x) is close to y for our training examples (x, y), i.e., minimize the squared difference between the output of the hypothesis and the actual data.
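The standard cost function for this setup is the mean squared error, J(theta0, theta1) = 1/(2m) * sum over i of (h(x(i)) - y(i))^2. A small sketch, with the same illustrative toy data as above:

```python
def hypothesis(theta0, theta1, x):
    """Univariate linear regression hypothesis: h(x) = theta0 + theta1 * x."""
    return theta0 + theta1 * x

def cost(theta0, theta1, training_set):
    """Squared-error cost J(theta0, theta1) = 1/(2m) * sum of (h(x) - y)^2."""
    m = len(training_set)
    return sum((hypothesis(theta0, theta1, x) - y) ** 2
               for x, y in training_set) / (2 * m)

training_set = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

# A perfect fit (theta0 = 0, theta1 = 2) drives the cost to zero;
# worse parameter choices give a strictly larger cost.
print(cost(0.0, 2.0, training_set))  # 0.0
print(cost(0.0, 1.0, training_set) > 0)  # True
```

Minimizing J over theta0 and theta1 is exactly what Gradient Descent, the topic of the later notes, is for.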