[Note-1] Supervised and Unsupervised learning

Notes from listening to Andrew Ng's machine learning course on Coursera.


The key point is to distinguish a few types of problems and concepts.

Supervised learning: for a given data set, we already know what the output should look like. It splits into two kinds of problems:

  • regression: the prediction is a continuous value (e.g. a house price)
  • classification: the prediction is a discrete value

Unsupervised learning: for a given data set, we do not know what the output should look like. It also splits into two kinds of problems:

  • clustering: group the data set according to its features
  • non-clustering: extract structure from the data set according to some rule
Supervised Learning

In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a relationship between the input and the output.

Supervised learning problems are categorized into "regression" and "classification" problems. In a regression problem, we are trying to predict results within a continuous output, meaning that we are trying to map input variables to some continuous function. In a classification problem, we are instead trying to predict results in a discrete output. In other words, we are trying to map input variables into discrete categories.

Example 1:

Given data about the size of houses on the real estate market, try to predict their price. Price as a function of size is a continuous output, so this is a regression problem.

We could turn this example into a classification problem by instead making our output about whether the house "sells for more or less than the asking price." Here we are classifying the houses based on price into two discrete categories.
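To make the two framings concrete, here is a minimal sketch (my own illustration, not from the course) that fits the same housing data as a regression and then as a classification, using scikit-learn. The house sizes, sale prices, and asking prices are made-up numbers.

```python
# A minimal sketch: the same housing data framed as regression and as
# classification. All numbers are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Hypothetical training data: house size in square feet -> sale price.
sizes = np.array([[650], [785], [1200], [1480], [1940], [2300]])
prices = np.array([120_000, 150_000, 230_000, 285_000, 390_000, 450_000])

# Regression: predict the price itself (a continuous output).
reg = LinearRegression().fit(sizes, prices)
print("Predicted price for 1,000 sq ft:", reg.predict([[1000]])[0])

# Classification: predict whether the house sells for more than its
# asking price (a discrete output). Asking prices are also made up.
asking = np.array([115_000, 160_000, 225_000, 300_000, 380_000, 460_000])
sold_above_asking = (prices > asking).astype(int)
clf = LogisticRegression().fit(sizes, sold_above_asking)
print("Sells above asking at 1,000 sq ft?", clf.predict([[1000]])[0])
```

Same inputs in both cases; only the type of output (continuous value vs. discrete category) changes the kind of problem.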

Example 2:

(a) Regression - Given a picture of a person, we have to predict their age on the basis of the given picture.

(b) Classification - Given a patient with a tumor, we have to predict whether the tumor is malignant or benign.

Unsupervised Learning

Unsupervised learning allows us to approach problems with little or no idea what our results should look like. We can derive structure from data where we don't necessarily know the effect of the variables.
We can derive this structure by clustering the data based on relationships among the variables in the data.
With unsupervised learning there is no feedback based on the prediction results.

Example:

Clustering: Take a collection of 1,000,000 different genes, and find a way to automatically group these genes into groups that are somehow similar or related by different variables, such as lifespan, location, roles, and so on.
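As a toy illustration of the clustering idea (my own sketch, not the genomics example itself), the snippet below groups a handful of made-up feature vectors with k-means; no labels are provided, and the algorithm discovers the grouping from the features alone.

```python
# A minimal clustering sketch: group items by feature similarity with
# k-means. The feature vectors are invented for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical features per item, e.g. (lifespan score, location score).
features = np.array([
    [1.0, 0.9], [1.1, 1.0], [0.9, 1.2],   # one natural group
    [5.0, 4.8], [5.2, 5.1], [4.9, 5.3],   # another natural group
])

# No labels are given; k-means infers the cluster assignments.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
print("Cluster assignments:", kmeans.labels_)
```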

Non-clustering: The "Cocktail Party Algorithm" allows you to find structure in a chaotic environment (i.e. identifying individual voices and music from a mesh of sounds at a cocktail party).
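One common way to approach the cocktail-party problem is independent component analysis. Below is a small sketch, assuming scikit-learn's FastICA and synthetic signals that I invented: two "voices" are mixed by two "microphones", then unmixed without any labels or feedback.

```python
# Cocktail-party sketch with independent component analysis (FastICA).
# The source signals and the mixing matrix are made up for illustration.
import numpy as np
from sklearn.decomposition import FastICA

t = np.linspace(0, 8, 2000)
voice1 = np.sin(2 * t)              # one source signal
voice2 = np.sign(np.sin(3 * t))     # another source signal
sources = np.c_[voice1, voice2]

mixing = np.array([[1.0, 0.5],      # hypothetical room acoustics
                   [0.4, 1.0]])
microphones = sources @ mixing.T    # what the two microphones record

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(microphones)   # estimated original voices
print("Recovered signal shape:", recovered.shape)  # (2000, 2)
```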
