Outline
1. Algorithm idea
2. Concept explanations
3. Sklearn Code
Part 1. Algorithm idea:
Goal: compute the logistic function and make a binary classification against a probability threshold.
Core steps: compute the probability (P) & improve the fit by minimizing log-loss.
Procedure:
step1. f = m1x1 + m2x2 + …… + b -->
step2. log-odds = f (the linear combination itself is the log of the odds P/(1-P), not log(f)) -->
step3. P = sigmoid(log-odds) -->
step4. assign the class according to the threshold -->
step5. minimize log-loss (gradient descent)
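The steps above can be sketched numerically. The weights, bias, and sample below are made-up values purely for illustration:

```python
import numpy as np

# Hypothetical weights (m1, m2), bias b, and one two-feature sample
m = np.array([0.8, -0.4])
b = 0.5
x = np.array([2.0, 1.0])

# step 1/2: the linear combination IS the log-odds
log_odds = np.dot(m, x) + b

# step 3: squash the log-odds into a probability with the sigmoid
p = 1.0 / (1.0 + np.exp(-log_odds))

# step 4: classify against a 0.5 threshold
label = int(p >= 0.5)

# step 5 (one term of it): log-loss for this sample if its true label were y = 1
y = 1
loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
```

Gradient descent would repeat the loss computation over all samples and nudge m and b to shrink the total loss; that part is handled internally by sklearn below.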
Part 2. Concept explanations
- sigmoid function:

[image: sigmoid function formula]
- log-loss Function:

[image: log-loss formula]
How to understand the log-loss function: for a single sample, when y=1 the loss function follows the left curve below; when y=0, it follows the right curve.

[image: log-loss curves for y=1 (left) and y=0 (right)]
The higher the predicted probability of the true label (y=1 or y=0), the smaller the loss, which is exactly what the algorithm requires. Summing the losses over all samples and running gradient descent then yields the lowest overall loss.
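The per-sample behaviour of the two curves can be checked directly; the probabilities below are arbitrary test values:

```python
import numpy as np

def sample_log_loss(y, p):
    """Log-loss for one sample with true label y and predicted P(y=1) = p."""
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# For y = 1 the loss falls as p approaches 1 (left curve);
# for y = 0 the loss falls as p approaches 0 (right curve).
for p in (0.1, 0.5, 0.9):
    print(f"p={p}: loss(y=1)={sample_log_loss(1, p):.3f}, "
          f"loss(y=0)={sample_log_loss(0, p):.3f}")
```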
- Threshold:
1. The threshold is usually set to 0.5.
2. See the first figure below: imagine it as a histogram of the probabilities computed by logistic regression on a large dataset, split by whether the patient actually has cancer.

[image: histogram of predicted probabilities for the cancer and no-cancer groups]
3. When we lower the threshold from 0.5 to 0.4, more cancer-free patients get predicted as having cancer (false positives rise, so precision falls), while more actual cancer cases are detected (fewer are predicted cancer-free, so misses drop and recall rises). This example shows that the threshold is chosen to fit the need: for screening serious diseases, recall matters more than precision.

[image: the same histogram with the threshold lowered to 0.4]
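The precision/recall trade-off can be reproduced on toy numbers. The predicted probabilities and true labels below are invented for illustration, not real screening data:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Made-up predicted probabilities of "has cancer" and the true labels
probs  = np.array([0.15, 0.35, 0.42, 0.48, 0.55, 0.62, 0.44, 0.30, 0.70, 0.46])
y_true = np.array([0,    0,    1,    1,    1,    1,    0,    0,    1,    1])

# Lowering the threshold flags more positives: recall rises, precision falls
for threshold in (0.5, 0.4):
    y_pred = (probs >= threshold).astype(int)
    print(f"threshold={threshold}: "
          f"precision={precision_score(y_true, y_pred):.3f}, "
          f"recall={recall_score(y_true, y_pred):.3f}")
```

On these numbers, moving from 0.5 to 0.4 raises recall (more cancer cases caught) at the cost of precision (one healthy patient misclassified), matching the point above.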
Part 3. Sklearn Code:
import numpy as np
from sklearn.linear_model import LogisticRegression
from exam import hours_studied_scaled, passed_exam, exam_features_scaled_train, exam_features_scaled_test, passed_exam_2_train, passed_exam_2_test, guessed_hours_scaled
# Create and fit logistic regression model here
model = LogisticRegression()
model.fit(hours_studied_scaled, passed_exam)
# Save the model coefficients and intercept here
calculated_coefficients = model.coef_
intercept = model.intercept_
print(calculated_coefficients)
print(intercept)
# Predict the probabilities of passing for next semester's students here
# (predict_proba returns one column per class: [P(fail), P(pass)])
passed_predictions = model.predict_proba(guessed_hours_scaled)
# Create a new model on the training data with two features here
model_2 = LogisticRegression()
model_2.fit(exam_features_scaled_train, passed_exam_2_train)
# Predict whether the students will pass here
passed_predictions_2 = model_2.predict(exam_features_scaled_test)
print(passed_predictions_2)
print(passed_exam_2_test)
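The exam module above is course-provided, so here is a self-contained sketch on synthetic data (invented for illustration) showing how predict relates to predict_proba: predict effectively applies the default 0.5 threshold to the positive-class column, and you can apply your own threshold when recall matters more.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic single-feature data (illustrative only, not the course's exam data)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = (X[:, 0] + rng.normal(scale=0.5, size=100) > 0).astype(int)

clf = LogisticRegression()
clf.fit(X, y)

# predict() matches thresholding the positive-class probability at 0.5
proba_pos = clf.predict_proba(X)[:, 1]
manual = (proba_pos >= 0.5).astype(int)

# For recall-sensitive problems, lower the threshold yourself:
recall_first = (proba_pos >= 0.4).astype(int)
```

Since every sample with probability >= 0.5 also clears 0.4, the lower threshold can only add positive predictions, which is the mechanism behind the recall/precision trade-off in Part 2.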