4-2. Regularized Model - Ridge Code
[Purpose]
- Ridge Regression
- Use a regularized linear model to prevent overfitting
- Tune the hyperparameter lambda not only with a for loop but also derive it with GridSearchCV
- For regularized linear models, scaling of the X variables is essential
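Since scaling is essential before fitting a regularized model, here is a minimal sketch of producing an `X_scal`-style DataFrame with `StandardScaler` (the toy DataFrame is hypothetical; the notes' own `X` comes from elsewhere):

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler

# Toy feature matrix with very different scales (hypothetical data)
X = pd.DataFrame({"x1": [1.0, 2.0, 3.0, 4.0],
                  "x2": [100.0, 200.0, 300.0, 400.0]})

# Scale each feature to mean 0, std 1 so the L2 penalty
# shrinks all coefficients on a comparable footing
scaler = StandardScaler()
X_scal = pd.DataFrame(scaler.fit_transform(X), columns=X.columns)

print(X_scal.mean().round(6).tolist())  # each column centered at 0
```

Without this step, features on large scales get small coefficients and are barely penalized, while features on small scales are over-penalized.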
[Ridge Regression]
- Hyperparameter Tuning using a for loop
- Hyperparameter Tuning using GridSearchCV
[Ridge Regression Parameters]
- Package : https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html
- alpha : L2-norm penalty term
- alpha = 0 : reduces to plain linear regression
- fit_intercept : centering to zero
- drives β0 to 0 (β0 is a constant term)
- max_iter : maximum number of iterations
- The ridge loss function has a closed-form solution, but the solver can still search for the value iteratively
Loss Function : ||y - Xw||^2_2 + alpha * ||w||^2_2
- Closed form -> the solution could be found by differentiating and setting the gradient to zero, but the solver can also reach it with gradient-based updates. In the penalty term, w = β and alpha = γ (the penalty weight).
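The closed-form claim can be checked directly: with `fit_intercept=False`, scikit-learn's `Ridge` should match w = (XᵀX + alpha·I)⁻¹ Xᵀy. A minimal sketch on synthetic data (all data here is made up for illustration):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=50)

alpha = 0.1
# Closed-form ridge solution: w = (X'X + alpha I)^{-1} X'y
w_closed = np.linalg.solve(X.T @ X + alpha * np.eye(3), X.T @ y)

# scikit-learn's solver reaches the same coefficients
model = Ridge(alpha=alpha, fit_intercept=False).fit(X, y)
print(np.allclose(w_closed, model.coef_))  # True
```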
penelty = [0.00001, 0.00005, 0.0001, 0.001, 0.01, 0.1, 0.3, 0.5, 0.6, 0.7, 0.9, 1, 10]
# Using a for loop
# Ridge Regression
# select alpha by checking R2, MSE, RMSE
for a in penelty:
    model = Ridge(alpha=a).fit(X_scal.iloc[train_idx], Y.iloc[train_idx])
    score = model.score(X_scal.iloc[valid_idx], Y.iloc[valid_idx])
    pred_y = model.predict(X_scal.iloc[valid_idx])
    mse = mean_squared_error(Y.iloc[valid_idx], pred_y)
    print("Alpha:{0:.5f}, R2:{1:.7f}, MSE:{2:.7f}, RMSE:{3:.7f}".format(a, score, mse, np.sqrt(mse)))
# alpha = 0.01 gives a high R2 and a low RMSE (error) -> choose this model
model_best = Ridge(alpha=0.01).fit(X_scal.iloc[train_idx], Y.iloc[train_idx])
summary(model_best, X_scal.iloc[valid_idx], Y.iloc[valid_idx], xlabels = X_scal.columns)
# Using RidgeCV (cross-validated grid search over alphas, analogous to GridSearchCV)
ridge_cv=RidgeCV(alphas=penelty, cv=5)
model = ridge_cv.fit(X_scal.iloc[train_idx], Y.iloc[train_idx])
print("Best Alpha:{0:.5f}, R2:{1:.4f}".format(model.alpha_, model.best_score_))
# with the CV-based search, the R2 value actually decreased
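For comparison, the same alpha search can also be run with GridSearchCV itself rather than RidgeCV; a sketch on synthetic data (`make_regression` is only a stand-in for the notes' own `X_scal` / `Y`):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

# Stand-in regression data (the notes use their own train/valid split)
X, y = make_regression(n_samples=100, n_features=5, noise=5.0, random_state=0)

penalty = [0.00001, 0.0001, 0.001, 0.01, 0.1, 1, 10]
grid = GridSearchCV(Ridge(), param_grid={"alpha": penalty},
                    cv=5, scoring="r2")
grid.fit(X, y)

print("Best Alpha:", grid.best_params_["alpha"])
print("CV R2: {0:.4f}".format(grid.best_score_))
```

Unlike RidgeCV, GridSearchCV works with any estimator and any parameter grid, at the cost of refitting the model once per fold and candidate.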
# GridSearchCV Result
model_best = Ridge(alpha=model.alpha_).fit(X_scal.iloc[train_idx], Y.iloc[train_idx])
score = model_best.score(X_scal.iloc[valid_idx], Y.iloc[valid_idx])
pred_y = model_best.predict(X_scal.iloc[valid_idx])
mse = mean_squared_error(Y.iloc[valid_idx], pred_y)
print("Alpha:{0:.5f}, R2:{1:.7f}, MSE:{2:.7f}, RMSE:{3:.7f}".format(model.alpha_, score, mse, np.sqrt(mse)))
summary(model_best, X_scal.iloc[valid_idx], Y.iloc[valid_idx], xlabels=X_scal.columns)