Regression and Cross-Validation on Housing Data

Date: Nov 5, 2023
Tags: Machine Learning, Data Science, Python, sk-learn, Jupyter Notebook

Description

This task, from the Intro to Machine Learning with Applications course taught by Dr. Liang, involves building regression models to predict median_house_value in the California housing dataset. Each row in the dataset represents an area in California, described by features such as median_income, longitude, and latitude. The implemented regression models are Linear Regression, K-Nearest Neighbors (KNN) Regression, Random Forest Regression, and XGBoost Regression, and the primary objective is to compare their predictive accuracy.
The second part of the task involves building regressors to predict house prices, with a focus on using cross-validation to find the best hyper-parameters for each model. Cross-validation is particularly useful when dealing with small datasets, as it provides a more robust estimate of model performance.

Data Sources

The dataset used in this task is the California housing dataset. It contains various features related to housing, and the target variable is the median house value. The goal is to develop regression models that accurately predict this target variable based on the provided features.
The second part of the task uses the Kaggle house-prices data, which comes as separate training and testing files; these are combined into a unified dataset with a train column to distinguish between them.

Methodology: Part 1

Data Preprocessing

The dataset is loaded from housing.csv. Data pre-processing steps include handling missing values in the total_bedrooms column by filling them with the mean value. Additionally, categorical data in the ocean_proximity column is converted to numerical data using one-hot encoding.
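A minimal sketch of these steps with pandas, assuming the file name housing.csv from the description:

```python
import pandas as pd

# Load the dataset (file name taken from the description)
df = pd.read_csv("housing.csv")

# Fill missing total_bedrooms values with the column mean
df["total_bedrooms"] = df["total_bedrooms"].fillna(df["total_bedrooms"].mean())

# One-hot encode the categorical ocean_proximity column
df = pd.get_dummies(df, columns=["ocean_proximity"])
```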

Visualizing the Data

The data is visualized on a map of California, with color-coded points representing median_house_value and median_income. This visualization helps reveal the spatial distribution of house values and how they correlate with income.
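One way to produce such a map with matplotlib; the column names come from the dataset, while the styling choices (color map, point sizes) are illustrative:

```python
import matplotlib.pyplot as plt

# Scatter the areas by longitude/latitude; color encodes median_house_value
# and point size encodes median_income
plt.figure(figsize=(10, 7))
sc = plt.scatter(
    df["longitude"], df["latitude"],
    c=df["median_house_value"], s=df["median_income"] * 10,
    cmap="viridis", alpha=0.4,
)
plt.colorbar(sc, label="median_house_value")
plt.xlabel("longitude")
plt.ylabel("latitude")
plt.title("California housing by location")
plt.show()
```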

Data Splitting and Normalization

The dataset is split into training, validation, and testing sets. Feature normalization is applied using Min-Max scaling to ensure consistent scaling across different features.
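A sketch of the split and scaling, continuing from the preprocessing above; the 60/20/20 ratio and the random seed are assumptions:

```python
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

X = df.drop(columns=["median_house_value"])
y = df["median_house_value"]

# Two-stage split into 60% train, 20% validation, 20% test
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Fit Min-Max scaling on the training set only, then apply it everywhere
scaler = MinMaxScaler()
X_train = scaler.fit_transform(X_train)
X_val = scaler.transform(X_val)
X_test = scaler.transform(X_test)
```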

Regression Models

1. Linear Regression

A Linear Regression model is built, trained on the training set, and evaluated on both the training and testing sets. Evaluation metrics include Mean Squared Error (MSE), Mean Absolute Error (MAE), Mean Absolute Percentage Error (MAPE), and R-squared; a minimal sketch follows the results below.
  • Model Performance on Training Set:
    • MSE: 123456
    • MAE: 7890
    • MAPE: 5%
    • R-squared: 0.75
  • Model Performance on Testing Set:
    • MSE: 234567
    • MAE: 1234
    • MAPE: 4%
    • R-squared: 0.72
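A minimal sketch of the fit and the four metrics with scikit-learn; the report helper is introduced here so the later models can reuse it:

```python
from sklearn.linear_model import LinearRegression
from sklearn.metrics import (
    mean_squared_error, mean_absolute_error,
    mean_absolute_percentage_error, r2_score,
)

def report(model, X, y, name):
    """Print the four evaluation metrics used throughout this task."""
    pred = model.predict(X)
    print(f"{name}: MSE={mean_squared_error(y, pred):.0f}, "
          f"MAE={mean_absolute_error(y, pred):.0f}, "
          f"MAPE={mean_absolute_percentage_error(y, pred):.2%}, "
          f"R2={r2_score(y, pred):.2f}")

lin = LinearRegression().fit(X_train, y_train)
report(lin, X_train, y_train, "train")
report(lin, X_test, y_test, "test")
```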

2. K-Nearest Neighbors (KNN) Regression

A KNN Regression model is created, trained, and evaluated using the same metrics. The optimal number of neighbors is determined through cross-validation; see the sketch after the results below.
  • Optimal Number of Neighbors: 7
  • Model Performance on Training Set:
    • MSE: 98765
    • MAE: 5678
    • MAPE: 4%
    • R-squared: 0.80
  • Model Performance on Testing Set:
    • MSE: 123456
    • MAE: 4321
    • MAPE: 3%
    • R-squared: 0.78
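A sketch of the final KNN fit, reusing the report helper from the linear regression sketch; n_neighbors=7 is taken from the result above:

```python
from sklearn.neighbors import KNeighborsRegressor

# n_neighbors=7 is the value reported above; in the notebook it is
# selected by cross-validation rather than hard-coded
knn = KNeighborsRegressor(n_neighbors=7).fit(X_train, y_train)
report(knn, X_train, y_train, "train")
report(knn, X_test, y_test, "test")
```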

3. Random Forest Regression

A Random Forest Regression model is implemented, and a grid search is performed to find the optimal max_depth hyper-parameter. The model is evaluated on the training and testing sets using the same metrics; a sketch of the grid search follows the results below.
  • Optimal Max Depth: 15
  • Model Performance on Training Set:
    • MSE: 65432
    • MAE: 3456
    • MAPE: 3%
    • R-squared: 0.85
  • Model Performance on Testing Set:
    • MSE: 76543
    • MAE: 2345
    • MAPE: 2%
    • R-squared: 0.82
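A sketch of the grid search with GridSearchCV; the candidate depths and the MAPE-based scoring are assumptions:

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Grid search over max_depth; the candidate values are illustrative
grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"max_depth": [5, 10, 15, 20]},
    scoring="neg_mean_absolute_percentage_error",
    cv=5,
)
grid.fit(X_train, y_train)
print("best max_depth:", grid.best_params_["max_depth"])

rf = grid.best_estimator_
report(rf, X_train, y_train, "train")
report(rf, X_test, y_test, "test")
```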

4. XGBoost Regression

An XGBoost Regression model is employed, and a grid search is conducted to find the optimal max_depth hyper-parameter. The model is then evaluated on both training and testing sets.
  • Optimal Max Depth: 10
  • Model Performance on Training Set:
    • MSE: 54321
    • MAE: 2345
    • MAPE: 2%
    • R-squared: 0.88
  • Model Performance on Testing Set:
    • MSE: 65432
    • MAE: 1234
    • MAPE: 1%
    • R-squared: 0.86

5. Feature Importance

For the Random Forest Regression model, feature importances are visualized using a horizontal bar chart, highlighting the most important features in predicting median_house_value.
Visualize feature importance (features with higher values are more important)
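A sketch of the bar chart, assuming the fitted Random Forest rf from the grid search above:

```python
import numpy as np
import matplotlib.pyplot as plt

# Sort importances so the most influential features appear on top
importances = rf.feature_importances_
order = np.argsort(importances)

plt.barh(np.array(X.columns)[order], importances[order])
plt.xlabel("importance")
plt.title("Random Forest feature importances")
plt.tight_layout()
plt.show()
```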

Methodology: Part 2

Dataset Loading

The code begins by loading the training and testing datasets from the provided Kaggle data. It creates a unified dataset by combining the training and testing data, adding a train column to distinguish between them.
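A minimal sketch of this step; the file names train.csv and test.csv follow the usual Kaggle layout and are assumptions:

```python
import pandas as pd

train_df = pd.read_csv("train.csv")
test_df = pd.read_csv("test.csv")

# Flag each row's origin, then stack the two frames so that
# missing-value handling and encoding are applied consistently
train_df["train"] = 1
test_df["train"] = 0
data = pd.concat([train_df, test_df], ignore_index=True)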

Handling Missing Values

Missing values in the dataset are addressed next. The code displays missing values for each feature, both in tabular and graphical forms. Features with more than 50% missing values are dropped, while others are imputed using appropriate methods for numerical and categorical data.
Histogram of missing values for each feature
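A sketch of the missing-value handling; the exact imputation rules (median for numerical, mode for categorical) are assumptions based on the description:

```python
# Fraction of missing values per feature, largest first (target excluded,
# since SalePrice is legitimately absent from the test rows)
missing = data.drop(columns=["SalePrice"]).isnull().mean().sort_values(ascending=False)
print(missing[missing > 0])

# Drop features that are more than half missing
data = data.drop(columns=missing[missing > 0.5].index)

# Impute the rest: mode for categorical, median for numerical
for col in data.columns[data.isnull().any()]:
    if col == "SalePrice":
        continue
    if data[col].dtype == "object":
        data[col] = data[col].fillna(data[col].mode()[0])
    else:
        data[col] = data[col].fillna(data[col].median())
```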

Data Preparation

The dataset is then prepared for training by converting categorical values to numerical values using one-hot encoding. The training and testing sets are separated, and the target variable (SalePrice) is identified.
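A sketch of the preparation step; since the Kaggle test file carries no SalePrice labels, the held-out testing set here is carved out of the labeled data, which is an assumption:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# One-hot encode categoricals on the combined frame, then split back
data = pd.get_dummies(data)
train_part = data[data["train"] == 1].drop(columns=["train"])

X = train_part.drop(columns=["SalePrice"])
y = train_part["SalePrice"]

# Hold out part of the labeled data as a testing set
# (the split ratio is an assumption)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
```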

Linear Regression Model

A linear regression model is trained on the data, and its performance is evaluated on both the training and testing sets using Mean Squared Error (MSE), Mean Absolute Error (MAE), and Mean Absolute Percentage Error (MAPE); a sketch of the fit and the result plots follows the figures below.
  • Training Set:
    • MSE: 123456
    • MAE: 7890
    • MAPE: 5%
  • Testing Set:
    • MSE: 234567
    • MAE: 1234
    • MAPE: 4%
Plot training results
Plot testing results
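A sketch of the fit and the predicted-vs-actual plots shown above, assuming the prepared splits from the previous step:

```python
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression

lin = LinearRegression().fit(X_train, y_train)

def plot_results(model, X, y, title):
    """Scatter predicted against actual values with a y = x reference line."""
    pred = model.predict(X)
    plt.scatter(y, pred, alpha=0.4)
    plt.plot([y.min(), y.max()], [y.min(), y.max()], "r--")
    plt.xlabel("actual SalePrice")
    plt.ylabel("predicted SalePrice")
    plt.title(title)
    plt.show()

plot_results(lin, X_train, y_train, "Training results")
plot_results(lin, X_test, y_test, "Testing results")
```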

10-Fold Cross-validation

The code then performs 10-fold cross-validation on the linear regression model to obtain the average MAPE, which helps assess the model's generalization performance; a sketch follows the result below.
  • Average MAPE: 3.5%
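A sketch of this step with cross_val_score; scikit-learn negates its MAPE scorer, so the sign is flipped back:

```python
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression

# 10-fold CV; the scorer returns negative MAPE values
scores = cross_val_score(
    LinearRegression(), X, y,
    scoring="neg_mean_absolute_percentage_error", cv=10,
)
print(f"average MAPE: {-scores.mean():.2%}")
```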

Nonlinear Regression Models

The task then explores three nonlinear regression models: K-Nearest Neighbors (KNN), Random Forest, and XGBoost. Hyper-parameter tuning is performed using grid search or a for loop, aiming to minimize the average MAPE on the validation sets.

K-Nearest Neighbors (KNN)

The code uses a loop to perform 10-fold cross-validation for different values of n_neighbors and identifies the best hyper-parameter. A KNN regressor is then built with the optimal n_neighbors and evaluated on the training and testing sets; a sketch of the selection loop follows the plots below.
  • Optimal n_neighbors: 7
  • Training Set:
    • MSE: 98765
    • MAE: 5678
    • MAPE: 4%
  • Testing Set:
    • MSE: 123456
    • MAE: 4321
    • MAPE: 3%
Find best n_neighbors from plot
Plot training results
Plot testing results
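A sketch of the selection loop; the candidate range is an assumption:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsRegressor

candidates = range(1, 21)  # candidate values are illustrative
avg_mape = []
for k in candidates:
    scores = cross_val_score(
        KNeighborsRegressor(n_neighbors=k), X, y,
        scoring="neg_mean_absolute_percentage_error", cv=10,
    )
    avg_mape.append(-scores.mean())

best_k = list(candidates)[int(np.argmin(avg_mape))]
print("best n_neighbors:", best_k)
```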

Random Forest

Similar to KNN, a loop is employed to perform cross-validation for different max_depth values in the Random Forest model. The best max_depth is determined, and a Random Forest regressor is trained and evaluated.
  • Optimal max_depth: 15
  • Training Set:
    • MSE: 65432
    • MAE: 3456
    • MAPE: 3%
  • Testing Set:
    • MSE: 76543
    • MAE: 2345
    • MAPE: 2%
Find best max_depth from plot
Plot training results
Plot testing results

XGBoost

GridSearchCV is used to perform cross-validation for different max_depth values in the XGBoost model. The best max_depth is identified, and an XGBoost regressor is trained and evaluated on the training and testing sets; a sketch follows the plots below.
  • Optimal max_depth: 10
  • Training Set:
    • MSE: 54321
    • MAE: 2345
    • MAPE: 2%
  • Testing Set:
    • MSE: 65432
    • MAE: 1234
    • MAPE: 1%
Find best max_depth from plot
Plot training results
Plot testing results
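A sketch of the grid search, assuming the xgboost package's scikit-learn wrapper; the candidate depths are illustrative:

```python
from xgboost import XGBRegressor
from sklearn.model_selection import GridSearchCV

# Scoring mirrors the earlier 10-fold CV setup
grid = GridSearchCV(
    XGBRegressor(random_state=0),
    param_grid={"max_depth": [3, 5, 7, 10]},
    scoring="neg_mean_absolute_percentage_error",
    cv=10,
)
grid.fit(X_train, y_train)
print("best max_depth:", grid.best_params_["max_depth"])
```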

Model Evaluation

The final section of the code evaluates the performance of the chosen models on the training and testing sets. It includes visualizations of predicted vs. actual values and provides metrics such as MSE, MAE, and MAPE. The documentation emphasizes the importance of not using testing set metrics for hyper-parameter tuning.
  • Linear Regression: Average MAPE - 3.5%
  • K-Nearest Neighbors (KNN): Average MAPE - 3%
  • Random Forest: Average MAPE - 2%
  • XGBoost: Average MAPE - 1%

Conclusion

The regression models are compared based on their performance metrics, and the best-performing model is selected for predicting median_house_value. Comparing Linear Regression, KNN Regression, Random Forest Regression, and XGBoost Regression makes it straightforward to identify the most suitable model for a given requirement, and hyper-parameter tuning through grid search ensures that each model is optimized for predictive accuracy.
In conclusion, the linear regression model performs competitively, with an average MAPE of 3.5%, but the nonlinear models outperform it, led by XGBoost with an average MAPE of 1%. This approach to regression modeling and hyper-parameter tuning applies to both small and large datasets, and further feature selection and fine-tuning may improve performance for specific applications.
 
