Using The Past To Predict The Future

Part 2 of 3, Creating a Regression Model in Python


Using the past to predict the future! Say hello to part 2 of 3 in this series on regression modeling with Python! In blog 1, I covered the important processing steps required prior to creating a linear regression model. In this blog, I build on that foundation by creating the actual regression model and demonstrating how to reuse the model with new data. In blog 3, I will cover how to check the assumptions that apply after model creation.

For a more thorough overview of the project related to this series of blogs see: .

Now let’s get creating!

Refresher of our data: a blend of King County housing data with data scraped from the King County Tax Assessor's Dept. For more details, see blog 1.

1. Creating Our Model

With cleaned, assumption-adhering data in hand, our first step is to separate our target from our features.

1. Our Actual Price — This is our Target (dependent variable)

2. Our Features (independent variables)

Remember that in blog 1 we scaled our data; therefore, we will feed the scaled data to our model. I should also mention that in this blog series I am using statsmodels to create the model. However, another popular regression implementation is available from scikit-learn.

WOW! Just like that, our model is MADE! Now let's view our results.

Adj. R-squared (see red circle): this is your model's overall score, a number between 0 and 1. A score of .8 says that 80% of the variability in price can be explained by the model, which is a good score.

Durbin-Watson (see blue circle): tests for autocorrelation in the residuals. Values from 0 to less than 2 indicate positive autocorrelation, and values from 2 to 4 indicate negative autocorrelation. A value between roughly 1.5 and 2.5 means we are in good shape.

Coefficients (see maroon square): these are the values the model uses to predict your target, in our case price. While the actual values shown are not overly important, the comparisons between them are. Because we scaled our continuous values using min-max scaling, we can compare the coefficient of TotalAppraisalValue_Sc (1.27) to that of sqft_above_sc (.131). TotalAppraisalValue_Sc is nearly ten times larger, which indicates how heavily the model relies on TotalAppraisalValue when making predictions.

A coefficient that dominates the others is powerful, yet it leaves our model very susceptible to bias from that feature. Said differently, if the TotalAppraisalValue entry for an individual row contains an error for any reason (a bad appraisal, a recording error, etc.), the error in our prediction will be large. In a model with a more balanced set of coefficients, an error introduced by any particular feature has a smaller impact on the predicted value, because the numbers used to generate the prediction are spread among a larger number of features.

P-values (see black circle): show the statistical significance of each of our model coefficients. Values less than .05 suggest a coefficient is significant and should remain in the model.

Great! Now we have our model, but where are the predictions?

Seeing Our Predictions and Reviewing our Errors

See the highlighted cells: these columns show our predictions for each home. One thing to note: because we log-transformed our target, the model makes its predictions in log space. Therefore, we need to "reverse" the log transformation to make real sense of the data. We do that by applying np.exp() to the predicted values. See the line of code above, which we used to create the column "price_Predicted". I also added a few additional columns that are nice to have when viewing our residuals: "price_Residuals" and "price_Residuals_abs". The column "price_Residuals_abs" can be used to sort and then examine the rows with the largest errors.

Lastly, to get a sense of all the predicted prices against the actual prices, we can create a scatter plot with actual price on the y-axis and predicted price on the x-axis. Ideally, the points will fall in a fairly straight 45-degree pattern.
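Since the original plot image is not shown here, this is one way to build such a scatter plot with matplotlib, using hypothetical actual and predicted prices; the dashed 45-degree reference line makes deviations easy to spot.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical actual vs. predicted home prices.
rng = np.random.default_rng(0)
actual = rng.uniform(200_000, 900_000, 150)
predicted = actual + rng.normal(0, 40_000, 150)

fig, ax = plt.subplots()
ax.scatter(predicted, actual, alpha=0.5)
lims = [actual.min(), actual.max()]
ax.plot(lims, lims, "r--", label="45-degree line")  # perfect-prediction line
ax.set_xlabel("Predicted price")
ax.set_ylabel("Actual price")
ax.legend()
fig.savefig("actual_vs_predicted.png")
```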

Overall, not bad! However, as indicated by the red circle, our model struggles somewhat with homes that sold for higher prices, as our predicted prices exceed the actual prices. Under normal circumstances we could go back, review those particular rows, and see whether something is odd in the data and/or whether we are missing a feature that would help us better predict the prices of homes that sell for more than $700k.

Now that we have our predictions and residuals, we can graph them and examine our errors against our predictions. Remember, our errors are the differences between the actual prices and the prices predicted by our model. More on visualizing these errors in blog 3; here is a quick look. Additionally, you can sort by price_Residuals_abs to get a sense of the worst rows. See below:

See the highlighted residuals: these are the rows with the largest gap between actual and predicted price, the same points we circled in red above. Wow, our model is off by $410,000 in the worst case. More on that in blog 3.

Checking Remaining “Post-Model” Assumptions

Now that our model has been created, and assuming we are happy with the adjusted R-squared, there are a few more things we need to do prior to declaring victory. Before we can be comfortable using our model in a production setting, we need to check the remaining two assumptions associated with linear regression modeling: homoscedasticity and normality of the error term. Both of these topics are covered in blog 3. For the sake of this blog, we will move past these assumptions, and I will quickly show you how to reuse your model after its initial creation.

2. Using Your Model Again

What if we want to make new predictions? Do we have to add that new house into our original data source as another row and start over from step 1? Thankfully, no. All you need to do is the following:

1. Get access to your original scaler object

2. Get access to your original modeler object

3. Feed your new data into the scaler and modeler object and then run model.predict()

For this example, I created a fictitious row derived by taking the average of each of the features from our original dataset. Essentially this could be considered “The Average Home” in our dataset.

Now, we just need to feed our unscaled data into our scaler and then feed that output, along with our categorical features, into our previously created model. A note of caution: you must feed the scaler the new data in exactly the same column order used when the scaler was originally fit. The same holds when making new predictions with the model object.

See highlighted: this is your newly predicted value. You can use the exact same process for any new row of data: simply take the unscaled values, scale them, and feed both the continuous and categorical values into your model. You can also determine the impact of changing a feature by noting the predicted value prior to making the change and comparing it to the prediction afterwards. This would be the impact that changing a feature has on the value of a home.

Conclusion & Preview of Blog 3

Above, I covered the steps to create a model using statsmodels. I also demonstrated how you can reuse your model with new data. In blog 1, I demonstrated some of the important processing steps required prior to model creation. In blog 3, I will finish my introduction to linear regression modeling by reviewing how to check the remaining two assumptions. I look forward to seeing you soon!


