We have a theory – we know that as interest rates go up, mortgage affordability declines and, with that, home values follow. That’s what we expect: an inverse, or negative, relationship between the dependent variable, home values, and an independent variable, interest rates. It’s not the only factor, but it is one of the factors. We form a hypothesis about this relationship based on what we know and understand, and then we use data and fit a model on that data to see whether the coefficient, or β, associated with the interest rate variable is significant. The hypothesis comes first and then comes the data. Testing that hypothesis and drawing statistical inference from it allows us to answer questions about the real world…from the data. A statistically significant coefficient on a variable does not imply that we proved the theory. Likewise, a statistically insignificant β does not mean we disproved it. The hypothesis is driven by the theory, but the result of the test says nothing about the theory’s validity. All it says is that we have a sample, and based on that sample, our results either support the theory or they do not. Draw another sample from the same population and the results might differ. But that doesn’t say much about the validity of the theory. It’s just a confidence building measure.

So that’s that, but how do we test a hypothesis and draw inferences from a sample of data? The first step, of course, is to state the hypothesis to be tested. This is done before we fit a model and estimate an equation from the data, so we’ll do that now and assume that our theory says this…

Home_Value = β0 + β1·Interest_Rate + β2·Sq_Ft + β3·Lot_Size + β4·School_Score + ε

To test whether the 4 independent variables on the right cause the level of the dependent variable on the left to change, we’ll set up null and alternative hypotheses. A rule of thumb – whatever our theory or intuition says, or whatever we want to prove right, becomes the alternative hypothesis, and the flip side of that forms the null. The null and alternative hypotheses for the **interest rate** variable then are…

Null Hypothesis or H_{O}: β1 ≥ 0

Alternative Hypothesis or H_{A}: β1 < 0

Rejecting the null hypothesis will give statistical credence to our theory that as interest rates rise, home values fall, other factors kept constant. But how do we reject or not reject the null hypothesis? By comparing the p-value for each variable with a significance level of, say, 0.05 or 5%. If the p-value for a given variable is less than 0.05, we reject the null hypothesis; otherwise, we do not reject it. Another metric commonly used is the absolute value of the t-stat for each variable. A t-stat of roughly 2 corresponds to the 5% significance level (for a two-sided test with a reasonably large sample), so a t-stat greater than 2 in absolute value deems a variable statistically significant while a smaller value implies that the coefficient is not. A sample model output (not for the home value prediction just discussed) is shown below.
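As a sketch of that decision rule, here’s a minimal simulation – all numbers are made up for illustration, the OLS fit is done by hand with numpy, and the p-value uses a normal approximation (close to the t-distribution at this sample size):

```python
import math
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Synthetic data (hypothetical numbers): home value falls as the rate rises
rate = rng.uniform(3, 8, n)                       # interest rate, percent
value = 500 - 30 * rate + rng.normal(0, 40, n)    # true slope is -30

X = np.column_stack([np.ones(n), rate])           # intercept column + regressor
beta, *_ = np.linalg.lstsq(X, value, rcond=None)  # OLS estimates

resid = value - X @ beta
dof = n - X.shape[1]
sigma2 = resid @ resid / dof                      # residual variance
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
t = beta / se                                     # t-stats per coefficient

# Two-sided p-value for the rate coefficient via the normal approximation
p = 2 * (1 - 0.5 * (1 + math.erf(abs(t[1]) / math.sqrt(2))))

print(beta[1], t[1], p)   # slope near -30, |t| well above 2, p below 0.05
```

With a slope this strong relative to the noise, the p-value comes out far below 0.05, so the null (β1 ≥ 0) is rejected.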

The estimated coefficients are under the ‘Estimate’ column, the t-stats associated with each of those coefficients under the ‘t value’ column and the p-values under the ‘Pr(>|t|)’ column.

A statistically insignificant variable is not necessarily an unnecessary variable that does not belong in the model. That is a theory question. Does the variable belong in the model, based on what we know about the problem at hand? Do interest rates matter to home values? Of course they do, so the variable turning out to be statistically insignificant does not mean we throw it out. And oftentimes, statistical significance changes with the size of the sample drawn from the population. A variable that’s insignificant in a smaller dataset can become significant as the sample size increases. Hence the power of big data, with the ability to conduct increasingly precise hypothesis tests as ‘small data’ grows into big data.
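To see why sample size matters, note that the standard error of an OLS slope shrinks like 1/√n, so the t-stat on the same-sized effect grows like √n. A quick back-of-the-envelope sketch (every number here is hypothetical):

```python
import math

beta = 0.5     # hypothetical true effect size
sigma = 10.0   # residual standard deviation
sd_x = 1.0     # standard deviation of the regressor

def t_stat(n):
    # Standard error of the OLS slope is roughly sigma / (sqrt(n) * sd_x)
    se = sigma / (math.sqrt(n) * sd_x)
    return beta / se

print(round(t_stat(30), 2))      # about 0.27 -> |t| < 2, insignificant
print(round(t_stat(10_000), 2))  # 5.0 -> comfortably significant
```

The effect is the same in both cases; only the precision of the estimate changes.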

Wrapping this up by setting up the hypotheses for the remaining three variables…the **square feet** variable first…

Null Hypothesis or H_{O}: β2 ≤ 0

Alternative Hypothesis or H_{A}: β2 > 0

And for the **lot size**…

Null Hypothesis or H_{O}: β3 ≤ 0

Alternative Hypothesis or H_{A}: β3 > 0

And the **school scores**…

Null Hypothesis or H_{O}: β4 ≤ 0

Alternative Hypothesis or H_{A}: β4 > 0

So that was for the individual variables, but how do we assess the overall model fit? By conducting a joint hypothesis test, or an F-test, that tests whether all the slope coefficients are jointly zero (more on that at That Venerable F-Test). Rejecting the null hypothesis in this case, i.e., a statistically significant F-stat, indicates a good model fit. The F-stat and the associated p-value are shown on the last line of the output above.
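The F-stat can also be computed directly from R² and the sample and regressor counts. A minimal sketch, using plain R² (the textbook formula) and hypothetical numbers, since the output above isn’t reproduced here:

```python
def f_stat(r2, n, k):
    # F statistic for H0: all k slope coefficients are jointly zero,
    # with k numerator and n - k - 1 denominator degrees of freedom
    return (r2 / k) / ((1 - r2) / (n - k - 1))

# E.g., R^2 = 0.9415 with 100 observations and 4 regressors (hypothetical)
print(f_stat(0.9415, 100, 4))   # a very large F, far beyond any critical value
```

An F-stat this large would be significant at any conventional level.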

But what about R-squared (or, to be more precise, adjusted R-squared)? R-squared, as we know, is also a measure of model fit. So what if the adjusted R-squared is 0.9415, as in the output above, but the F-test turns out to be statistically insignificant? Do we have a good model fit? Nope. The estimated model then does not fit the data. What if the adjusted R-squared were, say, 0.02 and the F-test turned out significant? Then there is a model fit, and the significant F-stat is the determining statistic for evaluating that.
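That last point drops out of the F formula: with a large enough sample, even a tiny R² yields a significant F-stat. A sketch with a hypothetical sample size:

```python
r2, n, k = 0.02, 2000, 4   # tiny R^2, large sample (hypothetical numbers)
f = (r2 / k) / ((1 - r2) / (n - k - 1))

# The 5% critical value of F(4, 1995) is roughly 2.37, so f clears it easily
print(f)   # roughly 10.2
```

The model explains very little of the variance, yet the joint test still rejects the null that all slopes are zero.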

More to follow…

*Image credit – Franco Dal Molin, Flickr*