I am trying to assess the effect of my IV (h_score) on my DV (lop_score), but I want to see if sex is an effect modifier of this association. What I have done so far is code that gives me the crude lop_score, as well as the lop_score for each sex. Then I realized the slopes and the coefficients are different between the sexes, so I then assessed formally for an interaction by running:

regress lop_score h_score sex#c.h_score

1. Would I interpret this as a hybrid interaction, since one term is continuous and the other is not?

2. What is the difference between # and ##, if any?

3. Is running a factorial ANOVA technically the same thing as a linear regression, in terms of a p value? Interestingly, the p value for the Beta coefficient of the interaction term in my linear regression is the same as the Prob>F value for the interaction term in my ANOVA.

4. Is it possible to have an interaction term in a regression that ends up not significant, even though you've run univariable regressions, separated by sex, and seen that the B values are different from each other?

Thanks all for any clarification whatsoever; most of this is new to me and I'm trying my best to become as knowledgeable about this as possible.

1. Well, you don't say whether female is coded 0 or 1. Either way, though, your interpretation is not correct. What you can say is that for sex = 0, a unit difference in h_score is associated with a 0.309 difference in the expected value of lop_score, whereas for sex = 1, a unit difference in h_score is associated with a 0.309 - 0.068 = 0.241 difference in the expected value of lop_score. You can get these numbers more directly and more easily by running -margins sex, dydx(h_score)- after the regression.

2. a##b causes Stata to include a, and b, and the interaction term. a#b causes Stata to include only the interaction term between a and b in the model; it does not include each of a and b separately (so you have to write out a and b separately to have a valid model).

3. When they encode the same model, a factorial ANOVA and a linear regression are the same linear model. For a one-degree-of-freedom interaction term, the ANOVA F statistic is the square of the regression t statistic, so the p values are identical; that is why yours match.

4. It depends on how your visual perception of the difference between the coefficients aligns with statistical significance. For most people that alignment is not particularly good, so guessing the statistical significance of the difference by looking at the separate outputs is usually a losing game. Then again, in an interaction model, particularly where one of the variables is continuous, the statistical significance of the interaction term is usually unimportant, and often misleading. What really matters is how different the predicted values of the dependent variable are at values of the continuous variable that are important. So, assuming that the most important values of h_score are, for the sake of discussion, 2 through 5, you would be better off looking at the predicted values of lop_score for each sex at those values of h_score.

As an aside, the difference-in-difference (DID) technique originated in the field of econometrics, but the logic underlying the technique was used as early as the 1850s by John Snow and is called the 'controlled before-and-after study' in some social sciences.
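To make the # versus ## point concrete, here is a minimal sketch. The variable names come from the thread, but the exact model specification is an assumption for illustration, not the poster's original code:

```stata
* a##b is shorthand for both main effects plus the interaction,
* so these two specifications fit the same model:
regress lop_score i.sex c.h_score i.sex#c.h_score
regress lop_score i.sex##c.h_score

* a#b alone includes only the product term and omits the main
* effects of sex and h_score, which is almost never the model you want:
regress lop_score i.sex#c.h_score
```

Note the c. prefix on h_score: without it, Stata would treat h_score as categorical inside the interaction.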
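The margins-based workflow described in the reply can be sketched as follows. The at() values 2 through 5 stand in for whatever values of h_score are substantively important, and the full factorial specification is assumed:

```stata
* Fit the full interaction model
regress lop_score i.sex##c.h_score

* Slope of h_score within each sex
* (0.309 for sex = 0 and 0.241 for sex = 1, in the thread's numbers)
margins sex, dydx(h_score)

* Predicted lop_score for each sex at the important values of h_score
margins sex, at(h_score = (2 3 4 5))

* Discrete sex difference in predicted lop_score at each of those values
margins, dydx(sex) at(h_score = (2 3 4 5))
```

The last two calls operationalize the reply's advice: rather than staring at the interaction term's p value, compare predicted values (and their differences) at the values of the continuous variable that matter.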