Intercept
Regression Coefficient
Omnibus test
$$\hat{Y} = b_0 + b_1X_1 + b_2X_2 + \dots + b_kX_k$$
The intercept is the expected value of Y when all predictors equal 0.
Regression coefficients are the predicted change in Y for a 1-unit change in X, holding all other predictors constant.
The residual in simple regression can be thought of as the part of Y that is left over after accounting for your IV: $e_i = Y_i - \hat{Y}_i$.
library(here)
stress.data = read.csv(here::here("R", "stress.csv"))
library(psych)
describe(stress.data$Stress)
##    vars   n mean   sd median trimmed  mad  min   max range skew kurtosis   se
## X1    1 118 5.18 1.88   5.27    5.17 1.65 0.62 10.32  9.71 0.08     0.22 0.17
mr.model <- lm(Stress ~ Support + Anxiety, data = stress.data)
summary(mr.model)
## 
## Call:
## lm(formula = Stress ~ Support + Anxiety, data = stress.data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -4.1958 -0.8994 -0.1370  0.9990  3.6995 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -0.31587    0.85596  -0.369 0.712792    
## Support      0.40618    0.05115   7.941 1.49e-12 ***
## Anxiety      0.25609    0.06740   3.799 0.000234 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.519 on 115 degrees of freedom
## Multiple R-squared:  0.3556, Adjusted R-squared:  0.3444 
## F-statistic: 31.73 on 2 and 115 DF,  p-value: 1.062e-11
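For example, holding Anxiety constant, each 1-unit increase in Support predicts a 0.41-unit increase in Stress; the intercept (-0.32) is the predicted Stress score when Support and Anxiety are both 0.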
library(visreg)
visreg2d(mr.model, "Support", "Anxiety", plot.type = "persp")
Same interpretation as before
Adding predictors to your model will increase $R^2$ (or at least never decrease it), regardless of whether the predictor is significantly correlated with Y.
Adjusted (shrunken) $R^2$ takes into account the number of predictors in your model.
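The adjustment applies a penalty based on the sample size n and the number of predictors k: $$R^2_{adj} = 1 - (1 - R^2)\frac{n-1}{n-k-1}$$ As a quick sketch, we can verify this against the model above:

r2 = summary(mr.model)$r.squared        # 0.3556
n = nrow(stress.data)                   # 118
k = 2                                   # Support and Anxiety
1 - (1 - r2) * (n - 1) / (n - k - 1)    # 0.3444, matching the Adjusted R-squared in the output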
One of the benefits of using regression is that it can handle both continuous and categorical predictors and allows for using both in the same model.
Categorical predictors with more than two levels are broken up into several smaller variables. In doing so, we take variables that don't have any inherent numerical value (i.e., nominal and ordinal variables) and ascribe meaningful numbers that allow us to calculate meaningful statistics.
You can choose just about any numbers to represent your categorical variable. However, there are several commonly used methods that result in very useful statistics.
In dummy coding, one group is selected to be a reference group. From your single nominal variable with K levels, K−1 dummy code variables are created; for each new dummy code variable, one of the non-reference groups is assigned 1; all other groups are assigned 0.
Occupation | D1 | D2 |
---|---|---|
Engineer | 0 | 0 |
Teacher | 1 | 0 |
Doctor | 0 | 1 |
The dummy codes are entered as IVs in the regression equation.
Person | Occupation | D1 | D2 |
---|---|---|---|
Billy | Engineer | 0 | 0 |
Susan | Teacher | 1 | 0 |
Michael | Teacher | 1 | 0 |
Molly | Engineer | 0 | 0 |
Katie | Doctor | 0 | 1 |
Solomon’s paradox describes the tendency for people to reason more wisely about other people’s problems compared to their own. One potential explanation for this paradox is that people tend to view other people’s problems from a more psychologically distant perspective, whereas they view their own problems from a psychologically immersed perspective. To test this possibility, researchers asked romantically involved participants to think about a situation in which their partner cheated on them (self condition) or a friend’s partner cheated on their friend (other condition). Participants were also instructed to take a first-person perspective (immersed condition) by using pronouns such as I and me, or a third-person perspective (distanced condition) by using pronouns such as he and her.
solomon <- read.csv(here::here("R", "solomon.csv"))
Grossmann, I., & Kross, E. (2014). Exploring Solomon’s paradox: Self-distancing eliminates self-other asymmetry in wise reasoning about close relationships in younger and older adults. Psychological Science, 25, 1571-1580.
psych::describe(solomon[,c("ID", "CONDITION", "WISDOM")], fast = T)
##           vars   n  mean    sd   min    max  range   se
## ID           1 120 64.46 40.98  1.00 168.00 167.00 3.74
## CONDITION    2 120  2.46  1.12  1.00   4.00   3.00 0.10
## WISDOM       3 115  0.01  0.99 -2.52   1.79   4.31 0.09
library(knitr)
library(kableExtra)
library(tidyverse)
head(solomon) %>%
  select(ID, CONDITION, WISDOM) %>%
  kable() %>%
  kable_styling()
ID | CONDITION | WISDOM |
---|---|---|
1 | 3 | -0.2758939 |
6 | 4 | 0.4294921 |
8 | 4 | -0.0278587 |
9 | 4 | 0.5327150 |
10 | 2 | 0.6229979 |
12 | 2 | -1.9957813 |
solomon = solomon %>%
  mutate(dummy_2 = ifelse(CONDITION == 2, 1, 0),
         dummy_3 = ifelse(CONDITION == 3, 1, 0),
         dummy_4 = ifelse(CONDITION == 4, 1, 0))
solomon %>%
  select(ID, CONDITION, WISDOM, matches("dummy")) %>%
  kable() %>%
  kable_styling()
ID | CONDITION | WISDOM | dummy_2 | dummy_3 | dummy_4 |
---|---|---|---|---|---|
1 | 3 | -0.2758939 | 0 | 1 | 0 |
6 | 4 | 0.4294921 | 0 | 0 | 1 |
8 | 4 | -0.0278587 | 0 | 0 | 1 |
9 | 4 | 0.5327150 | 0 | 0 | 1 |
10 | 2 | 0.6229979 | 1 | 0 | 0 |
12 | 2 | -1.9957813 | 1 | 0 | 0 |
14 | 3 | -1.1514699 | 0 | 1 | 0 |
18 | 2 | -0.6912011 | 1 | 0 | 0 |
21 | 2 | 0.0053117 | 1 | 0 | 0 |
25 | 4 | 0.2863499 | 0 | 0 | 1 |
26 | 4 | -1.8217968 | 0 | 0 | 1 |
30 | 1 | -1.2823302 | 0 | 0 | 0 |
32 | 1 | -2.3358379 | 0 | 0 | 0 |
35 | 4 | 0.2710307 | 0 | 0 | 1 |
50 | 1 | 0.7179373 | 0 | 0 | 0 |
53 | 1 | -2.0595072 | 0 | 0 | 0 |
57 | 4 | -0.2327698 | 0 | 0 | 1 |
58 | 4 | 0.0214245 | 0 | 0 | 1 |
60 | 3 | 0.1112851 | 0 | 1 | 0 |
62 | 1 | -1.7895030 | 0 | 0 | 0 |
65 | 2 | 0.9330889 | 1 | 0 | 0 |
68 | 1 | -0.3152235 | 0 | 0 | 0 |
71 | 4 | 0.7765844 | 0 | 0 | 1 |
76 | 4 | 1.1960573 | 0 | 0 | 1 |
84 | 2 | 0.0248331 | 1 | 0 | 0 |
86 | 3 | 1.2175357 | 0 | 1 | 0 |
88 | 3 | 0.5025819 | 0 | 1 | 0 |
89 | 1 | -0.4693998 | 0 | 0 | 0 |
95 | 4 | 0.4821839 | 0 | 0 | 1 |
99 | 1 | -0.0352657 | 0 | 0 | 0 |
102 | 1 | 1.1155606 | 0 | 0 | 0 |
105 | 2 | 1.4556172 | 1 | 0 | 0 |
117 | 1 | NA | 0 | 0 | 0 |
122 | 2 | 0.4161299 | 1 | 0 | 0 |
143 | 1 | -1.3339417 | 0 | 0 | 0 |
145 | 4 | NA | 0 | 0 | 1 |
152 | 4 | 0.6508028 | 0 | 0 | 1 |
153 | 2 | -1.8543092 | 1 | 0 | 0 |
159 | 2 | -0.8511141 | 1 | 0 | 0 |
168 | 2 | 0.0029835 | 1 | 0 | 0 |
2 | 4 | 0.1340113 | 0 | 0 | 1 |
3 | 4 | -0.8836265 | 0 | 0 | 1 |
4 | 4 | 0.9063644 | 0 | 0 | 1 |
5 | 1 | 1.7905951 | 0 | 0 | 0 |
7 | 1 | -0.9868494 | 0 | 0 | 0 |
11 | 3 | 1.0372247 | 0 | 1 | 0 |
13 | 3 | -2.4860158 | 0 | 1 | 0 |
15 | 2 | 1.1166410 | 1 | 0 | 0 |
16 | 3 | 0.0412327 | 0 | 1 | 0 |
17 | 3 | 0.1183208 | 0 | 1 | 0 |
19 | 2 | -1.2353752 | 1 | 0 | 0 |
20 | 3 | 0.5182724 | 0 | 1 | 0 |
22 | 3 | 0.6202474 | 0 | 1 | 0 |
23 | 3 | -0.6130326 | 0 | 1 | 0 |
24 | 2 | 0.0114708 | 1 | 0 | 0 |
27 | 4 | 0.5735473 | 0 | 0 | 1 |
29 | 1 | -0.9486002 | 0 | 0 | 0 |
31 | 1 | 0.1183208 | 0 | 0 | 0 |
33 | 3 | -0.0208230 | 0 | 1 | 0 |
34 | 3 | 0.9004090 | 0 | 1 | 0 |
36 | 4 | 0.8704434 | 0 | 0 | 1 |
37 | 3 | 0.9556476 | 0 | 1 | 0 |
38 | 2 | 1.0240299 | 1 | 0 | 0 |
39 | 3 | -0.1556817 | 0 | 1 | 0 |
40 | 3 | 0.6229979 | 0 | 1 | 0 |
41 | 2 | -0.8691839 | 1 | 0 | 0 |
42 | 4 | 1.2319783 | 0 | 0 | 1 |
43 | 1 | -1.4556055 | 0 | 0 | 0 |
44 | 4 | 0.9341692 | 0 | 0 | 1 |
45 | 4 | -0.2287715 | 0 | 0 | 1 |
46 | 1 | -0.2903366 | 0 | 0 | 0 |
47 | 2 | 0.7034946 | 1 | 0 | 0 |
48 | 3 | 0.7551061 | 0 | 1 | 0 |
49 | 3 | -0.5291273 | 0 | 1 | 0 |
51 | 1 | 0.7262208 | 0 | 0 | 0 |
52 | 2 | 0.6108835 | 1 | 0 | 0 |
54 | 3 | -0.1134342 | 0 | 1 | 0 |
55 | 3 | 0.4150495 | 0 | 1 | 0 |
56 | 3 | 1.2991128 | 0 | 1 | 0 |
59 | 1 | -2.3324293 | 0 | 0 | 0 |
61 | 3 | -1.1745673 | 0 | 1 | 0 |
63 | 3 | 0.8560007 | 0 | 1 | 0 |
64 | 2 | -0.0486279 | 1 | 0 | 0 |
66 | 2 | 0.9532683 | 1 | 0 | 0 |
67 | 4 | NA | 0 | 0 | 1 |
69 | 2 | 0.8188319 | 1 | 0 | 0 |
70 | 4 | 1.6041250 | 0 | 0 | 1 |
72 | 2 | 0.9870285 | 1 | 0 | 0 |
73 | 4 | 0.1554896 | 0 | 0 | 1 |
74 | 1 | 0.3141548 | 0 | 0 | 0 |
75 | 1 | NA | 0 | 0 | 0 |
77 | 1 | -2.3046244 | 0 | 0 | 0 |
78 | 1 | 0.2277028 | 0 | 0 | 0 |
79 | 4 | 0.0545949 | 0 | 0 | 1 |
80 | 3 | -0.1217177 | 0 | 1 | 0 |
81 | 1 | -0.8641051 | 0 | 0 | 0 |
82 | 3 | 0.3524040 | 0 | 1 | 0 |
83 | 3 | 0.1565700 | 0 | 1 | 0 |
85 | 3 | 0.3430401 | 0 | 1 | 0 |
87 | 2 | 1.1792865 | 1 | 0 | 0 |
90 | 1 | 0.4329007 | 0 | 0 | 0 |
91 | 2 | -0.8083760 | 1 | 0 | 0 |
92 | 1 | 1.1427757 | 0 | 0 | 0 |
93 | 1 | 0.4101745 | 0 | 0 | 0 |
94 | 3 | 0.2387368 | 0 | 1 | 0 |
96 | 1 | -1.3751088 | 0 | 0 | 0 |
97 | 2 | 0.0834802 | 1 | 0 | 0 |
98 | 1 | -0.9282022 | 0 | 0 | 0 |
100 | 4 | 1.6584869 | 0 | 0 | 1 |
101 | 1 | -0.5150559 | 0 | 0 | 0 |
103 | 3 | 0.2421454 | 0 | 1 | 0 |
104 | 4 | -1.2128165 | 0 | 0 | 1 |
106 | 1 | -0.9736546 | 0 | 0 | 0 |
107 | 3 | 0.1843749 | 0 | 1 | 0 |
108 | 1 | -2.5231846 | 0 | 0 | 0 |
134 | 1 | 0.7839913 | 0 | 0 | 0 |
135 | 2 | 0.5787934 | 1 | 0 | 0 |
146 | 3 | 0.4955462 | 0 | 1 | 0 |
149 | 3 | 1.0877557 | 0 | 1 | 0 |
154 | 3 | NA | 0 | 1 | 0 |
mod.1 = lm(WISDOM ~ dummy_2 + dummy_3 + dummy_4, data = solomon)
summary(mod.1)
## 
## Call:
## lm(formula = WISDOM ~ dummy_2 + dummy_3 + dummy_4, data = solomon)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.6809 -0.4209  0.0473  0.6694  2.3499 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  -0.5593     0.1686  -3.317 0.001232 ** 
## dummy_2       0.6814     0.2497   2.729 0.007390 ** 
## dummy_3       0.7541     0.2348   3.211 0.001729 ** 
## dummy_4       0.8938     0.2524   3.541 0.000583 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9389 on 111 degrees of freedom
##   (5 observations deleted due to missingness)
## Multiple R-squared:  0.1262, Adjusted R-squared:  0.1026 
## F-statistic: 5.343 on 3 and 111 DF,  p-value: 0.001783
When working with dummy codes, the intercept can be interpreted as the mean of the reference group.
$$\hat{Y} = b_0 + b_1D_2 + b_2D_3 + b_3D_4$$
$$\hat{Y} = b_0 + b_1(0) + b_2(0) + b_3(0)$$
$$\hat{Y} = b_0$$
$$\hat{Y} = \bar{Y}_{\text{Reference}}$$
What do each of the slope coefficients mean?
From this equation, we can get the mean of every single group.
newdata = data.frame(dummy_2 = c(0,1,0,0),
                     dummy_3 = c(0,0,1,0),
                     dummy_4 = c(0,0,0,1))
predict(mod.1, newdata = newdata, se.fit = T)
## $fit
##          1          2          3          4 
## -0.5593042  0.1220847  0.1948435  0.3344884 
## 
## $se.fit
##         1         2         3         4 
## 0.1686358 0.1841382 0.1634457 0.1877848 
## 
## $df
## [1] 111
## 
## $residual.scale
## [1] 0.9389242
We can confirm these predicted values against the observed group means:
solomon %>%
  mutate_at("CONDITION", ~as.factor(.)) %>%
  group_by(CONDITION) %>%
  drop_na() %>%
  summarize(meanWisdom = mean(WISDOM))
## # A tibble: 4 × 2
##   CONDITION meanWisdom
##   <fct>          <dbl>
## 1 1             -0.559
## 2 2              0.122
## 3 3              0.195
## 4 4              0.334
And the test of each slope coefficient represents the significance test comparing that group to the reference group -- in effect, an independent-samples t-test.
The test of the intercept is the one-sample t-test comparing the reference group's mean to 0.
summary(mod.1)$coef
##               Estimate Std. Error   t value     Pr(>|t|)
## (Intercept) -0.5593042  0.1686358 -3.316641 0.0012319438
## dummy_2      0.6813889  0.2496896  2.728944 0.0073896074
## dummy_3      0.7541477  0.2348458  3.211247 0.0017291997
## dummy_4      0.8937927  0.2523909  3.541303 0.0005832526
What if you wanted to compare groups 2 and 3?
solomon = solomon %>%
  mutate(dummy_1 = ifelse(CONDITION == 1, 1, 0),
         dummy_3 = ifelse(CONDITION == 3, 1, 0),
         dummy_4 = ifelse(CONDITION == 4, 1, 0))
mod.2 = lm(WISDOM ~ dummy_1 + dummy_3 + dummy_4, data = solomon)
summary(mod.2)
## 
## Call:
## lm(formula = WISDOM ~ dummy_1 + dummy_3 + dummy_4, data = solomon)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.6809 -0.4209  0.0473  0.6694  2.3499 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)   
## (Intercept)  0.12208    0.18414   0.663  0.50870   
## dummy_1     -0.68139    0.24969  -2.729  0.00739 **
## dummy_3      0.07276    0.24621   0.296  0.76816   
## dummy_4      0.21240    0.26300   0.808  0.42104   
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9389 on 111 degrees of freedom
##   (5 observations deleted due to missingness)
## Multiple R-squared:  0.1262, Adjusted R-squared:  0.1026 
## F-statistic: 5.343 on 3 and 111 DF,  p-value: 0.001783
solomon = solomon %>%
  mutate_at("CONDITION", ~as.factor(.))
mod.3 = lm(WISDOM ~ CONDITION, data = solomon)
summary(mod.3)
## 
## Call:
## lm(formula = WISDOM ~ CONDITION, data = solomon)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -2.6809 -0.4209  0.0473  0.6694  2.3499 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept)  -0.5593     0.1686  -3.317 0.001232 ** 
## CONDITION2    0.6814     0.2497   2.729 0.007390 ** 
## CONDITION3    0.7541     0.2348   3.211 0.001729 ** 
## CONDITION4    0.8938     0.2524   3.541 0.000583 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 0.9389 on 111 degrees of freedom
##   (5 observations deleted due to missingness)
## Multiple R-squared:  0.1262, Adjusted R-squared:  0.1026 
## F-statistic: 5.343 on 3 and 111 DF,  p-value: 0.001783
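Note that mod.3 reproduces mod.1 exactly: given a factor, R builds the dummy codes for you, treating the first level as the reference group. If you want a different reference group without building dummies by hand, one option (a sketch) is relevel():

mod.2b = lm(WISDOM ~ relevel(CONDITION, ref = "2"), data = solomon)  # reproduces mod.2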
anova(mod.3)
## Analysis of Variance Table
## 
## Response: WISDOM
##            Df Sum Sq Mean Sq F value   Pr(>F)   
## CONDITION   3 14.131  4.7105  5.3432 0.001783 **
## Residuals 111 97.855  0.8816                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
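Note that this omnibus test, F(3, 111) = 5.34, p = .002, is exactly the F-statistic reported at the bottom of summary(mod.3): a regression on dummy codes is the one-way ANOVA.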
When we have two variables, A and B, in a regression model, we are testing whether these variables have additive effects on our outcome, Y. That is, the effect of A on Y is constant over all values of B.
However, we may hypothesize that two variables have joint effects, or interact with each other. In this case, the effect of A on Y changes as a function of B.
Let's use data about stress. We have an outcome (Stress) that we are interested in predicting from trait Anxiety and levels of Social Support. We can ignore the group status for the time being.
library(here)
stress.data = read.csv(here("R/stress.csv"))
library(psych)
describe(stress.data)
##         vars   n   mean     sd median trimmed    mad  min    max  range  skew
## id         1 118 488.65 295.95 462.50  485.76 372.13 2.00 986.00 984.00  0.10
## Anxiety    2 118   7.61   2.49   7.75    7.67   2.26 0.70  14.64  13.94 -0.18
## Stress     3 118   5.18   1.88   5.27    5.17   1.65 0.62  10.32   9.71  0.08
## Support    4 118   8.73   3.28   8.52    8.66   3.16 0.02  17.34  17.32  0.18
## group*     5 118   1.53   0.50   2.00    1.53   0.00 1.00   2.00   1.00 -0.10
##         kurtosis    se
## id         -1.29 27.24
## Anxiety     0.28  0.23
## Stress      0.22  0.17
## Support     0.19  0.30
## group*     -2.01  0.05
i.model1 = lm(Stress ~ Anxiety + Support + Anxiety:Support, data = stress.data)
i.model2 = lm(Stress ~ Anxiety*Support, data = stress.data)
Both methods of specifying the interaction above will work in R. Using the `*` tells R to create both the main effects and the interaction effect. Note, however, that the following code gives you the wrong results:
imodel_bad = lm(Stress ~ Anxiety:Support, data = stress.data)
# This does not create main effects.
# It is VERY WRONG
# Don't do this
i.model1 = lm(Stress ~ Anxiety*Support, data = stress.data)
summary(i.model1)
## 
## Call:
## lm(formula = Stress ~ Anxiety * Support, data = stress.data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.8163 -1.0783  0.0373  0.9200  3.6109 
## 
## Coefficients:
##                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     -2.73966    1.12101  -2.444  0.01606 *  
## Anxiety          0.61561    0.13010   4.732 6.44e-06 ***
## Support          0.66697    0.09547   6.986 2.02e-10 ***
## Anxiety:Support -0.04174    0.01309  -3.188  0.00185 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.462 on 114 degrees of freedom
## Multiple R-squared:  0.4084, Adjusted R-squared:  0.3928 
## F-statistic: 26.23 on 3 and 114 DF,  p-value: 5.645e-13
library(broom)
library(knitr)
kable(tidy(i.model1))
term | estimate | std.error | statistic | p.value |
---|---|---|---|---|
(Intercept) | -2.7396625 | 1.1210052 | -2.443934 | 0.0160605 |
Anxiety | 0.6156122 | 0.1301016 | 4.731780 | 0.0000064 |
Support | 0.6669669 | 0.0954746 | 6.985802 | 0.0000000 |
Anxiety:Support | -0.0417408 | 0.0130933 | -3.187954 | 0.0018497 |
kable(head(augment(i.model1)))
Stress | Anxiety | Support | .fitted | .resid | .hat | .sigma | .cooksd | .std.resid |
---|---|---|---|---|---|---|---|---|
3.19813 | 10.18520 | 6.1602 | 5.020185 | -1.8220554 | 0.0205374 | 1.458248 | 0.0083121 | -1.2592378 |
7.00840 | 5.58873 | 8.9069 | 4.563653 | 2.4447470 | 0.0173247 | 1.450055 | 0.0125411 | 1.6868210 |
6.17400 | 6.58500 | 10.5433 | 5.448214 | 0.7257861 | 0.0131721 | 1.466888 | 0.0008333 | 0.4997215 |
8.69884 | 8.95430 | 11.4605 | 6.133020 | 2.5658202 | 0.0379024 | 1.447732 | 0.0315283 | 1.7891912 |
5.26707 | 7.59910 | 5.5516 | 3.880245 | 1.3868246 | 0.0200085 | 1.462572 | 0.0046863 | 0.9581874 |
5.12485 | 8.15600 | 7.5117 | 4.734061 | 0.3907895 | 0.0100296 | 1.468032 | 0.0001828 | 0.2686407 |
kable(glance(i.model1))
r.squared | adj.r.squared | sigma | statistic | p.value | df | logLik | AIC | BIC | deviance | df.residual | nobs |
---|---|---|---|---|---|---|---|---|---|---|---|
0.4083528 | 0.3927831 | 1.462042 | 26.22746 | 0 | 3 | -210.2205 | 430.441 | 444.2944 | 243.6827 | 114 | 118 |
$$\hat{Y} = b_0 + b_1X + b_2Z + b_3XZ$$
You can interpret the interaction term in the same way you normally interpret a slope coefficient -- this is the effect of the interaction controlling for other variables in the model.
You can also interpret the intercept the same way as before (the expected value of Y when all predictors are 0).
But here, b1 is the effect of X on Y when Z is equal to 0.
Lower-order terms change depending on the values of the higher-order terms. The values of b1 and b2 will change depending on the value of b3.
Higher-order terms are those that represent interactions. b3 is a higher-order term.
Is b0 a higher-order or lower-order term?
Higher-order interaction terms are simply products of the lower-order predictors -- we can create one by hand:
stress.data$AxS = stress.data$Anxiety*stress.data$Support
head(stress.data[,c("Anxiety", "Support", "AxS")])
##    Anxiety Support       AxS
## 1 10.18520  6.1602  62.74287
## 2  5.58873  8.9069  49.77826
## 3  6.58500 10.5433  69.42763
## 4  8.95430 11.4605 102.62076
## 5  7.59910  5.5516  42.18716
## 6  8.15600  7.5117  61.26543
summary(lm(Stress ~ Anxiety + Support + AxS, data = stress.data))
## 
## Call:
## lm(formula = Stress ~ Anxiety + Support + AxS, data = stress.data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.8163 -1.0783  0.0373  0.9200  3.6109 
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) -2.73966    1.12101  -2.444  0.01606 *  
## Anxiety      0.61561    0.13010   4.732 6.44e-06 ***
## Support      0.66697    0.09547   6.986 2.02e-10 ***
## AxS         -0.04174    0.01309  -3.188  0.00185 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.462 on 114 degrees of freedom
## Multiple R-squared:  0.4084, Adjusted R-squared:  0.3928 
## F-statistic: 26.23 on 3 and 114 DF,  p-value: 5.645e-13
summary(lm(Stress ~ Anxiety*Support, data = stress.data))
## 
## Call:
## lm(formula = Stress ~ Anxiety * Support, data = stress.data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.8163 -1.0783  0.0373  0.9200  3.6109 
## 
## Coefficients:
##                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     -2.73966    1.12101  -2.444  0.01606 *  
## Anxiety          0.61561    0.13010   4.732 6.44e-06 ***
## Support          0.66697    0.09547   6.986 2.02e-10 ***
## Anxiety:Support -0.04174    0.01309  -3.188  0.00185 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.462 on 114 degrees of freedom
## Multiple R-squared:  0.4084, Adjusted R-squared:  0.3928 
## F-statistic: 26.23 on 3 and 114 DF,  p-value: 5.645e-13
They're the same!!
The regression line estimated in this model is quite difficult to interpret on its own. A good strategy is to decompose the regression equation into simple slopes, which are determined by calculating the conditional effects at a specific level of the moderating variable.
Simple slope: the regression equation for Y on X at a specific level of Z; the term also refers to just the coefficient for X in that equation.
Conditional effect: a slope coefficient in the full regression model that changes depending on the level of the other variable. These are the lower-order terms associated with a variable, e.g., X has a conditional effect on Y.
Which variable is the "predictor" (X) and which is the "moderator" (Z)?
The conditional nature of these effects is easiest to see by "plugging in" different values for one of your variables. Return to the regression equation estimated in our stress data:

$$\hat{\text{Stress}} = -2.74 + 0.62(\text{Anx}) + 0.67(\text{Sup}) - 0.04(\text{Anx} \times \text{Sup})$$

Set Support to 5:

$$\hat{\text{Stress}} = -2.74 + 0.62(\text{Anx}) + 0.67(5) - 0.04(\text{Anx} \times 5)$$
$$= -2.74 + 0.62(\text{Anx}) + 3.35 - 0.20(\text{Anx})$$
$$= 0.61 + 0.42(\text{Anx})$$

Set Support to 10:

$$\hat{\text{Stress}} = -2.74 + 0.62(\text{Anx}) + 0.67(10) - 0.04(\text{Anx} \times 10)$$
$$= -2.74 + 0.62(\text{Anx}) + 6.70 - 0.40(\text{Anx})$$
$$= 3.96 + 0.22(\text{Anx})$$
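We can also recover these simple slopes directly from the stored coefficients rather than rounding by hand (a sketch; small discrepancies from the equations above are due to rounding):

b = coef(i.model1)
b["Anxiety"] + b["Anxiety:Support"] * 5     # slope of Anxiety when Support = 5  (~0.41)
b["Anxiety"] + b["Anxiety:Support"] * 10    # slope of Anxiety when Support = 10 (~0.20)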
Often we graph the simple slopes as a way to understand the interaction. Interpreting the shape of an interaction can be done using the numbers alone, but it requires a lot of calculation and mental rotation. For those reasons, consider it a requirement that you graph interactions in order to interpret them.
library(sjPlot)
plot_model(i.model1, type = "int")

plot_model(i.model1, type = "int", mdrt.values = "meansd")

plot_model(i.model1, type = "pred", terms = c("Support", "Anxiety[5,10]"))

plot_model(i.model1, type = "pred", terms = c("Support", "Anxiety"), mdrt.values = "meansd")
$$\hat{\text{Stress}} = -2.74 + 0.62(\text{Anx}) + 0.67(\text{Sup}) - 0.04(\text{Anx} \times \text{Sup})$$
We want to know whether anxiety is a significant predictor of stress at different levels of support.
library(reghelper)
simple_slopes(i.model1, levels = list(Support = c(4,6,8,10,12)))
##   Anxiety Support   Test Estimate Std. Error t value  df  Pr(>|t|) Sig.
## 1  sstest       4          0.4486     0.0886  5.0617 114 1.610e-06  ***
## 2  sstest       6          0.3652     0.0733  4.9791 114 2.289e-06  ***
## 3  sstest       8          0.2817     0.0654  4.3095 114 3.488e-05  ***
## 4  sstest      10          0.1982     0.0674  2.9424 114  0.003946   **
## 5  sstest      12          0.1147     0.0786  1.4600 114  0.147036
If you don't list levels, then this function will test simple slopes at the mean and 1 SD above and below the mean.
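For example, the bare call below (a sketch; output not shown) tests the slope of each predictor at the mean and 1 SD above and below the mean of the other:

simple_slopes(i.model1)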
What if you want to compare slopes to each other? How would we test this?
The test of the interaction coefficient is equivalent to the test of the difference in slopes at levels of Z separated by 1 unit.
coef(summary(i.model1))
##                    Estimate Std. Error   t value     Pr(>|t|)
## (Intercept)     -2.73966246 1.12100519 -2.443934 1.606052e-02
## Anxiety          0.61561220 0.13010161  4.731780 6.435373e-06
## Support          0.66696689 0.09547464  6.985802 2.017698e-10
## Anxiety:Support -0.04174076 0.01309328 -3.187954 1.849736e-03
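You can check this against the simple-slopes table above: the slopes at Support = 4 and Support = 6 are separated by 2 units, so they should differ by twice the interaction coefficient:

0.3652 - 0.4486      # -0.0834
2 * -0.04174076      # -0.0835, the same within rounding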
The regression equation built using the raw data is not only difficult to interpret, but often the terms displayed are not relevant to the hypotheses we're interested in.
Centering your variables -- subtracting the mean from all values -- can improve the interpretation of your results.
stress.data = stress.data %>%
  mutate(Anxiety.c = Anxiety - mean(Anxiety),
         Support.c = Support - mean(Support))
head(stress.data[,c("Anxiety", "Anxiety.c", "Support", "Support.c")])
##    Anxiety   Anxiety.c Support  Support.c
## 1 10.18520  2.57086873  6.1602 -2.5697997
## 2  5.58873 -2.02560127  8.9069  0.1769003
## 3  6.58500 -1.02933127 10.5433  1.8133003
## 4  8.95430  1.33996873 11.4605  2.7305003
## 5  7.59910 -0.01523127  5.5516 -3.1783997
## 6  8.15600  0.54166873  7.5117 -1.2182997
DO NOT CENTER YOUR DEPENDENT VARIABLE (Y; STRESS)
summary(lm(Stress ~ Anxiety.c*Support.c, data = stress.data))
## 
## Call:
## lm(formula = Stress ~ Anxiety.c * Support.c, data = stress.data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.8163 -1.0783  0.0373  0.9200  3.6109 
## 
## Coefficients:
##                     Estimate Std. Error t value Pr(>|t|)    
## (Intercept)          4.99580    0.14647  34.108  < 2e-16 ***
## Anxiety.c            0.25122    0.06489   3.872 0.000181 ***
## Support.c            0.34914    0.05238   6.666 9.82e-10 ***
## Anxiety.c:Support.c -0.04174    0.01309  -3.188 0.001850 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.462 on 114 degrees of freedom
## Multiple R-squared:  0.4084, Adjusted R-squared:  0.3928 
## F-statistic: 26.23 on 3 and 114 DF,  p-value: 5.645e-13
summary(i.model1)
## 
## Call:
## lm(formula = Stress ~ Anxiety * Support, data = stress.data)
## 
## Residuals:
##     Min      1Q  Median      3Q     Max 
## -3.8163 -1.0783  0.0373  0.9200  3.6109 
## 
## Coefficients:
##                 Estimate Std. Error t value Pr(>|t|)    
## (Intercept)     -2.73966    1.12101  -2.444  0.01606 *  
## Anxiety          0.61561    0.13010   4.732 6.44e-06 ***
## Support          0.66697    0.09547   6.986 2.02e-10 ***
## Anxiety:Support -0.04174    0.01309  -3.188  0.00185 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Residual standard error: 1.462 on 114 degrees of freedom
## Multiple R-squared:  0.4084, Adjusted R-squared:  0.3928 
## F-statistic: 26.23 on 3 and 114 DF,  p-value: 5.645e-13
What changed? What stayed the same? (Compare the intercept and the lower-order slopes, which are now conditional effects at the sample means, with the interaction coefficient, $R^2$, and residual standard error, which are unchanged.)
So far, we've only discussed the unstandardized regression equation. If you're interested in getting the standardized regression equation, you can follow the same procedure of standardizing your variables first and then entering them into your linear model.
An important note: You must take the product of the Z-scores, not the Z-score of the products, to get the correct regression model.
# Correct: product of the z-scores
Y ~ z(X) + z(Z) + z(X)*z(Z)
Y ~ z(X)*z(Z)

# Incorrect: z-score of the product
Y ~ z(X) + z(Z) + z(X*Z)
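In R, a minimal sketch of the correct approach with our stress data (scale() z-scores a variable; the [,1] drops its matrix attributes):

stress.data = stress.data %>%
  mutate(Anxiety.z = scale(Anxiety)[,1],
         Support.z = scale(Support)[,1])
summary(lm(Stress ~ Anxiety.z*Support.z, data = stress.data))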
Interactions are all over the place, and we can extend these concepts further -- for example, to categorical predictors:
Consider the case where D is a variable representing two groups. In a univariate regression, how do we interpret the coefficient for D?
$$\hat{Y} = b_0 + b_1D$$
b0 is the mean of the reference group, and b1 represents the difference in means between the two groups.
Extending this to the multivariate case, where X is continuous and D is a dummy code representing two groups.
$$\hat{Y} = b_0 + b_1D + b_2X$$
How do we interpret b1?
b1 is the difference in means between the two groups, holding X constant -- that is, if the two groups had the same average level of X.
This, by the way, is ANCOVA.
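As a sketch with the stress data, using the two-level group variable we set aside earlier:

ancova.model = lm(Stress ~ group + Anxiety, data = stress.data)
summary(ancova.model)  # the group coefficient is the group difference adjusted for Anxiety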
Now extend this example to include joint effects, not just additive effects:
$$\hat{Y} = b_0 + b_1D + b_2X + b_3DX$$
How do we interpret b1?
b1 is the difference in means between the two groups when X is 0.
What is the interpretation of b2?
b2 is the slope of X among the reference group.
What is the interpretation of b3?
b3 is the difference in slopes between the reference group and the other group.
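Extending the earlier ANCOVA sketch, adding the interaction gives each group its own slope for Anxiety:

summary(lm(Stress ~ group*Anxiety, data = stress.data))  # group:Anxiety tests the difference in slopes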
Polynomial regression (nonlinear) is most often a form of hierarchical regression that systematically tests a series of higher-order functions of a single variable.
Linear: $$\hat{Y} = b_0 + b_1X$$

Quadratic: $$\hat{Y} = b_0 + b_1X + b_2X^2$$

Cubic: $$\hat{Y} = b_0 + b_1X + b_2X^2 + b_3X^3$$
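A sketch of this hierarchical series in R, using Anxiety as the single predictor (I() protects the powers inside the formula):

linear = lm(Stress ~ Anxiety, data = stress.data)
quadratic = lm(Stress ~ Anxiety + I(Anxiety^2), data = stress.data)
cubic = lm(Stress ~ Anxiety + I(Anxiety^2) + I(Anxiety^3), data = stress.data)
anova(linear, quadratic, cubic)  # does each higher-order term significantly improve fit?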