Notes in ARMS (PSY)

Status Last Update Fields
Published 10/23/2024 Information in empirically collected data is captured in a {{c1::likelihood function::concept}}
Published 12/16/2023 In the {{c1::frequentist}} approach: all relevant information for inference is contained in the likelihood function.
Published 12/16/2023 In the {{c1::Bayesian}} approach: in addition to the likelihood function to capture the information in the data, we may also have {{c2::prior}} informatio…
Published 12/16/2023 {{c1::Prior}} knowledge is updated with the information in the data; together they provide the {{c2::posterior}} distribution for µ.
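A minimal sketch of this prior-to-posterior updating for a mean µ, assuming a normal prior, a normal likelihood with known data variance, and hypothetical values throughout:

```python
import numpy as np

# Bayesian updating for a mean mu: normal prior + normal likelihood
# with known data SD (all values below are assumptions for illustration).
prior_mean, prior_sd = 0.0, 2.0          # prior for mu
data = np.array([1.2, 0.8, 1.5, 1.1])    # hypothetical observations
sigma = 1.0                              # known data SD

n = len(data)
# Precision-weighted combination of prior and likelihood information.
prior_prec = 1 / prior_sd**2
data_prec = n / sigma**2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * data.mean()) / post_prec
post_sd = np.sqrt(1 / post_prec)

print(f"posterior for mu: mean={post_mean:.3f}, sd={post_sd:.3f}")
```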
Published 02/21/2024 Posterior mean (or mode)
Published 02/21/2024 Posterior SD
Published 02/21/2024 What is meant by the posterior 95% credible interval?
Published 02/21/2024 What is Posterior Model Probability (PMP)?
Published 12/16/2023 Bayesian probability of a hypothesis being true depends on two criteria: 1. How sensible it is, based on current knowledge ({{c1::the prior}}) 2. How …
Published 12/16/2023 To overcome the assumption of interval/ratio values for {{c2::Multiple Regression Analysis}}, researchers can create {{c1::dummy variables}}.
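As a sketch of the dummy-variable idea, assuming a hypothetical nominal predictor `condition`:

```python
import pandas as pd

# Hypothetical nominal predictor recoded into dummy variables
# so it can enter a multiple regression.
df = pd.DataFrame({"condition": ["control", "drug", "placebo", "drug"]})

# drop_first=True keeps k-1 dummies; the dropped level is the reference group.
dummies = pd.get_dummies(df["condition"], prefix="condition", drop_first=True)
print(dummies)
```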
Published 02/21/2024 When do you use a hierarchical linear regression analysis?
Published 02/21/2024 Method enter
Published 02/21/2024 How is the method stepwise (forward/backward) used?
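The hierarchical (blockwise) approach behind these three method cards can be illustrated with a small sketch: predictors are entered in theory-driven blocks and the \(R^2\) change between nested models is tested. Data and predictor names (`age`, `motivation`) are made up:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
age = rng.normal(size=n)           # block 1: control variable (hypothetical)
motivation = rng.normal(size=n)    # block 2: predictor of interest (hypothetical)
y = 0.4 * age + 0.5 * motivation + rng.normal(size=n)

m1 = sm.OLS(y, sm.add_constant(age)).fit()                                  # block 1
m2 = sm.OLS(y, sm.add_constant(np.column_stack([age, motivation]))).fit()   # block 2

print(f"R-square change = {m2.rsquared - m1.rsquared:.3f}")
print(m2.compare_f_test(m1))  # F-test of the R-square change: (F, p, df diff)
```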
Published 12/16/2023 {{c1::Priors}} are sometimes seen as the bottleneck of the Bayesian approach because you have to specify something, and it can affect the results.
Published 12/16/2023 {{c1::Bayesian}} statistics assumes that we know more than just the frequency of an event in a data set.
Published 12/16/2023 For the Multiple Linear Regression: Assumption: the {{c1::dependent variable}} is a continuous measure (Interval or Ratio). Assumption: the {{c2::indepe…
Published 12/16/2023 If an {{c1::outlier}} value is theoretically possible, you could run the analysis with and without the value and compare the results.
Published 12/16/2023 {{c1::Multicollinearity}} means that the relationship between two or more independent variables is too strong.
Published 12/16/2023 Association between {{c1::predictors}} is not a problem for MLR, but a very large association (r above .8 / .9) is.
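One common way to screen for such overly strong associations is the variance inflation factor (VIF); a sketch with simulated, deliberately collinear predictors (the ~10 cutoff is a common rule of thumb, not from these notes):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = 0.95 * x1 + rng.normal(scale=0.3, size=200)  # deliberately collinear with x1
X = sm.add_constant(pd.DataFrame({"x1": x1, "x2": x2}))

# VIF per column; values above ~10 are a common red flag for multicollinearity.
for i, name in enumerate(X.columns):
    print(name, round(variance_inflation_factor(X.values, i), 2))
```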
Published 12/16/2023 The condition of {{c1::homoscedasticity}} means that the spread of the residuals must be approximately the same across all values for the predicted y.
Published 02/21/2024 Which assumptions (4) need to be checked to perform a multiple regression analysis?
Published 12/16/2023 There are no outliers in x-y space when Cook's distance is {{c1::below 1.}}
Published 12/16/2023 {{c1::Cook's distance}} is a measure of the overall influence of an (outlier) case on the model.
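A minimal sketch of flagging influential cases by Cook's distance, using simulated data with one planted outlier in x-y space:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(size=50)
y = 2 * x + rng.normal(size=50)
x[0], y[0] = 4.0, -8.0        # plant one influential case in x-y space

model = sm.OLS(y, sm.add_constant(x)).fit()
cooks_d = model.get_influence().cooks_distance[0]

# Flag cases whose overall influence on the model exceeds 1.
print(np.where(cooks_d > 1)[0])
```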
Published 12/16/2023 {{c1::R-squared (\(R^2\))}} refers to the proportion of explained variance in the sample.
Published 12/16/2023 The {{c1::adjusted \(R^2\)}} is an estimate of the proportion of explained variance in the population.
Published 12/16/2023 The estimated proportion of explained variance in the {{c1::population}} is always somewhat lower than the proportion of explained variance in the {{c…
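The usual shrinkage formula behind the adjusted \(R^2\), \(1 - (1 - R^2)(n-1)/(n-k-1)\), as a one-liner (the values are made up):

```python
# Adjusted R^2 shrinks R^2 toward what we can expect in the population,
# penalizing for the number of predictors k relative to sample size n.
def adjusted_r2(r2: float, n: int, k: int) -> float:
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(adjusted_r2(r2=0.30, n=50, k=3))  # always a bit lower than R^2
```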
Published 12/16/2023 The {{c1::F-test}} tells us whether the model is a significant fit to the data overall.
Published 12/16/2023 The {{c1::R-square change}} is the change in R-square when adding predictors to a model (or removing predictors from a model).
Published 02/21/2024 Explain what it means when \(BF_{10}\) = 8.
Published 12/16/2023 There can't be an {{c1::interaction}} effect between the covariate and the group factor in an ANCOVA.
Published 02/21/2024 Homogeneity of regression slopes
Published 12/16/2023 With an {{c1::ANCOVA}} we compare the scores of groups on a dependent variable while controlling for another variable: the {{c2::covariate}}.
Published 12/16/2023 With ANCOVA the {{c1::independent}} variable is always a group variable and the {{c1::dependent}} variable is always a quantitative variable.
Published 02/21/2024 Group variable
Published 02/21/2024 Quantitative variable
Published 12/16/2023 Adding {{c1::covariates}} to an ANCOVA model will explain variation in y and therefore increase the {{c2::power}} of statistical tests.
Published 02/21/2024 What is the family-wise or experiment-wise error rate?
Published 12/16/2023 If you want to check for an interaction effect, the first step is to check the {{c1::significance level}}.
Published 12/16/2023 There are two ways to check for an {{c2::interaction effect}}: visual checking and {{c1::simple main effects (JASP)}}.
Published 12/16/2023 An {{c1::informative}} hypothesis can test your research question or expectations directly.
Published 12/16/2023 The {{c1::Bayes factor}} compares the fit of the data under two competing hypotheses.
Published 12/16/2023 For a set of models/hypotheses, the {{c1::prior model probabilities}} add up to 1.
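Putting the last two cards together: with equal prior model probabilities, the posterior model probabilities follow directly from the Bayes factors against a common baseline. A sketch with hypothetical BF values, which also answers the earlier \(BF_{10}\) = 8 card (the data are 8 times more likely under H1 than under H0):

```python
# With equal prior model probabilities, PMPs are the Bayes factors
# against a common baseline, normalized to sum to 1.
bf_against_h0 = {"H0": 1.0, "H1": 8.0}   # hypothetical BFs of each model vs H0

total = sum(bf_against_h0.values())
pmp = {h: bf / total for h, bf in bf_against_h0.items()}
print(pmp)   # {'H0': 0.111..., 'H1': 0.888...}
```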
Published 12/16/2023 In {{c1::moderation}} the relationship between an independent variable (X) and a dependent variable (Y) depends on another independent variable (M): t…
Published 12/16/2023 When there is a significant {{c1::interaction}} effect, we say that the effect of the independent variable (X) on the dependent variable (Y) depends o…
Published 12/16/2023 {{c1::Mediation}} is when an independent variable (X) has an effect on a dependent variable (Y) through another independent variable (M).
Published 12/16/2023 The {{c1::direct effect}} is the effect of the independent variable X, on the dependent variable Y, controlled for the mediator M (denoted c').
Published 10/23/2024 The {{c1::indirect effect}} is the effect of the independent variable X on the dependent variable Y through the mediator M. This effect is obtained by …
Published 12/16/2023 {{c1::Complete mediation}} is when the direct effect of X on Y disappears when the mediator is added to the model (c' = 0).
Published 12/16/2023 In {{c1::partial mediation}}, only part of the direct effect disappears when the mediator is added to the model. In that case, there is both a direct a…
Published 12/16/2023 {{c1::Bootstrapping}} is a technique that can be used to estimate the distribution of a statistic when we don't know the true distribution of the stat…
Published 12/16/2023 When facing a significant {{c1::indirect}} effect and a non-significant {{c1::direct}} effect, there is complete mediation.
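A minimal sketch of bootstrapping the indirect effect a·b in a simple mediation model, using simulated data (2000 resamples and a percentile CI are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)             # a-path: X -> M
y = 0.4 * m + 0.1 * x + rng.normal(size=n)   # b-path (M -> Y) and direct effect c'

def indirect(x, m, y):
    # a*b: slope of M on X, times slope of Y on M controlling for X.
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0][1]
    return a * b

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)              # resample cases with replacement
    boot.append(indirect(x[idx], m[idx], y[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% bootstrap CI: [{lo:.3f}, {hi:.3f}]")
```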
Published 02/21/2024 Repeated Measure Design
Published 02/21/2024 Factorial ANOVA
Published 02/21/2024 Mixed Design
Published 02/21/2024 Within-subjects
Published 02/21/2024 Between-subjects factor
Published 12/16/2023 The Greenhouse-Geisser and Huynh-Feldt corrections correct the degrees of freedom in {{c1::repeated measures analysis}} when sphericity is violated.
Published 12/16/2023 Partial Eta Squared is small ({{c1::.01}}), medium ({{c1::.06}}) or large ({{c1::.14}})
Published 12/16/2023 To test the homogeneity of regression lines, you run a model with an {{c1::interaction term}} and compare it to the model without the {{c1::interactio…
Published 02/21/2024 Post-Hoc Tests
Published 12/16/2023 The assumptions of ANOVA are: {{c1::Observations are independent}}; {{c1::Dependent variable is at least interval scale}}; {{c1::Dependent variable is norm…
Published 02/21/2024 Homoscedasticity / Homogeneity of variance
Published 12/16/2023 Adding control variables to an analysis of variance can improve the {{c1::internal validity}} by eliminating an alternative explanation.
Published 12/16/2023 The assumption of homogeneity of variances applies to a mixed design, but not to {{c1::repeated measures}} analysis.
Published 12/16/2023 Sphericity is a model assumption for {{c1::repeated measures analysis}}.  
Published 12/16/2023 {{c1::Standardized regression}} coefficients should be used to determine which predictor is the most important one.
Published 12/16/2023 The assumptions of {{c1::repeated measures ANOVA}} are the same as the assumptions for a basic ANOVA with one addition: {{c2::sphericity}}. 
Published 02/21/2024 What are the assumptions (5) for Multiple Regression Analysis?
Published 12/16/2023 The assumption for {{c1::sphericity}} is only relevant if there is a within factor with more than 2 levels.
Published 12/16/2023 Partial Eta Squared (\(\eta^{2}_{p}\)) is the proportion of variance explained by a condition relative to the {{c1::effect-plus-error}} variance.
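The corresponding computation from an ANOVA table's sums of squares (the SS values below are made up):

```python
# Partial eta squared: the effect's share of (effect + error) variance,
# not of the total variance (that would be plain eta squared).
def partial_eta_squared(ss_effect: float, ss_error: float) -> float:
    return ss_effect / (ss_effect + ss_error)

# Benchmarks from the notes: .01 small, .06 medium, .14 large.
print(partial_eta_squared(ss_effect=12.0, ss_error=88.0))  # 0.12
```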
Published 12/16/2023 The reported corrected p-value, like the Bonferroni correction, is always {{c1::larger}} than the uncorrected p-value.
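The Bonferroni rule as a one-liner (hypothetical p-value and number of tests):

```python
# Bonferroni correction: multiply each p-value by the number of tests m
# (capped at 1), so the corrected p is always >= the uncorrected p.
def bonferroni(p: float, m: int) -> float:
    return min(1.0, p * m)

print(bonferroni(p=0.02, m=4))  # 0.08
```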
Published 12/16/2023 With the {{c1::least squares method}} we can find a linear regression model which fits the data best.
Published 12/16/2023 The {{c1::\(R^2\)}} determines the proportion of variance of the response variable that is explained by the predictor variable(s).
Published 12/16/2023 If the predictor variable explains nothing in a {{c1::linear regression}} analysis, the \(R^2\) should be close to 0.
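A small sketch tying the last three cards together: fit a least-squares line to simulated data and compute \(R^2\) from the residual and total sums of squares:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=100)
y = 1.5 * x + rng.normal(size=100)

# Least squares line: minimizes the sum of squared residuals.
slope, intercept = np.polyfit(x, y, 1)
y_hat = slope * x + intercept

# R^2: proportion of variance in y explained by the predictor.
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.3f}")
```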
Published 12/16/2023 There are no influential cases (outliers in x-y space) when {{c2::Cook's distance}} is {{c1::below 1.}}
Published 12/16/2023 {{c1::Prior Model Probabilities}} are assigned by considering every hypothesis to be equally likely beforehand.
Published 12/16/2023 The value R indicates the correlation between the {{c1::observed scores}} (Y) and the {{c1::predicted scores}} (Ŷ).
Published 12/16/2023 Normally, \(R^2\)  is used to assess how much variance of the {{c1::dependent variable}} is explained by the model.
Published 12/16/2023 The {{c1::standardized coefficients}}, the Betas, can be used to determine the most important predictor of the dependent variable.
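A sketch of obtaining the Betas by z-scoring predictors and outcome before fitting (simulated predictors on deliberately different scales):

```python
import numpy as np

def standardized_betas(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    # z-score predictors and outcome, then refit: the resulting slopes
    # are the Betas, comparable across predictors on different scales.
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    Xz = np.column_stack([np.ones(len(yz)), Xz])
    return np.linalg.lstsq(Xz, yz, rcond=None)[0][1:]  # drop the intercept

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 2)) * [1, 10]   # two predictors, different scales
y = 0.5 * X[:, 0] + 0.03 * X[:, 1] + rng.normal(size=100)
print(standardized_betas(X, y))
```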