<under construction>
Mediation in Multiple Regression by Rory Allan
Winsteps: Displacement measure
For a Rasch/Winsteps analysis that includes anchored items (item difficulty parameters are fixed at values from a previous calibration rather than estimated fresh from the data), one can check Table 14.1 of the Winsteps output, which provides various diagnostic statistics. The displacement measure reported there indicates how far an anchored item's difficulty sits from the value the current data would estimate, so large displacements flag anchor values that may no longer fit.
http://www.winsteps.com/winman/displacement.htm
What are vertically equated scores?
In the educational evaluation field, we often have access to vertically equated scales (scales here meaning scores, measures, points). Vertically equated scores, in the context of education, are scores that are comparable across grades: you can take a score from a 5th grader and a score from an 8th grader and treat them as measuring the same construct, such as math ability, on the same scale. I can elaborate the concept in a couple of different ways.
- Vertically equated scales allow you to compare students of different grades on a common scale.
- If a 4th grader got a score of 50 and a 9th grader also got a score of 50, they have the same ability level.
- A 10-point difference among 5th graders (e.g., between 50 and 60) and a 10-point difference among 8th graders (e.g., between 60 and 70) are considered equal.
Instead of providing a detailed methodological note, I'd like to use a metaphor to explain why equating is possible across different grades.
<Under construction>
Creating an Early Warning System: Predictors of Dropout in Delaware
I was part of a team conducting an ROC curve analysis using the state of Delaware's education data. We put a lot of detail in this paper so that people can replicate what we did. The appendix has extensive explanations of the statistical models and concepts.
http://www.doe.k12.de.us/cms/lib09/DE01922744/Centricity/Domain/91/MA1275TAFINAL508.pdf
Mediator and Moderator
Mediator: X is related to Y, and when the mediator variable is added to the model, X's effect diminishes.
Moderator: this involves a statistical interaction; e.g., the effectiveness of an intervention depends on a student's demographic characteristics.
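As a rough sketch of how these two ideas translate into SAS (the dataset mydata and the variables y, x, m, and w below are hypothetical placeholders, not from any particular project):
/* Mediation check: compare x's coefficient with and without the mediator m */
proc reg data=mydata;
total:    model y = x;       /* total effect of x on y */
mediated: model y = x m;     /* x's effect should shrink if m mediates */
run;
quit;
/* Moderation: include the interaction of x with the moderator w */
proc glm data=mydata;
class w;                          /* e.g., a demographic group */
model y = x w x*w / solution;     /* a significant x*w term means x's effect depends on w */
run;
quit;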
How to delete datasets in SAS using PROC DATASETS
Deleting all temporary datasets:
proc datasets library = work kill nolist;
quit;
Deleting specific datasets in the temp directory:
proc datasets library = work nolist;
delete kaz1 estes diminfo concon FITSTAT hlm1;
run;
quit;
Automate the choice between HLM and non-HLM
When running PROC GLIMMIX (SAS) in a macro-driven way (e.g., running similar models 100 times), what gets annoying is that some HLM models do not converge, and you have to comb through the output and decide which models to convert to fixed-effect models, which are simpler and converge more easily. The following allows a fixed-effect model (non-HLM) to be executed when a random-effect model (HLM) fails.
The following macro (%checkds) checks whether the first, random-effect model produced one of its result files (here, the fit statistics table passed in the macro call) and, if that dataset does not exist (i.e., the random-effect model did not converge), reruns the model without the random statement.
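/* Step 1: fit the random-effect (HLM) model and save the ODS tables used later to detect convergence */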
proc glimmix data=asdf METHOD=RSPL;
class CAMPUS_14;
model &out = &main
stud_char
interX
&predictors
/dist=binomial link=logit s ddfm=kr STDCOEF;
random int / subject = CAMPUS_14;
covtest /wald;
ods output
ParameterEstimates=kaz1
CovParms=uekawa1
ModelInfo=estes
dimensions=diminfo
ConvergenceStatus=concon
FitStatistics=FITSTAT
;
run;
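/* Flag dataset recording which type of model ended up being used */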
data hlm1;
hlm1="HLM ";
run;
/*Check if converged and if not run fixed model*/
%macro checkds(dsn);
%if %sysfunc(exist(&dsn)) %then %do;
/*&dsn exists: the random-effect model converged, so nothing to do*/
%end;
%else %do;
/*delete incomplete datasets from the previous proc
that did not converge*/
proc datasets;
delete kaz1 estes diminfo concon FITSTAT hlm1;
run;
proc glimmix data=asdf METHOD=RSPL;
class CAMPUS_14;
model &out = &main
stud_char
interX
&predictors
/dist=binomial link=logit s ddfm=kr;
ods output
ParameterEstimates=kaz1
/*CovParms=uekawa1*/
/*nobs=jeana */
ModelInfo=estes
dimensions=diminfo
/*ConvergenceStatus=concon*/
FitStatistics=FITSTAT;
run;
data hlm1;
hlm1="Fixed";
run;
%end;
%mend checkds;
/* Invoke the macro, pass a non-existent data set name to test */
*%checkds(work.concon);
*%checkds(work.uekawa1);
%checkds(work.FITSTAT);
PROC GLIMMIX: How to request a significance test of the group-level variance
covtest /wald;
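In context, this statement goes inside the PROC GLIMMIX step after the RANDOM statement, as in the macro above; a minimal sketch (mydata, school, y, and x are placeholder names):
proc glimmix data=mydata method=rspl;
class school;
model y = x / dist=binomial link=logit s;
random intercept / subject=school;
covtest / wald;   /* Wald test of the school-level (random intercept) variance */
run;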
Calculating Odds Ratios from Logistic Regression Results
One can obtain odds ratios from the results of a logistic regression model. The odds ratios derived this way are adjusted for the other predictors included in the model and summarize the relationship between group membership (e.g., treatment versus control) and a binary outcome. I wrote the following Excel document, which calculates an odds ratio from the logit coefficients for the intercept and a binary predictor of interest (e.g., an impact coefficient or a gender effect).
https://drive.google.com/file/d/0B7AoA5fyqX_sN0RUc0E5aFowb00/view?usp=sharing
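As a minimal sketch of the arithmetic behind that spreadsheet (the coefficient values below are made up for illustration): for a binary predictor, the odds ratio is exp(coefficient), and the intercept gives the reference group's odds.
data odds_ratio;
b0 = -1.20;                     /* intercept: logit for the reference (e.g., control) group */
b1 =  0.45;                     /* logit coefficient for the binary predictor (e.g., treatment) */
odds_ref   = exp(b0);           /* odds of the outcome in the reference group */
odds_focal = exp(b0 + b1);      /* odds of the outcome in the focal group */
or = odds_focal / odds_ref;     /* equals exp(b1) */
put or=;
run;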
The appendix (p. 27) of the following document includes a description of odds ratios.
http://www.doe.k12.de.us/cms/lib09/DE01922744/Centricity/Domain/91/MA1275TAFINAL508.pdf