Random effects vs. Fixed Effects
Random effects:
Effects in this context refer to group outcome averages estimated by a regression model. In regular regression models (OLS, logistic regression), these are often estimated as coefficients on a series of dummy variables representing group units (e.g., school A, school B, ..., each coded as 0 or 1). These are fixed effects when estimated by a non-HLM model. If I am part of a school whose test score average is 500 and that value is estimated as a fixed effect, it is determined solely by the information obtained from that school (I used the word solely to emphasize the point, but this may not be exactly correct because predictors estimated from the whole dataset help derive coefficients that in turn adjust the average outcome).
HLM does something similar, but after the initial group averages are estimated, they are adjusted by the reliability of the group estimates (this process is called Bayesian shrinkage). The estimates are pulled toward the grand mean: a reliably measured group estimate stays close to its original fixed-effect value, while a less reliable estimate is pulled toward the grand mean. The idea is to blend the grand mean and the group mean so that an unreliable group estimate does not end up far from the mean.
Reliability of the group average is a function of a) the number of subjects in the group and b) the outcome variance. I think the intraclass correlation may be part of the algorithm, but I will check.
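For reference, these are the standard two-level formulas (as in Raudenbush & Bryk's textbook) for the reliability of group j's mean and the resulting shrinkage (empirical Bayes) estimate; the notation here is mine, not from my notes above:

$$\lambda_j = \frac{\tau_{00}}{\tau_{00} + \sigma^2 / n_j}, \qquad \hat{\beta}^{*}_{0j} = \lambda_j \, \bar{y}_j + (1 - \lambda_j)\, \hat{\gamma}_{00}$$

where $\tau_{00}$ is the between-group variance, $\sigma^2$ the within-group (outcome) variance, $n_j$ the group size, $\bar{y}_j$ the observed group mean, and $\hat{\gamma}_{00}$ the grand mean. The intraclass correlation, $\tau_{00}/(\tau_{00}+\sigma^2)$, is built from the same two variance components, which is why it is closely related to the reliability.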
When to use HLM (Hierarchical Linear Modeling)
Not common: If your data are census data (everyone in the population is in the dataset, like the US Census), you do not need to use HLM. You do not even need statistical testing because every estimate you get is a true population value. (Note: a friend wrote me and said he disagrees. He said that even if he had everyone's data, he would still want to do statistical testing to compare males and females when the difference is very small. Again, I disagree.)
Common: If your data are a sample and the data are hierarchically structured, so that errors are dependent (a violation of the independence assumption), you may consider HLM to alleviate the clustering problem. Examples of hierarchically structured data are listed below (a minimal PROC MIXED sketch follows the list):
- Students nested within schools (2-levels)
- Repeated measures nested within subjects who are nested within schools (3-levels)
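For concreteness, here is a minimal PROC MIXED sketch of how these two structures are typically specified as unconditional (or nearly unconditional) models; all dataset and variable names (grades, grades_long, score, school_id, student_id, time) are hypothetical:

proc mixed data=grades;
   class school_id;
   model score = / solution;
   random intercept / subject=school_id;   /* 2 levels: students nested within schools */
run;

proc mixed data=grades_long;
   class school_id student_id;
   model score = time / solution;
   random intercept / subject=school_id;                 /* level 3: schools */
   random intercept / subject=student_id(school_id);     /* level 2: students within schools */
run;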
It is sometimes said that the motivation for using HLM must be whether the group units are a random sample of the population. The argument claims that if, for example, the schools in the sample are a convenience sample, one cannot use HLM. This is not exactly correct. I state the following using RCT (randomized controlled trial) or QED (quasi-experimental design) impact studies as the context.
If the group units are randomly sampled, one can generalize the result of the impact analysis to the whole population. If intervention program A was found effective (or not effective) and the sample was a random sample of the US population, the finding is generalizable. If the impact estimate relied on a convenience sample, one cannot generalize it to the whole population. In an RCT, however, the random-sample vs. convenience-sample difference should not affect the internal validity of the impact estimate.
There is a tricky case. HLM is inappropriate if the group units are, for lack of a better word, distinct groups with an apparent identity and, as a researcher, you are genuinely interested in the pure group estimates. This is the case when the exact group estimates, derived as fixed effects rather than as random effects, are of interest.
For example, if the group units are US states, HLM is most likely inappropriate. State-specific estimates should be interpreted as such and should not be treated as random effects. Imagine the outcome of interest is income level and the state you live in had an average income of 50,000. Just because your state had a small number of survey respondents (so the reliability of the estimate is lower and HLM would pull your average closer to the grand average), you do not want to see your state's average changed to look more like the national average. Another example would be a study of 20 regional hospitals; you should be interested in the fixed-effect estimates of hospital outcomes.
When HLM treats schools as random effects, we are treating the school units somewhat instrumentally (a bit rude thing to do :>) in order to obtain the best value for the intercept (= the overall average of the group-specific effects estimated as random effects). So if you are a school, you may feel HLM is treating you without respect. HLM will not respect your data if your sample size is small and the outcome variance in your school is large. But HLM is respecting you in a different way: your data are unreliable, so let me just adjust them to look more normal, so you won't embarrass yourself.
How to adjust weights so the sum of weights = n of subjects in the sample
This SAS code adjusts weights (e.g., sampling weights) so that the sum of the weights equals the sample size. Weight1 is the original weight and weight2 is the result.
proc sql;
   create table comp2 as
   select *,
          /* rescale: new weight = old weight * (n of nonmissing weights / sum of weights) */
          weight1 * (count(weight1) / sum(weight1)) as weight2
   from comp;
quit;
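A quick check (assuming the comp2 table created above): the sum of weight2 should equal the number of observations with a nonmissing weight1.

proc means data=comp2 n sum;
   var weight1 weight2;   /* the SUM of weight2 should match the N reported for weight1 */
run;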
Matching with or without replacement (Propensity Score Matching)
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3144483/
Quote:
"I now discuss different methods for forming matched pairs of treated and untreated subjects when matching on the propensity score. In doing so, several decisions must be made. First, one must choose between matching without replacement and matching with replacement (Rosenbaum, 2002). When using matching without replacement, once an untreated subject has been selected to be matched to a given treated subject, that untreated subject is no longer available for consideration as a potential match for subsequent treated subjects. As a result, each untreated subject is included in at most one matched set. In contrast, matching with replacement allows a given untreated subject to be included in more than one matched set. When matching with replacement is used, variance estimation must account for the fact that the same untreated subject may be in multiple matched sets (Hill & Reiter, 2006)."
So, this means ...
Matching with replacement: a given comparison student can be matched to more than one treatment student.
Matching without replacement: each comparison student is matched to at most one treatment student.
Also, here is the definition of weights from the R documentation (the MatchIt package):
weights
A vector of length n that provides the weights assigned to each unit in the matching process. Unmatched units have weights equal to 0. Matched treated units have weight 1. Each matched control unit has weight proportional to the number of treatment units to which it was matched, and the sum of the control weights is equal to the number of uniquely matched control units.
Source: http://www.inside-r.org/packages/cran/MatchIt/docs/matchit
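To make the control-unit scaling concrete (my own toy example, not from the documentation): suppose three control units are matched to 1, 2, and 3 treated units, respectively. Their raw weights are proportional to 1, 2, and 3 (sum = 6), and they are rescaled by 3/6, so the final weights are 0.5, 1, and 1.5, which sum to 3, the number of uniquely matched control units.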
Taking outcome assessment errors into consideration
ESPRESSO: taking into account assessment errors on outcome and exposures in power analysis for association studies
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4528636/
QC'ing ID and time variables
When a data file is delivered in CSV or Excel format and is converted into a software-specific data table (e.g., a SAS dataset), it is very important to check that ID and time variables are converted accurately. If data are delivered to you regularly and periodically, check that SAS (or whatever software you use) reads the variables consistently and accurately every time.
The incoming data may look the same, but when SAS reads them, the format definition of some variables may be different.
Check that all digits in numeric IDs are read. If 2142 is read as 214 by mistake, this creates duplicate IDs: 2142 and 2143, for example, will both become 214. This happens because, when SAS reads a CSV or Excel file, it can misjudge the number of digits a column contains. You may need to add a fake row at the top of the file to force SAS to read the column correctly (e.g., enter 999999 so SAS allows ample space for the number-based values).
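As an alternative to the fake-row trick, recent SAS releases let PROC IMPORT scan every row before guessing column types, and a quick duplicate check catches truncated IDs after the fact. This is only a sketch; the file name (roster.csv), dataset name, and ID variable (student_id) are hypothetical:

proc import datafile="roster.csv" out=roster dbms=csv replace;
   guessingrows=max;   /* scan all rows, not just the first few, before assigning types and lengths */
run;

proc sql;
   /* any ID that appears more than once is a red flag for digit truncation */
   select student_id, count(*) as n_rows
   from roster
   group by student_id
   having count(*) > 1;
quit;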
Check that time/calendar variables are read correctly every time. Again, EVERY TIME the data file is delivered, this has to be checked.
This is an example of an error:
06/15/2015 (CSV) IS READ AS 10MAR2002 (SAS).
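A quick check to run after every delivery (the dataset and variable names, delivery and test_date, are hypothetical): look at the earliest and latest values of the date variable, since a 2002 date sitting in a 2015 file jumps out immediately.

proc sql;
   select min(test_date) format=date9. as earliest,
          max(test_date) format=date9. as latest,
          nmiss(test_date) as n_missing
   from delivery;
quit;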
How to edit PROC GLIMMIX syntax into PROC MIXED syntax
I encountered a situation where, somehow, PROC GLIMMIX could not produce results while PROC MIXED could. In this Word document, I show how I edited the PROC GLIMMIX syntax into PROC MIXED syntax. Please ignore my macro %w.
https://drive.google.com/open?id=0B7AoA5fyqX_sVXQ1YzRlSlFlaW8
PROC GLIMMIX error messages
While running PROC GLIMMIX on a large dataset, with categorical variables explicitly treated as CLASS variables AND with a WEIGHT statement, I got the following warning message:
WARNING: Obtaining minimum variance quadratic unbiased estimates as starting values for the covariance parameters failed.
The weight and the use of categorical variables were the cause of the problem; I found that without the weight, the model produces results.
SAS support staff spent time trying to figure out the issue. I sent them a dataset (though I replaced the variables with randomly generated values) and the exact syntax. They determined that (thank you so much for going the extra mile to figure this out!):
"the computation of MIVQUE fails because the default singular value (1e-12) leads to the detection of singularity while sweeping of the mixed model equation" -- May 12, 2016.
They proposed that the singular value be specified as a very small value so that the computation can proceed:
proc glimmix data=example singular=1e-9;
weight psweight;
class race_cate level2 ;
model outcome= race_cate / s dist=normal link=identity ddfm=kr;
random intercept/subject=level2;
run;
****
Before hearing this final response from SAS, I had written the following and said that I chose to use PROC MIXED because it produces results.
The SAS support person suggested that I use a PARMS statement after the RANDOM statement, so that I feed in the initial values for the variance-covariance parameters manually/directly. I did something like this:
PARMS (0.2) (0.4);
Then I get:
“ERROR: Values given in PARMS statement are not feasible.”
Out of curiosity, I used PROC MIXED to run the same model without PARMS but with a weight statement. This time, the model produced results without encountering any errors.
PROC MIXED and PROC GLIMMIX produce identical (or essentially identical) results when the same models are specified.
I *think* my solution for now is to use PROC MIXED for this specific model.
The following two runs produced identical results.
proc mixed data=analysis noclprint ;
where random_number > 0.8;
weight psweight;
class level2 race_cate2 loc_cate;
model zposttest=
<I put variable names here -- including categorical variables, level2 race_cate2 loc_cate>
/ s ddfm=kr ;
random int/subject=level2;
ods output
ParameterEstimates=kaz5
Diffs=DIF_GROUP5
LSMeans=LS5
Tests3=jointtest;
run;
proc glimmix data=analysis noclprint ;
where random_number > 0.8;
weight psweight;
class level2 race_cate2 loc_cate;
model zposttest=
<I put variable names here -- including categorical variables, race_cate loc_cate>
/ s dist=normal link=identity ddfm=kr STDCOEF ;
lsmeans
race_cate2 loc_cate
/ ilink diff
at (
treat
Zpretest
male
LEP
SPECED
DISADV
)=(0 0 0 0 0 0);
random int/subject=level2;
*parms (.03 ) (.45) /* /hold=1,2*/ ;
*ods output
ParameterEstimates=kaz5
Diffs=DIF_GROUP5
LSMeans=LS5
Tests3=jointtest;
run;
SAS for Mixed Models (2nd edition)
I have a copy of the first edition. The second edition is here: