Which statistical tool can be used to estimate the internal consistency of dichotomous items scored 1 or 0 for right and wrong answers?
Reliability (visit the concept map that shows the various types of reliability)

A test is reliable to the extent that whatever it measures, it measures it consistently. If I were to stand on a scale and the scale read 15 pounds, I might wonder. Suppose I were to step off the scale and stand on it again, and again it read 15 pounds. The scale is producing consistent results. From a research point of view, the scale seems to be reliable because whatever it is measuring, it is measuring it consistently. Whether those consistent results are valid is another question. However, an instrument cannot be valid if it is not reliable.

There are three major categories of reliability for most instruments: test-retest, equivalent form, and internal consistency. Each measures consistency a bit differently, and a given instrument need not meet the requirements of each. Test-retest measures consistency from one time to the next. Equivalent-form measures consistency between two versions of an instrument. Internal-consistency measures consistency within the instrument (consistency among the questions). A fourth category (scorer agreement) is often used with performance and product assessments. Scorer agreement is consistency of rating among the different judges who rate a performance or product.

Generally speaking, the longer a test is, the more reliable it tends to be (up to a point). For research purposes, a minimum reliability of .70 is required for attitude instruments. Some researchers feel that it should be higher. A reliability of .70 indicates 70% consistency in the scores that are produced by the instrument. Many tests, such as achievement tests, strive for reliabilities of .90 or higher.

Relationship of Test Forms and Testing Sessions Required for Reliability Procedures

Test-Retest Method (stability: measures error because of changes over time)
If one were investigating the reliability of a test measuring mathematics skills, it would not be wise to wait two months between administrations. The subjects probably would have gained additional mathematics skills during the two months and thus would have scored differently the second time they completed the test. We would not want their knowledge to have changed between the first and second testing.

Equivalent-Form (Parallel or Alternate-Form) Method (measures error because of differences in test forms)

Internal-Consistency Method (measures error because of idiosyncrasies of the test items)
– Split-Half
– Kuder-Richardson Formula 20 (K-R 20) and Kuder-Richardson Formula 21 (K-R 21)
– Cronbach's Alpha
I have created an Excel spreadsheet that will calculate Spearman-Brown, KR-20, KR-21, and Cronbach's alpha. The spreadsheet will handle data for a maximum of 1,000 subjects with a maximum of 100 responses for each.

Scoring Agreement (measures error because of the scorer)
– Interrater Reliability
– Percentage Agreement

All scores contain error. The error is what lowers an instrument's reliability.

There could be a number of reasons why the reliability estimate for a measure is low. Four common sources of inconsistency in test scores are listed below:
Test Taker — perhaps the subject is having a bad day

Del Siegle, Ph.D.
Created 9/24/2002

Can you use Cronbach's alpha for dichotomous variables?
Yes. The Kuder-Richardson coefficient (KR-20) is a special case of Cronbach's alpha for dichotomous items; Cronbach's alpha is a generalization of the KR-20 formula. When using dichotomous data, Pearson's r is equivalent to the phi coefficient.
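That relationship is easy to check numerically. The sketch below (illustrative Python, not from the original article; the right/wrong data are made up) computes KR-20 from item difficulties and Cronbach's alpha from item variances. Because the variance of a 0/1 item is exactly p·q, the two formulas return the same value on dichotomous data.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0)          # population variance of each item
    total_var = scores.sum(axis=1).var()    # variance of each subject's total score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def kr20(scores: np.ndarray) -> float:
    """KR-20: k/(k-1) * (1 - sum of p*q / variance of total scores), for 0/1 items only."""
    k = scores.shape[1]
    p = scores.mean(axis=0)                 # proportion answering each item correctly
    q = 1 - p                               # proportion answering incorrectly
    total_var = scores.sum(axis=1).var()
    return k / (k - 1) * (1 - (p * q).sum() / total_var)

# Hypothetical right/wrong (1/0) answers: 6 subjects x 4 items
data = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 0],
])

print(kr20(data), cronbach_alpha(data))  # → both 0.7111... (32/45): identical for 0/1 data
```

Both functions use population variances throughout; mixing population and sample variances would make the two estimates drift apart, which is one reason published alpha values sometimes differ slightly between software packages.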
Which statistical tool is used only for calculating the internal consistency of a test in which items are dichotomous?
The Kuder-Richardson Formula 20 (KR-20) applies only to dichotomously scored items. Cronbach's alpha is the more general measure used to assess the reliability, or internal consistency, of a set of scale or test items, and it reduces to KR-20 when the items are dichotomous.
What is Cronbach's alpha used for?
Cronbach's alpha is most commonly used when you want to assess the internal consistency of a questionnaire (or survey) that is made up of multiple Likert-type scales and items. The example here is based on a fictional study that aims to examine students' motivations to learn.
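The formula behind that assessment can be sketched in a few lines. The snippet below is illustrative only: the Likert responses are fabricated (not data from the fictional study mentioned above), and alpha is computed as k/(k-1) × (1 − Σ item variances / total-score variance).

```python
import numpy as np

# Fabricated responses from 5 respondents to a 5-item Likert questionnaire
# (columns q1..q5, each item scored 1-5); not data from any real study.
responses = np.array([
    [5, 4, 5, 4, 5],
    [4, 4, 4, 3, 4],
    [3, 3, 2, 3, 3],
    [2, 1, 2, 2, 1],
    [1, 2, 1, 1, 2],
])

k = responses.shape[1]
item_variances = responses.var(axis=0)        # variance of each item across respondents
total_variance = responses.sum(axis=1).var()  # variance of respondents' total scores
alpha = k / (k - 1) * (1 - item_variances.sum() / total_variance)
print(round(alpha, 3))  # high alpha here: the items rise and fall together across respondents
```

Intuitively, when items are consistent, total-score variance grows much faster than the sum of item variances, pushing alpha toward 1; unrelated items leave the ratio near 1 and alpha near 0.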
How do you check for internal consistency in SPSS?
To test the internal consistency, you can run the Cronbach's alpha test using the RELIABILITY command in SPSS, as follows:
RELIABILITY /VARIABLES=q1 q2 q3 q4 q5.
You can also use the drop-down menus in SPSS, as follows: from the top menu, click Analyze, then Scale, and then Reliability Analysis.