Reliability: Reliability is the degree of consistency between two measures of the same thing (Worthen and others, 1993). It is the measure of how stable, dependable, trustworthy, and consistent a test is in measuring the same thing each time.

According to Anastasi, "Reliability refers to the consistency of scores obtained by the same persons when they are re-examined with the same test on different occasions, or with different sets of equivalent items, or under other variable examining conditions."

According to Aiken, "Reliability is defined as the ratio between true variance and observed variance."

So, reliability is an umbrella term under which different types of score stability are assessed. There are four general classes of reliability:

1. Test-retest reliability
2. Parallel-forms reliability
3. Split-half reliability
4. Inter-rater or inter-observer reliability

Test-retest reliability: Test-retest reliability is used to assess the consistency of a measure from one time to another, by giving the same test to the same persons on two occasions.

Inter-rater or inter-observer reliability: It is used to assess the degree to which different raters or observers give consistent estimates of the same phenomenon; when the measure is a continuous one, it is typically estimated by correlating the raters' scores.