Reliability Analysis

According to Kabue (2016), reliability is the extent to which a research instrument yields consistent results or data. The analysis of this quality of performing consistently well is referred to as reliability analysis. Reliability is assessed through the proportion of systematic variability in a scale, which can be estimated by correlating the scores obtained from different administrations of the scale. If this correlation is strong, the scale yields consistent results and can therefore be regarded as reliable. Reliability analysis methods provide a sound framework for accounting for these uncertainties. Reliability tests are conducted in this section. There are four forms of reliability testing:

  1. Test-retest reliability
  2. Alternate or parallel form reliability
  3. Inter-rater reliability
  4. Internal consistency reliability

Test-retest reliability indicates the consistency of test scores over time. This estimate also reflects how stably the test measures the attribute or construct in question. Some constructs are more stable than others. For instance, a person's reading ability is more consistent over a given time frame than that person's level of anxiety. Thus, a higher test-retest reliability coefficient would be expected on a reading test than on a test that measures anxiety.
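In practice, the test-retest coefficient is simply the correlation between the scores from two administrations of the same test. The following minimal Python sketch computes a Pearson correlation; the scores for the five examinees are invented for illustration and are not data from this study:

```python
# Test-retest reliability estimated as the Pearson correlation between
# two administrations of the same test. All scores are hypothetical.

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Hypothetical scores for five examinees tested twice, two weeks apart.
time1 = [85, 78, 92, 60, 71]
time2 = [83, 80, 95, 58, 69]
print(round(pearson_r(time1, time2), 3))
```

A coefficient near 1 indicates that examinees kept nearly the same relative standing on both occasions, which is what a stable construct such as reading ability should show.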

Alternate or parallel form reliability indicates how consistent test scores are likely to be if a person takes two or more forms of a test. A high parallel form reliability coefficient shows that the different test forms are very similar, meaning that it makes virtually no difference which version of the test a person takes. On the other hand, a low parallel form reliability coefficient indicates that the different forms are probably not comparable; they may measure different things and therefore cannot be used interchangeably.

Inter-rater reliability indicates how consistent test scores are likely to be when the test is scored by two or more raters. On some tests, raters evaluate responses to questions and determine the score. Differences in judgment among raters are likely to produce variations in test scores. A high inter-rater reliability coefficient indicates that the judgment process is stable and the resulting scores are reliable. Inter-rater reliability coefficients are typically lower than other types of reliability estimates. However, it is possible to obtain higher levels of inter-rater reliability if raters are appropriately trained.
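One common statistic for quantifying agreement between two raters on categorical scores is Cohen's kappa, which corrects the observed agreement for the agreement expected by chance. The source does not name a specific inter-rater statistic, so the sketch below, with invented pass/fail ratings, is purely illustrative:

```python
# Inter-rater agreement via Cohen's kappa for two raters assigning
# categorical scores. All ratings are hypothetical.
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1 = Counter(rater1)
    counts2 = Counter(rater2)
    categories = set(rater1) | set(rater2)
    expected = sum((counts1[c] / n) * (counts2[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail judgments by two raters on six answers.
r1 = ["pass", "pass", "fail", "pass", "fail", "pass"]
r2 = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(r1, r2), 3))
```

Because kappa discounts chance agreement, it tends to be lower than raw percent agreement, which is consistent with the observation above that inter-rater coefficients are generally lower than other reliability estimates.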

Internal consistency reliability indicates the extent to which the items on a test measure the same thing. A high internal consistency reliability coefficient for a test means that the items are quite similar to each other in content. It is important to note that the length of a test can influence internal consistency reliability: a very long test, for example, can spuriously inflate the reliability coefficient. A test that measures multiple attributes is typically divided into separate components, and manuals for such tests usually report a separate internal consistency reliability coefficient for each component in addition to one for the entire test. Test manuals, documents, and reviews report several types of internal consistency reliability estimates. Each type of estimate is appropriate under certain conditions, and the test manual should explain why a particular estimate is reported.

In this research, reliability is evaluated using Cronbach's alpha coefficient. The internal reliability of the variable factors is tested with Cronbach's alpha, and 10% of the survey sample is pilot-tested to confirm the accuracy and reliability of the instrument before data are collected from the target population with the main questionnaire.
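Cronbach's alpha for k items is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The following Python sketch implements this formula directly; the Likert-scale responses are invented for illustration and are not the study's data:

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(totals)).
# Responses below are hypothetical, not data from this study.

def variance(values):
    """Sample variance (denominator n - 1)."""
    n = len(values)
    mean = sum(values) / n
    return sum((v - mean) ** 2 for v in values) / (n - 1)

def cronbach_alpha(item_scores):
    """item_scores: one list of respondent scores per item (k lists)."""
    k = len(item_scores)
    sum_item_vars = sum(variance(item) for item in item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    return k / (k - 1) * (1 - sum_item_vars / variance(totals))

# Hypothetical Likert-scale responses: 4 items answered by 5 respondents.
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
    [5, 3, 5, 2, 4],
]
print(round(cronbach_alpha(items), 3))  # prints 0.914
```

When the items move together across respondents, the variance of the totals dominates the summed item variances and alpha approaches 1, indicating high internal consistency.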

Table 3.1: Cronbach’s Alpha for Internal Consistency

Source: Lee Cronbach (1951)

Using the Statistical Package for the Social Sciences (SPSS) software, Cronbach's alpha coefficient was computed from completed questionnaires obtained from 110 students at KSI Institute, selected through simple random sampling, in order to test reliability. To remove bias from the respondents, these samples were not used in the main survey. A. Ferdinand (2006) acknowledges that a reliability below 0.7 can also be accepted for exploratory research. According to S. Basri's blog, a reliability level ranging from 0.5 to 0.7 falls into the moderate reliability category.