Attribute Data Agreement Analysis

The audit should help identify the specific people and defect codes that are the main sources of problems, and the attribute agreement analysis should help determine the relative contributions of repeatability and reproducibility problems for those specific codes (and individuals). In addition, many bug databases have accuracy problems in the field that records where a defect was introduced, because what gets recorded is the phase in which the defect was detected rather than the phase in which it was created. When the point of origin is wrong, there is little to go on in identifying root causes, so the accuracy of that assignment should also be an element of the audit.

Unlike a continuous gauge, which can be accurate (on average) but not precise, any lack of precision in an attribute measurement system necessarily leads to accuracy problems. If the person coding a defect is unclear or undecided about how to code it, defects of the same type end up assigned to different codes, making the database inaccurate. In other words, for an attribute measurement system, imprecision is an important contributor to inaccuracy.

Despite these difficulties, performing an attribute agreement analysis on bug tracking systems is not a waste of time. In fact, it is (or can be) an extremely informative, valuable and necessary exercise; it simply needs to be applied with caution and a certain focus. Beyond the sample size issue, the logistics of ensuring that appraisers do not remember the attribute they originally assigned to a scenario when they see it the second time can also be a challenge. This can be mitigated somewhat by increasing the sample size and, better still, by waiting a while (perhaps one to two weeks) before presenting the scenarios to the appraisers a second time.
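To make the repeatability, reproducibility and accuracy distinction concrete, here is a minimal sketch of how those quantities can be computed from audit data. The data layout, appraiser names and defect codes are all hypothetical; a real study would normally use a statistical package, but the underlying arithmetic is the same.

```python
# Hypothetical attribute agreement data: each appraiser codes the same bug
# scenarios in two trials, and a known "standard" code exists per scenario.

# scenario id -> correct (standard) defect code
standard = {1: "UI", 2: "API", 3: "DB", 4: "UI", 5: "API"}

# appraiser -> trial -> {scenario id: assigned code}
ratings = {
    "alice": {
        1: {1: "UI", 2: "API", 3: "DB", 4: "UI", 5: "UI"},
        2: {1: "UI", 2: "API", 3: "DB", 4: "UI", 5: "API"},
    },
    "bob": {
        1: {1: "UI", 2: "DB", 3: "DB", 4: "UI", 5: "API"},
        2: {1: "UI", 2: "DB", 3: "DB", 4: "API", 5: "API"},
    },
}

def repeatability(trials):
    """Share of scenarios an appraiser codes identically in both trials."""
    t1, t2 = trials[1], trials[2]
    return sum(t1[s] == t2[s] for s in t1) / len(t1)

def accuracy(trials):
    """Share of scenarios coded to the standard in *both* trials."""
    t1, t2 = trials[1], trials[2]
    return sum(t1[s] == t2[s] == standard[s] for s in t1) / len(t1)

def reproducibility(all_ratings):
    """Share of scenarios on which all appraisers agree across all trials."""
    agree = 0
    for s in standard:
        codes = {r[t][s] for r in all_ratings.values() for t in r}
        agree += len(codes) == 1
    return agree / len(standard)

for name, trials in ratings.items():
    print(name, "repeatability:", repeatability(trials), "accuracy:", accuracy(trials))
print("reproducibility:", reproducibility(ratings))
```

In this toy data set both appraisers are fairly repeatable (0.8 each), but Bob consistently miscodes scenario 2, so his accuracy (0.6) is lower than Alice's, and the appraisers agree with each other on only 2 of the 5 scenarios. That is exactly the kind of breakdown, by code and by individual, that the audit is meant to surface.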

Randomizing the order of the scenarios from one review to the next can also help. In addition, appraisers tend to behave differently when they know they are being examined, so the fact that they know it is a test can also skew the results. Disguising the test in one way or another might help, but that is almost impossible to pull off, and it borders on being unethical. And beyond being marginally effective at best, these remedies add complexity and time to an already difficult study.

Sample size matters because of the uncertainty in the resulting estimates. For example, if the accuracy rate calculated from 100 samples is 70 percent, the margin of error is roughly ±9 percent; at 80 percent it is roughly ±8 percent, and at 90 percent roughly ±6 percent. Of course, more samples can be collected if a tighter estimate is needed, but the practical reality is that if the database is less than about 90 percent accurate, the analyst will probably want to understand why.

An attribute agreement analysis allows the impact of repeatability and reproducibility on accuracy to be assessed simultaneously. It lets the analyst examine the responses of multiple appraisers as they review multiple scenarios, and it generates statistics that assess the ability of the appraisers to agree with themselves (repeatability), with each other (reproducibility), and with a known standard or correct value (overall accuracy), for each attribute, over and over again.
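For reference, the margin-of-error figures quoted above follow from the usual normal approximation to the binomial at 95 percent confidence. A small sketch (the function name is illustrative only):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for an observed rate p on n samples."""
    return z * math.sqrt(p * (1 - p) / n)

for p in (0.70, 0.80, 0.90):
    print(f"accuracy {p:.0%} on 100 samples: +/- {margin_of_error(p, 100):.1%}")
# accuracy 70% on 100 samples: +/- 9.0%
# accuracy 80% on 100 samples: +/- 7.8%
# accuracy 90% on 100 samples: +/- 5.9%
```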

If the audit is well planned and designed, it may reveal enough information about the causes of accuracy problems to justify a decision not to use attribute agreement analysis at all. In cases where the audit does not provide enough information, attribute agreement analysis permits a more detailed examination that can guide training and targeted modifications to the measurement system. . . .
