Point-by-Point Agreement Formula

Great post, Tara. This is a great example of what many people are trying to get at with IRR in Studiocode. Many of our clients would take a slightly different approach to your example: since the behavior in question is an “answer,” they would use a single code button to mark each answer. Each rater would then independently evaluate the row by assigning labels to identify correct or incorrect answers. The IRR calculation would then simply consider the label agreement across the instances.

Thank you for this great example, Will. You are right: most people would probably use a label to give more information about the type of response (as I do with my hierarchy of OTRs), but I tried to keep it simple to start 🙂 I would save the addition of text labels for the kappa calculation of IRR, because once you have agreement on a response, you can calculate IRR with kappa, which incorporates how you labeled the answer (correct, incorrect, partially correct, etc.) and also accounts for agreement occurring by chance.
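For anyone who wants to see the arithmetic behind the two measures, here is a minimal Python sketch (not a Studiocode script): point-by-point agreement is simply agreements divided by total responses, and Cohen's kappa corrects that observed agreement for the agreement expected by chance. The label categories and data below are hypothetical, purely for illustration.

```python
from collections import Counter

def percent_agreement(rater1, rater2):
    """Point-by-point agreement: agreements / total responses."""
    agreements = sum(a == b for a, b in zip(rater1, rater2))
    return agreements / len(rater1)

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater1)
    p_o = percent_agreement(rater1, rater2)
    # Expected chance agreement, from each rater's label frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[label] * c2[label] for label in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical labels two coders assigned to the same ten responses
r1 = ["correct", "incorrect", "correct", "partial", "correct",
      "incorrect", "correct", "correct", "partial", "correct"]
r2 = ["correct", "incorrect", "partial", "partial", "correct",
      "correct", "correct", "correct", "partial", "correct"]

print(f"Point-by-point agreement: {percent_agreement(r1, r2):.0%}")  # 80%
print(f"Cohen's kappa: {cohens_kappa(r1, r2):.2f}")                  # 0.62
```

Note how kappa (0.62) lands well below the raw 80% agreement: the two coders assign “correct” so often that a fair amount of agreement would happen by chance alone.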

I haven't tried to calculate Cohen's kappa with Studiocode yet, but that's one of my next steps. The one thing I have tried with three rows (as if I had three coders) is to combine the rows for Raters 1 and 2 with the “AND” command so I only see the instances where they overlap. Then I can use the same IRR process shown in this video with that new row and Rater 3 🙂 If you think it's something people would be interested in seeing, I can definitely make a video for that!

Tara… excellent. I think IRR is an interesting area. Looking forward to the next video in the IRR series. Do you have any clever ideas for scripting your scenario when more than two rows overlap?
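One possible answer to that last question: the pairwise “AND” generalizes by chaining, i.e. ((Rater 1 AND Rater 2) AND Rater 3), and so on for any number of rows. The sketch below is not Studiocode script syntax, just Python illustrating the idea; the instance times are made up.

```python
def overlap(a, b):
    """Intersection of two (start, end) instances, or None if disjoint."""
    start, end = max(a[0], b[0]), min(a[1], b[1])
    return (start, end) if start < end else None

def and_rows(row1, row2):
    """Mimic an AND of two timeline rows: keep only the shared time."""
    return [iv for a in row1 for b in row2
            if (iv := overlap(a, b)) is not None]

# Hypothetical instance times (in seconds) for three coders' rows
rater1 = [(0, 4), (10, 14), (20, 24)]
rater2 = [(1, 5), (11, 13), (30, 34)]
rater3 = [(2, 6), (12, 15)]

# Chain the AND pairwise to get the consensus row for all three coders
consensus = and_rows(and_rows(rater1, rater2), rater3)
print(consensus)  # [(2, 4), (12, 13)]
```

Each chained AND can only shrink the consensus row, so with many coders you may want to relax the rule (e.g. require agreement from a majority rather than from everyone).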

