Dr. Inkbot Boasts an Inter-rater Reliability Score of .80

So you can trust the science-based approach we take to scoring our projective tests.


How does .80 measure up?


Four sets of recommendations for interpreting level of inter-rater agreement

Cicchetti, D. V.; Sparrow, S. A. (1981). "Developing criteria for establishing interrater reliability of specific items: applications to assessment of adaptive behavior". American Journal of Mental Deficiency. 86 (2): 127–137.

Fleiss, J. L. (1981). Statistical Methods for Rates and Proportions. 2nd ed. ISBN 0-471-06428-9.

Landis, J. Richard; Koch, Gary G. (1977). "The Measurement of Observer Agreement for Categorical Data". Biometrics. 33 (1): 159–174.

Regier, Darrel A.; Narrow, William E.; Clarke, Diana E.; Kraemer, Helena C.; Kuramoto, S. Janet; Kuhl, Emily A.; Kupfer, David J. (2013). "DSM-5 Field Trials in the United States and Canada, Part II: Test-Retest Reliability of Selected Categorical Diagnoses". American Journal of Psychiatry. 170 (1): 59–70.
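As one concrete benchmark from the works above: Landis and Koch (1977) proposed descriptive bands for agreement values, under which .80 sits at the top of the "substantial" range. A small sketch encoding those bands (the thresholds are from Landis & Koch, not from this page):

```python
def landis_koch_label(kappa):
    """Map an agreement statistic to the Landis & Koch (1977) descriptive bands."""
    if kappa < 0.00:
        return "poor"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

print(landis_koch_label(0.80))  # → substantial
```

Note that the other cited scales draw their lines differently (Cicchetti & Sparrow, for example, call .75 and above "excellent"), which is exactly why the figure above compares four sets of recommendations.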

 
 

What is inter-rater reliability?

Inter-rater reliability is a statistic that measures the consistency of our coding methods. Basically, it’s a check that our trained coders, working independently, assign the same codes to the same inkblot test responses.
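The page doesn’t say which agreement statistic produces the .80 figure; Cohen’s kappa, which corrects raw agreement for chance, is a common choice in the literature cited above. A minimal sketch with two hypothetical coders and made-up codes:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of responses both coders coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each coder's marginal code frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

coder_1 = ["Fog", "Fashion", "Fun", "Fog", "Fun", "Fashion"]
coder_2 = ["Fog", "Fashion", "Fog", "Fog", "Fun", "Fashion"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # → 0.75
```

Here the coders agree on 5 of 6 responses (raw agreement ≈ .83), but kappa lands at .75 because some of that agreement would be expected by chance alone.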

 

Example Codes & Responses


 
[Inkblot 1]
 

Example Response:

“I see two hands. One on each side wearing mittens so I can see the thumbs sticking out. I also see a pair of legs. I also see an upper torso with no head. I often see images of people and animals and things in the clouds and in the dirt or paint or whatever. I see myself in this image. I see clouds and indecisiveness and fog hovering all around me. I can relate to this image as I have been worried and stressed and unclear about some things lately.”

Example Code:

Mental Fog; Indecisiveness

 
[Inkblot 2]
 

Example Response:

“I see the silhouette of a Chinese festival child with an elaborate hairdo and earrings ...the child is on the side watching the parade and processions. the image seems to appear that way to me, possibly because I love fashion and any world cultures, in general...so it’s possible that's why that image was the one I saw.”

Example Code:

Love for Fashion; Love for Culture

 
[Inkblot 3]
 

Example Response:

“Two gorillas dancing. They look like they may be wearing some striped apparel, so it may be Christmassy. In a way, they kind of look like characters from an old TV show that was on when I was a kid called The Banana Splits. It shows me two individuals that are having a good time. I like to have a good time too! So in that way it kind of reminds me of the way my personality works!”

Example Code:

Positive Personality; Having a Good Time

 
 

Try it today!

 