Do anti-plagiarism apps work?

They do, when properly used, or when users temper their expectations. Consider moodLearning’s anti-plagiarism (mLaP) service, for instance. Like other anti-plagiarism services (including Turnitin), mLaP lets users upload their articles or assignments, often with no little prodding from instructors. Off they go! One might be forgiven for thinking the results would say, “Oh, plagiarist!” Or, “Congratulations! Not a single plagiarist bone in your body.” The reality is more complicated than that. What the app returns are just similarity results, expressed as percentages: how closely your document, or parts of it, resembles documents elsewhere, or the submissions of your own peers.
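mLaP doesn’t disclose its matching algorithm here, but to build some intuition for where such percentages can come from, here is a minimal sketch of one textbook measure: Jaccard similarity over word n-grams (“shingles”). This is an illustration under assumed simplifications, not how mLaP or Turnitin actually score documents.

```python
# A minimal, hypothetical similarity measure: Jaccard similarity over
# word n-grams ("shingles"). For illustration only; this is NOT the
# actual algorithm used by mLaP or Turnitin.

import re


def shingles(text: str, n: int = 3) -> set[tuple[str, ...]]:
    """Return the set of n-word shingles in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def similarity_percent(doc_a: str, doc_b: str, n: int = 3) -> float:
    """Jaccard similarity of two documents' shingle sets, as a percentage."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a or not b:
        return 0.0
    return 100.0 * len(a & b) / len(a | b)


print(round(similarity_percent(
    "The quick brown fox jumps over the lazy dog.",
    "A quick brown fox jumps over a sleeping dog.",
), 1))  # ~27.3: a non-trivial overlap, yet clearly not a verdict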

So the challenge is: which percentage counts as definitive in determining plagiarism? Of course there are the obvious ones: 100%, 90%, 80%, and so on. But what about single digits? Or even 11%? The rater, who hopefully knows what they’re doing, has to go in and take a closer look at the passages or lines concerned.
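To see why the number alone can’t settle the question, here is a toy triage rule in the same vein. The 20% cutoff is an arbitrary, hypothetical figure, not anything mLaP prescribes; all such a rule can do is decide who gets that closer look.

```python
# A toy triage rule. The 20% threshold is a made-up example, not an
# official or recommended cutoff: a flagged score still needs a human
# reader's judgment before anyone calls it plagiarism.

def needs_review(score: float, threshold: float = 20.0) -> bool:
    """Flag a similarity score for closer human inspection."""
    return score >= threshold


for score in (4.0, 11.0, 35.0, 92.0):
    verdict = "review" if needs_review(score) else "pass"
    print(f"{score:>5.1f}% -> {verdict}")
```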

How much of the similarity can be attributed to deliberate copying? And how much to plain carelessness?