In their efforts to deter plagiarism, many universities are using plagiarism-detection software. More precisely, the most popular software products, such as Turnitin and SafeAssignment, compare students’ submissions to databases of journal articles, news reports, and student research papers to identify “matches.” These matches signal a need for additional investigation by professors, but they do not prove that acts of plagiarism have occurred.
A recent study at Texas Tech identified 400 papers submitted by students that plagiarism software had flagged as having “matches” with existing works. Further investigation, however, revealed that only two of the students’ papers were actually plagiarized.
This research suggests we may have a serious misconception of what these plagiarism-detection software products really do. Perhaps the name “plagiarism detection” is misplaced as a description of what these products do and of their appropriate role. They do not so much detect plagiarism as identify elements of students’ submissions that match the work of others. Thus, instances where students have used common jargon or phrases in their work can be expected to appear in the reports generated by most plagiarism-detection software products.
We are learning that the proper role of plagiarism-detection software is to cast a very wide net, one that catches potential acts of plagiarism along with many appropriate uses of popular terms and phrases. These software products are therefore only one tool in a university’s program for identifying and deterring plagiarism. Perhaps the most valuable role plagiarism-detection software can play is to deter students from copying the work of others in the first place. Students will be much less likely to knowingly commit acts of plagiarism if they know their submissions will be evaluated by plagiarism-detection software.