Plagiarism
"Plagiarism" is the word for using another person's literal expressions (words, images, etc.) or representing their ideas or concepts as your own, or in place of your own work.
Any amount of misrepresented work, large or small, passed off as your own, is plagiarism as a form of academic dishonesty.
In academic work you are expected to use your own words, and represent your own thoughts and ideas and concepts, which you have developed in the process of engaging with the work of many other people.
You are therefore expected to keep track of all of the other work you have read, and the expressions and ideas you have found there, and to be able to say clearly whose they are, and where they came from.
You are expected to be able to support your own words, your own thoughts and ideas and concepts, through exact references to all of that other work where it agrees with you, and to be able to argue with it in detail where you think something different.
The JKM Library is here to help with your research, and McCormick and LSTC also offer writing help through their own writing centers, including help with English, editing, and style guides.
Citation Guides
Accurately crediting and regularly citing your sources is an essential aspect of avoiding plagiarism.
It is important to know what information you need to quote, to paraphrase, and to cite, and how to do so properly for your sources and project.
The JKM Library is here to support you. We provide access to the following style guides, which will help you with a variety of sources and projects:
- Turabian: A Manual for Writers Citation Quick Guide (basic, for all papers)
- Chicago Manual of Style Online (18th Edition) (more kinds of projects and sources)
- The SBL Handbook of Style (for Bible scholarship)
If you are not sure which to use, or how to use them for your project, please reach out first to your professor, advisor, or the supervisor of your project for specific advice.
You can also email JKM staff at ihaveaquestion@jkmlibrary.org for more general information.
School Policies
McCormick and LSTC each have their own descriptions of what constitutes academic integrity and plagiarism, and policies for dealing with it:
- McCormick Academic Catalog 2024-25 (see Faculty Policy on Proper Use of Sources & Faculty Procedure for Dealing with Misuse of Sources and Plagiarism, pp. 43-47)
- LSTC Student Handbook 2024-25 (see Section 4 - Academic Integrity, pp. 28-30)
Other schools and programs have their own helpful descriptions and advice for dealing with plagiarism:
- University of Chicago Libraries on Academic Integrity
- CTU's Bechtold Library on Citing your Sources
- Saint Xavier University (a CARLI member) on Plagiarism
Plagiarism and "AI"
Large Language Model (LLM) "AI," also called "generative AI," is very popular today, and presents some serious problems from the standpoint of academic integrity.
These "AI" systems are capable of taking a brief prompt, and generating images or text on the basis of their training data.
Those images will look mostly like other images you see in artworks and on the internet. That text will sound mostly like other text you might read in books or on the internet.
That may sound tempting to you! Every author struggles to find the right words, after all. But we expect you to resist that temptation.
Remember: the point of academic work is that we expect your words, your work. You are the author, you are the artist, and any other source must be cited explicitly.
If you're worried about quality, just keep writing. Work with your teachers and your peers. You will find your voice. Your words and your work will only get better through practice!
"AI" companies engage in systematic copyright violation and plagiarism
It is important to recognize that the output from these "AI" systems looks the way it does because they are trained on data taken from existing authors and artists.
Overwhelmingly, from the very beginning, these copyrighted works have been taken and used as training data without author or artist consent, without compensation for their hard work, and without citation when that work is reproduced, in whole or in part, by companies intent on generating profit.
This means that "generative" or LLM "AI" is built on copyright violation, on a massive scale, and also plagiarizes the works it is based on. Everything it does is built on this basis.
But while that is a major problem, it is not the core complaint when it comes to whether or how you use it in your academic work.
"AI" generated words are not your words
When it comes to LLM "AI" as a source of text, you should treat its output like any other source of text you did not write.
No matter how much work you put into designing your prompt, you did not write the "AI" output. Its words are not your words, and do not represent your thoughts and ideas.
You are therefore not responsible for what it says. But you are responsible for how you use it!
If you receive LLM "AI" output text, and present that text as your own, you are engaging in plagiarism as a form of academic dishonesty.
And because LLM "AI" systems plagiarize their training data without telling you, you may also be engaging in plagiarism from real authors, without being aware of it. In addition to academic penalties, this may create the risk of lawsuit for copyright violation if you publish such material.
Citation does not solve the problem of "AI" text
Unfortunately, the problem of plagiarism when using "AI" output cannot be solved by treating it like any other source needing proper citation.
You can and should admit where you got these words, as they are not yours. However, LLM "AI" cannot give you a text for which citation is in any way meaningful.
The point of citation is to demonstrate the authorship and origin of the work that you are citing. This enables your reader to check your work against those sources, which still exist outside of your own work.
Setting aside its many other problems, LLM "AI" output does not exist outside of your interaction with that system. LLM "AI" systems do not give consistent responses. Your reader cannot follow your citation and get any useful information.
Additionally, LLM "AI" systems routinely make up answers that bear no resemblance to reality, and provide their own "citations" to work which simply does not exist—they made it up.
LLM "AI" systems serve only as a substitute for your work—and for the work of others you should be reading and citing. It will always take far more work from you to responsibly evaluate and modify their output, and you will still have to do the work yourself that these systems leave undone.
Your library is here to help!
You have libraries at your disposal, including the JKM Library and all of our partner libraries, full of reliable texts that will still be there when you or your readers come back to them.
These are texts you must cite properly, but they are also texts of which you may be critical. They may also be wrong, but their authors were trying, in the best case, to learn the truth and to be right. And you may even help prove some of them wrong, but that will be a meaningful disagreement, and maybe even a very important one!
JKM Library staff, alongside the faculty and staff of McCormick and LSTC, will gladly help you through your process of becoming a capable and talented scholar using reliable resources.
We look forward to being able to share with you in the pride of your legitimate accomplishments. Do not cheat yourself, and us, of that joy!
Postscript: Why is this the case?
You may wonder: if all this is so, why is LLM "AI" being marketed to me so heavily? How does it actually work? Is there anything I can do to make it work better?
While none of the answers directly bear on plagiarism, they may help you understand.
"AI" replaces search engines because they are unprofitable, not because it is better
So LLM "AI" systems are not a reliable source, and they do not even function as a search engine pointing to other reliable sources.
Yet LLM "AI" systems are marketed as search engines, even though they are significantly inferior at that task to actual search engines, which index external information and direct searchers to it.
This is because the mature technology of web search is no longer profitable, and has become an arena for ad revenue as you pass through it on your way to other content. Indexing the growing internet was a difficult technical problem, and solving it drew a significant amount of investment money. Once it was solved, however, the companies relying on that funding had to survive on their business models alone.
Unlike web search, however, there is no practical end to the project of promising the science-fiction imaginary of "artificial intelligence." The ability to draw investment money from the promise of its completion can therefore end only when investors stop believing in it, not when the project is done.
That promise also lines up with the reality that increasing profits in web search come from replacing the model of referencing external content with a system that provides its own answers, or at least the confident appearance of an answer to your question, so that you never feel the need to leave the company's product.
That confident appearance of answering your question comes from the "Large Language Model," which is at its root a language synthesis engine built on pattern matching. Instead of indexing, retrieving, or pointing to existing data, it surveys the data on which it was trained, relevant or not, and generates its own language in response to queries.
Furthermore, a standard search engine will tell you when it can find no answers for your query. This signal of failure is important. It tells you about limitations in the data set, as well as in your query.
However, LLM "AI" systems are not programmed to give this necessary signal. Instead, they are programmed to do exactly what they do: generate language. That language will be false, but will be presented to you just as if it were true, in the hopes that you will continue to accept and trust the system.
"AI" systems analyze and reproduce statistical patterns, not matters of fact
No matter how many facts may be in their training data, LLM "AI" systems are not programmed to understand that questions have factual answers, found by direct reference to external data. For that matter, they are not designed to reproduce the data itself.
Put simply: LLM "AI" systems are designed to detect and reproduce patterns, not factual data. Their core concern is the statistical likelihood that similar-looking material—language or imagery—appears in similar patterns elsewhere. They are designed to use the examples included in their training data to respond to your prompt by manufacturing output according to its patterns.
Furthermore, LLM "AI" systems do not have any sense of the meaning of the data they have been trained on. Textual training data has been processed for its linguistic patterns. These systems are programmed to notice what patterns appear most often in that language, and then to manufacture language that looks similar to those patterns, on the assumption that the most common patterns are correct, and that matching those patterns will satisfy your interest.
The "AI" system is most likely to give a response that is only linguistically, not factually, similar to the desired real-world data by some standard. It is likely to contain a variety of errors, from the subtle to the absurd. And it may even bear no resemblance to reality at all.
If meaning has been imposed, it has been done by human workers alongside the "AI" system, whether or not they can be trusted to understand the data themselves. (Software companies are not heavily staffed with subject-area experts.) This most often takes the form of a system of "weights" assigned to adjust the normal output of the LLM "AI" system, and those weights are subject to a high degree of bias. They can help limit the degree to which the system produces obvious nonsense, but they can and do also advance a variety of political agendas, and create their own degree of nonsense output by pitting ideology against factual data.
There is therefore some chance that the output of an LLM "AI" system might be correct—if the most common data appearing in the most common pattern of response in its training data contains the correct answer, and if artificial weight has not been imposed on some other answer. However, there remains no way to guarantee that the system will reproduce desired factual data, instead of manufacturing some other language. If it does reproduce that data, it will do so without citation, leaving you ignorant of information vital to your academic work. And if it does provide what appears to be a citation, that "citation" itself has been generated in the same way as all other "AI" output, and is likely to be fictional.
LLM "AI" systems operate in this way whether or not the generated output answers your question in any way, let alone correctly. To the extent that you may have complaints, these systems rely on your objections to adjust their output until it is satisfactory to you. This makes them radically unsuitable for any research work, where you are trying to learn things you do not yet know.
The same is true of summarizing text, another common application for which LLM "AI" is not suited. Once again, it does not understand the meaning of the text, only its patterns. Its summary is likely to be linguistically, and not necessarily factually, "similar" to the text. However, that summary may bear no resemblance to the text on key points, and worse, it is likely to introduce language that does not exist in the text, but does exist in the LLM "AI" training data.
"AI" systems add extra time and work over and above actual research
It is possible for companies to spend money and engineering time to force any LLM "AI" system to operate in ways that reduce (but cannot eliminate) its inherent weaknesses as outlined above. Even that amount of work, however, is not included in the free versions offered for you to use online.
It is also possible for you to spend large amounts of time "prompt engineering" in order to try to game any given LLM "AI" system into producing output that looks better to you.
None of this saves you time, or effort, or money. None of it actually replaces the work of research and writing, unless you choose to use "AI" instead of doing research and writing.
If, knowing the inherent weaknesses of LLM "AI" systems, you choose to add the extra time and effort of interacting with them, you will still have to check every piece of "AI" output against actual sources in pursuit of actual facts, without using the "AI" to do so.
Worse, given the proliferation of LLM "AI" generated misinformation, even in the course of your actual research and writing you will have to evaluate recent sources with the suspicion that they may contain generated, and therefore inherently untrustworthy, text.
Knowing all of this does not make you immune. No amount of investment in "AI literacy" will make these systems actually trustworthy. No amount of investment in techniques to compensate for inherent errors will make these systems produce factual data instead of generated language.
Because these systems exist and are being offered to—and often imposed upon—you, it is important that you know about them, but it is not important that you learn how to use them, as though your skill could change what they are.
What is important, and where your skills are truly valuable and needed, is your work! It remains important that you do your research, that you talk with your fellow students and teachers, that you learn, and that what you write expresses what you have learned in your own words.


