Ethical Artificial Intelligence (AI) use guidelines and acknowledgement for authors
Compiled for Unisa Press by Kirstin Krauss, WWIS (kirstin.krauss@wwis.co.za).
Last updated July 2025; August inputs are being processed.
The Ethical AI Use Guidelines and principles in this document and its appendices are intended to guide Unisa Press authors in the conceptualisation and production of scholarly work, in manuscript preparation, and in extending scholarly research, for example converting a thesis into a scholarly book or a general-interest manuscript for publication in book or article form.
What is academic integrity?
Academic integrity can be described as a commitment to, and compliance with, “ethical and professional principles, standards, practices and consistent system of values, that serve as guidance for making decisions and taking actions in education, research and scholarship” (NAIN, 2021).
GenAI[iv] and AI tools[iii] may pose a threat to academic integrity when they are used inappropriately to generate unauthorised content. “Unauthorised content generation … is the production of academic work, in whole or part, for academic credit, progression or award, whether or not a payment or other favour is involved, using unapproved or undeclared human or technological assistance” (Foltynek et al., 2023, p. 2).
To uphold academic integrity, AI tools should be used ethically and responsibly in the production of scholarly work.
Authorship of scholarly manuscripts
Authorship confers credit and has important academic, social, and financial implications. Authorship also implies responsibility and accountability for published work. Authorship of scholarly manuscripts is based on the following criteria:
- Substantial contributions to the conception and design of the work, including the acquisition, analysis, or interpretation of data for the work; and
- Drafting the work or reviewing it critically for important scholarly and/or general-interest content; and
- Final approval of the version to be published; and
- Agreement to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved; and
- Ensuring that all third-party content is used lawfully or, where necessary, that copyright clearance is obtained prior to publication.
AI and authorship guidelines:
- AI tools cannot meet the requirements for authorship as these tools cannot take responsibility for the scholarly work. As non-legal entities, AI tools cannot assert the presence or absence of conflicts of interest nor manage copyright and licence agreements. AI cannot be used to replace the core responsibilities of an author.
- An AI tool cannot be listed or cited as an author of a scholarly manuscript.
- Human authors are fully responsible for the content of their manuscript, even those parts produced by an AI tool, and are thus liable for any breach of publication and research ethics.
- Authors who use AI tools in content gathering, research processes, the writing of a manuscript, production of images or graphical elements of a paper, or in the collection and analysis of data, must be transparent and honest in disclosing how the AI tool was used and which tool was used.
- Authors submitting a manuscript in which a chatbot or AI tool was used to draft new text should note such use in the acknowledgement; all prompts used to generate new text, or to convert text or text prompts into tables or illustrations, should be specified.
- When an AI tool such as a chatbot is used to carry out or generate analytical work, help report results (e.g., generating tables or figures), or write computer codes, this should be stated in the body of the manuscript. In the interests of enabling scientific scrutiny, including replication and identifying falsification, the full prompt used to generate the research results, the time and date of query, and the AI tool used and its version, should be provided. (See Appendix B on how to declare details of AI use.)
- Authors should be aware of potential ethical issues and how the use of AI tools may impact the privacy of research participants in their studies. The use of AI tools to analyse qualitative data provided by research participants normally requires informed consent from those participants. Informed consent should include a statement by the researcher/s and an accompanying acknowledgement by participants that their data will be analysed with the assistance of AI tools.
- Authors are responsible for material provided by a chatbot or GenAI in their manuscript (including the accuracy of what is presented and the absence of plagiarism) and for appropriate attribution of all sources (including original sources for material generated by the chatbot).
- Guidance on how to cite AI tools can be found, for example, in the University of Queensland (2025) referencing guide listed under Credits: https://guides.library.uq.edu.au/referencing/ai-tools-assignments
Limitations and risks of GenAI and chatbots:
- They could produce text or summaries that do not align with an author’s writing style;
- They could produce material that is incorrect, incomplete, unsavoury, discriminatory, or biased;
- They may produce fake, incorrect, or incomplete references;
- They may not have access to the best academic resources, and content generated may therefore be scientifically incorrect or questionable;
- They may train on user data and prompts, thereby plagiarising or potentially compromising the originality of the author’s work, or causing possible copyright infringements or undisclosed use of participant data, which could have implications under the Protection of Personal Information Act 4 of 2013 (POPIA). Authors should opt out of allowing their data to be used for training purposes when using a chatbot or AI tool.
Acknowledgement:
Please complete, sign, and scan this acknowledgement and email it to
I declare that I have read and understood the Ethical AI Use Guidelines, the AI Usage Checklist (Appendix A), and the Details of AI Use Guidelines (Appendix B).
Manuscript title: ……………………………………………………………………………………………….
……………………………………………………………………………………………………………………….
Author: ……………………………………………………………………………………………
Date: ………………………………………………………………
Signature: …………………………………………………………
Contact e-mail address: ………………………………………………………………………
Credits
These Ethical AI Use Guidelines are modelled on the following resources:
COPE Council (2023). COPE position - Authorship and AI - English. Committee on Publication Ethics (CC BY-NC-ND 4.0), https://publicationethics.org. Available at: https://doi.org/10.24318/cCVRZBms (Accessed 26 June 2025).
Foltynek, T., Bjelobaba, S., Glendinning, I., et al. (2023). ENAI Recommendations on the ethical use of Artificial Intelligence in Education. International Journal for Educational Integrity, 19, 12. https://doi.org/10.1007/s40979-023-00133-4
International Committee of Medical Journal Editors (ICMJE) (2025). Defining the Role of Authors and Contributors. Available at: https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html (Accessed 26 June 2025).
Language Testing (2024). Guidelines on the Use of Generative Artificial Intelligence (AI) for the Journal Language Testing. Available at: https://journals.sagepub.com/pb-assets/cmscontent/ltj/LTJ%20Author%20guidelines%20on%20genAI_Oct_3_24-1731410134.pdf (Accessed 21 July 2025).
NAIN (2021). Academic Integrity: National Principles and Lexicon of Common Terms. Quality and Qualifications Ireland. Available at: https://www.qqi.ie/sites/default/files/2021-11/academic-integrity-nationalprinciples-and-lexicon-of-common-terms.pdf (Accessed 17 July 2025).
University of Queensland (2025). AI tools for assignments. Available at: https://guides.library.uq.edu.au/referencing/ai-tools-assignments (Accessed 19 June 2025).
Zielinski, C., Winker, M.A., Aggarwal, R., Ferris, L.E., Heinemann, M., Lapeña, J.F., Pai, S.A., Ing, E., Citrome, L., Alam, M., Voight, M., Habibzadeh, F., for the WAME Board (2023, May 31). Chatbots, Generative AI, and Scholarly Manuscripts: WAME Recommendations on Chatbots and Generative Artificial Intelligence in Relation to Scholarly Publications. WAME. Available at: https://wame.org/page3.php?id=106 (Accessed 26 June 2025).
Appendix A: AI Usage Checklist
The following AI Usage Checklist and accompanying Details of AI Use (see Appendix B) may be used to guide the declaration of ethical use of AI tools in the production and writing of scholarly manuscripts:
The AI Usage Checklist is made available under a Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0) licence. This licence enables re-users to distribute, remix, adapt, and build upon the AI Usage Checklist in any medium or format for non-commercial purposes only, and only so long as attribution is given to the creator/compiler.
See: https://creativecommons.org/licenses/by-nc/4.0/
On submission of your full manuscript to Unisa Press, you will be asked to sign off on the checklist below to verify that you have used AI ethically in all respects:
Place a tick mark next to each bullet below to confirm that AI was used ethically in the production and writing of your manuscript.
To the best of my knowledge, I have used AI tools ethically and responsibly for:
- Idea generation – AI tools may be used to generate ideas related to methods, topic, theories, applications, structure, etc. While AI tools can help with ideas and suggestions, authors should not allow AI tools to introduce biased, foreign, misinformed, incorrect, or decontextualised ‘ideas’ or other content into their work. The author should use AI tools in a complementary manner, rather than outsourcing idea and other content generation entirely to an AI tool.
- Literature searches – AI tools may be used to assist with literature searches and discovery. AI tools may provide access to both reputable and questionable publications. The author should therefore verify all references produced by AI tools.
- Literature organisation – AI tools may be used to complement literature organisation, e.g., in the form of graphs and tables with summaries. Any summaries or organisation of literature produced by AI tools should be verified against the original texts and papers.
- ‘Generating’ references – AI tools may be used to generate references or lists of references. However, AI tools quite often produce fake references; therefore, all references produced by AI tools should be verified.
- Summarising / paraphrasing – GenAI may be used to summarise and/or paraphrase text from papers or other sources. However, AI tools and chatbots may produce text that is incorrect, incomplete, unsavoury, discriminatory, or biased. For this reason, the author should verify all such summaries and paraphrased content.
- Generating introductions and conclusions – AI tools can be used to assist with generating or drafting introductions and conclusions. Authors should ensure that the text produced accurately represents manuscript content and aligns with their own style of writing.
- Reading assistance – AI tools may be used to assist with reading or explaining complex texts, concepts, papers, or documents. All explanations should be checked against the original documents and text for accuracy.
- Providing guidance on structure – AI tools may be used to provide guidance on how to structure specific sections, chapters, manuscripts, processes, etc.
- Copy editing / Proofreading assistance – AI tools (e.g., AI-powered writing tools[i]) may be used to assist with proofreading and copy editing. AI tools could produce text that dilutes, biases, or skews original meaning, and therefore text should be carefully checked by the author. The author should ensure that editing style and spelling conventions are in line with UK English (and not US) usage.
- Assistance with data analysis – AI tools may be used to assist with data analysis tasks. However, AI tools, even those designed for data analysis tasks, cannot be trusted entirely to produce consistent, systematic, and rigorous results for replication and corroboration by readers and reviewers. Therefore, the researcher remains responsible for the data analysis process throughout, and particularly for the interpretation and application of results (even if the services of a statistician were used), with AI tools used in a supportive manner. When using an advanced AI tool as a primary analysis tool, the results should be verified using non-AI statistical techniques, independent AI tools, and human oversight. A guiding principle should be that the analysis approach and method should be applied systematically and documented sufficiently to allow for replication or corroboration by other researchers (a brief illustrative sketch follows this checklist).
- Generating data – It is only acceptable to use AI tools to generate data in projections or simulation studies, for example, computer experiments designed to evaluate statistical methods using synthetic data. If AI tools are used to generate data in simulation studies, authors should save and date all output and note which software programme and version were used (See Appendix B: Details of AI Use). AI tools must not be used to fabricate or falsify data as these are serious forms of research misconduct, and therefore generating data using AI tools is prohibited for other study types (adapted from Language Testing’s Guidelines on the Use of Generative AI, 2024).
- Explaining statistical concepts – AI tools may be used to explain statistical concepts or analyses, although authors are expected to paraphrase, or use their own words for, such explanations in the final manuscript.
- AI detectors and humanizers[ii] – AI detectors and humanizers could be used during the writing process. However, these detectors and humanizers may be inconsistent, unreliable, biased, or incorrect, and authors should therefore remain in control of the research and writing process.
- Production of images, equations, graphics, artwork, or tables – AI tools may be used to fully or partially produce images, 3D models, equations, graphics, artwork, or tables. Each specific AI tool needs to be referenced in each case. (If necessary, Unisa Press can assist with original graphics during the typesetting process.)
- Opting out – Before using AI tools, the author needs to opt out of permitting his/her user data (e.g., interview data, copyrighted materials, prompts, etc.) to be used for training purposes. AI tools and chatbots may have limited standards for ensuring confidentiality and safeguarding data. Authors should be aware of copyright restrictions before uploading any published or unpublished documents or extracts into AI tools.
- Other uses: (Please elaborate)………………………………………………………………………………….
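For authors who work with code, the following is a minimal, hypothetical sketch of how the ‘Generating data’ and ‘Assistance with data analysis’ points above might be put into practice. It is written in Python and uses NumPy and SciPy; the libraries, the statistical test, and the file name are illustrative assumptions, not tools or methods prescribed by these guidelines.

    # Illustrative sketch only (not a Unisa Press requirement): documenting a
    # simulation study that uses synthetic data, and cross-checking a result
    # with a conventional, non-AI statistical technique.
    import datetime

    import numpy as np
    import scipy
    from scipy import stats

    # Record the software and versions used, as the checklist asks authors to do.
    print("numpy", np.__version__, "| scipy", scipy.__version__)

    # Synthetic data generated with a fixed seed so the study can be replicated.
    rng = np.random.default_rng(seed=2025)
    group_a = rng.normal(loc=50.0, scale=10.0, size=200)
    group_b = rng.normal(loc=53.0, scale=10.0, size=200)

    # Independent verification with a standard statistical test, rather than
    # relying solely on an AI tool's reported output.
    t_stat, p_value = stats.ttest_ind(group_a, group_b)
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

    # Save and date the output so the analysis can be corroborated later.
    stamp = datetime.date.today().isoformat()
    np.savetxt(f"simulation_output_{stamp}.csv",
               np.column_stack([group_a, group_b]),
               delimiter=",", header="group_a,group_b", comments="")

Whatever tools are actually used, the guiding principle remains the same: fix the seed, record the software versions, and save dated output so that other researchers can replicate or corroborate the work.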
Appendix B: Details of AI Use Guidelines
These Details of AI Use Guidelines may be adapted and populated to suit the specifics of the scholarly work and the discipline in which the author is working, and added as an appendix to the academic book, paper, or manuscript.
Below are two examples of how, for every AI tool used, you need to track and document the following steps:
1 Specify the tool and version
2 Describe how AI was used
3 Provide prompt(s) or evidence of use
4 Indicate the section(s) where AI was used, or where the details of use are explained
5 Record the date(s) of each usage instance
Examples:
Example 1: ChatGPT 4o Mini
1 Specify the tool and version: ChatGPT 4o Mini
2 Describe how AI was used:
To get guidance on:
- How to restructure the PhD for an academic book, including examples of how it might be presented
- Principles and examples of how to develop the structure of different chapters, including the introduction and conclusion chapters
- Principles and examples of how to develop and craft chapter and section headers
- Writing style principles for academic books
- Examples and principles for crafting book titles
- Guidance for choosing a cover image
- Guidance to write abstracts
- Principles and ideas for catchy chapter taglines
3 Prompts or evidence of usage
4 Indicate the Section(s) where AI was used, or details of use explained
Whole manuscript.
5 Date(s) of each usage instance: 16 & 20 June 2025
Example 2: Web of Science Research Assistant
1 Specify the tool and version: Web of Science Research Assistant
2 Describe how AI was used:
To identify relevant literature from the Web of Science Core Collection
3 Prompts or evidence of usage
See prompts and keywords used in Appendix A and Chapter 2, Section 2.1
4 Indicate the Section(s) where AI was used, or details of use explained
Chapter 2, Sections 2.1, 2.2, 2.8
5 Date(s) of each usage instance: 16 & 20 July 2025
ENDNOTES
[i] AI-powered writing tools utilise advanced algorithms to identify common errors in grammar, punctuation, and syntax and provide suggestions to improve clarity and style.
[ii] AI Humanizers alter AI-generated text to resemble authentic human writing.
[iii] An AI tool is any software application or system that uses artificial intelligence (AI) to perform tasks that usually require human intelligence.
[iv] GenAI or Generative AI is a subset of AI focused on generating new content. GenAI uses generative modelling and advances in deep learning to produce diverse content at scale by utilising existing media such as text, graphics, audio, and video. In this context, ‘AI tool’ incorporates both ideas – GenAI and AI.
[v] A chatbot is a computer programme designed to simulate conversation with human users. Chatbots typically use generative AI.