Artificial Intelligence (AI) in Research Guidelines
Estimated Reading Time: 10 Minutes
When used ethically and responsibly, AI-assisted technologies and tools can rapidly accelerate research progress and scholarly work. To protect the integrity of research, AI must be used in accordance with relevant discipline, journal, and/or sponsor policies and guidelines. AI misuse can raise concerns about research misconduct, research security, privacy, data confidentiality, and intellectual property rights.
Applies to
All individuals at The Ohio State University involved with research activities, including faculty, staff, students, research associates and fellows, post-doctoral fellows, other research trainees, and visitors.
General Principles
The Ohio State University is committed to supporting the ethical adoption and use of AI tools and research involving AI. While AI tools can accelerate discovery and the creation of knowledge, improper or unethical use can raise integrity concerns. The Enterprise for Research, Innovation and Knowledge (ERIK), in collaboration with the University Research Committee, developed these guidelines for using AI in research.
Approved AI tools and Use
- The Office of Technology and Digital Innovation maintains a directory of approved AI tools for institutional use by university faculty and staff. Medical center employees must consult with medical center IT before the use of any of the cloud-based institutional AI tools/models due to different security requirements.
- Best practice is to use institutionally approved AI tools for S1 (public) and S2 (internal) institutional data. S3 or S4 institutional data can be included when necessary for education, business, or research. This guidance does not apply to those with access to Microsoft 365 Copilot, as this tool accesses institutional data across Office 365 applications.
- The Clinical and Translational Science Institute (CTSI) maintains a list of AI informatics tools.
- For externally funded projects, researchers and study teams should ensure that any AI use is allowable per sponsor regulations and policies.
- The National Institutes of Health (NIH) “will not consider applications that are either substantially developed by AI, or contain sections substantially developed by AI, to be original ideas of applicants.”
- The National Science Foundation states that “proposers are encouraged to indicate in the project description the extent to which, if any, generative AI technology was used and how it was used to develop their proposal.”
- Research involving entirely locally hosted models is generally allowable, as long as all security framework controls are in place to protect the machines on which they run. Local IT should validate that the model (and any interface to it) does not include agentic or web-search features that would extend its reach beyond the local model, and that the model meets all legal requirements at Ohio State.
Human Subjects Research
Research involving AI raises concerns regarding privacy and data confidentiality in human subjects research. The following should be considered when using AI tools:
- Use of AI as an intervention, or as a tool in the administration of human subjects research, requires disclosure to the IRB in the protocol.
- The Health Insurance Portability and Accountability Act (HIPAA) protects the privacy of protected health information. Under HIPAA, researchers and/or research units wanting to use an external, commercial AI tool that involves the transfer of PHI in human subjects research must obtain approval from the IRB (acting as the Privacy board), an IT risk assessment, and legal review, as applicable.
- Use of AI tools on internal computing infrastructure, including cloud computing, where no PHI is transferred, still requires IRB approval and disclosure of its use in consent forms, if applicable.
- The Food and Drug Administration (FDA) regulations on mobile medical applications and devices may apply to AI tools in human subjects research and should be considered in protocol development.
- Intervention-based research involving AI tools should be thoroughly disclosed and monitored by the research team for both participant safety and data confidentiality.
- Questions about the use of AI in human subject research, both as an intervention and in support of the administration of the research, should be referred to the IRB.
Implications for Research Misconduct and Academic Misconduct
- If AI misuse includes a credible allegation of plagiarism or an allegation of fabrication and/or falsification of research data using generative artificial intelligence, the matter will be reviewed under the University Policy on Research Misconduct by the Office of Research Compliance.
- The undisclosed use of AI to create text for research publications, presentations, or grant documents is not an acceptable practice but in most cases does not meet the definition of plagiarism (the appropriation of the ideas, processes, results, or words of another person, without giving appropriate credit). Such allegations will be referred to the appropriate sponsor and/or college for their determination of any sponsor or publisher violations.
- Students should use GenAI tools only with each instructor's explicit permission and approved methods. Concerns about improper AI use in the academic or course setting, including AI-generated hallucinated citations, will be referred to the Ohio State Committee on Academic Misconduct.
- NIH guidance states that if “the detection of AI is identified post award, NIH may refer the matter to the Office of Research Integrity to determine whether there is research misconduct while simultaneously taking enforcement actions including but not limited to disallowing costs, withholding future awards, wholly or in part suspending the grant, and possible termination.”
- NSF policy states that research misconduct may occur "directly or through the use or assistance of other persons, entities, or tools, including artificial intelligence (AI)-based tools, in proposing or performing research funded by NSF, reviewing research proposals submitted to NSF, or in reporting research results funded by NSF." Specific and credible allegation(s) that fabrication, falsification, or plagiarism occurred through the use or assistance of AI-based tools in the research setting will be referred to the Office of Research Compliance for review under the University Policy on Research Misconduct.
- In alignment with the Research Data Policy, principal investigators are responsible for ensuring research is accurately reported, including verifying the accuracy of all AI-generated content, data, and results.
Authorship and Peer Review
- AI large language models (LLMs) cannot fulfill the criteria for authorship of scientific publications and therefore cannot be listed as an author.
- Any approved use of AI-assisted technology, including LLMs, should be accurately disclosed, often in the methodology section, and only used in accordance with institutional and journal standards.
- Many journals and most federal sponsors prohibit the use of AI tools in the peer review process; therefore, such use must be avoided.
Commercialization
- If an input to a publicly available AI tool shares details of a patentable invention, that use could constitute a public disclosure, starting the one-year window to file a patent application in the US; most patent rights outside the US are lost if no patent application has already been filed.
- For trade secrets, the owner must take reasonable steps to maintain the secrecy of that trade secret. Disclosing a trade secret as an input to an AI tool under terms that don’t include confidentiality could result in a loss of rights.
- While AI-assisted inventions are patentable, the AI tool is not named as a co-inventor. Researchers are required to share how AI was used in making the invention with their Licensing Officer from ERIK Innovation and Commercialization to ensure that appropriate disclosures are made to the U.S. Patent and Trademark Office when applying for a patent on an invention developed with the use of AI. Learn more about disclosing an invention.
- Provided that a human researcher selects, arranges, or meaningfully edits AI-generated software in a way that reflects creative choices, the resulting software code may be copyrightable, although only the human-authored elements are protected. ERIK Innovation and Commercialization can assist you in commercializing software you create as part of your research program and advise on the best open-source license to further your research objectives.
Copyrighted Content
- The intersection of copyright law and AI is an evolving landscape. When considering the use of copyrighted works in conjunction with AI tools, whether that be as part of a prompt, an AI-generated output, or an LLM training dataset, it is important to exercise the same caution that you would rely on in any setting to ensure that use of an AI tool does not infringe on a copyright owner’s rights.
- Open Access does not mean copyright-free. Using Open Access works in conjunction with AI tools requires careful consideration of licensing requirements and use restrictions.
- The fair use doctrine allows for the use of copyrighted works in certain circumstances, which is determined using a four-factor test that considers the purpose of the use, the nature of the copyrighted work, the amount and substantiality used, and the effect of the use on the market for the copyrighted work. To consider relying on fair use in the context of AI, you would need to perform a fair use analysis to determine if you can make a case for fair use. Researchers should note that the terms of service for content licensed by the library (which fall under contract law) can, and often do, override fair use rights (which fall under copyright law).
- AI-generated outputs must reflect sufficient human contribution to warrant copyright protection. Where AI is used as a tool and a human can identify the expressive elements the outputs contain, those outputs may be copyrightable in whole or in part. Prompts alone are unlikely to satisfy these requirements. Sufficient human contribution is determined on a case-by-case basis and remains a matter of debate.
- The U.S. Copyright Office offers additional information on copyright and AI in its three-part Report on Copyright and Artificial Intelligence.
- This guidance is for informational purposes only and should not be construed as legal advice. For questions about copyright in the context of AI, contact libcopyright@osu.edu.
Content Licensed by the Library
- Nearly all online content made available to the campus community is governed by license agreements, whose terms cover how content can be used for teaching and research purposes. Functionally, AI technology often relies on text and data mining as an early step, and terms for each are usually addressed separately in our licenses. As with any new technology, it takes time for AI to be integrated into license language; currently, few licenses address AI use explicitly.
- Many of the resources licensed at Ohio State are made available through consortia partnerships with OhioLINK and the Big Ten Academic Alliance (BTAA). There may be some resources where license terms will vary.
- Where a license does not cover the use of AI, the general process is to contact University Libraries via LIB-ResourceLicensing@osu.edu or your subject librarian for next steps. This may involve working with the content provider to obtain permission to use the content, or working directly with the provider to obtain the content.
Other Considerations
- Research involving export-controlled projects must be described in a Technology Control Plan and may not use cloud-based systems or software, including AI agents, without the approval of the Export Control office.
- Consulting with China on anything related to AI should be submitted through the Outside Activity Approval Form (OAAF) process and escalated to the Conflict Approval Committee (for a conflict-of-interest review), Research Security, and Export Controls.
- Integrating AI into coursework is central to Ohio State’s “AI Fluency” initiative. Expanded resources are available to assist in incorporating AI into teaching and learning.
- Research involving non-generative AI systems, such as machine learning, is generally allowable and is outside the scope of this guidance.
Resources
- Student Code of Conduct, Academic misconduct. 3335-23-04(A)(5).
- OTDI, Security and Privacy Statement on Artificial Intelligence.
- Teaching & Learning Resource Center. Summary of AI and its use as it relates to teaching.
- Drake Institute for Teaching and Learning. Summary of suggestions for instructors on AI in the classroom.
- Office of Academic Affairs. References the Student Code of Conduct and links to the Teaching & Learning Resource Center and the Drake Institute.
- OTDI Cloud Computing Guidelines
- NIH NOT-OD-25-132
- NSF Notice
- AI Fluency at Ohio State
- IRB Considerations on the Use of Artificial Intelligence in Human Subjects Research