AI and ChatGPT in Science and the Humanities - DFG Formulates Guidelines for Dealing with Generative Models

The Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) has formulated initial guidelines for dealing with generative models for text and image creation. A statement now published by the Executive Committee of the largest research funding organisation and central self-governing organisation for science and the humanities in Germany sheds light on the influence of ChatGPT and other generative AI models on science and the humanities and on the DFG's funding activities. As a starting point for continuous monitoring and support, the paper seeks to provide guidance for researchers in their work as well as for applicants to the DFG and those involved in the review, evaluation and decision-making process.

In the view of the DFG Executive Committee, AI technologies are already significantly changing the entire process of work, knowledge production and creativity in science and the humanities, and are being used in various ways across the different research disciplines, albeit for differing purposes. In terms of generative models for text and image creation, this development is still very much in its infancy.

"In view of its considerable opportunities and development potential, the use of generative models in the context of research work should by no means be ruled out," says the paper: "However, certain binding framework conditions will be required in order to ensure good research practice and the quality of research results." Here, too, the standards of good research practice generally established in science and the humanities are fundamental.

In terms of concrete guidelines, the DFG Executive Committee says that when making their results publicly available, researchers should disclose whether or not they have used generative models and if so, which ones, for what purpose and to what extent. This also includes funding proposals submitted to the DFG. The use of such models does not relieve researchers of their own content-related and formal responsibility to adhere to the basic principles of research integrity.

Only the natural persons responsible may appear as authors in research publications, states the paper. "They must ensure that the use of generative models does not infringe anyone else’s intellectual property and does not result in scientific misconduct, for example in the form of plagiarism," the paper goes on.

The use of generative models based on these principles is to be permissible when submitting proposals to the DFG. In the preparation of reviews, on the other hand, their use is inadmissible due to the confidentiality of the assessment process, states the paper, adding: "Documents provided for review are confidential and in particular may not be used as input for generative models."

Instructions to applicants and to those involved in the evaluation process are currently being added to the relevant documents and technical systems at the DFG Head Office.

Following on from these initial guidelines, the DFG intends to analyse and assess the opportunities and potential risks of using generative models in science and the humanities, and in its own funding activities, on an ongoing basis. A Senate Working Group on the Digital Turn is to address overarching epistemic and subject-specific issues in this context. Any possible impact in connection with acts of scientific misconduct is to be addressed by the DFG Commission on the Revision of the Rules of Procedure for Dealing with Scientific Misconduct. The DFG will also be issuing further statements in an effort to contribute to a "discursive and science-based process" in the use of generative models.

For the full text of the statement, see the DFG website.
