Today, NIH released Notice NOT-OD-23-149, "The Use of Generative Artificial Intelligence Technologies is Prohibited for the NIH Peer Review Process". The Notice builds on NOT-OD-22-044, released in December of 2021, "Maintaining Security and Confidentiality in NIH Peer Review: Rules, Responsibilities and Possible Consequences".
The latest notice clarifies that NIH reviewers are prohibited from using any natural language processing (NLP), large language model (LLM), or other artificial intelligence (AI)-based technology to aid in their review and written critiques of proposals. In other words, those harsh comments about your Approach from the dreaded Reviewer #2 must be written by an actual human scientific peer reviewer and not by a ChatGPT bot. (Although I do wonder how many times terms like "fishing expedition" and "overly ambitious" would be used by a model trained on summary statements from prior grant reviews.)
If you currently serve or will be serving on an NIH study section as a peer reviewer, you will be expected to adhere to revised Security, Confidentiality, and Non-disclosure Agreements for Peer Reviewers, which have been updated to incorporate the new restrictions on AI-assisted grant reviews. Any reviewer who uploads or pastes content from a grant application into a generative AI tool will be deemed in violation of NIH peer review confidentiality and integrity requirements.
This new requirement is designed to ensure fairness during the proposal review process. As recent controversial examples in banking and sports have shown, it is relatively easy to introduce bias into an AI-based model. Although peer reviewers bring their own biases to the review process, the scoring and outcome are more transparent when written critiques and summary statements are prepared by human beings putting forth their best effort to review proposals based on their untainted scientific opinions.