Could an AI win a competitive research grant?
Not without you.
Today we use AI to do a lot of heavy lifting, whether that's data processing at speeds humans can only dream of or Internet of Things sensors supercharging logistics. We at ResearchMaster certainly use technology to streamline the work at the heart of the competitive grants process, such as smart costing and pricing tools and automated e-form filling.
But the role of technology in knowledge work remains contested. Criticisms range from fears that AI will allow lazy shirkers to offload their work onto technology with nobody the wiser, to the evergreen anxiety that we will all soon be replaced by robots.
In the last couple of years these concerns have grown louder, with commentators pointing to chatbots like OpenAI's ChatGPT as a key exemplar. The chief commissioner of TEQSA, Australia's higher education regulator, was quick to remind us just last year of our obligations to academic integrity and ethics, citing "the use of AI by researchers to write grant applications, analyse data or write scientific papers."
While the regulator has since indicated that there is a role for generative AI to play in research, the team at ResearchMaster considers grant applications to be an interesting use case.
Could an AI chatbot really produce work of sufficient academic rigour and quality to pass muster for, say, a grant application to the ARC? Certainly, questions have already been asked about whether ARC grant assessors themselves have used AI in writing their own responses.
Well… No. Not without you.
Here's the thing: to meet with success here, a chatbot would need to outperform just under 80% of academics in their own areas of expertise, because only around one in five applications wins funding.
The problem is that generative AI is not capable of demonstrating expertise. An AI does not understand what it's "writing" the way a human being does. It produces statistically probable sentences without reference to significance or meaning, as the case of the New York lawyer whose brief cited a string of court cases that did not exist illustrated to us all. The production of plausible-sounding text is unmoored from any consideration of whether that text is fact or fiction. Nor is an AI good at making high-quality judgements about the value of the text it produces, as evidenced in this recent article where professional editors tested the story-editing capabilities of ChatGPT.
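For readers curious about the mechanics, here is a deliberately toy sketch in Python of the principle at work: text built word by word from probabilities alone, with no step anywhere that checks whether the result is true. The bigram table, the generate function and the sample output are all inventions for this illustration; real systems are vastly larger, but they share the same blind spot.

```python
import random

# Toy next-word probabilities, invented purely for illustration.
# A real language model learns billions of such statistics from text.
BIGRAM_PROBS = {
    "the":   {"court": 0.5, "study": 0.3, "grant": 0.2},
    "court": {"cited": 0.6, "ruled": 0.4},
    "cited": {"the": 1.0},
    "study": {"found": 0.7, "cited": 0.3},
    "grant": {"application": 1.0},
}

def generate(start: str, length: int = 10) -> str:
    """Sample a 'statistically probable' word sequence, one word at a time."""
    words = [start]
    for _ in range(length):
        options = BIGRAM_PROBS.get(words[-1])
        if not options:  # no known continuation: stop
            break
        next_word = random.choices(
            list(options), weights=list(options.values())
        )[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))
# Possible output: "the court cited the study found"
# Fluent-looking, but nothing here checks whether the court case
# or the study actually exists.
```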
When an expert presents an idea to other experts in the same narrow field, we can expect those experts to notice if the text is peppered with non-existent references or plausible-sounding but factually incorrect statements. So, when it comes to writing grants and research papers, an AI tool like a chatbot is best suited to a useful assistive role, the kind exemplified in this recent AARE blog by Inger Mewburn (of Thesis Whisperer fame).
AI is capable of taking much of the heavy lifting out of an increasingly broad array of tasks, including those relating to knowledge work. But anxieties about academics using AI instead of doing their own research, or about the replacement of knowledge workers en masse, can probably be left on the shelf, at least for now.
ResearchMaster’s automations and workflow processes take the heavy lifting out of research management. To find out how, contact us.