A new ERA of research quality assessment exercises will demand future-ready universities
The recent release of the Australian Universities Accord Final Report has created a mix of consternation and cautious optimism among industry commentators. While the very welcome refrain of ‘more funding!’ echoes through the report like a drumbeat, other elements, like the proposed Higher Education Futures Fund, have attracted criticism from universities.
Among the many items considered by the report is the future of research quality assessment exercises for Australia, an area that appears to have attracted relatively limited commentary from the mainstream media in recent weeks. Much like the earlier Review of the Australian Research Council Act 2001, the Australian Universities Accord Final Report asserts that Excellence in Research for Australia (ERA) and Engagement and Impact (EI) assessments have run their course and served their purpose. But where the Sheil review advised that, “we are explicitly not recommending ERA and EI be replaced by a so-called light-touch metrics based exercise,” the Australian Universities Accord Final Report diverges to embrace an automated future: “Because the development of a new assessment system will take time and a raft of other major reform is already underway at ARC, a ‘lighter touch’ automated (where appropriate and feasible) approach to quality assessment could be developed in due course with a view to undertaking it in 2025.”
The Final Report recommends that the Australian Research Council (ARC), a new Australian Tertiary Education Commission, the Tertiary Education Quality and Standards Agency (TEQSA) and universities should create a National Research Evaluation and Impact Framework. Its goals are pretty lofty: to replace the notoriously burdensome ERA and EI assessments with something that provides a “clear evaluation” to government and taxpayers, demonstrates research quality and return on investment (ROI), and places less of a burden on universities “without affecting robustness”.
What a tall order! It’s one thing to require a clear and robust evaluation that demonstrates quality and ROI, and quite another to want it to be less burdensome than the existing exercises. It is such a tall order, in fact, that it comes as little surprise to the ResearchMaster team that the Australian Universities Accord Final Report explicitly recommends that such a framework must be data-driven and make use of intelligent technologies.
Here’s where we get down to the nitty-gritty. Australian universities excel at creating new information. But their internal systems are often plagued by information that’s poorly organised, not appropriately accessible, or outdated.
If the future of quality assessment exercises for university research in Australia is really to be data-driven, automated where possible, and less burdensome for administration, then a basic level of underlying data sufficiency is required to support light-touch automation and intelligent technologies.
To ensure that underlying technological capability, organisations need to have a few different elements lined up.
Strategy and data governance
These two always go hand in hand.
Data governance is one of the ways organisations operationalise their strategies. Policies and structures relating to data must actively support organisational strategic priorities. Older systems are often troubled by information silos and technical debt that undermine those broader priorities, so they need regular review to ensure they’re working with you, not against you.
Technological systems that “speak” to each other and to relevant external databases
This means being able to link internal data about research projects not just to budget line items or staff information, but also to external datasets like ORCID. Because manual data handling takes time and introduces the risk of human error, the less human intervention needed to bridge the gaps between these systems, the better. The gold standard is integrations that “speak” to each other and pull data from separate systems into one unified view entirely automatically.
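To make that concrete, here is a minimal Python sketch of this kind of integration: it pulls a researcher’s public works from the ORCID public API and reconciles them against an internal list of research outputs. The internal record structure, field names and title-based matching are illustrative assumptions rather than a description of any particular system, and the ORCID response fields should be verified against the current API documentation.

import requests

ORCID_PUBLIC_API = "https://pub.orcid.org/v3.0"  # ORCID's public, read-only API


def fetch_orcid_works(orcid_id: str) -> list[dict]:
    """Fetch a researcher's public works from ORCID as simplified records.

    Assumes the v3.0 /works response shape (a 'group' list containing
    'work-summary' entries); check field names against current ORCID docs.
    """
    response = requests.get(
        f"{ORCID_PUBLIC_API}/{orcid_id}/works",
        headers={"Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    works = []
    for group in response.json().get("group", []):
        for summary in group.get("work-summary", []):
            works.append({
                "orcid_id": orcid_id,
                "title": ((summary.get("title") or {}).get("title") or {}).get("value"),
                "type": summary.get("type"),
                "year": ((summary.get("publication-date") or {}).get("year") or {}).get("value"),
            })
    return works


def reconcile(internal_outputs: list[dict], orcid_works: list[dict]) -> list[dict]:
    """Merge internal research-output records with a researcher's ORCID works.

    Matching on a normalised title is a deliberately naive, illustrative
    heuristic; a real integration would prefer DOIs or other persistent
    identifiers.
    """
    known_titles = {(o.get("title") or "").strip().lower() for o in internal_outputs}
    merged = list(internal_outputs)
    for work in orcid_works:
        title = (work.get("title") or "").strip().lower()
        if title and title not in known_titles:
            # Present in ORCID but not internally: a candidate gap to review.
            merged.append({**work, "source": "orcid_only"})
    return merged

Run on a schedule, a job like this keeps the internal view of a researcher’s outputs aligned with what they have registered publicly, without anyone re-keying data.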
The right information
This one’s obvious: you have to be collecting the correct data to support a data-driven assessment of research quality. But “collecting data” doesn’t always mean having the right data. It needs to be stored appropriately, in a way that’s both secure and accessible to the people who need it. Data can also degrade over time, and each collection usually represents only a snapshot, a moment in time: records can fall out of date when researchers move between institutions, for example.
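One practical way to act on that snapshot problem is a routine freshness check over the research-output records themselves. The sketch below is a hypothetical example: the "last_verified" field and the one-year threshold are assumptions for illustration, not a prescribed standard.

from datetime import datetime, timedelta, timezone

# Hypothetical threshold: records not re-verified within a year are flagged for review.
STALE_AFTER = timedelta(days=365)


def flag_stale_records(records: list[dict], now: datetime | None = None) -> list[dict]:
    """Return records whose (assumed) 'last_verified' timestamp is missing or
    older than the threshold, e.g. outputs that may no longer reflect where a
    researcher currently works.

    Assumes 'last_verified' is a timezone-aware datetime.
    """
    now = now or datetime.now(timezone.utc)
    stale = []
    for record in records:
        last_verified = record.get("last_verified")
        if last_verified is None or now - last_verified > STALE_AFTER:
            stale.append(record)
    return stale

Flagged records can then be routed to the relevant school or faculty for verification, rather than the gaps being discovered in the middle of an assessment exercise.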
Attaining the level of data and technological sufficiency needed to support the kind of automated, light-touch, data-driven work recommended by the Australian Universities Accord Final Report is certainly a challenge in the short term. However, complex and changing regulations always exert an evolutionary pressure on the higher education and research landscape in Australia. In the longer term, research institutions stand to benefit from improving their internal data capabilities, and as regulations shift, these organisations will enjoy the advantage of being more flexible and future-proof than their peers.