
AI Safety & Sustainability: Insights from ANU Expert Panel Discussion

  • afranorg
  • Apr 16

On April 10th, AFRAN partnered with the ANU Integrated AI Network to host an expert panel discussion on AI Safety & Sustainability at the Agrifood Innovation Institute (AFII). Our goal was to explore how Australia's commitments at the recent AI Action Summit—alongside 61 other countries—translate into the everyday work of ANU researchers working on or with artificial intelligence.


The Shifting Landscape of AI Research: Insights from Natural Language Processing

Professor Jing Jiang (Intelligent Systems Cluster at the ANU School of Computing) opened the discussion with a striking observation about the transformation in natural language processing. Until recently, this field received limited attention, but the technological breakthrough behind text-based generative AI has fundamentally changed the research landscape. The most cutting-edge research now emerges from the private sector rather than academic computer science departments.

This shift has redirected academic researchers toward making these technologies more robust, safe, and responsible. Jing's work focuses on two critical challenges: (1) understanding the new behaviours of generative AI tools, and (2) keeping humans in the loop of AI development and deployment.

She emphasized that problems like hallucinations and fabricated references require benchmark datasets for proper model evaluation. The ability of AI to persuade—even to debunk conspiracy theories—raises complex questions that demand multidisciplinary approaches beyond computing alone.


Returning to AI's Statistical Roots: A Mathematical Perspective

Katharine Turner, Associate Professor and Director of the Mathematical Data Science Centre, provided valuable historical context. She reminded the audience that AI emerged from statistics, but today's AI and machine learning have become disconnected from their statistical foundations—which partly explains current fairness and accuracy issues.

She advocated for stronger integration of statistics into AI conversations. Her work in topological data analysis—finding patterns and shapes in complex datasets—offers insights through rigorous mathematical analysis. As Katharine noted during the discussion, "We need to think about the structure of the data and understand the input. Often hallucinations result from wrong models or inappropriate methodologies that don't establish statistically relevant correlations." Addressing hallucinations requires rethinking our assumptions, methods, and understanding of input data structures.


Health Data Privacy and AI Ethics

Gaetan Burgio, MD, PhD, leads the Genome Editing and Microbial Immunity Group at the John Curtin School of Medical Research. He brought his background in statistical genetics—what he called the "prehistory of machine learning"—to bear on critical questions of data privacy. His work on CRISPR gene editing and research on malaria in Africa provided a fascinating parallel to the evolution of AI. He drew a striking comparison to the 2018 CRISPR controversy involving gene-edited babies, noting that "AI will go as far as it can unless a significant event triggers regulatory intervention."

Two key concerns emerged from his presentation: (1) ensuring equitable access to these technologies, which are often prohibitively expensive, and (2) safety concerns around training datasets, particularly the "pillaging" of health data without adequate regulation.

Gaetan highlighted disturbing examples: social media content on X being added to the training sets of the Grok AI chatbot by default, and the collapse of consumer genetic testing company 23andMe raising questions about where personal genomic data ultimately goes. These issues underscore the crucial importance of data hygiene and ethical considerations in AI development, particularly in sensitive medical contexts.


AI Sustainability and Social Cohesion: What does the future hold?

Rim El Kadi, PhD, Visiting Research Fellow at the ANU School of Cybernetics and AFRAN ACT Hub Leader, brought a note of foresight to the discussion, sharing The Story of Ky, a piece of science fiction set 10 years from now in an AI-powered smart city.

She brought expertise in digital transformation for smart cities to the discussion. Her work explores how AI can make urban settings more efficient and improve citizen engagement while addressing the digital divide.

She raised a profound question: "What happens to social cohesion when people's AI agents start talking to each other?" As we introduce non-human layers of interaction, how might this affect the foundations of social trust that underpin sustainable communities?

Social capital and trust are fundamental components of functional urban environments. Rim cautioned that introducing autonomous AI systems could potentially damage these foundations unless carefully managed. She emphasized that principles of trust and social sustainability must be applied to AI-driven smart city initiatives, with agile methodologies enhancing implementation to ensure inclusivity.

A significant portion of the discussion focused on AI's environmental footprint, particularly energy and water usage. Panellists discussed potential solutions including engineering improvements for energy efficiency, better data compression techniques, game theory approaches to regulation, and carbon credit-like systems for computing power. Panellists also pointed to the Jevons Paradox: because efficiency gains lower the cost of using a resource, they can increase overall consumption rather than decrease it.


Summary prepared by Sarah Vallee, AFRAN AI Community Lead, with the help of generative AI tools.

From left to right: Charles Gretton, Rim El Kadi, Sarah Vallee, Gaetan Burgio, Katharine Turner and Jing Jiang
