Unlocking 9 promising Agentic AI applications while avoiding the risks
"Autonomous malicious actions" is a phrase that will make any cybersecurity expert sit up and take notice. However, it’s not the only risk associated with the game-changing technology of Agentic AI.
Organisations stand on the brink of significant transformation as they increasingly adopt these systems, but this transition is fraught with challenges. As we approach the AI Governance Summit 2025, this article explores the key risks associated with Agentic AI and provides actionable strategies for organisations to navigate these challenges successfully.
Use cases for Agentic AI
“The most exciting aspects of Agentic AI lie in its ability to proactively streamline operations”, says Dr. Eloy Gonzales, Data Science Manager at Murrumbidgee Irrigation. “By automating repetitive tasks and providing real-time assistance for simpler functions like personal assistants or customer service, it frees up human potential. Crucially, its capacity for learning and adaptation from interactions in real-time promises increasingly efficient and personalised support, pushing AI beyond mere responsiveness to truly anticipatory and intelligent assistance.”
Here are nine promising applications that illustrate Agentic AI’s transformative power:
- Software development: AI agents are evolving into smarter coding assistants capable of writing and reviewing code autonomously. They can reverse engineer specifications and generate test cases, dramatically improving automation in software development.
- Robotic Process Automation (RPA): Described as “RPA on steroids”, AI agents enhance traditional RPA by taking on more complex decision-making tasks, moving beyond simple rule-based actions to adaptable, autonomous processes.
- Customer support automation: AI agents elevate customer service beyond scripted chatbots into robust systems that handle complex queries and provide contextual answers.
- Automating enterprise workflows: AI agents can streamline processes by automating routine tasks, such as turning meeting notes into project tickets or triggering supplier orders based on predictions (see the sketch after this list).
- Cybersecurity and threat detection: AI agents can autonomously detect, respond to, and mitigate security threats in near real time, significantly enhancing overall security and reducing response times.
- Enhanced productivity: In environments such as law firms, AI agents can take on routine supporting tasks, enabling professionals to complete workflows more efficiently.
- Generating reports: AI agents can accelerate the content creation process, producing detailed reports in minutes instead of days.
- HR: These agents can answer employee questions and assist with internal processes, improving efficiency in HR functions.
- Business intelligence: AI agents can provide deeper insights by integrating with BI solutions, enabling users to ask complex, natural language questions and receive actionable analytics.
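To make the workflow-automation item above concrete, here is a minimal sketch in Python of one step such an agent might perform: turning meeting notes into project tickets. Everything here is illustrative rather than a reference implementation; `extract_action_items` uses a naive rule-based parser where a real agent would call an LLM, and `TicketClient` is a hypothetical stand-in for a tracker API such as Jira.

```python
import re
from dataclasses import dataclass


@dataclass
class Ticket:
    title: str
    assignee: str


class TicketClient:
    """Hypothetical stand-in for a real project-tracker API (e.g. Jira)."""

    def create(self, ticket: Ticket) -> None:
        # A real client would POST to the tracker; here we just log it.
        print(f"Created ticket: {ticket.title!r} -> {ticket.assignee}")


def extract_action_items(notes: str) -> list[Ticket]:
    """Naive rule-based parser; a real agent would use an LLM for robustness.

    Matches lines like 'ACTION: Update the firewall rules (alice)'.
    """
    pattern = re.compile(r"^ACTION:\s*(?P<title>.+?)\s*\((?P<assignee>\w+)\)$")
    tickets = []
    for line in notes.splitlines():
        match = pattern.match(line.strip())
        if match:
            tickets.append(Ticket(match["title"], match["assignee"]))
    return tickets


if __name__ == "__main__":
    notes = (
        "Discussed Q3 roadmap.\n"
        "ACTION: Update the firewall rules (alice)\n"
        "ACTION: Draft the supplier order forecast (bob)\n"
    )
    client = TicketClient()
    for ticket in extract_action_items(notes):
        client.create(ticket)
```

The same pattern (parse, decide, call an external system) underlies most enterprise workflow agents; in practice the hand-written parser is replaced by a model call.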
Risk landscape
Agentic AI introduces several unique risks that organisations must address. One major concern is loss of control: autonomous agents can behave unpredictably and may pursue goals that are misaligned with organisational objectives.
Gonzales points out: “Agentic AI introduces unique risks for businesses due to its autonomy. The ability of these systems to operate independently can lead to outcomes that are not only unintended but potentially harmful.”
This leads us to the significant risk of autonomous malicious actions. A compromised AI could execute sophisticated attacks independently. For instance, if an AI system managing financial transactions is hacked, it could initiate fraudulent transfers without human oversight.
Determining accountability and responsibility also becomes complex when AI acts autonomously. In cases where an AI system makes an error, such as a misdiagnosis in healthcare, the issue of liability arises. Is it the developer, the organisation using the AI, or the AI itself that bears responsibility for the consequences?
Bias amplification is another critical concern. “Bias amplification can occur if an agent, trained on skewed data, propagates unfair outcomes at scale, intensifying ethical concerns”, cautions Gonzales. For example, an AI recruitment system that learns from historical hiring data reflecting biased practices may continue to favour certain demographics over others.
Finally, there is heightened exposure of sensitive data. “Data privacy risks are amplified as agents often handle vast amounts of sensitive information, increasing exposure to breaches”, Gonzales highlights. This necessitates stringent data protection measures to prevent unauthorised access and ensure compliance with regulations.
Mitigation strategies
Robust governance frameworks are crucial for managing the risks associated with Agentic AI. Gonzales emphasises, “This involves clearly defining roles and responsibilities for AI systems and implementing rigorous risk management processes throughout their lifecycle.”
Organisations should implement a proactive risk identification strategy that includes regular audits of AI systems to assess their performance and alignment with organisational goals.
Engaging various stakeholders in discussions about AI deployment can help identify potential risks from multiple perspectives. Additionally, developing scenarios that explore possible adverse outcomes related to AI use can prepare teams for unexpected challenges.
In practice, that means specifying who monitors each AI system and who makes decisions based on its outputs. It is vital to ensure that mechanisms are in place for human oversight and control, allowing humans to intervene when necessary. Regular training for employees on the ethical implications and operational guidelines of AI systems will further support this governance.
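One way to make that human oversight concrete is an approval gate: the agent proposes actions, but anything above a risk threshold is held for a human decision. The sketch below is a minimal illustration under assumed names and values (`ProposedAction`, `APPROVAL_THRESHOLD_AUD`); a production system would route approvals through a chat or ticketing workflow rather than a console prompt.

```python
from dataclasses import dataclass

# Illustrative threshold: transfers above this value always pause for review.
APPROVAL_THRESHOLD_AUD = 10_000


@dataclass
class ProposedAction:
    description: str
    amount_aud: float


def requires_human_approval(action: ProposedAction) -> bool:
    """Policy check: high-value actions are never executed autonomously."""
    return action.amount_aud >= APPROVAL_THRESHOLD_AUD


def execute(action: ProposedAction) -> None:
    if requires_human_approval(action):
        # In production this would notify a reviewer and block until sign-off,
        # rather than prompting on the console.
        decision = input(
            f"Approve '{action.description}' (AUD {action.amount_aud:,.2f})? [y/N] "
        )
        if decision.strip().lower() != "y":
            print("Action rejected by human reviewer.")
            return
    print(f"Executing: {action.description}")


if __name__ == "__main__":
    execute(ProposedAction("Pay supplier invoice #1042", 24_500.00))
```

The design choice here is that autonomy is the default only below the threshold; everything above it degrades gracefully to a human decision rather than failing silently.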
Safeguard against vulnerabilities with strict access controls that restrict sensitive data and AI systems to authorised personnel only. Regular testing and validation of AI systems help ensure they function as intended, alongside monitoring tools that observe AI actions in real time and raise alerts for anomalies or unexpected behaviours.
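A lightweight version of that monitoring idea is to route every agent action through a wrapper that enforces an allow-list and raises an alert when behaviour looks anomalous. The action names, window, and rate limit below are illustrative assumptions, not a prescribed standard.

```python
import logging
import time
from collections import deque

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")

# Illustrative policy: which actions this agent may take, and how fast.
ALLOWED_ACTIONS = {"read_report", "create_ticket", "send_summary"}
MAX_ACTIONS_PER_MINUTE = 30


class MonitoredAgent:
    """Routes every action through an allow-list check and a rate alarm."""

    def __init__(self) -> None:
        self._timestamps: deque = deque()

    def act(self, action: str) -> bool:
        now = time.monotonic()
        # Discard timestamps older than the 60-second window.
        while self._timestamps and now - self._timestamps[0] > 60:
            self._timestamps.popleft()

        if action not in ALLOWED_ACTIONS:
            logging.warning("Blocked unauthorised action: %s", action)
            return False
        if len(self._timestamps) >= MAX_ACTIONS_PER_MINUTE:
            logging.warning(
                "Rate anomaly: agent exceeded %d actions/min", MAX_ACTIONS_PER_MINUTE
            )
            return False

        self._timestamps.append(now)
        logging.info("Executed action: %s", action)
        return True


if __name__ == "__main__":
    agent = MonitoredAgent()
    agent.act("create_ticket")   # allowed and logged
    agent.act("transfer_funds")  # blocked, with a warning alert
```

In a real deployment the warnings would feed a SIEM or paging system rather than a local log.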
Fostering a culture of responsible AI can be achieved through comprehensive education and training for all staff to ensure they understand the implications of AI and how to use it responsibly. Showcasing responsible use cases within the organisation can build confidence in AI technologies, highlighting successful examples of ethical use. Moreover, embedding ethical principles throughout the organisation can cultivate a mindset where AI safety is regarded as everyone’s responsibility, promoting cross-functional collaboration between technical and business teams.
How can organisations encourage innovation while maintaining safety when adopting Agentic AI? The answer lies in promoting experimentation within guardrails: pilot new agent capabilities freely, but subject each to rigorous testing before wider rollout. Human judgment must remain central to critical decision-making processes, ensuring that human oversight is integral to responsible AI integration.
Learn more at the AI Governance Summit
Interested in learning more? Join Dr. Eloy Gonzales and other thought leaders on 6–7 August for Quest Events’ AI Governance Summit 2025 at the Aerial UTS Function Centre in Sydney.
Download the Brochure
