In conversations with managed service providers across Australia, a consistent pattern is emerging. Almost every client organisation is now using generative AI tools in some form, yet few have implemented meaningful controls over how data is handled when those tools are used. This is rarely the result of deliberate risk-taking. In most cases, AI adoption has occurred organically, driven by productivity pressure rather than policy.
When combined with hybrid work environments, decentralised teams, and widespread reliance on cloud platforms, this shift represents one of the most significant changes in information security that Australian organisations have faced in years. The challenge is not simply that AI introduces new tools, but that it exposes the limitations of security models built around endpoints, networks, and perimeters.
For years, cybersecurity has been fought on familiar ground. Organisations have invested heavily in endpoint detection, firewalls, identity controls and network monitoring, operating on the assumption that if devices and systems were secured, data would remain protected. That assumption is increasingly difficult to sustain. The convergence of hybrid work and generative AI has not just expanded the attack surface but fundamentally changed how information moves.
The traditional office perimeter dissolved as work moved beyond corporate networks. Employees now access sensitive information from home offices, co-working spaces and personal devices, often through browser-based applications that sit outside traditional controls. Generative AI compounds this shift by enabling staff to copy, paste and upload data into external systems in seconds. When an employee pastes sensitive information into an AI tool to generate insights or summaries, that data has already left the organisation, even though no security control has technically failed.
A threat landscape defined by authorised access
According to the Australian Signals Directorate’s Australian Cyber Security Centre (ACSC), more than 84,700 cybercrime reports were received in the 2024–25 financial year, while the ACSC responded to more than 1,200 cyber security incidents, an 11 per cent increase year on year. For businesses, the average self-reported cost of cybercrime rose by 50 per cent to $80,850 per incident, with medium-sized organisations reporting even higher average losses.
The most commonly reported cybercrimes, such as email compromise, business email compromise fraud and identity fraud, typically exploit trusted access rather than breaching hardened infrastructure. AI-related data exposure follows the same pattern: it arises through authorised users and legitimate tools, and so evades traditional detection. Experience across Ingram Micro’s MSP ecosystem suggests this misuse of trusted access is increasingly central to AI-related risk, even in environments with strong endpoint and identity controls.
This is why many modern data exposures no longer resemble traditional breaches. They do not involve malware, ransomware or system compromise, but instead arise through ordinary business activity that falls outside the assumptions of endpoint-centric security models.
Why endpoint-first security falls short
Endpoint and network controls remain essential, but they were designed for a world in which data stayed within known systems. Generative AI breaks that model by enabling data to move directly from internal systems to external platforms through approved browser sessions. An employee can be working on a fully managed device, protected by multi-factor authentication, and still expose sensitive information without triggering any alerts.
Hybrid work and bring-your-own-device practices further weaken the effectiveness of perimeter-based controls, particularly for knowledge workers who operate across locations and devices. In these environments, network boundaries offer limited protection against data exposure that occurs at the application and information layer.
While endpoint security can protect against unauthorised access and malicious code, it cannot prevent authorised users from making poor decisions with legitimate tools. Experience drawn from Ingram Micro’s engagement with Australian partners suggests this gap is increasingly forcing organisations to reassess where security responsibility sits as AI becomes embedded in everyday workflows.
The governance problem hiding in plain sight
For many organisations, AI-related risk is not a technology problem but a governance problem. Policies often exist on paper, but enforcement at the data level is rare. Staff are rarely trained on what information can and cannot be shared with AI tools, and security teams lack visibility into how these tools are being used day to day.
The result is a growing disconnect between how organisations believe their data is protected and how it moves in practice.
Australian organisations are not facing a future risk. They are facing a present one. AI is already embedded in everyday workflows, and existing threat data shows that attackers continue to exploit identity, trust and authorised access rather than infrastructure weaknesses.
As Ingram Micro’s work across the Australian market increasingly reflects, the security challenge is shifting away from devices and networks and towards how information itself is governed, monitored and protected.
The shift from perimeter-centric to information-centric security is not coming. It is already underway. The question is whether organisations recognise it in time to respond.