What you can and can’t do in 2026
Artificial intelligence is becoming embedded across business environments at a pace that is reshaping executive responsibility. As AI systems increasingly influence how information is handled and how decisions are made, privacy is moving further into the sphere of governance and accountability.
Recent changes to Australia’s privacy landscape reinforce this shift. Enforcement powers have strengthened, and penalties have increased. The introduction of direct legal action for serious invasions of privacy means the way personal information is managed now carries greater legal and public consequence. Requirements to disclose automated decision-making bring AI directly into scope, making its role within organisations more visible.
For directors, this does not create a new category of duty. It does, however, change what existing responsibilities now require in practice.
What has changed
Privacy regulation is becoming more closely tied to how organisations operate day to day.
Reforms to the Privacy Act have increased expectations around transparency, particularly where automated systems are involved. Organisations are expected to explain how technology influences decision-making and how personal information is used within those processes. Regulatory attention is also focusing on whether organisations can demonstrate control across environments where data moves through multiple platforms.
The introduction of a statutory tort adds a direct avenue for legal action where harm occurs, extending privacy into a broader legal and reputational context.
Why this matters for directors
Director responsibilities have always centred on care, diligence, and acting in the best interests of the organisation. What is changing is how those duties apply in environments shaped by automated systems.
AI affects how decisions are produced and how information is interpreted. Directors do not need deep technical expertise, but effective oversight does require a clear working understanding of how these systems operate within the organisation.
Regulators are placing greater attention on how organisational oversight aligns with privacy obligations. Where failures occur, the focus is on whether leadership had sufficient awareness of risk and whether that awareness translated into appropriate governance.
This brings privacy into the scope of board-level responsibility.
What directors can do
Directors can support the adoption of AI where there is clear oversight of how information is used and protected. Innovation remains important, but it must be grounded in an understanding of how systems interact with personal information and where exposure may emerge.
Reliance on management and technical teams remains appropriate, provided leadership has visibility into how systems operate and how risk is monitored. This includes understanding how information moves through the organisation and how it is applied within AI-enabled processes.
Where oversight is strong, directors are able to explain how AI is used and how privacy risk is governed.
What directors can’t do
Risk increases when oversight is assumed rather than demonstrated. Directors cannot rely on the mere existence of systems or policies as evidence that privacy exposure is being managed. AI expands the scale and speed of data use, allowing issues to develop before they become visible through traditional reporting.
What matters is an informed level of oversight, supported by visibility and a clear understanding of where the organisation may be exposed.
Responsibility also extends beyond internal systems. Where third-party platforms or offshore providers are involved, accountability for personal information remains with the organisation. Directors need confidence that these relationships are governed appropriately.
The AI impact
AI introduces complexity by changing the pace and structure of decision-making. Processes that were once easier to follow can become less transparent as systems operate continuously in the background.
As AI systems begin to shape internal reporting and influence how information is communicated and interpreted across the business, visibility into how outcomes are produced diminishes. Over time, the connection between the underlying data and the final result becomes harder to trace.
This creates a governance challenge, particularly where personal information contributes to outcomes that affect people or business decisions. Regulators are paying close attention to this shift.
What good looks like
A stronger approach begins with executive clarity. Organisations should be able to explain where AI is operating and how personal information is involved, supported by clear oversight as systems evolve.
Privacy should sit within governance from the outset so that accountability remains clear as adoption expands. Oversight should be continuous, giving leadership enough visibility to respond with confidence as risk begins to emerge.
When this is done well, AI is governed with the same discipline expected in other areas of organisational risk.
Take the next step
AI adoption is continuing to expand across most organisations, often ahead of governance review and executive visibility. As privacy obligations evolve, understanding how these systems operate is becoming a leadership responsibility.
If this issue is already on your agenda, explore our executive resource here: https://corpit.net.au/ai-is-increasing-cyber-and-governance-risk/
AI will continue to shape how your business operates. Ensuring it does so in a controlled and accountable way remains a leadership responsibility.

