As artificial intelligence continues to evolve, so too do the tactics used by cyber attackers. In a feature published by the Australian Information Security Association (AISA), our Security Solutions Director, Ruchit Deshpande, explores how organisations can strengthen their cyber posture by empowering their people, not replacing them.

The AI-volution is no longer on the horizon; it’s already reshaping how we work, how we interact, and yes, how we’re attacked. As artificial intelligence becomes integral to business operations and cyber threats alike, one truth remains: your people are still the most targeted vulnerability in the chain.

In 2025, the data tells a familiar, concerning story. The Verizon Data Breach Investigations Report revealed that 68% of breaches involved the human element: errors, social engineering, and misuse of privileges. Attackers are not just cracking code; they’re manipulating cognition. And AI is helping them do it better.

As the threat landscape has evolved, so too must our defences. That means equipping the people who use the technology to act as part of your defence.

This transformation doesn't happen through awareness campaigns alone. It requires a behavioural lens, embedded in your broader security architecture, supported by insight-driven controls and targeted reinforcement.

AI has changed the game, but humans are still in play

Phishing emails are no longer riddled with typos. Today’s attackers use generative AI to mimic internal tone, mirror legitimate workflows, and even simulate leadership voices using deepfake audio and video. The tactics have evolved, but the entry point is still behavioural.

According to Mimecast’s State of Human Risk 2025 report, just 8% of users are responsible for over 80% of human-activated attacks: a reminder that targeted behaviour change, not blanket training, is what moves the needle.

Common exploits include:

  • MFA fatigue: Repeated push notifications desensitise users into approving malicious logins (see the sketch after this list).
  • Access approvals: AI can generate convincing access requests that slip through unchecked.
  • Data misclassification: Sensitive files may be shared too broadly or incorrectly labelled.
  • Alert overload: Security teams are flooded with false positives, increasing the risk of missing real threats, especially when AI-generated activity mimics normal user behaviour.
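
To make the MFA fatigue pattern concrete, here is a minimal, illustrative sketch of how a security team might flag push-bombing from authentication logs. The event shape (PushEvent), window and threshold are assumptions for illustration only; a real implementation would use your identity provider’s own log schema and tuning.

    # Illustrative only: flag possible MFA fatigue (push bombing) by counting how
    # many push prompts a single user receives within a short window.
    from collections import defaultdict
    from datetime import datetime, timedelta
    from typing import NamedTuple

    class PushEvent(NamedTuple):
        user: str
        timestamp: datetime  # when the MFA push was sent

    WINDOW = timedelta(minutes=10)  # look-back window (assumed)
    FATIGUE_THRESHOLD = 5           # prompts per window treated as suspicious (assumed)

    def flag_mfa_fatigue(events: list[PushEvent]) -> set[str]:
        """Return users whose prompt volume suggests a push-bombing attempt."""
        by_user: dict[str, list[datetime]] = defaultdict(list)
        for event in sorted(events, key=lambda e: e.timestamp):
            by_user[event.user].append(event.timestamp)

        flagged: set[str] = set()
        for user, times in by_user.items():
            start = 0
            for end, t in enumerate(times):
                # slide the window so it spans at most WINDOW of history
                while t - times[start] > WINDOW:
                    start += 1
                if end - start + 1 >= FATIGUE_THRESHOLD:
                    flagged.add(user)
                    break
        return flagged

In practice, a signal like this would feed the escalation and reinforcement steps discussed later in this piece, pairing detection with coaching rather than blame.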

These breakdowns often stem from pressure, distraction, and gaps in workflow design. Meanwhile, attack methods are becoming more subtle and diverse as attackers use AI to exploit trust and urgency in everyday tasks.

We’re seeing phishing emails that mimic internal communications, Business Email Compromise (BEC) scams impersonating executives or suppliers, and vishing calls that prompt credential sharing. Deepfake audio and video, powered by AI, are also used to simulate leadership and authorise fraudulent actions.

This isn't a people problem alone. It's a systemic disconnect between behaviour, process and AI-enhanced threat tactics.

Building an effective human + AI security layer

Employees interact with data and systems constantly. They’re already part of your security posture; the question is whether they’re equipped to protect it. Spotting phishing attempts, flagging suspicious activity, and responding to access prompts require more than standardised training; they demand intelligent, context-aware reinforcement.

Here’s the opportunity: by combining behavioural insight with machine intelligence, you can transform your human firewall from a risk vector into a powerful defence layer.

A few core principles underpin this shift:

  • Adaptive training: Use AI-driven platforms that deliver context-aware training based on individual roles and behaviours.
  • Intelligent workflows: Embed micro-interventions at decision points, like nudges before approving access or sharing data, guided by real-time behavioural analysis (a sketch follows this list).
  • Automated escalation paths: Enable users to report suspicions instantly, while AI filters false flags and prioritises real threats.
  • Feedback loops: Use data from AI systems to continually evolve training, policy and controls so they reflect actual user behaviour, not theoretical risks.
  • Behavioural analytics: Regularly assess where AI and human behaviour intersect and clash so you can recalibrate your controls accordingly.
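
As an illustration of the intelligent-workflows principle, the sketch below shows a decision-point nudge: before an access approval goes through, the request is checked against a few simple behavioural signals and, if anything looks unusual, the approver is asked to pause and confirm. Every field name and rule here (AccessRequest, outside_business_hours, the finance/ prefix) is a hypothetical placeholder for your own workflow data and analytics.

    # Illustrative sketch of a micro-intervention at an approval decision point.
    # The signals and rules are placeholders, not a real product's API.
    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        requester: str
        resource: str
        outside_business_hours: bool      # from workflow metadata (assumed)
        requester_recently_flagged: bool  # from behavioural analytics (assumed)

    def nudge_reasons(req: AccessRequest) -> list[str]:
        """Return the reasons, if any, this request should trigger a confirmation prompt."""
        reasons: list[str] = []
        if req.outside_business_hours:
            reasons.append("raised outside business hours")
        if req.requester_recently_flagged:
            reasons.append("requester recently failed a phishing simulation")
        if req.resource.lower().startswith("finance/"):
            reasons.append("targets a sensitive finance resource")
        return reasons

    def approve(req: AccessRequest, confirmed: bool = False) -> str:
        """Approve quiet requests immediately; ask for confirmation on unusual ones."""
        reasons = nudge_reasons(req)
        if reasons and not confirmed:
            return "Hold on: " + "; ".join(reasons) + ". Confirm to proceed."
        return f"Access to {req.resource} approved for {req.requester}."

The specific rules matter less than their placement: friction appears only at the risky moment, so routine approvals stay fast.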

This is about making secure decisions easier in the moment and reinforcing a defence system where humans and AI collaborate, not collide.

At The Missing Link, we help organisations across sectors shift from reactive awareness to proactive resilience, using advanced analytics, human-centred design and integrated AI tooling.

From awareness to action: What leaders must do next

Understanding the risks people introduce is one thing. Acting on it is what separates reactive organisations from resilient ones.

Leaders can’t afford to treat AI as just another tool, or their people as the problem. The real risk lies in failing to understand the interaction between the two.

That starts with applying what you now know: how attackers target users, where vulnerabilities are likely to emerge, and how behaviours shape outcomes. With that foundation, you can begin strengthening both your human firewall and the systems that support it.

Start with a Workforce Security Assessment
Get a clear view of how your people interact with systems, where behaviour introduces risk, and how that risk varies across teams. An assessment like this highlights vulnerabilities, such as teams that bypass controls or departments prone to phishing, and replaces assumptions with real evidence.

Follow with a Security Controls Review
This shows whether your tools support the way people work. It helps uncover underused or misaligned controls and shows where friction is missing or misapplied. Together, these assessments reveal where behaviours and controls need realignment.

Turn findings into targeted interventions
Use them to build training programs based on real risks, not general topics. Adjust access policies, refine incident response expectations, and update controls where behaviour and technology don’t align.

When used together, these assessments create a cycle of improvement, helping you close gaps and build a culture of informed, confident security decision-making.

The Missing Link’s perspective: Shaping your security with AI

At its best, AI enhances decision-making, reduces response time, and provides insight at speed and scale. But it doesn’t replace human judgement, curiosity or intuition. Those are the capabilities attackers exploit, and they’re the ones we can sharpen.

That’s why we focus on augmenting your people, not automating them away. Through tailored assessments, intelligent automation, and behavioural analytics, we help you close the gap between security awareness and secure action.

Your strongest defence isn’t just smart systems; it’s smart people, empowered by smarter tools.

Ready to reinvent your human firewall?

The AI-volution isn’t just coming. It’s here. Is your security culture keeping up?

Talk to us at The Missing Link about aligning your people, processes, and AI tools to build a truly intelligent cyber defence strategy.

Author

Ruchit Deshpande

Ruchit Deshpande is the Security Solutions Director at The Missing Link, where he leads a team of talented Security Architects to help organisations build stronger, smarter cyber defences. With a lifelong passion for cyber security and over a decade of industry experience, Ruchit specialises in solving complex security challenges with practical, human-centred solutions. When he's not tackling emerging threats, you’ll likely find him playing or watching cricket or deep in thought over the latest cyber trends.