Trust has become both the primary target and the most vital asset
You don't need me to remind you that AI is now everywhere and at the forefront of C-suite agendas globally. But a lesser-discussed consequence is how fundamentally trust has been transformed as a result - for both individuals and businesses.
What was once guided by instinct and intuition is now quantifiable, testable, and machine-analyzed. Yet, despite the rise of sophisticated technology, attackers are still targeting the most vulnerable link: humans.
Anna Chung, Principal Researcher at Palo Alto Networks Unit 42.
Our latest Global Incident Response Report: Social Engineering Edition reveals that 36% of all cyber incidents begin with social engineering - clear proof that the human element remains the favorite entry point for cybercriminals.
AI is rewriting the rules of this battlefield. It is clearly giving criminals unprecedented power to imitate human tone, timing, and emotion with uncanny precision, while simultaneously equipping defenders with advanced tools to detect deception and continuously validate integrity.
The result is a high-stakes struggle over trust itself: who controls it, who exploits it, and who safeguards it.
Resilience no longer hinges on blind faith in technology alone. It depends on how effectively businesses manage trust across people, processes, and intelligent systems that never rest.
Why trust is now the primary attack surface
Despite breakthroughs in automation and detection, most major breaches still begin with a single human decision. It might be a click, a shared credential, or a conversation that feels routine. Social engineering thrives in these everyday moments - where familiarity dulls caution and attackers disguise manipulation as trust.
Attackers are far from guessing; they're studying organizational dynamics and individual behaviors with the enthusiasm of a PhD student - minus the ethics. Many campaigns now blend multiple tactics, from malvertising and smishing to MFA (multi-factor authentication) bombing, to wear down vigilance.
Our research highlights that 65% of social engineering attacks used phishing tactics, with 66% targeting privileged accounts and 45% impersonating internal personnel.
This shows that while phishing remains dominant, the sophistication lies in the context: messages that sound like colleagues, mimic legitimate businesses, or blend naturally into ongoing workflows.
What makes this wave so dangerous is its adaptability. Each failed attempt reinforces the next, teaching AI-driven adversaries how humans respond under pressure. These attacks allow threat actors to escalate privileges rapidly, sometimes moving from initial access to domain administrator in under 40 minutes, without deploying any malware.
The attacks largely exploit control gaps and alert fatigue: 13% of social engineering cases succeeded because critical alerts were missed or misclassified. This reality demands a stronger focus on behavioral detection rather than relying solely on technical controls.
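To make that concrete, here is a minimal sketch of what behavioral detection of one such tactic might look like: flagging a burst of MFA push prompts (the pattern behind MFA bombing) rather than treating each prompt as routine. The window size, threshold, function names, and event stream are illustrative assumptions, not a description of any specific product's logic.

```python
# Minimal sketch (assumed window, threshold, and data): flag a burst of MFA
# push prompts to one user - the behavioral signature of "MFA bombing".
from collections import deque
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # assumed sliding window
THRESHOLD = 5                    # assumed "normal" ceiling of prompts per window

def detect_mfa_burst(events: list[datetime]) -> list[datetime]:
    """Return the timestamps at which the prompt rate exceeds the threshold."""
    recent: deque[datetime] = deque()
    alerts = []
    for ts in sorted(events):
        recent.append(ts)
        # Drop prompts that have fallen out of the sliding window.
        while recent and ts - recent[0] > WINDOW:
            recent.popleft()
        if len(recent) > THRESHOLD:
            alerts.append(ts)
    return alerts

# Hypothetical usage: eight prompts pushed to one user inside ten minutes.
prompts = [datetime(2025, 1, 1, 9, 0) + timedelta(minutes=m) for m in range(8)]
print(detect_mfa_burst(prompts))  # alerts from the sixth prompt onward
```

The point is not the arithmetic but the framing: the signal lives in behavior over time, not in any single event a traditional control would inspect.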
Defending against social engineering requires far more than awareness training; it requires systems capable of detecting deviations before trust is exploited.
How AI-driven defense can identify behavioral anomalies before damage occurs
We are noticing that as AI attacks accelerate, defenders are responding in kind - harnessing machine intelligence to surface what the human eye can’t see. The next frontier of cybersecurity lies in behavioral analytics: detecting the subtle deviations that signal deception before damage occurs.
We should not overemphasize AI's offensive potential. The real opportunity lies in building our own AI guardian capabilities - systems that understand behavioral baselines, detect anomalies, and continuously validate identity in real time.
This is more than a defensive upgrade; it’s a governance transformation. It allows organizations to embed verification behind every act of trust and every access event, ensuring that trust is earned.
AI-driven defense tools now analyze everything from communication tone to login patterns, spotting inconsistencies that suggest manipulation or impersonation.
These systems continuously learn what "normal" looks like within an organization - how teams collaborate, when accounts log in, what language employees use - and flag anomalies in real time. To defend effectively, we must use AI to fight AI.
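As a rough illustration of that baseline idea, the sketch below learns one user's typical login hours and flags a login that deviates sharply. The history, function names, and z-score cutoff are hypothetical; real systems model far richer signals (tone, peers contacted, devices used), but the principle is the same.

```python
# Minimal sketch (assumed data and threshold): learn a per-user login-hour
# baseline, then flag logins that deviate sharply from it.
import statistics

def build_baseline(login_hours: list[int]) -> tuple[float, float]:
    """Return (mean, stdev) of one user's historical login hours."""
    return statistics.mean(login_hours), statistics.stdev(login_hours)

def is_anomalous(hour: int, baseline: tuple[float, float], z_cutoff: float = 3.0) -> bool:
    mean, stdev = baseline
    if stdev == 0:                       # perfectly regular history
        return hour != mean
    return abs(hour - mean) / stdev > z_cutoff

# Hypothetical history: a user who normally logs in between 08:00 and 10:00.
history = [8, 9, 9, 10, 8, 9, 9, 8, 10, 9]
baseline = build_baseline(history)
print(is_anomalous(9, baseline))   # False: within the learned "normal"
print(is_anomalous(3, baseline))   # True: a 03:00 login stands out
```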
This proactive approach transforms security from a reactive shield into an anticipatory system. Instead of waiting for alerts after a breach, detection models surface early indicators of compromise, even when no technical exploit is visible.
In this way, AI acts as both a microscope and an alarm bell, guiding human analysts toward the moments where trust begins to fracture.
What a “trust governance” mindset means for enterprise security
Enterprises can no longer treat trust as a soft value. It must be managed like any other operational asset. A trust governance mindset reframes access, verification, and accountability as measurable elements of security posture. It means building systems where trust is earned, validated, and, when necessary, revoked automatically.
In practice, this involves applying zero-trust principles beyond networks and devices to include people and processes. Roles, behaviors, and relationships are continuously assessed against risk signals, ensuring that authorization aligns with real-time context.
Clear visibility across users, suppliers, and AI systems turns trust into something observable and auditable rather than assumed.
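One way to picture trust governance in code is an authorization decision that is re-evaluated against live context rather than granted once. The sketch below is deliberately a toy: the signals, thresholds, and the allow / step-up / deny outcomes are illustrative assumptions, not a standard or a vendor API.

```python
# Minimal sketch (assumed signals and thresholds): authorization re-evaluated
# against real-time context, so trust is earned, validated, and revocable.
from dataclasses import dataclass

@dataclass
class AccessContext:
    role_matches_resource: bool   # is this role normally entitled to the asset?
    device_compliant: bool        # managed, patched device?
    behavior_risk: float          # 0.0 (baseline) .. 1.0 (highly anomalous)

def decide(ctx: AccessContext) -> str:
    """Return 'allow', 'step_up' (re-verify identity), or 'deny'."""
    if not ctx.role_matches_resource or not ctx.device_compliant:
        return "deny"
    if ctx.behavior_risk > 0.7:
        return "deny"
    if ctx.behavior_risk > 0.3:
        return "step_up"          # trust is conditional: ask for fresh proof
    return "allow"

# Hypothetical events: the same user gets different answers as context shifts.
print(decide(AccessContext(True, True, 0.1)))   # allow
print(decide(AccessContext(True, True, 0.5)))   # step_up
print(decide(AccessContext(True, False, 0.1)))  # deny
```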
When organizations govern trust, they transform it from a vulnerability into a defense layer. It creates a living security fabric - one that adapts to new risks as quickly as attackers evolve their tactics.
Trust has become both the primary target and the most vital asset. AI doesn’t merely escalate risks by enabling more sophisticated attacks; it also empowers defenders to act faster and smarter.
By embedding AI-driven behavioral analytics and trust governance into security frameworks, organizations can shift from reacting to breaches to anticipating and preventing them.
Ultimately, resilience will hinge on our ability to govern trust continuously and intelligently - transforming it from a vulnerability into a dynamic defense that adapts in real time.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Anna Chung is a Principal Researcher at Palo Alto Networks Unit 42.