The Risk of AI in the Financial Services Sector
AI-Generated Voices: The New Breach of Trust in Financial Services Communication Channels

At Peoplebank, our Financial Services recruitment specialists work daily with banks, super funds, insurers, and fintech teams hiring across cybersecurity, data, and engineering. One industry message is becoming clear: voice is no longer a trusted channel. AI-generated speech (voice cloning and text-to-speech) is enabling attackers to impersonate staff, bypass internal processes, and trigger actions that put customer data and funds at risk (Yamagishi et al., 2021). This is where Peoplebank’s Financial Services domain expertise matters.
Why this matters now for banks, super funds, and insurers, and how we can help
We’ve seen attackers increasingly target the human layer around critical systems: service desks, contact centres, identity recovery, and approval workflows, where persuasion tactics such as authority and urgency can increase compliance. In deepfake-driven social engineering, a convincing impersonation can override standard protocols, resulting in unauthorised actions, fraud, data exposure, and operational disruption (Hatfield, 2020; Pedersen et al., 2025).
Because Peoplebank’s Financial Services team understands how these environments operate (regulated workflows, separation of duties, audit trails, and strict change controls), we also see the practical challenge: fixing this isn’t just a security policy update. It’s a combination of process design, technology controls, and specialist capability.
Can AI detect AI voice? The engineering problem
Financial services organisations are increasingly exploring audio deepfake detection (ADD): models trained to distinguish bona fide speech from synthetic or converted speech. Modern approaches leverage robust speech representations and are tested against evolving spoofing methods, including conditions such as compression and channel noise typical of telephone calls (Yi et al., 2022). Research also shows the hardest part is generalising to new voice-generation techniques, which is why detection needs continuous improvement rather than “set and forget” deployment (Cross-Domain Audio Deepfake Detection, 2024).
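To make the engineering concrete, here is a minimal, hypothetical sketch of what an ADD scorer can look like in PyTorch: a log-mel front end standing in for the robust speech representations the research describes, feeding a small binary classifier. Every name, layer size, and shape here is an illustrative assumption rather than a production model; a real system would be trained and evaluated on labelled bona fide/spoofed corpora such as those behind the ASVspoof and ADD challenges cited above.

```python
# Minimal sketch of an audio deepfake detection (ADD) scorer.
# Assumptions (not from the article): PyTorch/torchaudio are available,
# audio arrives as 16 kHz mono waveforms, and a labelled corpus of bona
# fide vs synthetic speech exists for training. Names are illustrative.
import torch
import torch.nn as nn
import torchaudio

class AddScorer(nn.Module):
    """Scores a waveform: higher output = more likely synthetic speech."""

    def __init__(self, sample_rate: int = 16_000, n_mels: int = 64):
        super().__init__()
        # Log-mel front end stands in for the "robust speech representation"
        # stage; production systems often use pretrained encoders instead.
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_mels=n_mels)
        self.to_db = torchaudio.transforms.AmplitudeToDB()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),  # single logit: synthetic vs bona fide
        )

    def forward(self, waveform: torch.Tensor) -> torch.Tensor:
        # waveform: (batch, samples) -> (batch,) logits
        feats = self.to_db(self.melspec(waveform)).unsqueeze(1)
        return self.backbone(feats).squeeze(-1)

if __name__ == "__main__":
    scorer = AddScorer()
    dummy_calls = torch.randn(2, 16_000)  # two one-second dummy clips
    print(torch.sigmoid(scorer(dummy_calls)))  # per-call synthetic-speech score
```

The point of the sketch is the pipeline shape, not the architecture: because attackers’ generation techniques keep changing, the front end and classifier are exactly the components teams retrain and swap out over time.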
In practice, effective controls often look like a layered system (a sketch of the routing logic follows this list):
- Real-time call risk scoring for inbound/outbound calls
- Step-up verification for high-risk requests (e.g., callback to a known directory number, secure messaging confirmation)
- Agent and employee warnings embedded in softphones/CRM tools when AI likelihood is high
- SOC/fraud monitoring, with playbooks for flagged and repeat suspicious calls
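As a complement to the list, the sketch below shows one way those layers might compose at decision time: the detector’s real-time score and the request’s risk tier jointly determine whether a call proceeds, triggers step-up verification, or is blocked and escalated. The thresholds, field names, and action labels are assumptions for illustration, not a vendor API, and in practice they would be tuned against false-positive impact on customers.

```python
# Illustrative decision logic for the layered controls above.
# All thresholds and names are hypothetical; real deployments tune them
# against false-positive rates and customer-experience impact.
from dataclasses import dataclass

@dataclass
class CallContext:
    ai_likelihood: float  # real-time deepfake score from the ADD model, 0..1
    request_risk: str     # e.g. "low" (balance query) or "high" (payee change)

def route_call(ctx: CallContext) -> str:
    if ctx.ai_likelihood >= 0.9:
        # Strong synthetic-speech signal: stop and hand to the SOC/fraud playbook.
        return "block_and_alert_soc"
    if ctx.request_risk == "high" or ctx.ai_likelihood >= 0.5:
        # Step-up verification: callback to a known directory number or
        # secure-messaging confirmation before acting on the request.
        return "step_up_verification"
    # Low-risk request and low AI likelihood: proceed, but surface the score
    # in the agent's softphone/CRM screen as a warning indicator.
    return "proceed_with_agent_warning"

print(route_call(CallContext(ai_likelihood=0.72, request_risk="high")))
```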
How this becomes a hiring and capability issue
Implementing voice-authenticity controls at scale requires specialist talent across cybersecurity, identity and access management (IAM), telephony/contact-centre platforms, data engineering, ML engineering, and MLOps. It also requires governance to manage false positives and ensure controls don’t harm customer experience.
Hiring for voice-security programs isn’t generic tech hiring; it requires recruiters who understand regulated FS delivery, security operating models, and the skills that translate into production outcomes.
If you’re uplifting resilience against AI-driven impersonation, submit a role today and speak with a Peoplebank Financial Services specialist to find the right cyber, ML/AI, and data engineering talent.
References
- Yamagishi, J., Wang, X., Todisco, M., et al. (2021). “ASVspoof 2021: accelerating progress in spoofed and deepfake speech detection.” Accessed March 17, 2026. https://arxiv.org/abs/2109.00537
- Yi, J., Fu, R., Tao, J., et al. (2022). “ADD 2022: the First Audio Deep Synthesis Detection Challenge.” Accessed March 17, 2026. https://cisaad.umbc.edu/add-2022/
- “Cross-Domain Audio Deepfake Detection: Dataset and Analysis.” (2024). Accessed March 17, 2026. https://arxiv.org/abs/2404.04904
- Hatfield, J. (2020). “How social engineers use persuasion principles during vishing attacks.” Accessed March 17, 2026. https://doi.org/10.1108/ICS-07-2020-0113
- Pedersen, K. T., Pepke, L., Stærmose, T., Papaioannou, M., Choudhary, G., & Dragoni, N. (2025). “Deepfake-Driven Social Engineering: Threats, Detection Techniques, and Defensive Strategies in Corporate Environments.” Accessed March 17, 2026. https://doi.org/10.3390/jcp5020018