A major telecom company recently suffered a data breach in its AI call assistant. The system had just been marketed as "extra secure" because the AI supposedly ran on the device itself. However, it turned out that sensitive customer data was leaked via the network, raising many questions about the actual architecture and security.
This is relevant for any company considering voicebots, chatbots, or AI assistants in customer service. It shows that marketing claims about "on-device AI" or "privacy-friendly AI" are not enough on their own. The real question is: how does the chain of data, models, and infrastructure actually work, and who bears which responsibility?
In this blog, we look at what exactly happened, what lessons you can learn from it as a Flemish or European SME, and how you can use AI in a safe, human-centric, and sustainable way.
What exactly is going on?
A major telecom operator deployed an AI-powered call assistant to handle customer calls. The solution was positioned as a secure alternative because the AI would run "on the employee's device." The implication was that call data would hardly leave the device and would therefore be better protected.
However, over time, it came to light that a data breach had occurred. Data from customer conversations turned out to be more widely accessible than anticipated, via the underlying infrastructure. This raised questions about:
- how “on-device” the AI really was in practice;
- which data was nevertheless sent to servers or cloud environments;
- whether the security measures and access rights were sufficiently strict.
Regulators, media, and customers scrutinized the security claims. The crux of the criticism: the technical architecture and security were more complex—and more vulnerable—than the simple marketing message suggested.
Impact on people and society
AI call assistants can reduce wait times, relieve staff, and help customers faster. But when something goes wrong, the consequences are very human:
- Customer trust takes a hit when confidential information, such as names and addresses, contract details, or complaints, may become publicly exposed.
- Staff feel insecure: can they still use the system with confidence, and what do they tell customers who ask about privacy?
- Organizations face reputational damage, potential fines, and additional audits, which take time and budget away from real innovation.
At the same time, this kind of incident presents an opportunity. It forces companies to be more mindful of AI in customer service: not just focusing on efficiency and cost savings, but also on the quality of customer relationships. AI then becomes not a black box, but a tool that can be used in an explainable and responsible way.
Ethical and sustainable considerations
This case touches on a range of themes that are becoming increasingly important for European SMEs and regulations (such as the GDPR and the AI Act):
- Ethics & honesty: If you claim an AI solution is safe and "on-device," it must be true. Honest communication with customers isn't a marketing detail, but an ethical obligation.
- Transparency: Customers and employees have the right to know what data is collected, where it is stored, who has access to it, and how long it is retained. A clear privacy policy and easy-to-understand explanations in plain language are essential.
- Safety: AI systems often process sensitive data. Security by design (encryption, access control, audit logs, system segmentation) should be built into the design from day one, not an added layer afterward.
- Bias & fair treatment: Voice and language models may have difficulty understanding certain accents, languages, or customer groups. This can lead to unequal treatment and frustration. Testing with diverse users is therefore crucial.
- Sustainability & energy consumption: Large AI models require significant computing power. On-device solutions can save energy (reducing data transfer), but only if the architecture is well designed. Unnecessary data storage and dual-layered processing (both device and cloud) increase your ecological footprint.
Truly sustainable AI is therefore more than just green hardware: it is a combination of ethical choices, data minimization and efficiently designed infrastructure.
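To make that data-minimization principle concrete, here is a minimal Python sketch of stripping obvious personal data from a call transcript before it leaves the device or enters a cloud pipeline. The patterns and the `minimize` function are illustrative assumptions, not a production-grade PII filter; real deployments typically rely on dedicated redaction tooling.

```python
import re

# Hypothetical illustration: remove obvious personal data from a call
# transcript *before* it leaves the device. These patterns are simplified
# examples for this sketch, not an exhaustive PII filter.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d /.-]{7,}\d"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
}

def minimize(transcript: str) -> str:
    """Replace recognizable personal data with neutral placeholders."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label.upper()}]", transcript)
    return transcript

raw = "Customer Jan (jan@example.be, +32 470 12 34 56) disputes invoice 4711."
print(minimize(raw))
# -> "Customer Jan ([EMAIL], [PHONE]) disputes invoice 4711."
```

The design point is that redaction happens at the earliest possible hop: whatever reaches servers, backups, or third-party tools afterward simply contains less to leak.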
Safety and risk dimension
The case shows some typical risks that you as an organization should take into account:
- Data leaks: Call recordings, transcripts, and metadata (who called, when, and about what) are extremely valuable. Poor segmentation or weak access tokens can be enough to leak large amounts of data.
- Hacking & abuse: AI systems are often interlinked with other internal tools (CRM, ticketing, invoicing). A single vulnerable AI component can thus become a gateway to broader systems.
- Privacy: Without clear rules, an AI call assistant can quickly collect more data than necessary. This is not only unnecessarily risky, but can also violate the GDPR (data minimization, purpose limitation).
- Incorrect configuration: Many incidents are not caused by "super hackers," but by poorly configured servers, demo environments that remained online, or test data that was never deleted.
The key is a sensible, step-by-step approach: assess risks upfront, choose the architecture consciously, and test and audit regularly. Don't panic, but also don't blindly trust default settings or marketing claims.
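One cheap safeguard against the "test data that was never deleted" risk above is a routine retention sweep. Below is a minimal sketch, assuming each stored record carries a creation timestamp; the `RETENTION` value, record shape, and `expired` helper are invented for illustration, and in practice this logic belongs in your storage layer and should itself be audited.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: flag stored call transcripts that exceed the
# retention period defined in your privacy policy.
RETENTION = timedelta(days=30)  # example value; align with your own policy

def expired(records: list[dict], now: datetime | None = None) -> list[dict]:
    """Return the records that should be deleted under the policy."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["created_at"] > RETENTION]

records = [
    {"id": 1, "created_at": datetime(2024, 1, 5, tzinfo=timezone.utc)},
    {"id": 2, "created_at": datetime.now(timezone.utc)},
]
for record in expired(records):
    print(f"delete transcript {record['id']}")  # stand-in for real deletion
```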
What does this mean for your business?
When you think about an AI assistant in your contact center, on your website, or in your internal processes, the key message is: AI is not a standalone gadget, but a full-fledged part of your IT and governance landscape.
In concrete terms this means:
- Bring IT, security, legal, privacy, and business to the same table from the start. Don't let AI be an isolated experiment.
- Ask your suppliers for concrete architecture diagrams, not just brochures. Where is which data stored, for how long, and who manages which component?
- Make sure you know internally which data is essential for good service, and which information you should not collect at all, or should anonymize quickly.
For Flemish and European SMEs, this isn't a distant prospect. Even smaller AI projects fall under the same basic principles of safety, ethics, and sustainability. The advantage: if you design well from the start, you won't have to put out expensive fires later.
3 concrete recommendations for SMEs
- 1. Create an AI data flow map: Map out in simple terms which data flows from the customer to the AI system, which servers it passes through, which tools read it (e.g., CRM), and where backups are stored. This doesn't have to be a lengthy report, but it quickly reveals where the risks sit. A minimal sketch of such a map as plain data follows after this list.
- 2. Demand clarity from your suppliers: Ask about storage location (EU or non-EU), encryption, retention periods, audit logs, and penetration testing. Don't be fobbed off with general statements like "bank-grade security." Ask for concrete, verifiable measures.
- 3. Start small, test with real people: Start with a limited use case (e.g., frequently asked questions) and test with a diverse group of customers and employees. Evaluate not only speed, but also comprehensibility, privacy perception, and error handling.
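A data flow map needs no special tooling. Here is a hedged Python sketch, with an invented `FLOWS` structure and `review` check; the field names and values are hypothetical and should be replaced with your own systems and hops.

```python
# Hypothetical data flow map as plain data: each hop records where
# customer data goes, where it is stored, and how it is protected.
FLOWS = [
    {"step": "call audio",     "system": "agent device", "region": "EU",     "encrypted": True},
    {"step": "transcript",     "system": "speech API",   "region": "non-EU", "encrypted": True},
    {"step": "summary in CRM", "system": "CRM",          "region": "EU",     "encrypted": False},
]

def review(flows: list[dict]) -> None:
    """Flag hops that need a closer look before go-live."""
    for hop in flows:
        if hop["region"] != "EU":
            print(f"check transfer basis: {hop['step']} -> {hop['system']}")
        if not hop["encrypted"]:
            print(f"missing encryption:   {hop['step']} -> {hop['system']}")

review(FLOWS)
```

Even this toy version surfaces the two questions from recommendation 2: where does the data go, and is it protected along the way?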
Conclusion: Technology that works for people
AI in customer service can be a boon for both customers and employees: shorter wait times, less repetitive work, and more time for complex, human conversations. But this only works if safety, ethics, and sustainability are factored in from day one.
At Canyon Clan, we build AI solutions that are clearly explained, handle data carefully, and comply with European regulations. We help your company make choices around architecture (cloud, on-device, or hybrid), security-by-design, and practical governance, without hype or doom-mongering.
Want to explore how you can safely and human-centrically deploy AI assistants in your organization? Feel free to contact Canyon Clan for an exploratory conversation. Together, we'll build technology that truly works for people.
