More and more companies are experimenting with generative AI: from copywriting to customer service and data analysis. At the same time, governments and regulators are warning of risks related to bias, data breaches, and geopolitical tensions, especially when using foreign AI systems. That sounds drastic, but it raises very concrete questions: what happens to the data you feed into an AI tool? What biases are embedded in the model? And what does that mean for your customers, employees, and reputation?
In this article, we take a sobering look at recent warnings about generative AI and what you, as a Flemish or European SME, can do with it in practice. No hype, no doom-mongering – just clear tools for deploying AI safely, human-centrically, and sustainably.
What exactly is going on?
A national security agency recently warned citizens and organizations about the use of certain generative AI language models hosted and developed in another country. The message is twofold:
- There are concerns about bias: the AI models could provide information and answers that are biased by the political, economic, or ideological interests of the country of origin. This can happen subtly, for example in how certain events are described or which sources are prioritized.
- There are concerns about data leaks and privacy: sensitive or confidential information that users enter into these AI systems could be stored, analyzed, or even shared in non-transparent ways. This could pose a risk to governments and businesses, especially if the data is strategically or commercially valuable.
So the warning isn't: "never use generative AI again," but rather: be aware of where the technology comes from, what rules apply there, and what can happen to your data.
Impact on people and society
Generative AI offers many opportunities: time savings, better service, and greater data insights. But if we handle tools carelessly, the drawbacks can quickly become apparent:
- Trust: if customers fear that their data will end up in the wrong hands through an AI tool, trust in your organization and in digital services in general will decrease.
- Information quality: when AI systems exhibit bias or systematically favor certain perspectives, it influences how people view the world and make decisions. This applies to citizens, but also to executives who use AI as input for policy or strategy.
- Digital inequality: companies with limited knowledge or resources are more likely to use "free" tools without fully understanding the true price they pay with their data. This can widen the gap between organizations with a robust digital foundation and the rest.
At the same time, this public debate can have a positive effect: it forces us to deal with technology more consciously, make clear agreements, and use AI in a way that fits our values.
Ethical and sustainable considerations
Your choice of an AI tool is more than a technical decision. It directly impacts ethics, sustainability, and transparency.
- Ethics & fairness: if an AI system structurally disadvantages certain groups or systematically reinforces a single narrative, that is an ethical issue. Companies bear some responsibility here: which tools do you choose, how do you verify the outcomes, and who is allowed to use the results?
- Transparency: users and customers have the right to know how their data is processed. Where are the servers hosted? Which parties have access? How long is the data retained? Without clear answers, there is no true transparency.
- Bias: every AI model is trained on data that is never completely neutral. Bias can be particularly sensitive in models from countries with strong state control or clear political interests. This isn't necessarily a reason to ban everything, but it is a reason to stay critical and consult independent sources where possible.
- Safety: data passing through AI systems can be targeted for espionage, hacking, or misuse. This isn't unique to any one country or model type, but it underlines the importance of strong security, encryption, and clear contracts.
- Sustainability & energy consumption: large AI models require significant computing power and energy. By making conscious choices (e.g., efficient models, appropriate hosting locations, limiting unnecessary queries), your company can reduce its ecological footprint. Sustainability isn't a marketing layer here, but a design choice.
Safety and risk dimension
The warnings essentially revolve around a few risks that are relevant to every SME, regardless of the origin of the AI tool:
- Data leaks: If you put confidential customer data, business plans, or internal documents into a generative AI tool, that data could end up in logs, training data, or environments outside of your control.
- Hacking and abuse: AI systems can themselves be targets for hacking, but can also be used to craft phishing emails, run social engineering, or automate attacks. A poorly secured tool in your processes can thus become an entry point.
- Privacy & regulations: European companies must comply with the GDPR and other privacy regulations. If data ends up in countries with different laws, for example with extensive state interference, this can conflict with your obligations as a data controller.
- Geopolitical risk: technology isn't neutral. In times of tension, data infrastructure and AI systems can play a role in economic or political pressure. This doesn't mean you should hide in a bunker, but it does mean you should be mindful of your supplier selection and data flows.
The bottom line: risk management, not panic. Just as you would with your accountant, cloud provider, or insurer, you should critically review your AI vendors and establish clear agreements.
What does this mean for your business?
For Flemish and European SMEs that want to work with AI, the message is clear: AI can create a lot of value, provided you handle it wisely. That starts with three things:
- Awareness: know which AI tools are being used in your organization, by whom, and with what types of data. Include employees' "experimental" tools as well.
- Making choices: determine which types of data may never end up in public or foreign AI systems (e.g. medical data, personnel files, strategic plans) and which may, under certain conditions.
- Setting up governance: establish clear internal guidelines on the use of AI: which tools have been approved, how do you handle results, who is responsible, how do you monitor new risks?
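The choices and governance steps above can be sketched as a simple internal check. This is a minimal, illustrative example only: the tool names, data categories, and rules are hypothetical placeholders, not a standard, and a real policy would live in your governance documentation rather than a script.

```python
# Illustrative sketch of an internal AI-usage policy check.
# Tool names and categories below are hypothetical examples.

APPROVED_TOOLS = {"approved-eu-assistant"}          # tools vetted by the organization
BLOCKED_CATEGORIES = {"medical", "hr", "strategy"}  # data that must never leave

def may_use(tool: str, data_category: str) -> bool:
    """Allow a request only for an approved tool and a non-blocked data category."""
    return tool in APPROVED_TOOLS and data_category not in BLOCKED_CATEGORIES

print(may_use("approved-eu-assistant", "marketing"))  # True: approved tool, allowed data
print(may_use("random-free-tool", "marketing"))       # False: tool not approved
print(may_use("approved-eu-assistant", "hr"))         # False: personnel data is blocked
```

Even this toy version makes the governance questions concrete: someone has to decide what goes in the approved list, who maintains it, and what counts as sensitive data.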
This way, you can build an AI strategy step by step that fits your values, your customers and the European context in which you operate.
3 concrete recommendations for Flemish SMEs
- 1. Check where your data goes
Ask your supplier explicitly about data location (EU or non-EU), data processing agreements, retention periods, and whether your input will be used to further train the model. Preferably, choose solutions that support data minimization and European hosting.
- 2. Create a simple AI usage guideline
Define on a maximum of one or two pages: which AI tools employees are allowed to use, what types of information they should never enter (e.g., personal data, confidential contracts), and how they should verify AI results. Keep it concrete and understandable.
- 3. Combine human judgment with AI
Use AI as an assistant, not as the final decision-maker. Always have sensitive decisions (about people, money, security) reviewed by a human. Systematically incorporate human checks into your processes at critical points.
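The data-minimization idea in recommendations 1 and 2 can be made tangible with a small pre-processing step: strip obvious personal identifiers from a prompt before it leaves your environment. This is a rough sketch, not a complete PII filter; the patterns only catch e-mail addresses and phone-like numbers, and any serious deployment would need a proper anonymization tool.

```python
import re

# Illustrative data-minimization step before sending text to an external AI tool.
# The patterns catch only obvious identifiers and are not exhaustive.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d /.-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious personal identifiers with neutral placeholders."""
    text = EMAIL.sub("[email]", text)
    return PHONE.sub("[phone]", text)

prompt = "Summarize the complaint from jan@example.be, tel. +32 478 12 34 56."
print(redact(prompt))
# Summarize the complaint from [email], tel. [phone].
```

A filter like this fits naturally into the one-page usage guideline: employees paste text through the redaction step first, and a human still reviews the AI's output before it reaches a customer.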
Conclusion: technology at the service of people
Generative AI doesn't have to be a source of fear. With clear choices, healthy critical questions, and a human-centric approach, technology can actually make your organization more resilient, efficient, and sustainable. It's not about magically eliminating all risks, but about consciously managing them.
At Canyon Clan, we help SMEs and organizations implement AI safely, ethically, and without nonsense: from developing a data policy and AI guidelines to selecting or building solutions that align with European regulations and your values. Want to explore what this could mean for your business? Feel free to contact us – we're happy to brainstorm with you, in plain language and with a long-term perspective.
