EXPERT COLUMN
By Sindhu Kashyap, our Senior Content Strategist
In this column, we’ll be discussing the latest tech trends that are getting everyone talking. If you’d like to get in touch, email sindhu@lynchpinmedia.com
DEPLOYING AI? WHY LICENSING, OWNERSHIP AND ACCOUNTABILITY CAN’T BE AFTERTHOUGHTS
From licensing models and IP disputes to algorithmic bias and SLA loopholes, AI deployment comes with a host of challenges. Here’s what every tech partner in MENA should prepare for.
Across the Middle East and North Africa, the momentum for AI is unmistakable. Governments are investing heavily, enterprises are rolling out automation, and cloud infrastructure is expanding fast. But as excitement builds, so do the risks – especially for tech partners helping to deploy AI solutions on the ground.
“AI isn’t just plug-and-play,” said Mostafa Kabel, CTO at Mindware. “You’re not just deploying software. You’re reshaping decision-making processes, working with sensitive data, and creating outputs that could carry legal or ethical implications. That requires a lot more caution than many realise.”
Kabel emphasised that partners must first address licensing, ensuring models are authorised for commercial use and compliant with legal standards. This includes navigating copyright, data privacy and country-specific requirements such as data sovereignty rules. In the diverse MENA region, even hardware choices, such as restricted GPUs, can pose legal risks.
“Export controls and manufacturer restrictions on AI infrastructure are real,” Kabel explained. “If you’re working in a sensitive sector or jurisdiction, it’s wise to get a legal opinion before making technical decisions.”
Ownership is another area that can trip people up. Many AI solutions today are co-developed – either white-labelled or built jointly using client data.
Without an explicit agreement, determining ownership can become a minefield. Kabel recommended that partners break it down from the start: who owns the model, who contributed the data, who trained or fine-tuned it, and what rights each party has over redistribution or branding.
It’s not just legal exposure that’s at stake. There’s also the ethical weight of using AI in roles that influence people’s lives – think hiring, credit scoring, or even healthcare. These uses demand safeguards that go beyond technical performance.
“You can’t just say the algorithm made a decision,” he said. “Clients should be able to question those outcomes, demand explanations, and request fixes if there’s unintended harm.”
He advised that these ethical expectations be built directly into service agreements – through clauses on fairness audits, transparency and human oversight.
Service-level agreements (SLAs) also need a rethink. Generative AI is not always predictable. It can produce inaccurate, biased, or out-of-context responses. Partners must be upfront about these limitations, update performance metrics to reflect AI-specific behaviour, and ensure regular model updates and reviews are included in the contract.
Transparency, Kabel added, is the best way to maintain trust. If a third-party model is being used, clients should know exactly what they’re getting: where it comes from, what data it was trained on, what it can and can’t do, and what changes have been made.
Looking ahead, Kabel believed that the partner ecosystem in the MENA region must evolve at the same pace as the regulatory landscape. That means building in standardised AI clauses, investing in governance tools, and participating in global discussions around responsible AI.
“Clients want more than innovation,” he said. “They want assurance. And partners who can give them that – legally, ethically, and technically – will be the ones they rely on.” •