Artificial intelligence is increasingly shaping decision-making processes across industries, raising concerns about transparency and accountability. Deeploy, a Utrecht-based company behind a Machine Learning Operations (MLOps) platform, has received up to €7.5 million in funding from the European Innovation Council (EIC) to address these concerns. The funding will help Deeploy enhance its explainable AI (XAI) solutions, ensuring that AI models deployed in Europe adhere to ethical and regulatory standards. As AI adoption grows, clarity in machine-driven decisions has become a priority for businesses and regulators alike.
Across previous funding rounds, Deeploy has consistently emphasized the importance of AI transparency, positioning itself as a key player in tools that allow organizations to deploy AI responsibly. The latest funding follows a broader European effort to regulate AI through initiatives such as the EU AI Act, which calls for stricter oversight of AI models used in sectors like healthcare, banking, and government services. The financial support from the EIC further solidifies Deeploy’s role in shaping AI governance frameworks.
How Will Deeploy Utilize the Funding?
Deeploy will allocate the €2.5 million grant and a potential €5 million in equity towards expanding its explainability methods for complex AI models. The company aims to strengthen its platform’s compliance capabilities, ensuring alignment with EU regulations and ISO standards. A portion of the funds will also be used to establish partnerships across Europe, reinforcing AI sovereignty in the region.
“With AI adoption accelerating, companies need practical solutions to ensure model risks can be controlled. The EIC funding enables us to push AI governance beyond check-the-box compliance,” said Tim Kleinloog, CTO and Co-founder of Deeploy.
What Makes Deeploy’s AI Platform Stand Out?
Deeploy’s platform is designed to provide organizations with tools to monitor AI decisions and document compliance. It includes built-in explainability methods, risk and bias detection, and automated documentation for regulatory requirements. The company’s privacy-by-design approach ensures that data remains within organizational infrastructure, addressing security concerns often associated with AI models.
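To make the idea of per-decision explainability concrete, here is a minimal illustrative sketch. It is not Deeploy’s actual method or API (the platform is proprietary); it simply shows, with a toy linear credit-scoring model and hypothetical feature names, how a model’s score can be decomposed into per-feature contributions so that each decision is traceable:

```python
# Illustrative sketch only: a toy linear model where per-feature
# contributions are exact (score = bias + sum of w_i * x_i).
# All feature names and weights below are hypothetical assumptions.

def explain_linear_decision(weights, features, bias=0.0):
    """Return the model score and each feature's additive contribution.

    For a linear model the contributions sum exactly to (score - bias),
    so the explanation is faithful to the decision it describes.
    """
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    return score, contributions

# Hypothetical applicant and model weights:
weights = {"income": 0.4, "debt_ratio": -1.2, "years_employed": 0.1}
applicant = {"income": 3.0, "debt_ratio": 0.5, "years_employed": 4.0}

score, contribs = explain_linear_decision(weights, applicant, bias=0.2)
# contribs records why the score came out as it did: here income
# raised the score by 1.2 while debt_ratio lowered it by 0.6.
```

Real explainability tooling applies the same principle to far more complex models, typically via approximation methods rather than exact decomposition, and logs the resulting attributions alongside each decision for audit purposes.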
“Europe has set a global standard for AI ethics and governance, and we are proud to be at the forefront of this movement,” said Maarten Stolk, CEO and Co-founder of Deeploy.
The EIC recognized Deeploy’s alignment with European AI principles, selecting the company for funding under the “Human-Centric Generative AI Made in Europe” challenge. Deeploy was chosen from over 4,000 applicants, highlighting the demand for trustworthy AI solutions.
As AI regulations tighten, there is growing pressure on businesses to ensure their models are interpretable and fair. Deeploy’s approach focuses on making AI decisions traceable, reducing risks associated with automated processes. By integrating compliance tools directly into its platform, the company aims to simplify adherence to evolving legal frameworks.
For organizations operating in regulated markets, explainability in AI is becoming a necessity rather than an option. Deeploy’s technology offers a structured way to track AI decision-making, which is particularly relevant for industries where accountability is critical. With this funding, the company is positioned to support enterprises seeking to deploy AI while maintaining regulatory compliance.