As AI systems present ambiguous threats and unpredictable behavior, building global trust in artificial intelligence (AI) has become increasingly complex. Ulla Coester, a project director at Fresenius University of Applied Sciences, emphasizes the importance of establishing adaptable governance to foster a shared understanding and consistent expectations for responsible AI development across societies. Coester argues that governance must be flexible enough to accommodate the evolving nature of AI behavior, posing a critical question: “What is at risk of being undermined – trust in AI or the trust in society’s resilience and self-understanding?” She highlights the EU AI Act as a pivotal measure for guiding the direction of AI development.
Aligning AI with global values requires cross-disciplinary collaboration. Coester maintains that ethical evaluation should begin by assessing the criticality level of the AI solution. From this foundation, teams must develop strategies that account for cultural and regulatory variances from the outset. Such collaboration ensures that AI technologies are developed in line with a broad spectrum of ethical and societal values, promoting more inclusive and responsible AI deployment.
During a video interview with Information Security Media Group at RSAC Conference 2025, Coester elaborated on several key themes. She stressed that trust in AI depends as much on societal expectations as on technological advancements. She also discussed the EU AI Act’s significant role in establishing common governance principles that guide ethical AI development, and outlined how interdisciplinary teams are crucial in ensuring that AI initiatives adhere to both ethical and legal standards.
Coester’s work centers on cultivating corporate cultures that harmonize a company’s ethical standards with broader societal expectations in the digital transformation era. Her approach includes targeted support for digitalization projects, with a particular focus on AI-driven applications. By incorporating ethical considerations into these projects, Coester aims to strengthen user acceptance and trust in AI technologies, fostering a more ethical and sustainable digital future.
Through her insights and initiatives, Coester underscores the necessity of integrating ethical design and governance in AI development. By addressing both technological and societal dimensions, she advocates for a balanced approach to AI, one that not only advances technological capabilities but also reinforces societal trust and ethical integrity.