Many companies see tangible scaling opportunities in this technology, and it is to be expected that, in the future, most products and services will either be AI-based or will have been developed or manufactured with the help of artificial intelligence. The question, therefore, is: how can we assure AI security and build trust in this emerging technology?
Dr. Tarek Besold, Head of Strategic AI
The use of AI harbors enormous potential for companies, as illustrated by the figures for a key sub-segment of AI: machine learning (ML) and self-learning systems. According to studies, the global ML market is expected to grow ten-fold, from the current level of around USD 15.5 billion to more than USD 150 billion by 2028. Many companies see tangible scaling opportunities in this technology.
However, when it comes to machine learning and other areas of AI, one aspect is drawing growing attention: trust in their security. This applies not only to the companies that use AI-based processes, but also to users, who only accept innovations and new technologies if they are convinced these are secure. For many people, AI is still just a buzzword, and the extent to which the technology is already embedded in our everyday lives is generally underestimated.
As a result, the TIC* sector (*Testing, Inspection & Certification) is also facing new challenges. For example, the use of AI-based systems is generating new customer and market needs that have not yet been adequately addressed in regulatory terms. For these reasons, DEKRA is investing in the strategic expansion of the DEKRA AI Hub so that it can assume a pioneering role as a digital TIC player.
Dr. Xavier Valero González and Dr. Tarek Besold in a virtual meeting
DEKRA DIGITAL is responsible for the operational management of the DEKRA AI Hub. The management team, consisting of Dr. Tarek Besold (Head of Strategic AI) and Dr. Xavier Valero González (Head of Applied AI), has the vision of shaping regulatory frameworks and testing them under practical conditions. As experts, for example, they are supporting the evolution of the regulatory and security environment for AI in Germany and internationally.
They are committed to clear standards that are continually adapted to reflect the state of the art; the same applies to inspection and certification against these AI standards. Alongside this visionary work, the DEKRA AI Hub also serves as a central resource for AI initiatives within the DEKRA Group. The current focus is on creating an AI ecosystem, together with manufacturers, users, and regulatory bodies, that concentrates on quality and security.
This ecosystem can be used as the basis for implementing AI projects, optimizing existing processes or products, and creating new and more secure products and services that satisfy DEKRA’s high quality standards.
In the following video, we talk to Tarek Besold about how to build trust in AI systems and what role DEKRA can play in this.
As a cooperation partner, DEKRA DIGITAL took part in the world’s largest mobility trade fair, IAA Mobility, which was held in Munich for the first time this year. There, we hosted a panel discussion entitled “Tech vs. Trust – How to ensure Safety in Future Mobility”.
On September 9, we debated how technologies in the mobility industry can be made safe in the future – with international experts on an on-site panel, broadcast live. The discussion covered topics such as automated driving and regulatory challenges, as well as artificial intelligence and cybersecurity.
We want to raise awareness of the need to make connectivity and data transmission, especially in cars, secure against external threats. The secure networking of technologies forms the basis for a safe and secure future of mobility. That is why it is crucial to test vehicles even more intensively against their security and safety requirements.