    Ai_O — Observatory of Artificial Intelligence in Work and Society

    Scenarios developed for the “ExamAI – AI Testing & Auditing” research project

    Published on 20 Nov 2020

    In May 2020, a joint project of the AI Observatory of the Policy Lab Digital, Work & Society was launched under the leadership of the German Informatics Society (GI) to find out how AI-based applications can be made trustworthy and safe for employees and consumers. The researchers have now defined scenarios for the use of artificial intelligence, thereby laying the foundations for the rest of the research project.

    Various research institutions have been collaborating on the joint “ExamAI – AI Testing & Auditing” project since May 2020, together exploring the question of how testing and auditing procedures can be developed and implemented for certifying AI systems. The project partners include Fraunhofer IESE, the Algorithm Accountability Lab at TU Kaiserslautern, the Institute of Legal Informatics at Saarland University, Stiftung Neue Verantwortung, and the German Informatics Society, which is heading the project. Initiated by the AI Observatory of the Policy Lab, the research project will last 20 months.

    Application scenarios providing the basis

    The first step saw the research institutions define specific scenarios that highlight the use of AI as well as the potential it offers and the challenges it poses; these scenarios form the basis for all further research. In the second step, the project will build on them to identify suitable testing and auditing practices for AI. In the third and final step, recommendations will be formulated for stakeholders and policymakers, indicating how the defined standards could be appropriately embedded in the near future, for example at an institutional level.

    Scenario development focusing on two areas in which AI can be utilised

    In developing the application scenarios, the researchers focused on two areas: firstly, human/machine interaction in industrial production and, secondly, personnel and talent management as well as recruiting. The thinking behind this is that the use of AI particularly affects employees’ rights in both areas.

    The researchers defined different scenarios for each aspect. With respect to human/machine interaction in industrial production, for example, they simulated a scenario in which an intelligent “cobot” incorrectly assembles air-conditioning compressors in automotive production. A cobot is a collaborative robot that can relieve people of activities that are highly monotonous, ergonomically straining, or even hazardous to their health. Using this scenario as a basis, the researchers then discussed the harm that using AI could cause as well as the potential that AI-controlled cobots offer, e.g. in terms of safety.
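
    The safety aspect of such an AI-controlled cobot can be made more tangible with a small sketch. The following Python snippet shows a pattern commonly discussed in this context: a deterministic safety envelope wrapped around a learned controller, so that unsafe commands are never executed whatever the AI policy suggests. The torque limit, the function names, and all values are assumptions made for illustration; they are not taken from the project.

        # Minimal sketch: a deterministic safety envelope around an AI cobot policy.
        # MAX_TORQUE_NM, the function names, and all values are illustrative
        # assumptions, not taken from the ExamAI project or any real cobot API.

        MAX_TORQUE_NM = 12.0  # assumed safe torque limit for this assembly step

        def within_safe_envelope(torque_nm: float) -> bool:
            """Return True only if the commanded torque lies inside the safe range."""
            return 0.0 <= torque_nm <= MAX_TORQUE_NM

        def control_step(suggested_torque_nm: float) -> float:
            """Vet a torque proposed by the learned policy before it is applied."""
            if not within_safe_envelope(suggested_torque_nm):
                return 0.0  # stop the motion instead of executing an unsafe command
            return suggested_torque_nm

        # The AI policy proposes torques; the deterministic guard vets each one.
        for proposal in (4.2, 9.8, 17.5):
            applied = control_step(proposal)
            print(f"proposed {proposal:4.1f} Nm -> applied {applied:4.1f} Nm")

    A guard like this also hints at what testing could focus on: the learned behaviour itself may be hard to certify, but the envelope around it can be checked deterministically.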

    The scientists also developed various scenarios for personnel and talent management as well as recruiting. In one scenario, for example, AI is used to perform a background check on a candidate for a job vacancy in order to verify the information the candidate has given. Among other things, this AI-based process takes the strain off the human resources department, thus saving resources. On the other hand, however, the AI might mistake the applicant for another person when performing the check. This is known as the “entity recognition problem”, and it can have detrimental consequences for the applicant: the background check then does not examine their details but those of the person the AI system has mistaken them for. This scenario, too, presents both opportunities and risks and thus forms part of the basis for further research during the project.
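
    The entity recognition problem can be illustrated in a few lines of code. The sketch below matches an applicant against external records using naive string similarity (Python's standard difflib); the names, records, and the 0.85 threshold are invented for this example. Both external records clear the threshold even though neither belongs to the applicant, which is exactly the mismatch described above.

        # Illustrative sketch of the entity recognition problem: naive name
        # matching conflates different people. All records, names, and the
        # 0.85 threshold are invented example data.
        from difflib import SequenceMatcher

        applicant = {"name": "Jan Meier", "birth_year": 1990}

        external_records = [
            {"name": "Jan Meyer", "birth_year": 1990, "source": "court register"},
            {"name": "Jana Meier", "birth_year": 1988, "source": "news article"},
        ]

        def name_similarity(a: str, b: str) -> float:
            """Similarity ratio in [0, 1]; 1.0 means the strings are identical."""
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        for record in external_records:
            score = name_similarity(applicant["name"], record["name"])
            if score >= 0.85:  # naive rule: treat high similarity as a match
                print(f"matched {record['name']!r} from {record['source']}, score = {score:.2f}")

    A real background-check system would combine several attributes, such as date of birth or address history, but the failure mode remains the same: without reliable identity resolution, the check may report findings about the wrong person.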

    Next step: testing and auditing procedures for trusted AI

    With the “ExamAI – AI Testing & Auditing” project, the researchers are seeking to answer various questions. Among other things, they want to find out what form testing and auditing procedures need to take to ensure the non-discriminatory use of artificial intelligence, and what legal and technical requirements must be met to achieve this. They also want to learn how transparency, traceability, fairness, accountability, reliability, and data protection can be put into practice.
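
    To give an idea of what one such test could look like in practice, the following sketch computes a simple fairness measure, the demographic parity gap between the selection rates of two applicant groups. The decision data are invented, and this check merely stands in for the procedures the project itself still has to develop.

        # Minimal sketch of one possible fairness check: the demographic parity
        # gap, i.e. the difference in selection rates between two groups.
        # The decision data below are invented example values.
        from collections import defaultdict

        # (group, decision) pairs: True means the AI system shortlisted the person.
        decisions = [
            ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
            ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
        ]

        totals = defaultdict(int)
        shortlisted = defaultdict(int)
        for group, decision in decisions:
            totals[group] += 1
            shortlisted[group] += int(decision)

        rates = {group: shortlisted[group] / totals[group] for group in totals}
        gap = abs(rates["group_a"] - rates["group_b"])

        print(f"selection rates: {rates}")
        print(f"demographic parity gap: {gap:.2f}")  # an audit might flag large gaps

    A single metric of this kind is, of course, only one building block; which combination of tests and legal requirements adds up to a meaningful audit is precisely the question the project sets out to answer.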

    In conclusion, the purpose is to identify the criteria that must be satisfied for artificial intelligence to be used in a trustworthy way. “We know that trust is of elementary importance, not least in workplaces and companies and in a wide range of work contexts,” explained State Secretary Björn Böhning at the launch of the project, highlighting a basic prerequisite for the use of AI. “If we want to reap the benefits of AI processes in the long term and continue expanding this technology, we also need to think about the question of acceptance. Employees should always be aware of which AI solutions are being used and when, why, and how they are used. They should be able to rely on the AI systems being subject to high quality standards,” he continued.