Project Description
The growing use of AI-based management and orchestration systems (cloud/edge virtualized infrastructures, private and public 5G, etc.) creates new attack surfaces for both the hosting frameworks and the AI/ML algorithms themselves. AI-based components are often black boxes, making them prime targets for threat actors, who have developed techniques to impair the robustness of AI systems with adversarial AI attacks, violate the integrity of AI models (e.g., by poisoning training data), and bypass or disable the models by querying them with malicious input. Moreover, threat actors also weaponize AI technology to perform attacks more effectively, ranging from automated individual threat actions, such as injection generators1,2 and information gathering and exploitation3,4, to complete and sophisticated attacks, such as AI-based emulation of Advanced Persistent Threat (APT) attacks5.
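To make the notion of an adversarial (evasion) attack concrete, the minimal sketch below crafts an adversarial example with the Fast Gradient Sign Method (FGSM). It is purely illustrative and not part of the AIAS platform; the use of PyTorch, the classifier interface and the epsilon budget are assumptions made for the example only.

    import torch
    import torch.nn.functional as F

    def fgsm_example(model, x, y, epsilon=0.03):
        """Illustrative FGSM evasion attack: perturb input x so a classifier
        misclassifies it, while the change stays within an L-infinity budget."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)    # loss w.r.t. the true labels y
        loss.backward()
        # Move each input feature a small step in the direction that increases the loss.
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()      # keep inputs in a valid range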
To address this challenge, the AIAS project aims to perform in-depth research on adversarial AI in order to design and develop an innovative AI-based security platform that protects the technical robustness of AI systems and the AI-based operations of organisations. The platform relies on adversarial AI defence methods (e.g., adversarial training, adversarial AI attack detection), deception mechanisms (e.g., high-interaction honeypots, digital twins, virtual personas), and explainable AI (XAI) solutions that empower security teams to materialise the concepts of “AI for Cybersecurity” (i.e., AI/ML-based tools that enhance the detection of, defence against, and response to cyberattacks) and “Cybersecurity for AI” (i.e., protection of AI systems against adversarial AI attacks).
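As one illustration of the adversarial training defence mentioned above, the hedged sketch below augments a standard training step with FGSM-perturbed inputs. The equal weighting of clean and adversarial losses, the epsilon budget, and the reuse of the fgsm_example helper from the previous sketch are assumptions for illustration, not the AIAS design.

    import torch.nn.functional as F

    def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
        """Illustrative adversarial training step: train on both clean and
        FGSM-perturbed inputs so the model learns to resist evasion attacks."""
        model.train()
        x_adv = fgsm_example(model, x, y, epsilon)   # reuse the attack sketch above
        optimizer.zero_grad()
        loss = 0.5 * F.cross_entropy(model(x), y) \
             + 0.5 * F.cross_entropy(model(x_adv), y)  # equal weighting (assumption)
        loss.backward()
        optimizer.step()
        return loss.item()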
AIAS envisions a sustainable, secure environment for AI systems, employing a long-term, international and cross-discipline research scheme to develop a novel platform for securing industries against adversarial AI attacks. AIAS aspires to design, develop and validate a holistic platform for monitoring, protecting and improving the robustness of AI systems against threat actors, while providing comprehensible and transparent explanations to administrators (i.e., via XAI) so that they can configure their AI systems and mitigate adversarial AI attacks. The proposed solution aims to solidify strategies for improving the resilience of AI systems to cyberattacks, safeguarding the confidentiality, integrity and availability of their operations. AIAS’s training programme aims to give ERs/ESRs the opportunity to transfer knowledge (ToK) and acquire practical skills that will stimulate their professional growth through carefully planned cross-partner training activities and interactions with the partner organisations and their respective networks in the cybersecurity and AI markets. The ERs/ESRs will dive into the state of the art (SotA) and innovative research challenges, collaborate in a multicultural environment, and develop new hard and soft skills to become leaders in the employment market.
Methodology
AIAS follows a well-structured approach consisting of three main phases. The programme begins and ends with consortium-wide joint technical work to maximize the transfer of knowledge (ToK) across the consortium and to define common scenarios, business models, architecture designs and performance evaluation methodologies that will be used throughout the AIAS lifespan.
- Phase 1 (M1-M18): System requirements and identification of the platform’s main components. During Phase 1, academic and industrial partners will collaborate to: i) identify and define the security, privacy, functional and ethical requirements that the AIAS platform as a whole, as well as its individual modules, must satisfy; ii) perform SotA reviews in the key programme areas, including deception methods and AI-driven detection and mitigation methods; iii) identify existing and emerging challenges in the AI-based cybersecurity ecosystem and analyse how each AIAS module can confront them; and iv) specify the tools and applications that will be used for the implementation of each AIAS module.
- Phase 2 (M19-M35): Implementation and validation of the main platform components. During Phase 2, the AIAS partners will collaborate to design and develop all AIAS modules. Each module will be designed following well-known techniques, implementing related methods and guidelines provided by the EU, and using cutting-edge technologies. The academic partners responsible for delivering the modules will be supported on a continuous basis by the industrial partners in order to meet the required functionalities and to conduct a thorough validation of their modules in relevant environments, which will be established and set up by the experienced industrial partners. The evaluation of each module will ensure that all requirements defined in Phase 1 are satisfied, through comprehensive assessment by both industrial and academic partners. Overall, the main output of this phase will be fully functional individual AIAS modules, which will serve as input to Phase 3.
- Phase 3 (M36-M48): Integration, proof-of-concept study and real-life assessment. Phase 3 includes actions related to the integration and adaptation of all the individual modules, APIs and algorithms defined and developed in the previous phases. The main objective of this phase is to deliver the AIAS platform, with all of its constituent modules functional and working together seamlessly; the platform will result from the integration of the individual modules. Once the platform integration is completed, all partners will assess the platform through carefully selected real-life pilot use cases. This assessment may lead to refinements based on the feedback acquired during the experiments; in that case, the changes will be applied to the modules immediately. The platform will then be re-evaluated to ensure that (i) each modification has delivered the required outcome, and (ii) the rest of the platform behaves as expected without malfunctions.