WO2023082112A1 - Apparatus, methods and computer programs - Google Patents

Apparatus, methods and computer programs

Info

Publication number
WO2023082112A1
WO2023082112A1 (PCT application PCT/CN2021/129895)
Authority
WO
WIPO (PCT)
Prior art keywords
security threat
risk
pipeline
security
model
Application number
PCT/CN2021/129895
Other languages
English (en)
Inventor
Iris ADAM
Tejas SUBRAMANYA
Jing PING
Original Assignee
Nokia Shanghai Bell Co., Ltd.
Nokia Solutions And Networks Oy
Application filed by Nokia Shanghai Bell Co., Ltd., Nokia Solutions And Networks Oy filed Critical Nokia Shanghai Bell Co., Ltd.
Priority to PCT/CN2021/129895
Publication of WO2023082112A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/50 Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems
    • G06F21/57 Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
    • G06F21/577 Assessing vulnerabilities and evaluating computer system security
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/094 Adversarial learning
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1433 Vulnerability analysis
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0475 Generative networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/048 Activation functions

Definitions

  • the present disclosure relates to apparatus, methods, and computer programs, and in particular but not exclusively to apparatus, methods and computer programs for network apparatuses.
  • a communication system can be seen as a facility that enables communication sessions between two or more entities such as user terminals, access nodes and/or other nodes by providing carriers between the various entities involved in the communications path.
  • a communication system can be provided for example by means of a communication network and one or more compatible communication devices.
  • the communication sessions may comprise, for example, communication of data for carrying communications such as voice, electronic mail (email) , text message, multimedia and/or content data and so on.
  • Content may be multicast or uni-cast to communication devices.
  • a user can access the communication system by means of an appropriate communication device or terminal.
  • a communication device of a user is often referred to as user equipment (UE) or user device.
  • the communication device may access a carrier provided by an access node and transmit and/or receive communications on the carrier.
  • the communication system and associated devices typically operate in accordance with a required standard or specification which sets out what the various entities associated with the system are permitted to do and how that should be achieved. Communication protocols and/or parameters which shall be used for the connection are also typically defined.
  • UTRAN 3G radio
  • Another example of an architecture that is known is the long-term evolution (LTE) or the Universal Mobile Telecommunications System (UMTS) radio-access technology.
  • LTE long-term evolution
  • UMTS Universal Mobile Telecommunications System
  • Another example communication system is the so-called 5G system that allows user equipment (UE) or a user device to contact a 5G core via e.g. new radio (NR) access technology or via other access technology such as untrusted access to 5GC or wireline access technology.
  • NR new radio
  • One of the current approaches being employed is closed-loop automation and machine learning, which can be built into self-organizing networks (SON), enabling an operator to automatically optimize every cell in the radio access network.
  • SON self-organizing networks
  • an apparatus for an artificial intelligence, AI, security risk and threat management function located outside at least one AI pipeline executing or configured to execute at least part of an AI model comprising means for: performing a first security threat and risk analysis associated with executing the AI model by the at least one AI pipeline to obtain a first value for a security threat and/or risk parameter; performing a second security threat and risk analysis associated with executing the AI model by the at least one AI pipeline to obtain a second value for the security threat and/or risk parameter; determining whether the change from the first value to the second value causes and/or would cause a predetermined tolerable risk for the security threat and/or risk parameter to be exceeded; and signalling the result of the determination to a security threat and risk management coordinator located outside of the at least one pipeline.
  • the apparatus may comprise means for determining that an entity in the AI model and/or the at least one AI pipeline has changed since the first security threat and risk analysis was performed, and wherein said means for performing the second security threat and risk analysis performs said new security threat and risk analysis in response to determining that said change has occurred.
  • the signalling may comprise an indication that the predetermined tolerable risk for the security threat and/or risk parameter has been exceeded or will be exceeded for an execution condition of the AI model.
  • the means for performing the second security threat and risk analysis for executing the AI model by the at least one AI pipeline may comprise means for: signalling, to security threat and risk manager(s) associated with respective AI trust management functions of each of the at least one AI pipelines, a request for the security threat and risk manager(s) to perform a security threat and risk assessment for their associated AI pipeline; and receiving, from the respective security threat and risk manager(s), a first security threat and risk indication of a respective security threat and risk associated with each of the at least one AI pipelines.
  • the means for performing the second security threat and risk analysis for executing the AI model by the at least one AI pipeline may comprise means for: signalling, to the security threat and risk management coordinator, a request for the security threat and risk management coordinator to perform a security threat and risk assessment for at least one AI trust management function executing the AI model by the at least one AI pipeline; and receiving, from the security threat and risk management coordinator, a second security threat and risk indication of the security threat and risk assessment performed for the at least one AI trust management function executing the AI model.
  • the apparatus may comprise means for: aggregating the first and second security threat and risk indications to form an aggregated security threat and risk; and performing said determining whether to cause an execution condition of the AI model to change in dependence on the aggregated security threat and risk.
  • the means for determining whether the change from the first value to the second value causes and/or would cause a predetermined tolerable risk for the security threat and/or risk parameter to be exceeded may comprise means for: comparing the aggregated security threat and risk to at least one predetermined parameter associated with an acceptable security threat and risk; and wherein the means for signalling the result of the determination to a security threat and risk management coordinator may comprise means for: signalling to the security threat and risk management coordinator a report indicating whether the aggregated security threat and risk is considered to be acceptable in dependence on the result of this determining.
  • the second security threat and risk indication may be associated with at least one of: a customer intent, a network operator intent, security constraints for a current network infrastructure, and/or an ownership of the AI model.
  • the apparatus may comprise means for receiving, from said security threat and risk management coordinator, at least one security condition associated with a quality of trust condition for executing the first and/or second security threat and risk analysis; and causing the at least one quality of trust condition to be used as at least one security condition to be fulfilled when the first and/or second security threat and risk analysis is executed.
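Read together, the aspects above describe a compare-and-report loop: obtain a first risk value, obtain a second value after something changes, test the change against a predetermined tolerable risk, and signal the outcome to a coordinator outside the pipelines. The following is a minimal Python sketch of that loop under assumed interfaces; every class, method and the numeric risk scale is hypothetical, as the disclosure does not define a concrete API.

```python
# Minimal sketch of the AI security risk and threat management function;
# all names and the 0..1 risk scale are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class RiskIndication:
    parameter: str  # the security threat and/or risk parameter
    value: float    # assessed value for that parameter

class AISecurityRiskManagementFunction:
    """Sits outside the AI pipeline(s) executing (part of) an AI model."""

    def __init__(self, pipeline_managers, coordinator, tolerable_risk):
        self.pipeline_managers = pipeline_managers  # risk managers inside each pipeline
        self.coordinator = coordinator              # coordinator outside the pipelines
        self.tolerable_risk = tolerable_risk        # predetermined tolerable risk
        self.first_value = None

    def analyse(self) -> float:
        # First indication(s): per-pipeline assessments from the risk managers
        # associated with each pipeline's AI trust management function.
        inside = [m.assess().value for m in self.pipeline_managers]
        # Second indication: infrastructure-level assessment from the coordinator.
        outside = self.coordinator.assess().value
        # Aggregate both indications into a single security threat and risk value.
        return max(inside + [outside])

    def run_analysis(self):
        if self.first_value is None:
            self.first_value = self.analyse()  # first analysis
            return
        second_value = self.analyse()          # second analysis, e.g. after a change
        # Would the change from the first to the second value cause the
        # predetermined tolerable risk to be exceeded?
        exceeded = second_value > self.tolerable_risk >= self.first_value
        # Signal the result of the determination to the coordinator.
        self.coordinator.report(second_value, exceeded)

class StubManager:
    def assess(self):
        return RiskIndication("model_integrity", 0.3)

class StubCoordinator(StubManager):
    def report(self, value, exceeded):
        print(f"aggregated risk={value}, tolerable risk exceeded={exceeded}")

func = AISecurityRiskManagementFunction([StubManager()], StubCoordinator(), 0.5)
func.run_analysis()  # establishes the first value
func.run_analysis()  # compares the second value against it
```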
  • an apparatus for an artificial intelligence, AI, security risk and threat management function associated with a trust management function of an AI pipeline comprising means for: receiving, from an AI security threat and risk analysis function located outside of the AI pipeline, a request for a security threat and risk analysis to be performed for an AI trust management function associated with executing at least part of an AI model in the AI pipeline; signalling a result of the first security threat and risk analysis to the AI security threat and risk analysis function located outside of the AI pipeline; receiving, from the AI security threat and risk analysis function located outside of the AI pipeline, a request for a new security threat and risk analysis to be performed for the AI trust management function associated with executing at least part of an AI model in the AI pipeline; and signalling a result of the new security threat and risk analysis to the AI security threat and risk analysis function located outside of the AI pipeline.
  • the apparatus may comprise means for: determining whether the AI model and/or an entity associated with executing the at least part of the AI model in the AI pipeline has changed since a last time a security threat and risk analysis was performed for the AI model; and when it is determined that the AI model and/or associated entity has changed, performing a new security threat and risk analysis associated with executing the at least part of the AI model in the AI pipeline.
  • the means for determining whether the AI model and/or AI pipeline has changed since the last time a security threat and risk analysis was performed for the AI model may comprise means for determining whether training data used for training the AI model has been changed since the last time the security threat and risk analysis was performed for the AI model, and/or means for collecting information from a pipeline orchestrator.
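The pipeline-side counterpart mainly answers analysis requests from the outside function and detects when a re-analysis is warranted, e.g. because the training data changed or the pipeline orchestrator reports a changed entity. A hedged sketch follows; training_data_bytes, entities_changed, vulnerability_score and receive_result are invented placeholder interfaces, not part of the disclosure.

```python
# Hedged sketch of the per-pipeline security threat and risk manager tied to
# an AI trust manager; all interfaces used here are invented placeholders.
import hashlib

class PipelineRiskManager:
    def __init__(self, pipeline, orchestrator):
        self.pipeline = pipeline            # the AI pipeline this manager serves
        self.orchestrator = orchestrator    # pipeline orchestrator (change reports)
        self._last_hash = None              # training-data fingerprint at last analysis

    def _training_data_hash(self) -> str:
        # Fingerprint the training data so a change can be detected cheaply.
        return hashlib.sha256(self.pipeline.training_data_bytes()).hexdigest()

    def model_or_pipeline_changed(self) -> bool:
        # A new analysis is warranted when the training data changed since the
        # last analysis, and/or when the orchestrator reports that an entity
        # involved in executing the model has changed.
        return (self._training_data_hash() != self._last_hash
                or self.orchestrator.entities_changed(self.pipeline))

    def handle_request(self, analysis_function):
        # Answer a request from the AI security threat and risk analysis
        # function located outside the pipeline, then signal the result back.
        result = self.pipeline.vulnerability_score()  # placeholder assessment
        self._last_hash = self._training_data_hash()
        analysis_function.receive_result(self.pipeline, result)
```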
  • an apparatus for a security threat and risk management coordinator located outside of at least one artificial intelligence, AI, pipeline comprising means for: receiving, from an AI security threat and risk analysis function located outside of the at least one AI pipeline, a request for a new security threat and risk analysis to be performed for an AI trust management function executing an AI model using at least one AI pipeline; performing a security threat and risk analysis by determining security issues and/or vulnerabilities that exist when the AI trust management function executes the AI model in a current network infrastructure; and signalling a result of the new security threat and risk analysis to the AI security threat and risk analysis function.
  • the apparatus may comprise means for: signalling, to the AI security threat and risk analysis function, a request for the AI security threat and risk analysis function to determine a security threat and risk associated with executing the AI model using the at least one pipeline before said receiving said request; and receiving, from the AI security threat and risk analysis function, an indication of a security threat and risk associated with said executing.
  • the apparatus may comprise means for: configuring at least one security condition associated with a quality of trust condition for an AI security threat and risk analysis performed by the AI security threat and risk analysis function; and signalling the configured at least one security condition to the AI security threat and risk analysis function.
  • the apparatus may comprise means for: receiving, from the AI security threat and risk analysis function, an aggregated security threat and risk associated with executing the AI model, wherein the aggregated security threat and risk comprises at least one value that represents security vulnerabilities both inside and outside of the at least one AI pipeline; and determining whether to change at least one current security constraint in a network infrastructure in response to determining the aggregated security threat and risk.
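The coordinator's role, as described above, is to assess the current network infrastructure on request, return the result, and react to an aggregated risk that is no longer acceptable. A sketch under the same assumptions (scan_vulnerabilities and tighten_security_constraints are illustrative placeholders):

```python
# Illustrative sketch of the coordinator located outside the AI pipelines;
# scan_vulnerabilities and tighten_security_constraints are assumptions.
class RiskManagementCoordinator:
    def __init__(self, infrastructure, max_acceptable_risk):
        self.infrastructure = infrastructure
        self.max_acceptable_risk = max_acceptable_risk

    def assess(self):
        # Determine the security issues and/or vulnerabilities that exist when
        # the AI trust management function executes the model on the current
        # network infrastructure.
        return self.infrastructure.scan_vulnerabilities()

    def handle_analysis_request(self, analysis_function):
        # Perform the requested analysis and signal the result back to the
        # AI security threat and risk analysis function.
        analysis_function.receive_result(None, self.assess())

    def on_aggregated_risk(self, aggregated_risk: float):
        # The aggregated risk covers vulnerabilities both inside and outside
        # the pipelines; adapt the security constraints if it is unacceptable.
        if aggregated_risk > self.max_acceptable_risk:
            self.infrastructure.tighten_security_constraints()
```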
  • an apparatus for an artificial intelligence, AI, security risk and threat management function located outside at least one AI pipeline executing or configured to execute at least part of an AI model comprising: at least one processor; and at least one memory comprising code that, when executed by the at least one processor, causes the apparatus to: perform a first security threat and risk analysis associated with executing the AI model by the at least one AI pipeline to obtain a first value for a security threat and/or risk parameter; perform a second security threat and risk analysis associated with executing the AI model by the at least one AI pipeline to obtain a second value for the security threat and/or risk parameter; determine whether the change from the first value to the second value causes and/or would cause a predetermined tolerable risk for the security threat and/or risk parameter to be exceeded; and signal the result of the determination to a security threat and risk management coordinator located outside of the at least one pipeline.
  • the apparatus may be caused to determine that an entity in the AI model and/or the at least one AI pipeline has changed since the first security threat and risk analysis was performed, and wherein said performing the second security threat and risk analysis performs said new security threat and risk analysis in response to determining that said change has occurred.
  • the signalling may comprise an indication that the predetermined tolerable risk for the security threat and/or risk parameter has been exceeded or will be exceeded for an execution condition of the AI model.
  • the performing the second security threat and risk analysis for executing the AI model by the at least one AI pipeline may comprise: signalling, to security threat and risk manager(s) associated with respective AI trust management functions of each of the at least one AI pipelines, a request for the security threat and risk manager(s) to perform a security threat and risk assessment for their associated AI pipeline; and receiving, from the respective security threat and risk manager(s), a first security threat and risk indication of a respective security threat and risk associated with each of the at least one AI pipelines.
  • the performing the second security threat and risk analysis for executing the AI model by the at least one AI pipeline may comprise: signalling, to the security threat and risk management coordinator, a request for the security threat and risk management coordinator to perform a security threat and risk assessment for at least one AI trust management function executing the AI model by the at least one AI pipeline; and receiving, from the security threat and risk management coordinator, a second security threat and risk indication of the security threat and risk assessment performed for the at least one AI trust management function executing the AI model.
  • the apparatus may be caused to: aggregate the first and second security threat and risk indications to form an aggregated security threat and risk; and perform said determining whether to cause an execution condition of the AI model to change in dependence on the aggregated security threat and risk.
  • the determining whether the change from the first value to the second value causes and/or would cause a predetermined tolerable risk for the security threat and/or risk parameter to be exceeded may comprise: comparing the aggregated security threat and risk to at least one predetermined parameter associated with an acceptable security threat and risk; and wherein the signalling the result of the determination to a security threat and risk management coordinator may comprise: signalling to the security threat and risk management coordinator a report indicating whether the aggregated security threat and risk is considered to be acceptable in dependence on the result of this determining.
  • the second security threat and risk indication may be associated with at least one of: a customer intent, a network operator intent, security constraints for a current network infrastructure, and/or an ownership of the AI model.
  • the apparatus may be caused to: receive, from said security threat and risk management coordinator, at least one security condition associated with a quality of trust condition for executing the first and/or second security threat and risk analysis; and cause the at least one quality of trust condition to be used as at least one security condition to be fulfilled when the first and/or second security threat and risk analysis is executed.
  • an apparatus for an artificial intelligence, AI, security risk and threat management function associated with a trust management function of an AI pipeline comprising: at least one processor; and at least one memory comprising code that, when executed by the at least one processor, causes the apparatus to: receive, from an AI security threat and risk analysis function located outside of the AI pipeline, a request for a security threat and risk analysis to be performed for an AI trust management function associated with executing at least part of an AI model in the AI pipeline; signal a result of the first security threat and risk analysis to the AI security threat and risk analysis function located outside of the AI pipeline; receive, from the AI security threat and risk analysis function located outside of the AI pipeline, a request for a new security threat and risk analysis to be performed for the AI trust management function associated with executing at least part of an AI model in the AI pipeline; and signal a result of the new security threat and risk analysis to the AI security threat and risk analysis function located outside of the AI pipeline.
  • the apparatus may be caused to: determine whether the AI model and/or an entity associated with executing the at least part of the AI model in the AI pipeline has changed since a last time a security threat and risk analysis was performed for the AI model; and when it is determined that the AI model and/or associated entity has changed, perform a new security threat and risk analysis associated with executing the at least part of the AI model in the AI pipeline.
  • the determining whether the AI model and/or AI pipeline has changed since the last time a security threat and risk analysis was performed for the AI model may comprise determining whether training data used for training the AI model has been changed since the last time the security threat and risk analysis was performed for the AI model, and/or collecting information from a pipeline orchestrator.
  • an apparatus for a security threat and risk management coordinator located outside of at least one artificial intelligence, AI, pipeline comprising: at least one processor; and at least one memory comprising code that, when executed by the at least one processor, causes the apparatus to: receive, from an AI security threat and risk analysis function located outside of the at least one AI pipeline, a request for a new security threat and risk analysis to be performed for an AI trust management function executing an AI model using at least one AI pipeline; perform a security threat and risk analysis by determining security issues and/or vulnerabilities that exist when the AI trust management function executes the AI model in a current network infrastructure; and signal a result of the new security threat and risk analysis to the AI security threat and risk analysis function.
  • the apparatus may be caused to: signal, to the AI security threat and risk analysis function, a request for the AI security threat and risk analysis function to determine a security threat and risk associated with executing the AI model using the at least one pipeline before said receiving said request; and receive, from the AI security threat and risk analysis function, an indication of a security threat and risk associated with said executing.
  • the apparatus may be caused to: configure at least one security condition associated with a quality of trust condition for an AI security threat and risk analysis performed by the AI security threat and risk analysis function; and signal the configured at least one security condition to the AI security threat and risk analysis function.
  • the apparatus may be caused to: receive, from the AI security threat and risk analysis function, an aggregated security threat and risk associated with executing the AI model, wherein the aggregated security threat and risk comprises at least one value that represents security vulnerabilities both inside and outside of the at least one AI pipeline; and determine whether to change at least one current security constraint in a network infrastructure in response to determining the aggregated security threat and risk.
  • a method for an apparatus for an artificial intelligence, AI, security risk and threat management function located outside at least one AI pipeline executing or configured to execute at least part of an AI model comprising: performing a first security threat and risk analysis associated with executing the AI model by the at least one AI pipeline to obtain a first value for a security threat and/or risk parameter; performing a second security threat and risk analysis associated with executing the AI model by the at least one AI pipeline to obtain a second value for the security threat and/or risk parameter; determining whether the change from the first value to the second value causes and/or would cause a predetermined tolerable risk for the security threat and/or risk parameter to be exceeded; and signalling the result of the determination to a security threat and risk management coordinator located outside of the at least one pipeline.
  • the method may comprise determining that an entity in the AI model and/or the at least one AI pipeline has changed since the first security threat and risk analysis was performed, and wherein said performing the second security threat and risk analysis performs said new security threat and risk analysis in response to determining that said change has occurred.
  • the signalling may comprise an indication that the predetermined tolerable risk for the security threat and/or risk parameter has been exceeded or will be exceeded for an execution condition of the AI model.
  • the performing the second security threat and risk analysis for executing the AI model by the at least one AI pipeline may comprise: signalling, to security threat and risk manager(s) associated with respective AI trust management functions of each of the at least one AI pipelines, a request for the security threat and risk manager(s) to perform a security threat and risk assessment for their associated AI pipeline; and receiving, from the respective security threat and risk manager(s), a first security threat and risk indication of a respective security threat and risk associated with each of the at least one AI pipelines.
  • the performing the second security threat and risk analysis for executing the AI model by the at least one AI pipeline may comprise: signalling, to the security threat and risk management coordinator, a request for the security threat and risk management coordinator to perform a security threat and risk assessment for at least one AI trust management function executing the AI model by the at least one AI pipeline; and receiving, from the security threat and risk management coordinator, a second security threat and risk indication of the security threat and risk assessment performed for the at least one AI trust management function executing the AI model.
  • the method may comprise: aggregating the first and second security threat and risk indications to form an aggregated security threat and risk; and performing said determining whether to cause an execution condition of the AI model to change in dependence on the aggregated security threat and risk.
  • the determining whether the change from the first value to the second value causes and/or would cause a predetermined tolerable risk for the security threat and/or risk parameter to be exceeded may comprise: comparing the aggregated security threat and risk to at least one predetermined parameter associated with an acceptable security threat and risk; and wherein the signalling the result of the determination to a security threat and risk management coordinator may comprise: signalling to the security threat and risk management coordinator a report indicating whether the aggregated security threat and risk is considered to be acceptable in dependence on the result of this determining.
  • the second security threat and risk indication may be associated with at least one of: a customer intent, a network operator intent, security constraints for a current network infrastructure, and/or an ownership of the AI model.
  • the method may comprise receiving, from said security threat and risk management coordinator, at least one security condition associated with a quality of trust condition for executing the first and/or second security threat and risk analysis; and causing the at least one quality of trust condition to be used as at least one security condition to be fulfilled when the first and/or second security threat and risk analysis is executed.
  • a method for an apparatus for an artificial intelligence, AI, security risk and threat management function associated with a trust management function of an AI pipeline comprising: receiving, from an AI security threat and risk analysis function located outside of the AI pipeline, a request for a security threat and risk analysis to be performed for an AI trust management function associated with executing at least part of an AI model in the AI pipeline; signalling a result of the first security threat and risk analysis to the AI security threat and risk analysis function located outside of the AI pipeline; receiving, from the AI security threat and risk analysis function located outside of the AI pipeline, a request for a new security threat and risk analysis to be performed for the AI trust management function associated with executing at least part of an AI model in the AI pipeline; and signalling a result of the new security threat and risk analysis to the AI security threat and risk analysis function located outside of the AI pipeline.
  • the method may comprise: determining whether the AI model and/or an entity associated with executing the at least part of the AI model in the AI pipeline has changed since a last time a security threat and risk analysis was performed for the AI model; and when it is determined that the AI model and/or associated entity has changed, performing a new security threat and risk analysis associated with executing the at least part of the AI model in the AI pipeline.
  • the determining whether the AI model and/or AI pipeline has changed since the last time a security threat and risk analysis was performed for the AI model may comprise determining whether training data used for training the AI model has been changed since the last time the security threat and risk analysis was performed for the AI model, and/or collecting information from a pipeline orchestrator.
  • a method for an apparatus for a security threat and risk management coordinator located outside of at least one artificial intelligence, AI, pipeline comprising: receiving, from an AI security threat and risk analysis function located outside of the at least one AI pipeline, a request for a new security threat and risk analysis to be performed for an AI trust management function executing an AI model using at least one AI pipeline; performing a security threat and risk analysis by determining security issues and/or vulnerabilities that exist when the AI trust management function executes the AI model in a current network infrastructure; and signalling a result of the new security threat and risk analysis to the AI security threat and risk analysis function.
  • the method may comprise: signalling, to the AI security threat and risk analysis function, a request for the AI security threat and risk analysis function to determine a security threat and risk associated with executing the AI model using the at least one pipeline before said receiving said request; and receiving, from the AI security threat and risk analysis function, an indication of a security threat and risk associated with said executing.
  • the method may comprise: configuring at least one security condition associated with a quality of trust condition for an AI security threat and risk analysis performed by the AI security threat and risk analysis function; and signalling the configured at least one security condition to the AI security threat and risk analysis function.
  • the method may comprise: receiving, from the AI security threat and risk analysis function, an aggregated security threat and risk associated with executing the AI model, wherein the aggregated security threat and risk comprises at least one value that represents security vulnerabilities both inside and outside of the at least one AI pipeline; and determining whether to change at least one current security constraint in a network infrastructure in response to determining the aggregated security threat and risk.
  • an apparatus for an artificial intelligence, AI, security risk and threat management function located outside at least one AI pipeline executing or configured to execute at least part of an AI model comprising: performing circuitry for performing a first security threat and risk analysis associated with executing the AI model by the at least one AI pipeline to obtain a first value for a security threat and/or risk parameter; performing circuitry for performing a second security threat and risk analysis associated with executing the AI model by the at least one AI pipeline to obtain a second value for the security threat and/or risk parameter; determining circuitry for determining whether the change from the first value to the second value causes and/or would cause a predetermined tolerable risk for the security threat and/or risk parameter to be exceeded; and signalling circuitry for signalling the result of the determination to a security threat and risk management coordinator located outside of the at least one pipeline.
  • the apparatus may comprise determining circuitry for determining that an entity in the AI model and/or the at least one AI pipeline has changed since the first security threat and risk analysis was performed, and wherein said performing circuitry for performing the second security threat and risk analysis performs said new security threat and risk analysis in response to determining that said change has occurred.
  • the signalling may comprise an indication that the predetermined tolerable risk for the security threat and/or risk parameter has been exceeded or will be exceeded for an execution condition of the AI model.
  • the performing circuitry for performing the second security threat and risk analysis for executing the AI model by the at least one AI pipeline may comprise: signalling circuitry for signalling, to security threat and risk manager(s) associated with respective AI trust management functions of each of the at least one AI pipelines, a request for the security threat and risk manager(s) to perform a security threat and risk assessment for their associated AI pipeline; and receiving circuitry for receiving, from the respective security threat and risk manager(s), a first security threat and risk indication of a respective security threat and risk associated with each of the at least one AI pipelines.
  • the performing circuitry for performing the second security threat and risk analysis for executing the AI model by the at least one AI pipeline may comprise: signalling circuitry for signalling, to the security threat and risk management coordinator, a request for the security threat and risk management coordinator to perform a security threat and risk assessment for at least one AI trust management function executing the AI model by the at least one AI pipeline; and receiving circuitry for receiving, from the security threat and risk management coordinator, a second security threat and risk indication of the security threat and risk assessment performed for the at least one AI trust management function executing the AI model.
  • the apparatus may comprise: aggregating circuitry for aggregating the first and second security threat and risk indications to form an aggregated security threat and risk; and performing circuitry for performing said determining whether to cause an execution condition of the AI model to change in dependence on the aggregated security threat and risk.
  • the determining circuitry for determining whether the change from the first value to the second value causes and/or would cause a predetermined tolerable risk for the security threat and/or risk parameter to be exceeded may comprise: comparing circuitry for comparing the aggregated security threat and risk to at least one predetermined parameter associated with an acceptable security threat and risk; and wherein the signalling circuitry for signalling the result of the determination to a security threat and risk management coordinator may comprise: signalling circuitry for signalling to the security threat and risk management coordinator a report indicating whether the aggregated security threat and risk is considered to be acceptable in dependence on the result of this determining.
  • the second security threat and risk indication may be associated with at least one of: a customer intent, a network operator intent, security constraints for a current network infrastructure, and/or an ownership of the AI model.
  • the apparatus may comprise receiving circuitry for receiving, from said security threat and risk management coordinator, at least one security condition associated with a quality of trust condition for executing the first and/or second security threat and risk analysis; and causing circuitry for causing the at least one quality of trust condition to be used as at least one security condition to be fulfilled when the first and/or second security threat and risk analysis is executed.
  • an apparatus for an artificial intelligence, AI, security risk and threat management function associated with a trust management function of an AI pipeline comprising: receiving circuitry for receiving, from an AI security threat and risk analysis function located outside of the AI pipeline, a request for a security threat and risk analysis to be performed for an AI trust management function associated with executing at least part of an AI model in the AI pipeline; signalling circuitry for signalling a result of the first security threat and risk analysis to the AI security threat and risk analysis function located outside of the AI pipeline; receiving circuitry for receiving, from the AI security threat and risk analysis function located outside of the AI pipeline, a request for a new security threat and risk analysis to be performed for the AI trust management function associated with executing at least part of an AI model in the AI pipeline; and signalling circuitry for signalling a result of the new security threat and risk analysis to the AI security threat and risk analysis function located outside of the AI pipeline.
  • the apparatus may comprise: determining circuitry for determining whether the AI model and/or an entity associated with executing the at least part of the AI model in the AI pipeline has changed since a last time a security threat and risk analysis was performed for the AI model; and when it is determined that the AI model and/or associated entity has changed, performing circuitry for performing a new security threat and risk analysis associated with executing the at least part of the AI model in the AI pipeline.
  • the determining circuitry for determining whether the AI model and/or AI pipeline has changed since the last time a security threat and risk analysis was performed for the AI model may comprise determining circuitry for determining whether training data used for training the AI model has been changed since the last time the security threat and risk analysis was performed for the AI model, and/or collecting circuitry for collecting information from a pipeline orchestrator.
  • an apparatus for a security threat and risk management coordinator located outside of at least one artificial intelligence, AI, pipeline comprising: receiving circuitry for receiving, from an AI security threat and risk analysis function located outside of the at least one AI pipeline, a request for a new security threat and risk analysis to be performed for an AI trust management function executing an AI model using at least one AI pipeline; performing circuitry for performing a security threat and risk analysis by determining security issues and/or vulnerabilities that exist when the AI trust management function executes the AI model in a current network infrastructure; and signalling circuitry for signalling a result of the new security threat and risk analysis to the AI security threat and risk analysis function.
  • the apparatus may comprise: signalling circuitry for signalling, to the AI security threat and risk analysis function, a request for the AI security threat and risk analysis function to determine a security threat and risk associated with executing the AI model using the at least one pipeline before said receiving said request; and receiving circuitry for receiving, from the AI security threat and risk analysis function, an indication of a security threat and risk associated with said executing.
  • the apparatus may comprise: configuring circuitry for configuring at least one security condition associated with a quality of trust condition for an AI security threat and risk analysis performed by the AI security threat and risk analysis function; and signalling circuitry for signalling the configured at least one security condition to the AI security threat and risk analysis function.
  • the apparatus may comprise: receiving circuitry for receiving, from the AI security threat and risk analysis function, an aggregated security threat and risk associated with executing the AI model, wherein the aggregated security threat and risk comprises at least one value that represents security vulnerabilities both inside and outside of the at least one AI pipeline; and determining circuitry for determining whether to change at least one current security constraint in a network infrastructure in response to determining the aggregated security threat and risk.
  • non-transitory computer readable medium comprising program instructions for causing an apparatus for an artificial intelligence, AI, security risk and threat management function located outside at least one AI pipeline executing or configured to execute at least part of an AI model to perform at least the following: perform a first security threat and risk analysis associated with executing the AI model by the at least one AI pipeline to obtain a first value for a security threat and/or risk parameter; perform a second security threat and risk analysis associated with executing the AI model by the at least one AI pipeline to obtain a second value for the security threat and/or risk parameter; determine whether the change from the first value to the second value causes and/or would cause a predetermined tolerable risk for the security threat and/or risk parameter to be exceeded; and signal the result of the determination to a security threat and risk management coordinator located outside of the at least one pipeline.
  • the apparatus may be caused to determine that an entity in the AI model and/or the at least one AI pipeline has changed since the first security threat and risk analysis was performed, and wherein said performing the second security threat and risk analysis performs said new security threat and risk analysis in response to determining that said change has occurred.
  • the signalling may comprise an indication that the predetermined tolerable risk for the security threat and/or risk parameter has been exceeded or will be exceeded for an execution condition of the AI model.
  • the performing the second security threat and risk analysis for executing the AI model by the at least one AI pipeline may comprise: signalling, to security threat and risk manager(s) associated with respective AI trust management functions of each of the at least one AI pipelines, a request for the security threat and risk manager(s) to perform a security threat and risk assessment for their associated AI pipeline; and receiving, from the respective security threat and risk manager(s), a first security threat and risk indication of a respective security threat and risk associated with each of the at least one AI pipelines.
  • the performing the second security threat and risk analysis for executing the AI model by the at least one AI pipeline may comprise: signalling, to the security threat and risk management coordinator, a request for the security threat and risk management coordinator to perform a security threat and risk assessment for at least one AI trust management function executing the AI model by the at least one AI pipeline; and receiving, from the security threat and risk management coordinator, a second security threat and risk indication of the security threat and risk assessment performed for the at least one AI trust management function executing the AI model.
  • the apparatus may be caused to: aggregate the first and second security threat and risk indications to form an aggregated security threat and risk; and perform said determining whether to cause an execution condition of the AI model to change in dependence on the aggregated security threat and risk.
  • the determining whether the change from the first value to the second value causes and/or would cause a predetermined tolerable risk for the security threat and/or risk parameter to be exceeded may comprise: comparing the aggregated security threat and risk to at least one predetermined parameter associated with an acceptable security threat and risk; and wherein the signalling the result of the determination to a security threat and risk management coordinator may comprise: signalling to the security threat and risk management coordinator a report indicating whether the aggregated security threat and risk is considered to be acceptable in dependence on the result of this determining.
  • the second security threat and risk indication may be associated with at least one of: a customer intent, a network operator intent, security constraints for a current network infrastructure, and/or an ownership of the AI model.
  • the apparatus may be caused to: receive, from said security threat and risk management coordinator, at least one security condition associated with a quality of trust condition for executing the first and/or second security threat and risk analysis; and cause the at least one quality of trust condition to be used as at least one security condition to be fulfilled when the first and/or second security threat and risk analysis is executed.
  • non-transitory computer readable medium comprising program instructions for causing an apparatus for an artificial intelligence, AI, security risk and threat management function associated with a trust management function of an AI pipeline to perform at least the following: receive, from an AI security threat and risk analysis function located outside of the AI pipeline, a request for a security threat and risk analysis to be performed for an AI trust management function associated with executing at least part of an AI model in the AI pipeline; signal a result of the first security threat and risk analysis to the AI security threat and risk analysis function located outside of the AI pipeline; receive, from the AI security threat and risk analysis function located outside of the AI pipeline, a request for a new security threat and risk analysis to be performed for the AI trust management function associated with executing at least part of an AI model in the AI pipeline; and signal a result of the new security threat and risk analysis to the AI security threat and risk analysis function located outside of the AI pipeline.
  • the apparatus may be caused to: determine whether the AI model and/or an entity associated with executing the at least part of the AI model in the AI pipeline has changed since a last time a security threat and risk analysis was performed for the AI model; and when it is determined that the AI model and/or associated entity has changed, perform a new security threat and risk analysis associated with executing the at least part of the AI model in the AI pipeline.
  • the determining whether the AI model and/or AI pipeline has changed since the last time a security threat and risk analysis was performed for the AI model may comprise determining whether training data used for training the AI model has been changed since the last time the security threat and risk analysis was performed for the AI model, and/or collecting information from a pipeline orchestrator.
  • non-transitory computer readable medium comprising program instructions for causing an apparatus for a security threat and risk management coordinator located outside of at least one artificial intelligence, AI, pipeline, to perform at least the following: receive, from an AI security threat and risk analysis function located outside of the at least one AI pipeline, a request for a new security threat and risk analysis to be performed for an AI trust management function executing an AI model using at least one AI pipeline; perform a security threat and risk analysis by determining security issues and/or vulnerabilities that exist when the AI trust management function executes the AI model in a current network infrastructure; and signal a result of the new security threat and risk analysis to the AI security threat and risk analysis function.
  • the apparatus may be caused to: signal, to the AI security threat and risk analysis function, a request for the AI security threat and risk analysis function to determine a security threat and risk associated with executing the AI model using the at least one pipeline before said receiving said request; and receive, from the AI security threat and risk analysis function, an indication of a security threat and risk associated with said executing.
  • the apparatus may be caused to: configure at least one security condition associated with a quality of trust condition for an AI security threat and risk analysis performed by the AI security threat and risk analysis function; and signal the configured at least one security condition to the AI security threat and risk analysis function.
  • the apparatus may be caused to: receive, from the AI security threat and risk analysis function, an aggregated security threat and risk associated with executing the AI model, wherein the aggregated security threat and risk comprises at least one value that represents security vulnerabilities both inside and outside of the at least one AI pipeline; and determine whether to change at least one current security constraint in a network infrastructure in response to determining the aggregated security threat and risk.
  • a computer program product stored on a medium that may cause an apparatus to perform any method as described herein.
  • an electronic device that may comprise apparatus as described herein.
  • a chipset that may comprise an apparatus as described herein.
  • Figures 1A and 1B show a schematic representation of a 5G system
  • Figure 2 shows a schematic representation of a network apparatus
  • Figure 3 shows a schematic representation of a user equipment
  • Figure 4 shows a schematic representation of a non-volatile memory medium storing instructions which when executed by a processor allow a processor to perform one or more of the steps of the methods of some examples;
  • Figure 5 shows a schematic representation of a network
  • Figures 6 and 7 show a schematic representation of architectures with respect to Trustworthy Artificial Intelligence Framework for Cognitive Autonomous Networks
  • Figure 8 illustrates example signalling that may be performed by entities described herein.
  • Figures 9 to 11 illustrate example operations that may be performed by apparatus described herein.
  • FIG. 1A shows a schematic representation of a 5G system (5GS) 100.
  • the 5GS may comprise a user equipment (UE) 102 (which may also be referred to as a communication device or a terminal), a 5G access network (AN) (which may be a 5G Radio Access Network (RAN) or any other type of 5G AN such as a Non-3GPP Interworking Function (N3IWF)/a Trusted Non-3GPP Gateway Function (TNGF) for Untrusted/Trusted Non-3GPP access or a Wireline Access Gateway Function (W-AGF) for Wireline access) 104, a 5G core (5GC) 106, one or more application functions (AF) 108 and one or more data networks (DN) 110.
  • UE user equipment
  • AN 5G access network, which may be a 5G Radio Access Network (RAN) or any other type of 5G AN such as a Non-3GPP Interworking Function (N3IWF) or a Trusted Non-3GPP Gateway Function (TNGF)
  • the 5G RAN may comprise one or more gNodeB (gNB) distributed unit functions connected to one or more gNodeB (gNB) centralized unit functions.
  • the RAN may comprise one or more access nodes.
  • the 5GC 106 may comprise one or more Access and Mobility Management Functions (AMF) 112, one or more Session Management Functions (SMF) 114, one or more authentication server functions (AUSF) 116, one or more unified data management (UDM) functions 118, one or more user plane functions (UPF) 120, one or more unified data repository (UDR) functions 122, one or more network repository functions (NRF) 128, and/or one or more network exposure functions (NEF) 124.
  • AMF Access and Mobility Management Functions
  • SMF Session Management Functions
  • AUSF authentication server functions
  • UDM unified data management
  • UPF user plane functions
  • UDR unified data repository
  • NRF network repository functions
  • NEF network exposure functions
  • the 5GC 106 also comprises a network data analytics function (NWDAF) 126.
  • NWDAF network data analytics function
  • the NWDAF is responsible for providing network analytics information upon request from one or more network functions or apparatus within the network.
  • Network functions can also subscribe to the NWDAF 126 to receive information therefrom.
  • the NWDAF 126 is also configured to receive and store network information from one or more network functions or apparatus within the network.
  • the data collection by the NWDAF 126 may be performed based on at least one subscription to the events provided by the at least one network function.
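A plain publish/subscribe skeleton captures the subscription-based data collection described above. This toy sketch is purely illustrative and does not reflect the actual 3GPP service-based interfaces of the NWDAF.

```python
class NWDAF:
    """Toy subscribe/notify skeleton; not the 3GPP-defined service interface."""

    def __init__(self):
        self._subscribers = {}   # event id -> list of notification callbacks
        self._store = []         # received and stored network information

    def subscribe(self, event_id, callback):
        # A network function subscribes to analytics for a given event.
        self._subscribers.setdefault(event_id, []).append(callback)

    def ingest(self, event_id, data):
        # Receive and store network information from a network function,
        # then notify every subscriber to that event.
        self._store.append((event_id, data))
        for notify in self._subscribers.get(event_id, []):
            notify(data)

nwdaf = NWDAF()
nwdaf.subscribe("load", lambda d: print("notified:", d))
nwdaf.ingest("load", {"nf": "AMF-1", "load": 0.65})
```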
  • the network may further comprise a management data analytics service (MDAS) .
  • MDAS may provide data analytics of different network related parameters including for example load level and/or resource utilisation.
  • the MDAS for a network function (NF) can collect the NF's load-related performance data, e.g., the resource usage status of the NF.
  • the analysis of the collected data may provide a forecast of resource usage for a predefined future time. This analysis may also recommend appropriate actions, e.g., scaling of resources, admission control, load balancing of traffic, etc.
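As a toy illustration of that idea, a forecast of resource usage can be extrapolated from collected load samples and mapped to a recommended action; the thresholds and the naive linear extrapolation below are assumptions, not the MDAS algorithm.

```python
# Toy MDAS-style analytics: forecast resource usage from collected load
# samples and recommend an action. Thresholds are illustrative assumptions.

def forecast_usage(samples: list[float], horizon: int = 1) -> float:
    # Naive forecast: extrapolate the most recent linear trend.
    if len(samples) < 2:
        return samples[-1] if samples else 0.0
    trend = samples[-1] - samples[-2]
    return samples[-1] + horizon * trend

def recommend_action(predicted_usage: float, capacity: float) -> str:
    if predicted_usage > 0.8 * capacity:
        return "scale out / apply admission control"
    if predicted_usage < 0.2 * capacity:
        return "scale in to save resources"
    return "no action"

# Example: rising load samples lead to a scale-out recommendation.
print(recommend_action(forecast_usage([40.0, 55.0, 70.0]), capacity=100.0))
```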
  • Figure 1B shows a schematic representation of a 5GC 106’ represented in current 3GPP specifications.
  • FIG. 1B shows a UPF 120’ connected to an SMF 114’ over an N4 interface.
  • the SMF 114’ is connected to each of a UDR 122’, an NEF 124’, an NWDAF 126’, an AF 108’, a Policy Control Function (PCF) 130’, an AMF 112’, and a Charging function 132’ over an interconnect medium that also connects these network functions to each other.
  • PCF Policy Control Function
  • 3GPP refers to a group of organizations that develop and release different standardized communication protocols. 3GPP is currently developing and publishing documents relating to Release 16, which concerns 5G technology, with Release 17 currently scheduled for 2022.
  • 3GPP is looking to integrate more self-organizing networks (SON) and cognitive autonomous networks (CAN) into communication network topographies.
  • SON self-organizing networks
  • CAN cognitive autonomous networks
  • an SON automatically interprets events to determine cause-effect relations and selects and executes actions, while a CAN reasons to formulate decisions/actions, although actions may require operator approval before execution.
  • SON self-organizing networks
  • CAN cognitive autonomous networks
  • An SON is an automation technology designed to make the planning, configuration, management, optimization and healing of mobile radio access networks simpler and faster.
  • an SON comprises a communication and sensing functionality (for obtaining local information through message exchange with neighboring nodes and performing an observation of the environmental condition by probing or sensing, for example), a knowledge database for storing the sensed/observed conditions, and a self-organization engine functionality for determining and causing optimal parameters to be implemented in a network. Initial parameter settings may thus later be improved in the self-optimization process.
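The SON decomposition above (communication/sensing, a knowledge database, and a self-organization engine) might be sketched as follows; the parameter update rule is a made-up placeholder.

```python
# Illustrative decomposition of the SON functionality described above;
# the self-optimisation rule is an invented placeholder.

class SONFunction:
    def __init__(self):
        self.knowledge = []                       # knowledge database

    def sense(self, neighbour_reports, local_observation):
        # Communication and sensing: message exchange with neighbouring
        # nodes plus probing/sensing of the environmental condition.
        self.knowledge.append({"neighbours": neighbour_reports,
                               "local": local_observation})

    def self_optimize(self, params):
        # Self-organisation engine: refine initial parameter settings
        # based on accumulated observations (self-optimisation).
        if self.knowledge and self.knowledge[-1]["local"].get("interference", 0) > 0.5:
            params["tx_power_dbm"] -= 1           # back off to reduce interference
        return params

son = SONFunction()
son.sense(neighbour_reports=[{"cell": 1, "load": 0.7}],
          local_observation={"interference": 0.8})
print(son.self_optimize({"tx_power_dbm": 20}))    # -> {'tx_power_dbm': 19}
```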
  • Closed-loop automation and machine learning can be built into SONs and/or CANs for enabling an operator to automatically optimize the performance of at least one cell in the radio access network.
  • an Artificial Intelligence (AI) or Machine Learning (ML) pipeline helps to automate AI/ML workflows by splitting them into independent, reusable and modular components that can then be pipelined together to create a model.
  • the AI/ML pipeline is iterative where each step is repeated to continuously improve the accuracy of the model.
  • An example AI/ML workflow in an AI pipeline comprises the following three components (a minimal sketch follows the list):
  • Data Source Managers (see below description of AI Data Source Managers 610A/610B in Figure 6) : A Data Source Manager is configured to implement such functions as data collection and data preparation.
  • AI Training Managers (see below description of AI Training Managers 611A/611B in Figure 6) : An AI Training Manager is configured to implement functions such as hyperparameter tuning.
  • AI Inference Managers (see below description of AI Inference Managers 612A/612B in Figure 6) : An AI Inference Manager is configured to implement functions such as model evaluation.
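  • purely as an illustration, the following hypothetical Python sketch shows the three components above chained into an iterative pipeline; all class and function names are assumptions of this sketch and are not part of the described framework:

```python
# Hypothetical sketch only: the three pipeline components described above,
# chained into an iterative workflow. All names are assumptions of this
# sketch and are not part of any described framework or specification.
class DataSourceManager:
    """Data collection and data preparation."""
    def collect_and_prepare(self) -> list[float]:
        raw = [1.0, 2.0, 4.0, 8.0]           # stand-in for collected network data
        return [x / max(raw) for x in raw]   # simple normalisation as "preparation"

class TrainingManager:
    """Model training, including a trivial stand-in for hyperparameter tuning."""
    def train(self, data: list[float]) -> dict:
        lr = min((0.1, 0.01), key=lambda c: abs(c - 0.01))   # toy "tuning"
        return {"weight": sum(data) / len(data), "lr": lr}

class InferenceManager:
    """Model evaluation."""
    def evaluate(self, model: dict) -> float:
        return model["weight"]               # stand-in for an accuracy metric

dsm, tm, im = DataSourceManager(), TrainingManager(), InferenceManager()
for _ in range(3):                           # the pipeline is iterative
    model = tm.train(dsm.collect_and_prepare())
    score = im.evaluate(model)
```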
  • the following describes an example Trustworthy Artificial Intelligence Framework (TAIF) for Cognitive Autonomous Networks (CAN) that addresses AI/ML model trustworthiness (e.g., fairness, explainability, robustness) , as described in PCT/EP2021/062396.
  • FIG. 6 shows a policy manager 601 that receives constraints from a network operator 602 and that receives a customer intent 603 (e.g. to handover to a new access point, to change a user plane path, etc. ) .
  • the policy manager 601 may determine policies in dependence on the received constraints and the customer intents, and send at least one of these policies to an AI trust engine 604, an AI pipeline orchestrator 605 and a service management and orchestration entity 606.
  • the policy manager 601 may signal a quality of trust policy to the AI trust engine 604, an AI quality of service policy to the AI Pipeline Orchestrator 605, and a service quality of service policy to the service management and orchestration entity 606.
  • the AI trust engine may further receive information from the network operator 602.
  • the service management and orchestration entity 606 provides service quality of service information and/or instructions to a resource manager 607.
  • FIG. 6 further illustrates a first AI pipeline 608A and a second AI pipeline 608B.
  • Each of these AI Pipelines comprises respective AI trust managers 609A/609B, AI Data Source Managers 610A/610B, AI Training Managers 611A/611B, and AI Inference Managers 612A/612B.
  • the AI Data Source Managers 610A/610B, AI Training Managers 611A/611B, and AI Inference Managers 612A/612B pass information between them in a loop fashion within each pipeline, as illustrated by the arrows of Figure 6.
  • AI Data Source Managers 610A/610B, AI Training Managers 611A/611B, and AI Inference Managers 612A/612B each exchange information with their respective AI trust manager 609A/609B, and each receive quality of service information from the AI pipeline orchestrator 605.
  • the AI trust engine 604 provides AI trust information to each of the AI trust managers 609A/609B for use in controlling their respective pipelines.
  • a service definition or the business/customer intent received at 603 comprises AI/ML trustworthiness requirements in addition to the Network/AI Quality of Service (QoS) requirements.
  • the TAIF is used to configure the requested AI/ML trustworthiness and to monitor and assure its fulfilment.
  • the TAIF introduces two novel management functions, the AI Trust Engine (one per management domain) and the AI Trust Manager (one per AI/ML Pipeline) , and introduces a concept called AI Quality of Trustworthiness (AI QoT) to define AI/ML model Trustworthiness in a unified way covering factors such as, for example, Fairness, Explainability, and Robustness.
  • the policy manager 601 is configured to receive information from the network operator 602. Furthermore the policy manager 601 is configured to receive or otherwise obtain a service definition or a business/customer intent.
  • the service definition or the business/customer intent may include AI/ML trustworthiness requirements in addition to the Network/AI Quality of Service (QoS) requirements, and the TAIF is used to configure the requested AI/ML trustworthiness and to monitor and assure its fulfilment.
  • the system can comprise a service management and orchestration 606 function configured to receive the service quality of service (QoS) from the policy manager 601.
  • the service management and orchestration 606 is configured to control at least one of a domain manager, an element manager, a virtual network function (VNF) manager, and/or resource manager 607 based on the output of the service management and orchestration function 606.
  • a domain manager allows for service management and orchestration to be performed across multiple domains. It is understood that references in the following to any single one of these entities also refer to the other entities in this list.
  • the system can comprise an AI pipeline orchestrator 605.
  • the AI pipeline orchestrator 605 is configured to obtain or receive an AI QoS from the policy manager 601 and, based on this, to control the operations of the Data Source Manager 610A, 610B, Model Training Manager 611A, 611B, and Model Inference Manager 612A, 612B for AI pipeline 1 608A and AI pipeline 2 608B.
  • the TAIF introduces two further management functions, the AI Trust Engine (trustworthiness function) 604 (one per management domain) and the AI Trust Manager 609A, 609B (one per AI/ML Pipeline 608A, 608B) , and six new interfaces (T1-T6) that are configured to support the interactions in the TAIF.
  • the AI Trust Engine 604 is configured to function as a center for managing all AI trustworthiness related components in the network, whereas the AI Trust Managers 609A, 609B are use case and often vendor specific, with knowledge of the AI use case and how it is implemented.
  • the example TAIF also employs the concept of AI Quality of Trustworthiness (AI QoT) to define AI/ML model trustworthiness in a unified way covering multiple factors such as, for example, fairness, explainability and robustness.
  • the AI QoT is, for example, passed from the policy manager 601 to the AI trust engine function 604 and is used similarly to how QoS is used for network performance.
  • An example QoT can be shown by the following table
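  • the original example table does not survive in this text; purely as a hypothetical illustration (all field names and values are assumptions of this sketch) , a QoT specification might be represented as follows:

```python
# Hypothetical illustration only: the original example table is not
# reproduced here. A QoT policy might map trustworthiness factors to
# use-case-specific target levels, analogous to QoS classes.
ai_qot_policy = {
    "use_case": "radio-resource-optimisation",   # illustrative name
    "risk_level": "high",
    "fairness": {"metric": "statistical_parity_difference", "max_abs": 0.1},
    "explainability": {"scope": "local", "required": True},
    "robustness": {"attack": "evasion", "min_accuracy_under_attack": 0.8},
}
```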
  • the customer intent 603 may be provided to the policy manager function 601. Figure 6 additionally shows the network operator (via the policy manager function 601) specifying, for example over the T1 interface, the required AI QoT (use case-specific, based on risk levels) to the AI Trust Engine 604.
  • the AI Trust Engine 604 translates the AI QoT into specific AI trustworthy (i.e., fairness, explainability and robustness) requirements and identifies the affected use-case-specific AI Trust Manager (s) . Using the T2 interface, the AI Trust Engine 604 configures the AI Trust Managers 609A, 609B.
  • the use case specific and implementation-aware AI Trust Manager 609A, 609B is configured to configure, monitor, and measure AI trustworthy requirements for AI Data Source Manager 610A, 610B over the T3 interface.
  • AI Trust Manager 609A, 609B is configured to configure, monitor, and measure AI trustworthy requirements for AI Training Manager 611A, 611B over the T4 interface.
  • the use case specific and implementation-aware AI Trust Manager 609A, 609B is configured to configure, monitor, and measure AI trustworthy requirements for AI Inference Manager 612A, 612B over the T5 interface.
  • the measured or collected TAI metrics and/or TAI explanations from the AI Data Source Manager 610A, 610B, AI Training Manager 611A, 611B and AI Inference Manager 612A, 612B regarding the AI Pipeline are pushed to the AI Trust Manager 609A, 609B over the T3, T4 and T5 interfaces.
  • the AI Trust Manager 609A, 609B pushes the TAI metrics and/or TAI explanations to the AI Trust Engine 604, over the T2 interface, based on the reporting mechanisms configured by the AI Trust Engine (trustworthiness function) .
  • the network operator 602 can request and receive the TAI metrics/explanations of an AI Pipeline from the AI Trust Engine over the T6 interface.
  • the Network Operator may decide to update the policy via Policy/Intent Manager.
  • the example TAI Framework thus enables various telco stakeholders (e.g., Cognitive Network Function vendors, network operators, regulators, end users) to trust the decisions/predictions made by AI/ML models in the network.
  • each AI/ML workflow component may be abstracted into an independent service that relevant stakeholders (for example data engineers, data scientists) can independently work on.
  • an AI/ML Pipeline Orchestrator (an example of which is provided by Kubeflow) can manage the AI/ML Pipelines' lifecycle. This management may comprise, for example, managing lifecycle stages such as commissioning, scaling, and decommissioning.
  • For AI/ML systems to be widely accepted, they should be trustworthy in addition to performing well (e.g., in terms of accuracy) .
  • Legal bodies are proposing frameworks on AI/ML applications; for example, the European Commission has proposed the first-ever legal framework on AI. This legal framework presents new rules for trustworthy AI, to which mission-critical AI-based systems must adhere in the near future.
  • the High-Level Expert Group (HLEG) on AI has developed the European Commission's Trustworthy AI (TAI) strategy.
  • the transparency requirement includes traceability, explainability and communication.
  • the accountability requirement includes auditability, minimization and reporting of negative impact, trade-offs and redress.
  • the human agency and oversight requirement includes fundamental rights, human agency and human oversight.
  • Fairness is the process of understanding bias introduced in the data, and ensuring the model provides equitable predictions across all demographic groups. It is important to apply fairness analysis throughout the entire AI/ML Pipeline, making sure to continuously re-evaluate the models from the perspective of fairness and inclusion. This is especially important when AI/ML is deployed in critical business processes that affect a wide range of end users. There are three broad approaches to detect bias in the AI/ML model:
  • Quantification of Fairness: There are several metrics that measure individual and group fairness. For example, Statistical Parity Difference, Average Odds Difference, Disparate Impact and Theil Index (a minimal sketch of two of these metrics follows) .
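  • as a minimal sketch, assuming binary predictions and a binary protected attribute, two of the group-fairness metrics named above can be computed as follows (illustrative code, not any particular toolkit's API) :

```python
# Minimal sketch of two group-fairness metrics named above, assuming binary
# predictions y_hat and a binary protected attribute a (1 = privileged group).
def rate(y_hat, a, group):
    sel = [y for y, g in zip(y_hat, a) if g == group]
    return sum(sel) / len(sel)

def statistical_parity_difference(y_hat, a):
    # P(y_hat=1 | unprivileged) - P(y_hat=1 | privileged); 0 is perfectly fair
    return rate(y_hat, a, 0) - rate(y_hat, a, 1)

def disparate_impact(y_hat, a):
    # Ratio of favourable-outcome rates between groups; 1 is perfectly fair
    return rate(y_hat, a, 0) / rate(y_hat, a, 1)

y_hat = [1, 0, 1, 1, 0, 1]
a     = [0, 0, 1, 1, 1, 0]
spd = statistical_parity_difference(y_hat, a)   # 2/3 - 2/3 = 0.0
di  = disparate_impact(y_hat, a)                # 1.0
```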
  • Pre-modelling explainability: To understand or describe data used to develop AI/ML models. For example, using algorithms such as ProtoDash and Disentangled Inferred Prior Variational Autoencoder Explainer.
  • explanations can be local (i.e., explaining a single instance/prediction) or global (i.e., explaining the global AI/ML model structure/predictions, e.g., based on combining many local explanations of each prediction) .
  • Evasion attacks involve carefully perturbing the input samples at test time to have them misclassified. For example, using techniques such as the Shadow attack and the Threshold attack (a minimal gradient-sign sketch of input perturbation follows after this list of attack types) .
  • Poisoning is adversarial contamination of training data.
  • Machine learning systems can be re-trained using data collected during operations. An attacker may poison this data by injecting malicious samples during operation that subsequently disrupt retraining. For example, using techniques such as Backdoor attack and Adversarial backdoor embedding.
  • Extraction attacks aim to duplicate a machine learning model through query access to a target model. For example, using techniques such as KnockoffNets and Functionally equivalent extraction.
  • Inference attacks determine if a sample of data was used in the training dataset of an AI/ML model. For example, using techniques such as Membership inference black-box and attribute inference black-box.
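  • to make "carefully perturbing the input" concrete, a minimal gradient-sign (FGSM-style) sketch on a toy linear model follows; it is illustrative only and is not the Shadow or Threshold attack named above:

```python
import numpy as np

# Minimal FGSM-style evasion sketch on a toy linear classifier.
# Illustrative only; real attacks target far more complex models.
w, b = np.array([2.0, -1.0]), 0.5            # toy model: score = w.x + b
x = np.array([0.1, 0.9])                     # clean input; score -0.2 -> class 0
grad_sign = np.sign(w)                       # d(score)/dx for a linear model
eps = 0.6                                    # perturbation budget
x_adv = x + eps * grad_sign                  # push the score towards class 1
flipped = (w @ x + b > 0) != (w @ x_adv + b > 0)   # True: sample misclassified
```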
  • Trainer: defenses applied during model training. For example, using techniques such as general adversarial training and Madry's protocol (a hedged code sketch of this category follows after this list) .
  • Transformer: defenses that transform the model or its inputs. For example, using techniques such as Defensive distillation and Neural cleanse.
  • Detector: defenses that detect adversarial or poisoned inputs. For example, using techniques such as detection based on activations analysis and detection based on spectral signatures.
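  • as an illustration of the "Trainer" category, a minimal adversarial-training loop on a toy logistic model follows; this is a generic sketch under simplifying assumptions, not a faithful rendering of Madry's protocol:

```python
import numpy as np

# Minimal adversarial-training loop ("Trainer" category): each step trains on
# worst-case perturbed inputs rather than clean ones. Generic sketch only.
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = (X @ np.array([1.5, -2.0]) > 0).astype(float)    # toy labels
w, lr, eps = np.zeros(2), 0.5, 0.1

for _ in range(200):
    # inner maximisation (FGSM-style): push each sample against its label
    X_adv = X - eps * np.sign(w) * (2 * y[:, None] - 1)
    p = 1.0 / (1.0 + np.exp(-(X_adv @ w)))           # logistic prediction
    w -= lr * X_adv.T @ (p - y) / len(y)             # outer minimisation step
```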
  • AI Trust Managers face a certain level of risk associated with various threats for AI pipelines, and the AI Trust Engine faces a certain level of risk associated with various threats for AI Trust Managers.
  • the risk level may also depend on the security mechanisms applied to the environment used for data collection, model training and the AI Trust Manager. For example, the risk of incorrect or manipulated data is much lower if data is sampled from a protected environment.
  • Some studies have considered a few attack scenarios applicable to AI/ML models.
  • the following attack types have been identified as relevant for AI pipelines: poisoning attacks and backdoor attacks in the training phase, and model stealing attacks and data extraction attacks in the inference phase. These, and other relevant types of attacks, are discussed above.
  • the total list of threats may comprise AI/ML-specific attacks (e.g. data and model poisoning, model stealing, etc. ) , as well as traditional attacks like denial of service attacks or attacks due to insecure storage.
  • current studies have not considered threats arising from context/environment information for specific deployments, such as ownership of models, number of allowed queries, model knowledge, etc.
  • the attack scenarios considered have concentrated on "offline” and "static" analysis.
  • a study on Zero touch network and Service Management (ZSM) proposed configuring domain analytics services to provide domain-specific insights and to generate domain-specific predictions based on data collected by domain data collection services and other data (e.g. data collected by other domains or provided by data services) .
  • end-to-end (E2E) service analytics services have been configured to be responsible for handling E2E service impact analysis and root cause analysis and for generating service-specific predictions. The verification of Service Level Specifications (SLSs) and the monitoring of Key Performance Indicators (KPIs) are also included in E2E service analytics.
  • security regarding AI/ML pipelines comprises considerations of, for example, security of data supply chain, model supply chain, model deployed in shared framework, interaction between multiple domains, trust between AI/ML service producer and customer, etc.
  • the model may be trained via a training dataset, with the trained model being tested using both a functional test and an adversarial test.
  • an adversarial test comprises testing the robustness of a trained model with respect to adversarial examples, while a functional test simply determines whether the model functions as expected.
  • Adversarial testing allows quantifying the robustness of a model to the considered attacks. If the testing results show that the robustness of the model is low, the model can be hardened more thoroughly, using adversarial training.
  • models can be dynamically updated during operation.
  • a model update may be initiated by an update of data as a pair of inference input and user feedback. This data update can be seen as new training data to refine the model. In this case, testing may not be feasible due to constraints like latency.
  • the distribution of data may change over time (distributional shift) and frequent model updates may be used to keep up with this change.
  • procuring trustworthy data sets for training takes a lot of effort.
  • protecting the data/model supply chain may take substantial effort and can even be infeasible, e.g., due to scarce resources.
  • AI model vendors may not be aware of the security risk arising from inference data and the execution environment, and vice versa. Further, AI model customers may not be aware of the security risk arising from constraints of the development environment, such as attackers obtaining model parameters via vulnerabilities of the system where the model is located (model stealing attack) .
  • Adversarial training only guarantees a certain level of robustness against evasion attacks in the inference stage, which is not sufficient for some applications.
  • adversarial training also increases the run-time for training by a certain factor that depends on the level of robustness required by Quality of Trust policies. Thus adversarial training may impact the performance of the network and overall accuracy of complex tasks.
  • the following proposes a mechanism for enabling an automated and dynamic threat and risk analysis during operation of services/models to determine the risk that a threat turns into a successful attack, to determine the impact of the risk and then to apply security controls accordingly.
  • threats may result from accidents or from intentional acts.
  • risk may be assessed as a combination of the damage potential of the risk (via the result of the impact assessment) and the likelihood of the risk actually occurring (via the result of the threat assessment) .
  • a dynamic risk analysis may be considered to be the continuous process of identifying and assessing risk, proposing, triggering, and/or enabling at least one action to eliminate or reduce risk, monitoring and reviewing, in the rapidly changing circumstances.
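  • assuming, purely for illustration, that damage potential and likelihood are each scored numerically, the combination described above might be sketched as follows (the scales and threshold are assumptions of this sketch) :

```python
# Illustrative sketch of the risk combination described above. The 1-5
# scales and the tolerable-risk threshold are assumptions for illustration.
def risk_score(impact: int, likelihood: int) -> int:
    """impact: damage potential (1-5); likelihood: probability class (1-5)."""
    return impact * likelihood            # simple risk-matrix product

TOLERABLE_RISK = 8                        # illustrative policy threshold

def assess(threats: dict[str, tuple[int, int]]) -> dict[str, bool]:
    """Return, per threat, whether mitigation is required."""
    return {name: risk_score(i, l) > TOLERABLE_RISK
            for name, (i, l) in threats.items()}

# e.g. data poisoning: high impact (4), medium likelihood (3) -> 12 -> mitigate
actions = assess({"data_poisoning": (4, 3), "model_stealing": (2, 2)})
```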
  • the following introduces a new logical entity/manager for threat and risk analysis.
  • a purpose of this new entity is to help ensure that security controls for AI pipelines and AI Trust Managers protect against misuse and misoperation, or otherwise reduce the security risk to an acceptable level.
  • the results from security threat and risk analysis may serve as input for the AI Trust Engine and for AI Trust Managers to apply security mechanisms based on the AI QoT and security requirements. For example, monitoring mechanisms for investigating inference data samples to detect if they are manipulated may be implemented.
  • the new logical entity/manager may be part of AI Trust Engine and/or AI Trust Manager in TAI framework.
  • a new interface (T8sec) between AI Trust Engine and (Security) Management &Orchestration may be defined to transfer security requirements.
  • a new interface (T9sec) between AI Pipeline Orchestrator and AI Trust Manager may be defined to transfer performance requirements.
  • interface extensions: interfaces T2, T3, T4 and T5 (shown in Figure 6) may be extended to support configuration/monitoring of security mechanisms; e.g., the T6 interface may be extended to request input for business impact evaluation.
  • the new logical entity/manager (in the AI Trust Engine and/or AI Trust Manager) may perform threat and risk analysis for an AI pipeline /AI pipeline phase and for an AI Trust Manager, respectively.
  • the new logical entity/manager may propose countermeasures based on tolerable security risk (if risk is higher than a tolerable risk then it should be mitigated) .
  • the AI Trust Manager may apply or trigger the configuration/monitoring of security controls (for robustness) of AI pipeline/AI pipeline phase according to threat and risk assessment. This may be initiated by the AI Trust Engine.
  • the AI Trust Engine may apply or trigger the configuration/monitoring of security controls (for robustness) of AI Trust Manager itself according to threat and risk assessment.
  • Figure 7 is a schematic diagram of an example architecture.
  • FIG 7 illustrates a policy manager 701 that is configured to receive both constraints from a network operator 702 and at least one customer intent 703.
  • the policy manager provides respective policies to an AI trust engine 704, an AI pipeline orchestrator 705, and a security management and orchestration entity 706.
  • the interface between the policy manager 701 and the AI trust engine 704 is labelled as a T1 interface.
  • a T6 interface is provided between the network operator 702 and the AI trust engine 704.
  • the AI trust engine 704 is further shown as comprising a first security threat and risk analysis manager 707.
  • This security threat and risk analysis manager 707 is configured to receive signalling from the AI Pipeline Orchestrator 705 and to exchange signalling with the security management and orchestration entity 706. This latter interface is labelled as a T8sec interface.
  • Figure 7 further shows a single AI pipeline 708. It is understood that although only a single AI pipeline is shown, this is merely for clarity and brevity, and that analogous techniques may be applied when there are multiple AI pipelines operating simultaneously.
  • the AI pipeline 708 may comprise an AI trust manager 709 (which comprises a second security threat and risk analysis manager 710) , an AI data source manager 711, an AI training manager 712, and an AI inference manager 713.
  • the second security threat and risk analysis manager may comprise an interface with the AI Pipeline Orchestrator 705. This interface is labelled as a T9sec interface.
  • Figure 7 further shows T2 to T5 interfaces, which correspond to at least the functionality of those interfaces in Figure 6.
  • the security threat and risk analysis managers may be configured to perform at least one of the following functions.
  • the first to third functions relate to the identification of threats, risks and/or vulnerabilities of the system being assessed;
  • the fourth and fifth relate to a quantification of these risks;
  • the sixth relates to a comparison of the quantified risks relative to an associated “tolerable” risk level.
  • These functions may be broadly categorized into two separate groups: 1. identifying and analyzing potential (future) events that may negatively impact individuals, assets, and/or the environment (i.e. hazard analysis) ; and 2. making judgments on the tolerability of the risk on the basis of a risk analysis while considering influencing factors (i.e. risk evaluation) .
  • a security threat and risk analysis manager may be configured to identify threats that are relevant to the assets. This may be done in a plurality of different ways.
  • the security threat and risk analysis manager may be configured to identify a threat using historical security incident data.
  • the security threat and risk analysis manager may be configured to collect and analyze former security testing information/results (and/or cause the collection and/or analysis of such information in another entity) .
  • a security threat and risk analysis manager may be configured to identify vulnerabilities and threat surfaces. Again, this may be performed in a plurality of different ways.
  • the security threat and risk analysis manager may be configured to analyze how the environment of the ML model development/deployment contributes to the vulnerabilities of a system. For example, encrypted data stored on some public storage devices would present an increased risk relative to the data being stored on private storage devices. This may cause the overall risk to be elevated above a threshold level of “tolerable risk” that is used by the security management and orchestration.
  • the security threat and risk analysis manager may determine whether the determined overall risk is higher than the “tolerable risk” and send an indication of the result of this determination to the security management and orchestration entity.
  • the data determined for assessing an overall risk may be expressed as at least one value that can be compared to a threshold risk level preconfigured in the security threat and risk analysis manager.
  • the corresponding data for assessing the risk may be collected from at least one of AI pipeline orchestrator, AI data source manager, AI Training manager and AI Inference Manager.
  • the security threat and risk analysis manager may identify vulnerabilities by collecting relevant domain information e.g., security capabilities/constraints of the infrastructure. This may be collected from, for example, the security management and orchestration entity.
  • the security threat and risk analysis manager may identify vulnerabilities by considering at least one of the ownership of the models, the number of queries one can make within a time period, use case, service type, available knowledge (full, half, none) of the models, etc. At least some of this information may be obtained from at least one of the AI pipeline orchestrator, the AI Training Manager, and the AI Inference Manager.
  • the security threat and risk analysis manager may be configured to identify the existing controls and their effect on the vulnerabilities and threats identified.
  • the security threat and risk analysis manager may be configured to collect information on applied security measures (e.g., Firewall (s) , Intrusion Detection System (s) (IDS) , etc. ) . This information may be obtained from, for example, a security management and orchestration entity.
  • the security threat and risk analysis manager may consider the QoT requirements already ensured in AI pipeline (s) from the policy control entity.
  • the security threat and risk analysis manager may be configured to quantify the probability of these potential threats occurring. For example, the security threat and risk analysis manager may assess threat intelligence collected in the system or from a 3rd party. This information may be provided from, for example, a security management and orchestration entity.
  • the security threat and risk analysis manager may be configured to quantify a business impact of these potential threats. This may be obtained via an input from a network administrator or a mobile network manager. This may be obtained via an input from the T6 interface.
  • the security threat and risk analysis manager may be configured to determine an acceptable risk and/or security controls. This may be determined, for example, following a consideration of at least one of: the security policies/intents from operator/customer (e.g., confidentiality captured as security goal) (this information may be received from a security management and orchestration entity) , the QoT requirements (e.g., desired adversarial robustness) , and an evaluation of a security risk threshold level received from an administrator or manager.
  • the threat and risk analysis may be performed based on existing methods such as, for example, MoRA (Modular Risk Assessment) .
  • MoRA has been developed at Fraunhofer AISEC (Fraunhofer-Institut für Angewandte und Integrierte Sicherheit) .
  • Other popular approaches are often based on attack trees, which can be used in conjunction with MoRA or independently.
  • the basic steps of a MoRA workflow comprise defining the model to be assessed, identifying assets within the model that are to be protected, analyzing the risks and threats to the protected assets within that model, and determining actions that may be performed to control and/or reduce the risks and/or threats to the protected assets.
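  • a hypothetical sketch of these four basic steps follows; the structure mirrors only the step names above and is not MoRA's actual data model or tooling:

```python
# Hypothetical sketch of the four workflow steps named above. The structure
# follows the step names only, not MoRA's actual data model or tooling.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    threats: list[str] = field(default_factory=list)

def define_model() -> list[Asset]:                       # steps 1 and 2
    return [Asset("training_data"), Asset("model_parameters")]

def analyze(assets: list[Asset]) -> None:                # step 3
    known = {"training_data": ["poisoning", "backdoor"],
             "model_parameters": ["model_stealing"]}
    for a in assets:
        a.threats = known.get(a.name, [])

def determine_actions(assets: list[Asset]) -> list[str]:  # step 4
    controls = {"poisoning": "sanitize training data",
                "backdoor": "retrain / inspect for triggers",
                "model_stealing": "rate-limit queries, protect parameters"}
    return [controls[t] for a in assets for t in a.threats]

assets = define_model()
analyze(assets)
actions = determine_actions(assets)
```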
  • a security threat and risk analysis manager may determine a tolerable risk based on QoT (robustness/security) requirements and/or security policies/intents.
  • the determined tolerable risk may be sent to a security management and orchestration, which may determine if any parameters and/or entities associated with executing the model are to change with the aim of reducing and/or otherwise mitigating at least one of the determined vulnerabilities, securities and/or threats.
  • Figure 8 shows an example workflow of how threats may be dynamically assessed within an AI trust framework.
  • Figure 8 assumes a security threat and risk analysis manager that is part of each of the AI trust engine and the AI trust manager.
  • at least part of the security threat and risk analysis manager (s) may be implemented as a separate logical entity and/or manager.
  • Figure 8 illustrates signalling that may be performed by a network operator and/or a customer 801, a security management and orchestration entity 802, a first security threat and risk manager 803, a second security threat and risk manager 804 that may be comprised within, or otherwise associated with, a trust manager associated with an AI pipeline, and at least one data source 805.
  • the network operator and/or customer 801 signals a request to the security management and orchestration entity 802.
  • This request may comprise, for example, an indication of at least one security goal for a service.
  • This request may comprise an identifier for the service to which the request relates.
  • This request may comprise an indication of (or otherwise a pointer to) the at least one security goal.
  • the at least one security goal may comprise a declarative intent to be achieved by the service.
  • the security management and orchestration entity 802 signals the first security threat and risk manager 803.
  • This signalling may comprise a request for a security threat and risk assessment of at least one AI pipeline and/or at least one AI pipeline phase associated with the service.
  • the signalling may optionally comprise a request for a security threat and risk assessment of an AI trust manager (s) associated with the service.
  • the security management and orchestration entity may determine to evaluate the security threats and risks of the network to apply security controls accordingly.
  • the security management and orchestration entity requests corresponding information from an AI Trust Engine in response to this determination.
  • the request may include the security requirements and the tolerable security risk.
  • the security management and orchestration entity may interpret and translate the security intent/requirements to security requirements for the AI Trust Engine.
  • the security management and orchestration entity may also transfer the translated security requirements to the AI Trust Engine as part of QoT parameters.
  • the first security threat and risk manager 803 signals, for each AI pipeline indicated in the signalling of 8002, the second security threat and risk manager 804.
  • This signalling may comprise a request for a security threat and risk assessment of the AI pipeline and/or AI pipeline phase associated with the service.
  • the first security threat and risk manager as part of the AI Trust Engine performs security threat and risk analysis for each AI Trust Manager if needed.
  • the first security threat and risk manager may perform this security threat and risk analysis of the AI Trust engine based on information of QoT robustness requirements, security intents/policies from network operators or customer, known threats, historical security incident data correlated with data like security capabilities/constraints and applied security measures in infrastructure, threat intelligence from 3rd parties, use case and service information from management database/security management and orchestration.
  • the first security threat and risk manager of the AI Trust Engine determines the threats and the type of security risk. When the first security threat and risk manager as part of the AI Trust Engine determines that this analysis does not need to be performed, steps 8004 and 8005 are not performed.
  • the first security threat and risk manager 803 signals the security management and orchestration entity 802 to collect data for threat and risk analysis for the AI trust manager and/or AI pipeline that is part of the signalling of 8003.
  • the information received by the first security threat and risk manager 803 during 8004 may be as described in the above-examples of types of information that may be collected.
  • the first security threat and risk manager 803 and/or the trust engine associated with the first security threat and risk manager 803 determines a threat and/or risk for the AI trust manager and/or AI pipeline using the information collected and/or received during 8004.
  • 8006 to 8008 relate to the second security threat and risk analysis entity of the second security threat and risk manager checking the security status of the AI pipeline/AI pipeline phase and determining whether a threat and risk analysis should be performed.
  • the second security threat and risk analysis entity of the AI Trust Manager imports information from a management database or requests information from the AI pipeline orchestrator (last update of pipeline/pipeline phase, type of modification, etc. ) .
  • a security threat and risk analysis may be triggered by a notification from AI pipeline orchestrator about changes/updates of AI pipeline (s) .
  • the second security threat and risk analysis entity of the trust manager 804 signals a request to the at least one data source 805.
  • This request may be a request for information relating to an AI pipeline status.
  • This AI pipeline status may indicate whether the AI pipeline model has been retrained since the last threat and risk assessment has been performed.
  • the second security threat and risk analysis entity of the trust manager 804 receives, from the at least one data source 805, an indication of the requested AI pipeline status.
  • the second security threat and risk manager 804 analyses all of the data received and/or stored to determine whether a new threat and risk analysis is required (e.g. for an updated training phase) . When it is determined that a new threat and risk analysis is required, the system proceeds to 8009.
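  • this trigger logic might be sketched as follows, assuming (purely for illustration) that timestamps for the last pipeline update and the last assessment are available; all names are hypothetical:

```python
from datetime import datetime

# Minimal sketch of the re-analysis trigger: a new threat and risk analysis
# is required if the pipeline (e.g. its model) changed after the last
# assessment. Field and function names are illustrative assumptions.
def needs_new_analysis(last_pipeline_update: datetime,
                       last_assessment: datetime,
                       retrained_since_assessment: bool) -> bool:
    return retrained_since_assessment or last_pipeline_update > last_assessment

if needs_new_analysis(datetime(2021, 11, 9), datetime(2021, 11, 1), False):
    pass  # proceed to step 8009: collect data and run the analysis
```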
  • the second security threat and risk analysis entity of the trust manager 804 and the at least one data source 805 exchanges signalling.
  • This signalling may relate to the collection of data for a particular AI pipeline.
  • This data may relate to, for example, a particular AI pipeline identifier and/or a particular AI pipeline phase.
  • This data may relate to, for example, former testing that has been performed, security measures that are in place, and/or specific model knowledge for the identified AI pipeline and/or pipeline phase.
  • the second security threat and risk analysis entity/entities of the AI Trust Manager collect and/or import data for threat and risk analysis from diverse sources.
  • the following illustrates the type of information that may be collected/imported during this phase: the ownership of the models (e.g., trusted 3rd parties) , the number of queries one can make within a time period, available knowledge (full, half, none) of the models from AI Training/Inference Manager/AI pipeline orchestrator, security capabilities/constraints and applied security measures in infrastructure from Security Threat and Risk Manager of AI Trust Engine (e.g., existing mitigation strategies for backdoor attacks (detection and removal) are based on different knowledge requirements of training data sets and models) , use case and service information from management database, and/or performance requirements from AI pipeline orchestrator.
  • the second security threat and risk manager 804 performs threat and risk analysis. This analysis may be performed in dependence on context and/or model data obtained during the above-mentioned signalling steps.
  • the second security threat and risk analysis entity of the AI Trust Manager may perform threat and risk analysis based on information of QoT robustness requirements, security intents/policies from network operators or the customer, known threats, former security and adversarial testing results, and historical security incident data correlated with the collected data.
  • the second security threat and risk analysis entity of the AI trust manager determines which components need to be protected and/or the type of security risk.
  • when the risk is higher than the tolerable risk (which may be quantified with respect to at least one threshold associated with a respective type of threat) , the threat will be mitigated by countermeasures.
  • for example, the risk associated with encrypted data stored on public storage devices may be higher than the tolerable risk. In this case, the location for storing the encrypted data may be changed to a more secure location.
  • when the risk is lower than the tolerable risk, the threat is not mitigated by countermeasures. In this case, the AI trust manager and/or the security management and orchestration entity does not cause any action to be taken to mitigate at least one of the identified risks.
  • the second security threat and risk analysis entity of the AI trust manager, and/or the first security threat and risk analysis entity of the AI trust engine, and/or the security management and orchestration entity may be configured to not cause any action to be taken to mitigate effects of the identified risks that fall below their associated tolerable risk threshold.
  • the second security threat and risk analysis entity of the AI Trust Manager 804 provides a report for a particular AI pipeline to the first security threat and risk manager 803.
  • the first security threat and risk manager 803 provides a custom report to the security management and orchestration entity.
  • This custom report may collate together data from multiple security threat and risk managers for conveyance in a summary form.
  • the security management and orchestration entity may propose countermeasures to overcome or reduce the security risk.
  • Recommendations/proposed mitigation strategies may comprise at least one of: model retraining to restore the model, or treating inference data samples separately, in the case of an anticipated backdoor attack; model hardening in the case of an anticipated evasion attack; training the model with privacy guarantees in the case of an anticipated data extraction attack; and securing model (hyper-) parameters, deploying secure hardware, or applying encryption schemes in the case of an anticipated model stealing attack.
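  • the recommendation step can be pictured as a simple lookup from anticipated attack type to the mitigations listed above; the following sketch rests on that assumption and uses illustrative names only:

```python
# Sketch of the recommendation step as a lookup from anticipated attack type
# to the mitigations listed above. Purely illustrative names and structure.
MITIGATIONS = {
    "backdoor":        ["retrain model", "treat inference samples separately"],
    "evasion":         ["harden model (adversarial training)"],
    "data_extraction": ["train with privacy guarantees"],
    "model_stealing":  ["secure (hyper-)parameters", "deploy secure hardware",
                        "apply encryption schemes"],
}

def propose_countermeasures(anticipated: list[str]) -> list[str]:
    return [m for attack in anticipated for m in MITIGATIONS.get(attack, [])]

plan = propose_countermeasures(["backdoor", "model_stealing"])
```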
  • the first security threat and risk manager of the AI Trust Engine may use the analysis of the anticipated threats and/or risks respectively to initiate and/or update the security measures on AI pipeline and/or AI Trust Managers, depending on where and what the determined risk is.
  • the custom security threat and risk report may be used by the AI Trust Engine to determine robustness requirements for AI trustworthiness (AI QoT) and to send them to the AI Trust Managers of the AI pipeline over the T2 interface.
  • AI Trust Managers configure, monitor and measure AI robustness requirements for AI Data Source Manager, AI Training Manager and AI Inference Manager over T3, T4 and T5 interfaces respectively.
  • Figures 9 to 11 are flow charts illustrating potential operations that may be performed by apparatus described herein.
  • the following illustrates certain aspects of the examples discussed above; consequently, features discussed above may also be incorporated into the following. It is further understood that the apparatus of Figures 9 to 11 may be configured to interact with each other where applicable.
  • Figure 9 illustrates operations that may be performed by an apparatus for an artificial intelligence, AI, security risk and threat management function located outside at least one AI pipeline executing or configured to execute at least part of an AI model.
  • the apparatus of Figure 9 may interact with the apparatus of Figure 10.
  • the apparatus of Figure 9 may interact with the apparatus of Figure 11.
  • the apparatus performs a first security threat and risk analysis associated with executing the AI model by the at least one AI pipeline to obtain a first value for a security threat and/or risk parameter.
  • the apparatus performs a second security threat and risk analysis associated with executing the AI model by the at least one AI pipeline to obtain a second value for the security threat and/or risk parameter.
  • the apparatus determines whether the change from the first value to the second value causes and/or would cause a predetermined tolerable risk for the security threat and/or risk parameter to be exceeded.
  • the apparatus signals the result of the determination to a security threat and risk management coordinator located outside of the at least one pipelines.
  • the apparatus may determine that an entity in the AI model and/or the at least one AI pipeline has changed since the first security threat and risk analysis was performed. In such a case, the performing the second security threat and risk analysis may be performed in response to determining that said change has occurred.
  • the signalling may comprise an indication that the predetermined tolerable risk for the security threat and/or risk parameter has been exceeded, or will be exceeded, by an execution condition for the AI model.
  • the performing the second security threat and risk analysis for executing the AI model by the at least one AI pipeline may comprise: signalling, to security threat and risk manager (s) associated with respective AI trust management functions of each of the at least one AI pipelines, a request for the security threat and risk manager (s) to perform a security threat and risk assessment for their associated AI pipeline; and receiving, from the respective security threat and risk manager (s) , first security threat and risk indication of a respective security threat and risk associated with each of the at least one AI pipelines.
  • the performing the second security threat and risk analysis for executing the AI model by the at least one AI pipeline may comprise: signalling, to the security threat and risk management coordinator, a request for the security threat and risk management coordinator to perform a security threat and risk assessment for at least one AI trust management function executing the AI model by the at least one AI pipeline; and receiving, from the security threat and risk management coordinator, a second security threat and risk indication of a security threat and risk assessment for the at least one AI trust management function executing the AI model.
  • the apparatus may aggregate the first and second security threat and risk indications to form an aggregated security threat and risk; and perform said determining whether to cause an execution condition of the AI model to change in dependence on the aggregated security threat and risk.
  • the determining whether the change from the first value to the second value causes and/or would cause a predetermined tolerable risk for the security threat and/or risk parameter to be exceeded may comprise: comparing the aggregated security threat and risk to at least one predetermined parameter associated with an acceptable security threat and risk; and wherein the signalling the result of the determination to a security threat and risk management coordinator comprises: signalling to the security threat and risk management coordinator a report indicating whether the aggregated security threat and risk is considered to be acceptable in dependence on the result of this determining.
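  • one hedged reading of the aggregation is a worst-case (maximum) combination across pipelines and the trust management function, compared against an acceptable level; the following sketch assumes numeric risk scores (an assumption of this sketch, not stated in the source) :

```python
# Hedged sketch of the aggregation step: combine per-pipeline indications
# with the trust-management-function indication and compare against the
# acceptable level. The worst-case (max) combination is an assumption.
def aggregate(pipeline_risks: list[float], trust_mgr_risk: float) -> float:
    return max(pipeline_risks + [trust_mgr_risk])

ACCEPTABLE = 0.6                                 # illustrative threshold
agg = aggregate([0.4, 0.7], 0.3)                 # -> 0.7
report = {"aggregated_risk": agg, "acceptable": agg <= ACCEPTABLE}
```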
  • the second security threat and risk indication may be associated with at least one of: a customer intent, a network operator intent, security constraints for a current network infrastructure, and/or an ownership of the AI model.
  • the apparatus may receive, from said security threat and risk management coordinator, at least one security condition associated with a quality of trust condition for executing the first and/or second security threat and risk analysis; and cause the at least one quality of trust condition to be used as at least one security condition to be fulfilled when the first and/or second security threat and risk analysis is executed.
  • Figure 10 illustrates potential operations that may be performed by an apparatus for an artificial intelligence, AI, security risk and threat management function associated with a trust management function of an AI pipeline.
  • the apparatus of Figure 10 may interact with the apparatus of Figure 9.
  • the apparatus receives, from an AI security threat and risk analysis function located outside of the AI pipeline (e.g. from the apparatus of Figure 9) , a request for a security threat and risk analysis to be performed for an AI trust management function associated with executing at least part of an AI model in the AI pipeline.
  • the apparatus signals a result of the first security threat and risk analysis to the AI security threat and risk analysis function located outside of the AI pipeline.
  • the apparatus receives, from the AI security threat and risk analysis function located outside of the AI pipeline, a request for a new security threat and risk analysis to be performed for the AI trust management function associated with executing at least part of an AI model in the AI pipeline.
  • the phrase “new security threat and risk analysis” is used synonymously with the phrase “second security threat and risk analysis” .
  • the apparatus signals a result of the new security threat and risk analysis to the AI security threat and risk analysis function located outside of the AI pipeline.
  • the apparatus may determine whether the AI model and/or an entity associated with executing the at least part of the AI model in the AI pipeline has changed since a last time a security threat and risk analysis was performed for the AI model; and, when it is determined that the AI model and/or associated entity has changed, perform a new security threat and risk analysis associated with executing the at least part of the AI model in the AI pipeline.
  • the determining whether the AI model and/or AI pipeline has changed since the last time a security threat and risk analysis was performed for the AI model may comprise determining whether training data used for training the AI model has been changed since the last time the security threat and risk analysis was performed for the AI model, and/or collecting information from a pipeline orchestrator.
  • Figure 11 illustrates potential operations that may be performed by an apparatus for a security threat and risk management coordinator located outside of at least one artificial intelligence, AI, pipeline.
  • the apparatus of Figure 11 may interact with the apparatus of Figure 9.
  • the apparatus receives, from an AI security threat and risk analysis function located outside of the at least one AI pipeline (e.g. from the apparatus of Figure 9) , a request for a new security threat and risk analysis to be performed for an AI trust management function executing an AI model using at least one AI pipeline.
  • the apparatus performs a security threat and risk analysis by determining security issues and/or vulnerabilities that exist when the AI trust management function executes the AI model in a current network infrastructure.
  • the apparatus signals a result of the new security threat and risk analysis to the AI security threat and risk analysis function.
  • the apparatus may signal, to the AI security threat and risk analysis function, a request for the AI security threat and risk analysis function to determine a security threat and risk associated with executing the AI model using the at least one pipeline before said receiving said request; and receive, from the AI security threat and risk analysis function, an indication of a security threat and risk associated with said executing.
  • the apparatus may configure at least one security condition associated with quality of trust condition for an AI security threat and risk analysis performed by the AI security threat and risk analysis function; and signal the configured at least one security condition to the AI security threat and risk analysis function.
  • the apparatus may receive, from the AI security threat and risk analysis function, an aggregated security threat and risk associated with executing the AI model, wherein the aggregated security threat and risk comprises at least one value that represents security vulnerabilities both inside and outside of the at least one AI pipeline; and determine whether to change at least one current security constraint in a network infrastructure in response to determining the aggregated security threat and risk.
  • Figure 2 shows an example of a control apparatus for a communication system, for example to be coupled to and/or for controlling a station of an access system, such as a RAN node, e.g. a base station, gNB, a central unit of a cloud architecture or a node of a core network such as an MME or S-GW, a scheduling entity such as a spectrum management entity, or a server or host, for example an apparatus hosting an NRF, NWDAF, AMF, SMF, UDM/UDR etc.
  • the control apparatus may be integrated with or external to a node or module of a core network or RAN.
  • base stations comprise a separate control apparatus unit or module.
  • control apparatus can be another network element such as a radio network controller or a spectrum controller.
  • the control apparatus 200 can be arranged to provide control on communications in the service area of the system.
  • the apparatus 200 comprises at least one memory 201, at least one data processing unit 202, 203 and an input/output interface 204. Via the interface the control apparatus can be coupled to a receiver and a transmitter of the apparatus.
  • the receiver and/or the transmitter may be implemented as a radio front end or a remote radio head.
  • the control apparatus 200 or processor 201 can be configured to execute an appropriate software code to provide the control functions.
  • a possible wireless communication device will now be described in more detail with reference to Figure 3 showing a schematic, partially sectioned view of a communication device 300.
  • a communication device is often referred to as user equipment (UE) or terminal.
  • An appropriate mobile communication device may be provided by any device capable of sending and receiving radio signals.
  • Non-limiting examples comprise a mobile station (MS) or mobile device such as a mobile phone or what is known as a ‘smart phone’ , a computer provided with a wireless interface card or other wireless interface facility (e.g., USB dongle) , a personal data assistant (PDA) or a tablet provided with wireless communication capabilities, or any combinations of these or the like.
  • a mobile communication device may provide, for example, communication of data for carrying communications such as voice, electronic mail (email) , text message, multimedia and so on. Users may thus be offered and provided numerous services via their communication devices. Non-limiting examples of these services comprise two-way or multi-way calls, data communication or multimedia services or simply an access to a data communications network system, such as the Internet. Users may also be provided broadcast or multicast data. Non-limiting examples of the content comprise downloads, television and radio programs, videos, advertisements, various alerts and other information.
  • a wireless communication device may be for example a mobile device, that is, a device not fixed to a particular location, or it may be a stationary device.
  • the wireless device may need human interaction for communication, or may not need human interaction for communication.
  • the terms UE or “user” are used to refer to any type of wireless communication device.
  • the wireless device 300 may receive signals over an air or radio interface 307 via appropriate apparatus for receiving and may transmit signals via appropriate apparatus for transmitting radio signals.
  • transceiver apparatus is designated schematically by block 306.
  • the transceiver apparatus 306 may be provided for example by means of a radio part and associated antenna arrangement.
  • the antenna arrangement may be arranged internally or externally to the wireless device.
  • a wireless device is typically provided with at least one data processing entity 301, at least one memory 302 and other possible components 303 for use in software and hardware aided execution of tasks it is designed to perform, including control of access to and communications with access systems and other communication devices.
  • the data processing, storage and other relevant control apparatus can be provided on an appropriate circuit board and/or in chipsets. This feature is denoted by reference 304.
  • the user may control the operation of the wireless device by means of a suitable user interface such as keypad 305, voice commands, touch sensitive screen or pad, combinations thereof or the like.
  • a display 308, a speaker and a microphone can be also provided.
  • a wireless communication device may comprise appropriate connectors (either wired or wireless) to other devices and/or for connecting external accessories, for example hands-free equipment, thereto.
  • Figure 4 shows a schematic representation of non-volatile memory media 400a (e.g. compact disc (CD) or digital versatile disc (DVD) ) and 400b (e.g. universal serial bus (USB) memory stick) storing instructions and/or parameters 402 which when executed by a processor allow the processor to perform one or more of the steps of the methods of Figure 9 and/or Figure 10 and/or Figure 11.
  • some embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof.
  • some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although embodiments are not limited thereto.
  • firmware or software which may be executed by a controller, microprocessor or other computing device, although embodiments are not limited thereto. While various embodiments may be illustrated and described as block diagrams, flow charts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
  • the embodiments may be implemented by computer software stored in a memory and executable by at least one data processor of the involved entities or by hardware, or by a combination of software and hardware.
  • any procedures e.g., as in Figure 9 and/or Figure 10 and/or Figure 11, may represent program steps, or interconnected logic circuits, blocks and functions, or a combination of program steps and logic circuits, blocks and functions.
  • the software may be stored on such physical media as memory chips, or memory blocks implemented within the processor, magnetic media such as hard disk or floppy disks, and optical media such as for example DVD and the data variants thereof, CD.
  • the memory may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
  • the data processors may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) , application specific integrated circuits (ASICs) , gate level circuits and processors based on multi-core processor architecture, as non-limiting examples.
  • circuitry may be configured to perform one or more of the functions and/or method steps previously described. That circuitry may be provided in the base station and/or in the communications device.
  • circuitry may refer to one or more or all of the following:
  • circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware.
  • circuitry also covers, for example, an integrated device.
  • UMTS universal mobile telecommunications system
  • UTRAN UMTS terrestrial radio access network
  • WiFi wireless local area network
  • WiMAX worldwide interoperability for microwave access
  • PCS personal communications services
  • WCDMA wideband code division multiple access
  • UWB ultra-wideband
  • sensor networks
  • MANETs mobile ad-hoc networks
  • IMS Internet Protocol multimedia subsystems
  • Figure 5 depicts examples of simplified system architectures only showing some elements and functional entities, all being logical units, whose implementation may differ from what is shown.
  • the connections shown in Figure 5 are logical connections; the actual physical connections may be different. It is apparent to a person skilled in the art that the system typically comprises also other functions and structures than those shown in Figure 5.
  • the example of Figure 5 shows a part of an exemplifying radio access network.
  • the radio access network may support sidelink communications described below in more detail.
  • Figure 5 shows devices 500 and 502.
  • the devices 500 and 502 are configured to be in a wireless connection on one or more communication channels with a node 504.
  • the node 504 is further connected to a core network 506.
  • the node 504 may be an access node such as (e/g) NodeB serving devices in a cell.
  • the node 504 may be a non-3GPP access node.
  • the physical link from a device to a (e/g) NodeB is called uplink or reverse link and the physical link from the (e/g) NodeB to the device is called downlink or forward link.
  • (e/g) NodeBs or their functionalities may be implemented by using any node, host, server or access point etc. entity suitable for such a usage.
  • a communications system typically comprises more than one (e/g) NodeB in which case the (e/g) NodeBs may also be configured to communicate with one another over links, wired or wireless, designed for the purpose. These links may be used for signalling purposes.
  • the (e/g) NodeB is a computing device configured to control the radio resources of the communication system it is coupled to.
  • the NodeB may also be referred to as a base station, an access point or any other type of interfacing device including a relay station capable of operating in a wireless environment.
  • the (e/g) NodeB includes or is coupled to transceivers. From the transceivers of the (e/g) NodeB, a connection is provided to an antenna unit that establishes bi-directional radio links to devices.
  • the antenna unit may comprise a plurality of antennas or antenna elements.
  • the (e/g) NodeB is further connected to the core network 506 (CN or next generation core NGC) .
  • the (e/g) NodeB is connected to a serving and packet data network gateway (S-GW + P-GW) or user plane function (UPF), for routing and forwarding user data packets and for providing connectivity of devices to one or more external packet data networks, and to a mobility management entity (MME) or access and mobility management function (AMF), for controlling access and mobility of the devices.
  • S-GW + P-GW serving and packet data network gateway
  • UPF user plane function
  • MME mobility management entity
  • AMF access and mobility management function
  • Examples of a device are a subscriber unit, a user device, a user equipment (UE) , a user terminal, a terminal device, a mobile station, a mobile device, etc.
  • UE user equipment
  • the device typically refers to a mobile or static device (e.g. a portable or non-portable computing device) that includes wireless mobile communication devices operating with or without a universal subscriber identification module (USIM), including, but not limited to, the following types of devices: mobile phone, smartphone, personal digital assistant (PDA), handset, device using a wireless modem (alarm or measurement device, etc.), laptop and/or touch screen computer, tablet, game console, notebook, and multimedia device. It should be appreciated that a device may also be a nearly exclusive uplink only device, of which an example is a camera or video camera loading images or video clips to a network.
  • a mobile or static device e.g. a portable or non-portable computing device
  • USIM universal subscriber identification module
  • a device may also be a device having capability to operate in Internet of Things (IoT) network which is a scenario in which objects are provided with the ability to transfer data over a network without requiring human-to-human or human-to-computer interaction, e.g. to be used in smart power grids and connected vehicles.
  • IoT Internet of Things
  • the device may also utilise cloud.
  • a device may comprise a user portable device with radio parts (such as a watch, earphones or eyeglasses) and the computation may be carried out in the cloud.
  • the device illustrates one type of an apparatus to which resources on the air interface are allocated and assigned, and thus any feature described herein with a device may be implemented with a corresponding apparatus, such as a relay node.
  • a relay node is a layer 3 relay (self-backhauling relay) towards the base station.
  • the device (or, in some examples, a layer 3 relay node) is configured to perform one or more of user equipment functionalities.
  • CPS cyber-physical system
  • ICT information and communications technology
  • devices: sensors, actuators, processors, microcontrollers, etc.
  • Mobile cyber-physical systems, in which the physical system in question has inherent mobility, are a subcategory of cyber-physical systems. Examples of mobile physical systems include mobile robotics and electronics transported by humans or animals.
  • apparatuses have been depicted as single entities, different units, processors and/or memory units (not all shown in Figure 5) may be implemented.
  • 5G enables using multiple input – multiple output (MIMO) antennas, many more base stations or nodes than LTE (a so-called small cell concept), including macro sites operating in co-operation with smaller stations and employing a variety of radio technologies depending on service needs, use cases and/or spectrum available.
  • MIMO multiple input – multiple output
  • 5G mobile communications supports a wide range of use cases and related applications including video streaming, augmented reality, different ways of data sharing and various forms of machine type applications (such as (massive) machine-type communications (mMTC) , including vehicular safety, different sensors and real-time control) .
  • 5G is expected to have multiple radio interfaces, e.g. below 6 GHz, cmWave and mmWave.
  • 5G is planned to support both inter-RAT operability (such as LTE-5G) and inter-RI operability (inter-radio interface operability, such as below 6 GHz – cmWave, 6 or above 24 GHz – cmWave and mmWave).
  • inter-RAT operability such as LTE-5G
  • inter-RI operability inter-radio interface operability, such as below 6 GHz – cmWave, 6 or above 24 GHz – cmWave and mmWave
  • One of the concepts considered to be used in 5G networks is network slicing, in which multiple independent and dedicated virtual sub-networks (network instances) may be created within the same infrastructure to run services that have different requirements on latency, reliability, throughput and mobility; a minimal illustrative sketch of such slice requirement profiles is given after this list.
  • the current architecture in LTE networks is fully distributed in the radio and fully centralized in the core network.
  • the low latency applications and services in 5G require bringing the content close to the radio, which leads to local break-out and multi-access edge computing (MEC).
  • 5G enables analytics and knowledge generation to occur at the source of the data. This approach requires leveraging resources that may not be continuously connected to a network, such as laptops, smartphones, tablets and sensors.
  • MEC provides a distributed computing environment for application and service hosting. It also has the ability to store and process content in close proximity to cellular subscribers for faster response time.
  • Edge computing covers a wide range of technologies such as wireless sensor networks, mobile data acquisition, mobile signature analysis, cooperative distributed peer-to-peer ad hoc networking and processing also classifiable as local cloud/fog computing and grid/mesh computing, dew computing, mobile edge computing, cloudlet, distributed data storage and retrieval, autonomic self-healing networks, remote cloud services, augmented and virtual reality, data caching, Internet of Things (massive connectivity and/or latency critical) , critical communications (autonomous vehicles, traffic safety, real-time analytics, time-critical control, healthcare applications) .
  • the communication system is also able to communicate with other networks 512, such as a public switched telephone network, or a VoIP network, or the Internet, or a private network, or utilize services provided by them.
  • the communication network may also be able to support the usage of cloud services, for example at least part of core network operations may be carried out as a cloud service (this is depicted in Figure 5 by “cloud” 514) . This may also be referred to as Edge computing when performed away from the core network.
  • the communication system may also comprise a central control entity, or the like, providing facilities for networks of different operators to cooperate for example in spectrum sharing.
  • Edge computing may be brought into a radio access network (RAN) by utilizing network function virtualization (NFV) and software defined networking (SDN) .
  • RAN radio access network
  • NFV network function virtualization
  • SDN software defined networking
  • Using the technology of edge cloud may mean access node operations being carried out, at least partly, in a server, host or node operationally coupled to a remote radio head or base station comprising radio parts. It is also possible that node operations will be distributed among a plurality of servers, nodes or hosts.
  • Application of cloudRAN architecture enables RAN real time functions being carried out at or close to a remote antenna site (in a distributed unit, DU 508) and non-real time functions being carried out in a centralized manner (in a centralized unit, CU 510) .
  • 5G may also utilize satellite communication to enhance or complement the coverage of 5G service, for example by providing backhauling.
  • Possible use cases are providing service continuity for machine-to-machine (M2M) or Internet of Things (IoT) devices or for passengers on board vehicles, mobile broadband (MBB), or ensuring service availability for critical communications and future railway/maritime/aeronautical communications.
  • Satellite communication may utilise geostationary earth orbit (GEO) satellite systems, but also low earth orbit (LEO) satellite systems, in particular mega-constellations (systems in which hundreds of (nano) satellites are deployed) .
  • GEO geostationary earth orbit
  • LEO low earth orbit
  • mega-constellations systems in which hundreds of (nano) satellites are deployed
  • Each satellite in the mega-constellation may cover several satellite-enabled network entities that create on-ground cells.
  • the on-ground cells may be created through an on-ground relay node or by a gNB located on-ground or in a satellite.
  • the depicted system is only an example of a part of a radio access system and, in practice, the system may comprise a plurality of (e/g) NodeBs, the device may have access to a plurality of radio cells and the system may also comprise other apparatuses, such as physical layer relay nodes or other network elements, etc. At least one of the (e/g) NodeBs may be a Home (e/g) NodeB. Additionally, in a geographical area of a radio communication system a plurality of different kinds of radio cells as well as a plurality of radio cells may be provided.
  • Radio cells may be macro cells (or umbrella cells) which are large cells, usually having a diameter of up to tens of kilometers, or smaller cells such as micro-, femto- or picocells.
  • the (e/g) NodeBs of Figure 5 may provide any kind of these cells.
  • a cellular radio system may be implemented as a multilayer network including several kinds of cells. Typically, in multilayer networks, one access node provides one kind of a cell or cells, and thus a plurality of (e/g) NodeBs are required to provide such a network structure.
  • a network which is able to use “plug-and-play” (e/g) NodeBs includes, in addition to Home (e/g) NodeBs (H (e/g) nodeBs) , a home node B gateway, or HNB-GW (not shown in Figure 5) .
  • HNB-GW HNB Gateway
  • a HNB Gateway (HNB-GW) which is typically installed within an operator’s network may aggregate traffic from a large number of HNBs back to a core network.
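As referenced in the network slicing item above, the following sketch illustrates how services with different latency, reliability, throughput and mobility requirements could be mapped onto dedicated network instances within one shared infrastructure. It is a minimal illustration only: the slice names, numeric targets and the `find_slice` helper are assumptions introduced here, not values or interfaces taken from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class SliceProfile:
    """Requirement profile of one virtual sub-network (network instance).

    All names and numbers are illustrative assumptions, not values from
    the disclosure.
    """
    name: str
    max_latency_ms: float       # guaranteed one-way user-plane latency
    min_reliability: float      # guaranteed packet delivery ratio
    min_throughput_mbps: float  # guaranteed sustained throughput
    supports_mobility: bool     # whether seamless handover is provided

# Hypothetical slice catalogue sharing one physical infrastructure.
SLICES = [
    SliceProfile("urllc", 1.0, 0.99999, 10.0, True),   # low-latency control
    SliceProfile("embb", 20.0, 0.999, 100.0, True),    # broadband / video
    SliceProfile("mmtc", 1000.0, 0.99, 0.1, False),    # massive IoT sensors
]

def find_slice(required_latency_ms: float, required_reliability: float,
               required_throughput_mbps: float,
               needs_mobility: bool) -> Optional[SliceProfile]:
    """Return the first catalogued slice whose guarantees cover the service."""
    for s in SLICES:
        if (s.max_latency_ms <= required_latency_ms
                and s.min_reliability >= required_reliability
                and s.min_throughput_mbps >= required_throughput_mbps
                and (s.supports_mobility or not needs_mobility)):
            return s
    return None

# Example: a real-time control service needing 5 ms latency and mobility
# is served by the low-latency slice rather than the broadband one.
print(find_slice(5.0, 0.9999, 1.0, True).name)  # -> "urllc"
```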

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

An apparatus for an artificial-intelligence security threat and risk management function, located outside at least one AI pipeline executing or configured to execute at least part of an AI model, is caused to: perform a first security threat and risk analysis associated with the execution of the AI model by the at least one AI pipeline to obtain a first value for a security threat and/or risk parameter (901); perform a second security threat and risk analysis associated with the execution of the AI model by the at least one AI pipeline to obtain a second value for the security threat and/or risk parameter (902); and determine whether the change from the first value to the second value causes and/or would cause a predetermined tolerable risk of the security threat and/or risk parameter being exceeded (903).
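The decision logic summarised in the abstract can be sketched in a few lines. This is a hypothetical rendering only: the names `run_stra`, `evaluate_risk_change` and `risk_tolerance` are assumptions introduced here, and the concrete analyses performed by the claimed security threat and risk management function are not specified at this level of detail.

```python
from typing import Callable

def evaluate_risk_change(run_stra: Callable[[], float],
                         risk_tolerance: float) -> bool:
    """Sketch of the flow referenced as steps 901-903 in the abstract.

    run_stra performs one security threat and risk analysis of the AI
    model executed by the AI pipeline(s) and returns a value for a
    security threat and/or risk parameter; risk_tolerance is a
    hypothetical predetermined tolerable risk threshold.
    """
    first_value = run_stra()   # 901: first security threat and risk analysis
    second_value = run_stra()  # 902: second analysis at a later point
    change = second_value - first_value
    # 903: does the change cause (or would it cause) the predetermined
    # tolerable risk to be exceeded?
    return change > risk_tolerance

# Toy usage with a stubbed analysis returning canned parameter values.
values = iter([0.2, 0.7])
print(evaluate_risk_change(lambda: next(values), risk_tolerance=0.3))  # True
```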
PCT/CN2021/129895 2021-11-10 2021-11-10 Apparatus, methods and computer programs WO2023082112A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/129895 WO2023082112A1 (fr) 2021-11-10 2021-11-10 Apparatus, methods and computer programs

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/129895 WO2023082112A1 (fr) 2021-11-10 2021-11-10 Apparatus, methods and computer programs

Publications (1)

Publication Number Publication Date
WO2023082112A1 true WO2023082112A1 (fr) 2023-05-19

Family

ID=86335021

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/129895 WO2023082112A1 (fr) 2021-11-10 2021-11-10 Apparatus, methods and computer programs

Country Status (1)

Country Link
WO (1) WO2023082112A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104965972A (zh) * 2015-06-09 2015-10-07 南京联成科技发展有限公司 Information system security risk assessment and protection method based on artificial intelligence
CN108667850A (zh) * 2018-05-21 2018-10-16 济南浪潮高新科技投资发展有限公司 Artificial intelligence service system and method for implementing artificial intelligence services
CN109309678A (zh) * 2018-09-28 2019-02-05 深圳市极限网络科技有限公司 Network risk early-warning method based on artificial intelligence
US20200067963A1 (en) * 2019-10-28 2020-02-27 Olawale Oluwadamilere Omotayo Dada Systems and methods for detecting and validating cyber threats
CN112149119A (zh) * 2020-09-27 2020-12-29 苏州遐视智能科技有限公司 Dynamic active security defense method, system and storage medium for an artificial intelligence system
CN112581303A (zh) * 2019-09-30 2021-03-30 罗克韦尔自动化技术公司 Artificial intelligence channel for industrial automation

Similar Documents

Publication Publication Date Title
Khan et al. Edge computing: A survey
Adamsky et al. Integrated protection of industrial control systems from cyber-attacks: the ATENA approach
US9893943B2 (en) Network coordination apparatus
KR20140145151A (ko) Prediction of service access experience quality issues and root cause recommendations in communication networks
EP4155752A1 (fr) Connected device region identification
Zheng et al. Towards IoT security automation and orchestration
Fiandrino et al. Toward native explainable and robust AI in 6G networks: Current state, challenges and road ahead
Wenjing et al. Centralized management mechanism for cell outage compensation in LTE networks
US20230136756A1 (en) Determining spatial-temporal informative patterns for users and devices in data networks
Khan et al. ORAN-B5G: A next generation open radio access network architecture with machine learning for beyond 5G in industrial 5.0
WO2023082112A1 (fr) Apparatus, methods and computer programs
US20230040284A1 (en) Trust related management of artificial intelligence or machine learning pipelines
WO2023130359A1 (fr) Apparatus, methods and computer programs
Yungaicela-Naula et al. Misconfiguration in O-RAN: Analysis of the impact of AI/ML
Aumayr et al. Service-based Analytics for 5G open experimentation platforms
Palma et al. Enhancing trust and liability assisted mechanisms for ZSM 5G architectures
Millar et al. Intelligent security and pervasive trust for 5g and beyond
Na et al. A methodology of assessing security risk of cloud computing in user perspective for security-service-level agreements
WO2023015448A1 (fr) Apparatus, method, and computer program
Sandeep et al. Case Studies on 5G and IoT Security Issues from the Leading 5G and IoT System Integration Vendors
Gkonis et al. Leveraging Network Data Analytics Function and Machine Learning for Data Collection, Resource Optimization, Security and Privacy in 6G Networks
Pérez-Valero et al. AI-driven Orchestration for 6G Networking: the Hexa-X vision
Kostopoulos et al. Protocol deployment for employing honeypot-as-a-service
US20240195701A1 (en) Framework for trustworthiness
Chouman et al. A Modular, End-to-End Next-Generation Network Testbed: Towards a Fully Automated Network Management Platform

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21963567

Country of ref document: EP

Kind code of ref document: A1