WO2024085872A1 - Causal reasoning system for operational twin (CAROT) for development and operation of 5G CNFs - Google Patents
Causal reasoning system for operational twin (CAROT) for development and operation of 5G CNFs
- Publication number
- WO2024085872A1 (PCT/US2022/047205)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- network functions
- cloud native
- native network
- observed
- cause
- Prior art date
Links
- 230000001364 causal effect Effects 0.000 title claims abstract description 154
- 238000011161 development Methods 0.000 title description 13
- 230000006870 function Effects 0.000 claims abstract description 389
- 230000000694 effects Effects 0.000 claims abstract description 237
- 230000015654 memory Effects 0.000 claims abstract description 53
- 238000000034 method Methods 0.000 claims description 80
- 238000002474 experimental method Methods 0.000 claims description 60
- 230000007613 environmental effect Effects 0.000 claims description 51
- 238000013461 design Methods 0.000 claims description 40
- 238000011282 treatment Methods 0.000 claims description 38
- 238000004519 manufacturing process Methods 0.000 claims description 27
- 230000006735 deficit Effects 0.000 claims description 26
- 230000015556 catabolic process Effects 0.000 claims description 22
- 238000004458 analytical method Methods 0.000 claims description 21
- 230000010354 integration Effects 0.000 claims description 21
- 238000012545 processing Methods 0.000 claims description 18
- 238000012360 testing method Methods 0.000 claims description 17
- 238000010801 machine learning Methods 0.000 claims description 9
- 238000007619 statistical method Methods 0.000 claims description 9
- 230000000295 complement effect Effects 0.000 claims description 6
- 230000004044 response Effects 0.000 claims description 4
- 238000004891 communication Methods 0.000 description 22
- 230000008569 process Effects 0.000 description 22
- 238000004590 computer program Methods 0.000 description 20
- 238000007726 management method Methods 0.000 description 12
- 238000005457 optimization Methods 0.000 description 9
- 230000007704 transition Effects 0.000 description 6
- 238000004422 calculation algorithm Methods 0.000 description 5
- 230000003287 optical effect Effects 0.000 description 5
- 238000013459 approach Methods 0.000 description 4
- 230000006399 behavior Effects 0.000 description 4
- 238000005516 engineering process Methods 0.000 description 4
- 239000000835 fiber Substances 0.000 description 4
- 238000002347 injection Methods 0.000 description 4
- 238000012544 monitoring process Methods 0.000 description 4
- 230000008859 change Effects 0.000 description 3
- 238000010586 diagram Methods 0.000 description 3
- 230000007246 mechanism Effects 0.000 description 3
- 230000002688 persistence Effects 0.000 description 3
- 238000012384 transportation and delivery Methods 0.000 description 3
- 230000008901 benefit Effects 0.000 description 2
- 230000005540 biological transmission Effects 0.000 description 2
- 239000000969 carrier Substances 0.000 description 2
- 230000001413 cellular effect Effects 0.000 description 2
- 230000008878 coupling Effects 0.000 description 2
- 238000010168 coupling process Methods 0.000 description 2
- 238000005859 coupling reaction Methods 0.000 description 2
- 238000013500 data storage Methods 0.000 description 2
- 238000001514 detection method Methods 0.000 description 2
- 230000007774 longterm Effects 0.000 description 2
- 239000000203 mixture Substances 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012552 review Methods 0.000 description 2
- 239000004065 semiconductor Substances 0.000 description 2
- 238000003860 storage Methods 0.000 description 2
- 230000008685 targeting Effects 0.000 description 2
- 238000012546 transfer Methods 0.000 description 2
- 238000003491 array Methods 0.000 description 1
- 230000003190 augmentative effect Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 238000011284 combination treatment Methods 0.000 description 1
- 230000003247 decreasing effect Effects 0.000 description 1
- 230000001419 dependent effect Effects 0.000 description 1
- 230000009977 dual effect Effects 0.000 description 1
- 238000009472 formulation Methods 0.000 description 1
- 230000001771 impaired effect Effects 0.000 description 1
- 230000006872 improvement Effects 0.000 description 1
- 230000006698 induction Effects 0.000 description 1
- 230000003993 interaction Effects 0.000 description 1
- 230000008450 motivation Effects 0.000 description 1
- 230000006855 networking Effects 0.000 description 1
- 238000004806 packaging method and process Methods 0.000 description 1
- 238000007670 refining Methods 0.000 description 1
- 230000003362 replicative effect Effects 0.000 description 1
- 238000004088 simulation Methods 0.000 description 1
- 238000013522 software testing Methods 0.000 description 1
- 239000000243 solution Substances 0.000 description 1
- 230000000638 stimulation Effects 0.000 description 1
- 238000013024 troubleshooting Methods 0.000 description 1
- 238000010200 validation analysis Methods 0.000 description 1
- 238000012795 verification Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/079—Root cause analysis, i.e. error or fault diagnosis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/07—Responding to the occurrence of a fault, e.g. fault tolerance
- G06F11/0703—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation
- G06F11/0706—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment
- G06F11/0709—Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation the processing taking place on a specific hardware platform or in a specific software environment in a distributed system consisting of a plurality of standalone computer nodes, e.g. clusters, client-server systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
- G06F11/3006—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/3003—Monitoring arrangements specially adapted to the computing system or computing system component being monitored
- G06F11/302—Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a software system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/30—Monitoring
- G06F11/34—Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
- G06F11/3447—Performance evaluation by modeling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/06—Management of faults, events, alarms or notifications
- H04L41/0631—Management of faults, events, alarms or notifications using root cause analysis; using analysis of correlation between notifications, alarms or events based on decision criteria, e.g. hierarchy, tree or time analysis
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/14—Network analysis or design
- H04L41/145—Network analysis or design involving simulating, designing, planning or modelling of a network
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/16—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using machine learning or artificial intelligence
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/50—Testing arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2201/00—Indexing scheme relating to error detection, to error correction, and to monitoring
- G06F2201/865—Monitoring of software
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/10—Geometric CAD
- G06F30/18—Network design, e.g. design based on topological or interconnect aspects of utility systems, piping, heating ventilation air conditioning [HVAC] or cabling
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F30/00—Computer-aided design [CAD]
- G06F30/20—Design optimisation, verification or simulation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/30—Creation or generation of source code
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/40—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks using virtualisation of network functions or resources, e.g. SDN or NFV entities
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
- H04L43/0876—Network utilisation, e.g. volume of load or congestion level
Definitions
- the examples and non-limiting example embodiments relate generally to chaos engineering and, more particularly, to a causal reasoning system for operational twin (CAROT) for development and operation of 5G CNFs.
- CAROT operational twin
- an apparatus includes at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: operate a replicate of one or more cloud native network functions; generate observational data of the replicate of the one or more cloud native network functions, the observational data generated based on a plurality of operating conditions of the one or more cloud native network functions; and apply a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- an apparatus includes at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: operate a replicate of one or more target applications; generate observational data of the replicate of the one or more target applications, the observational data generated based on a plurality of operating conditions of the one or more target applications; and apply a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more target applications and the at least one observed effect of the one or more target applications.
- an apparatus includes at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: select one or more cloud native network functions; select at least one feature of the one or more cloud native network functions; select at least one environmental condition of the one or more cloud native network functions; select a load of the one or more cloud native network functions, the load comprising an intensity and duration of processing of the one or more cloud native network functions subject to the at least one environmental condition; perform at least one experiment with the one or more cloud native network functions, the experiment based on the at least one feature, the load, and the at least one environmental condition of the one or more cloud native network functions; collect observational data from the at least one experiment; and apply a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- a method includes operating a replicate of one or more cloud native network functions; generating observational data of the replicate of the one or more cloud native network functions, the observational data generated based on a plurality of operating conditions of the one or more cloud native network functions; and applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- a method includes operating a replicate of one or more target applications; generating observational data of the replicate of the one or more target applications, the observational data generated based on a plurality of operating conditions of the one or more target applications; and applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more target applications and the at least one observed effect of the one or more target applications.
- a method includes selecting one or more cloud native network functions; selecting at least one feature of the one or more cloud native network functions; selecting at least one environmental condition of the one or more cloud native network functions; selecting a load of the one or more cloud native network functions, the load comprising an intensity and duration of processing of the one or more cloud native network functions subject to the at least one environmental condition; performing at least one experiment with the one or more cloud native network functions, the experiment based on the at least one feature, the load, and the at least one environmental condition of the one or more cloud native network functions; collecting observational data from the at least one experiment; and applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- an apparatus includes means for operating a replicate of one or more cloud native network functions; means for generating observational data of the replicate of the one or more cloud native network functions, the observational data generated based on a plurality of operating conditions of the one or more cloud native network functions; and means for applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- an apparatus includes means for operating a replicate of one or more target applications; means for generating observational data of the replicate of the one or more target applications, the observational data generated based on a plurality of operating conditions of the one or more target applications; and means for applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more target applications and the at least one observed effect of the one or more target applications.
- an apparatus includes means for selecting one or more cloud native network functions; means for selecting at least one feature of the one or more cloud native network functions; means for selecting at least one environmental condition of the one or more cloud native network functions; means for selecting a load of the one or more cloud native network functions, the load comprising an intensity and duration of processing of the one or more cloud native network functions subject to the at least one environmental condition; means for performing at least one experiment with the one or more cloud native network functions, the experiment based on the at least one feature, the load, and the at least one environmental condition of the one or more cloud native network functions; means for collecting observational data from the at least one experiment; and means for applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- a non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations, the operations including: operating a replicate of one or more cloud native network functions; generating observational data of the replicate of the one or more cloud native network functions, the observational data generated based on a plurality of operating conditions of the one or more cloud native network functions; and applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- a non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations, the operations comprising: operating a replicate of one or more target applications; generating observational data of the replicate of the one or more target applications, the observational data generated based on a plurality of operating conditions of the one or more target applications; and applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more target applications and the at least one observed effect of the one or more target applications.
- a non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations, the operations including: selecting one or more cloud native network functions; selecting at least one feature of the one or more cloud native network functions; selecting at least one environmental condition of the one or more cloud native network functions; selecting a load of the one or more cloud native network functions, the load comprising an intensity and duration of processing of the one or more cloud native network functions subject to the at least one environmental condition; performing at least one experiment with the one or more cloud native network functions, the experiment based on the at least one feature, the load, and the at least one environmental condition of the one or more cloud native network functions; collecting observational data from the at least one experiment; and applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
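The method claimed above (select a CNF, a feature, an environmental condition, and a load; run the experiment; collect observations; apply causal reasoning) can be sketched as a simple experiment loop. Everything in this sketch is a hypothetical illustration: the class and function names (`Experiment`, `run_experiment`, `causal_reasoning`) and the record schema are assumptions, not part of the claims.

```python
# Illustrative sketch of the claimed experiment workflow.
from dataclasses import dataclass

@dataclass
class Experiment:
    cnf: str             # selected cloud native network function
    feature: str         # selected CNF feature under test
    environment: str     # selected environmental condition to inject
    load_intensity: int  # processing intensity, e.g. requests per second
    load_duration_s: int # duration the load is applied

def run_experiment(exp: Experiment) -> list[dict]:
    """Subject the CNF replicate to the load under the chosen
    environmental condition and return observational records."""
    observations = []
    for second in range(exp.load_duration_s):
        observations.append({
            "cnf": exp.cnf,
            "feature": exp.feature,
            "environment": exp.environment,
            "offered_load": exp.load_intensity,
            "t": second,
        })
    return observations

def causal_reasoning(observations: list[dict]) -> dict:
    """Placeholder for the causal reasoning function: relate observed
    causes (environment, load) to observed effects (metrics)."""
    return {"n_records": len(observations)}

exp = Experiment("amf", "registration", "network-latency", 100, 5)
insights = causal_reasoning(run_experiment(exp))
```

In the real system, the records would be infrastructure and application metrics captured from the digital twin, and the causal reasoning step would run the inference and discovery algorithms described below.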
- FIG. 1 illustrates a CI/CD pipeline.
- FIG. 2 depicts a left-shift paradigm framework in the CI/CD pipeline.
- FIG. 3 illustrates support of the CI/CD left-shift architecture pipeline by the examples described herein.
- FIG. 4 depicts layers of a causal reasoning system for an operational twin high-level architecture.
- FIG. 5 depicts lower building blocks (a chaos framework) for the causal reasoning system for operational twin described herein.
- FIG. 6 depicts upper building blocks (causal inference) for the causal reasoning system for operational twin described herein.
- FIG. 7 depicts a general problem statement for the causal reasoning system for operational twin described herein.
- FIG. 8 depicts a high-level task description of the causal reasoning system for operational twin described herein.
- FIG. 9 depicts the digital twin component of the causal reasoning system for operational twin described herein.
- FIG. 10 depicts a digital-twin component workflow of the causal reasoning system for operational twin described herein.
- FIG. 11 depicts a cause effect inference component.
- FIG. 12 depicts implementation of a randomized controlled trial (RCT).
- FIG. 13 depicts a cause inference effect workflow.
- FIG. 14 is a block diagram of one possible and non-limiting system in which the example embodiments may be practiced.
- FIG. 15 is an example apparatus configured to implement the examples described herein.
- FIG. 16 shows a representation of an example of non-volatile memory media.
- FIG. 17 is an example method to implement the examples described herein.
- FIG. 18 is an example method to implement the examples described herein.
- FIG. 19 is an example method to implement the examples described herein.
- Modern software, including 5G cloud-native network function (CNF) appliances, is typically cloud-based, highly dynamic, and service-oriented. These software appliances are often non-trivial to configure and difficult to optimize, especially in environments where the infrastructure displays non-ideal conditions, such as production setups.
- CNF 5G cloud-native network function
- the 5G CNF development process flows from left to right: for instance, the source code is checked into the repository (version control 202), then built 204, tested 206, released 208, deployed 210, operated 212, and monitored 214.
- the left-shift principle (left-shift 216) helps increase productivity by identifying and rectifying issues much earlier in the development cycle, reducing cost and increasing compliance with requirements.
- This principle lies at the center of this disclosure of CAROT.
- the CAROT (e.g. CAusal Reasoning for Operation Twin) framework 201 is depicted in FIG. 2 where operational environment conditions (including during operate 212 and monitor 214 in the pipeline) are brought forward into the test 206 phase, effectively recreating a digital-twin.
- the causal reasoning for operation twin system replicates operational environment conditions in a safe, controllable, and repeatable manner. Its novelty comprises analyzing the observations collected from the digital twin setup to infer probable cause/effect relationships, providing insights into the 5G CNF’s configuration setups, design assertions, and what-if scenarios, and supporting a root-cause-analysis (RCA) basis for zero-touch management automation.
- RCA root-cause-analysis
- 5G-related CNFs, which are microservice-based software applications that support a functional communication network (e.g. 5G SA or NSA).
- 5G SA a functional communication network
- One or more of the following technical effects can be selected: validating 5G CNFs in operational environment replicas and digital twins, robust operation of a 5G CNF-based communication network, optimizing configuration of a 5G CNF-based communication network, discovering unforeseen/unplanned 5G CNF behavior insights, and/or continuous improvement of the design of a 5G CNF-based communication network.
- the causal reasoning system for operational twin system 201 spans engineering layers including outcome 302, ML 304, data 306, and system 308.
- Outcome 302 includes optimal code 310, RCA and fault 324, and RCA and fix 326.
- ML 304 includes code review 312, causality analytics 328, and anomaly detection 330.
- Data 306 includes code configurations 314 and production metrics 332.
- System 308 includes IDE 316 and production 334.
- Development 318 includes optimal code 310, code review 312, code configurations 314, and IDE 316.
- CAROT 201 is a component of the development operations pipeline 320 and implements simulation and a digital twin 322.
- Operations 336 includes RCA and fault 324, RCA and fix 326, causality analytics 328, anomaly detection 330, production metrics 332, and production 334.
- CAROT 201 receives input from IDE 316 and production metrics 332, and provides output to optimal code 310, causality analytics 328, and production 334.
- CAROT 201 is built over multiple specialized components as depicted in the high-level architecture shown in FIG. 4.
- the specialized components include chaos framework 402, chaos metrics 404, causal inference 406, causal discovery 408, and design optimization 410.
- Causal discovery provides results to causality analytics 328 over interface 412, and through interface 412 causal discovery 408 receives information from production metrics 332.
- Results of design optimization 410 are provided through interface 410 to develop optimal code 310.
- the term “chaos” alludes to the study of fault generation and fault observation by altering a configuration, parameter, or component of a system.
- CAROT employs a digital twin engine, including an automated planned failure injection (including chaos failure injection) and load stressing (e.g. REST and network traffic benchmarking) framework.
- This subjects target applications (5G CNFs) to configurable workloads during which planned infrastructure non-optimal conditions can be injected and observed through captured metrics.
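The planned, time-scheduled nature of the impairment injection can be sketched as a declarative plan plus a scheduler. The impairment kinds below mirror the categories named in FIG. 8 (resource stress, network stress, I/O impairment), but the schema, field names, and values are assumptions for illustration only.

```python
# Hypothetical fault-injection plan: each entry injects one impairment
# into the target for a bounded window during the load run.
impairment_plan = [
    {"kind": "resource_stress", "target": "cnf-pod", "cpu_pct": 80,
     "start_s": 30, "duration_s": 60},
    {"kind": "network_stress", "target": "cnf-pod", "latency_ms": 200,
     "start_s": 120, "duration_s": 60},
    {"kind": "io_impairment", "target": "cnf-pod", "read_delay_ms": 50,
     "start_s": 210, "duration_s": 30},
]

def active_impairments(plan, t_s):
    """Return the impairments that should be injected at time t_s,
    so the injector knows what to apply (and what to revert)."""
    return [imp for imp in plan
            if imp["start_s"] <= t_s < imp["start_s"] + imp["duration_s"]]
```

Because each impairment has an explicit start and duration, the same plan can be replayed across runs, which is what makes the injected non-optimal conditions repeatable and attributable in the captured metrics.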
- This capability is provided by the lower two blocks (chaos framework 402 and chaos metrics 404) in the CAROT framework high-level architecture as highlighted in system component 502 shown in FIG. 5.
- the resulting observations can help strengthen or dismiss assumptions (e.g. one or more hypotheses) about the application’s performance and/or robustness in measurable, quantitative terms.
- test tasks are typically carried out in a cloud infrastructure displaying ideal conditions, which makes it difficult to identify configuration and design issues that are likely to emerge once the application is deployed into a production environment, where resources can be constrained and the application generally displays signs of duress.
- Examples include determining or refining infrastructure thresholds (e.g. increasing or decreasing specific resource availability) and how these influence the application, to help save costs (e.g. suppressing resources that have negligible impact), refining environment requirements, validating scaling and performance capabilities, identifying root causes more efficiently, etc.
- FIG. 7 provides a general summary of the problem statement.
- the configuration, performance, or design issue is not identified because the environmental conditions did not replicate production or generate a digital twin.
- the design, configuration, or performance issue is identified, but at this stage of the pipeline, the design, configuration, or performance issue is costly and prone to requiring more time to fix, leading to longer deployment timelines, a decrease in quality, and a general confidence drop from stakeholders.
- in item 704 of FIG. 7, CAROT 201 identifies the design, configuration, or performance issue early in the CI/CD pipeline, earlier than the process shown by item 702: soon after the test phase 206 and prior to the release phase 208 (and substantially earlier than the operate phase 212), when it is faster and less costly to rectify.
- the left-shift paradigm comprises operational environment-like conditions inserted early in the CNF CI/CD test task.
- Digital twins include deploying and configuring the CNF, exposing it to controlled, variable rates (benchmarking of REST and network traffic), replicating optimal and sub-optimal operational environment cloud infrastructure conditions (chaos engineering), and observing infrastructure and interaction across the CNF microservices and components, including monitoring and tracing.
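The controlled, variable traffic rates mentioned above can be sketched as a stepped load profile. This is a minimal illustration only: the function name, parameters, and step-ramp shape are assumptions, not part of the patented system.

```python
# Illustrative stepped load ramp for benchmarking a deployed CNF replicate.
# All names and the profile shape are hypothetical.
def load_profile(base_rps: int, step_rps: int, steps: int, step_len_s: int):
    """Yield (time_s, requests_per_second) pairs for a stepped ramp:
    the offered rate increases by step_rps after each step_len_s window."""
    for step in range(steps):
        rps = base_rps + step * step_rps
        for t in range(step * step_len_s, (step + 1) * step_len_s):
            yield t, rps

# Example: 3 steps of 10 s each, ramping 50 -> 100 -> 150 requests/s.
profile = list(load_profile(base_rps=50, step_rps=50, steps=3, step_len_s=10))
```

Such a profile would drive a REST or network-traffic benchmarking tool while chaos conditions are injected, so that observations cover the CNF under several distinct load levels.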
- Causal inference includes determining probable cause/effect relationships, analyzing the magnitude of influence between a cause and an effect, utilizing or constructing a directed acyclic graph (DAG) by performing causal discovery, determining root cause analysis (RCA), and developing and determining what-if scenarios to generate inference insights.
- the observations and metrics collected become datasets which are analyzed by the causal inference algorithm, for example using machine learning, to validate existing design assumptions and/or discover new ones, in order to analyze cause and effect impact.
- the system described herein may also provide complementary information for operational root cause analysis (RCA) in an operational environment.
- CAROT provides a framework that is designed to be integrated into a CI/CD 5G CNF pipeline to explore behavior in a typical operational environment, creating a digital twin, inferring probable causes and effects helping to explore what-if scenarios, assert or disregard CNF design assumptions, identify optimal CNF configuration setups, validate SLO/SLA under non-ideal cloud infrastructure conditions, and support root cause analysis (e.g. zero-touch management automation).
- CAROT’s components are referenced in FIG. 8.
- the system is intended for the 5G CNF CI/CD pipeline integration as shown in (802) via open APIs. As shown in FIG. 8, the system may be integrated into the test portion 206 of the CI/CD pipeline.
- the system subjects a target application 816 to a configurable infrastructure emulating a production environment (e.g. digital-twin): this includes load 814 - either REST or network traffic (804), and cloud infrastructure impairments (806).
- the chaos infrastructure impairment injection into the software application 816 includes total breakdown 818, resource stress 820, network stress 822, and I/O impairment 824.
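- A minimal sketch of how such an experiment with combinable impairments might be specified is shown below; the field and type names are hypothetical illustrations, not CAROT's actual API:

```python
from dataclasses import dataclass, field

# Illustrative impairment kinds mirroring items 818-824; names are assumptions.
IMPAIRMENT_TYPES = {"total_breakdown", "resource_stress", "network_stress", "io_impairment"}

@dataclass
class ImpairmentSpec:
    kind: str          # one of IMPAIRMENT_TYPES
    intensity: float   # 0.0 (none) .. 1.0 (maximum)
    period_s: int      # how often the impairment is re-applied, in seconds

    def __post_init__(self):
        if self.kind not in IMPAIRMENT_TYPES:
            raise ValueError(f"unknown impairment kind: {self.kind}")
        if not 0.0 <= self.intensity <= 1.0:
            raise ValueError("intensity must be in [0, 1]")

@dataclass
class ExperimentSpec:
    target: str        # e.g. a 5G CNF endpoint or helm release name
    load_rps: int      # REST request rate applied by the load module
    duration_s: int    # experiment length (seconds to days)
    impairments: list = field(default_factory=list)  # empty list = ideal conditions

# Impairment features can be combined, or omitted entirely for ideal conditions:
spec = ExperimentSpec(
    target="amf-cnf",
    load_rps=500,
    duration_s=600,
    impairments=[ImpairmentSpec("resource_stress", 0.7, 30),
                 ImpairmentSpec("io_impairment", 0.3, 60)],
)
```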
- the system automatically captures infrastructure and application observations (e.g. tracing and monitoring) for posterior analysis (808).
- Causal inference 406 and causal discovery 408 machine learning (ML) algorithms are applied to observations 826 in tandem with one or more causal directed acyclic graphs (DAGs) (828) to provide insights (810).
- the framework's (CAROT's) output involves what-if scenarios 830, decision insights 836, significant features 832, and a model for RCA 834, leading to zero-touch management automation (812).
- Using the CAROT system's framework described herein, a production environment (e.g. resource stress, hardware failures, oversubscription, etc.) is replicated in a planned, repeatable, controlled setting (a digital twin), such that SLO assumptions and optimal configuration insights can be asserted or dismissed before the CNF reaches the operational deployment stage.
- FIG. 7 depicts this feature.
- a functional representation of CAROT's digital twin component 900 is shown in FIG. 9. The following descriptions provide further insights.
- 5G CNF target - at the center of the image is the 5G CNF 902, which must be wrapped in a helm template if the deployment and management are to be handled by CAROT internally.
- the CNF 902 may reside externally, in which case the target system details must be provided, e.g., URL endpoints for load targeting and monitoring, or K8s labels for chaos feature targeting and monitoring.
- 5G CNF load - sitting on top of the application 816 is the load module 814.
- the load module 814 can target the 5G CNF 902 with either REST or network traffic such as IMIX traffic.
- the load level is configurable, and based on user choice CAROT automatically spawns the necessary workers to support it appropriately. The higher the load, the more workers are spawned.
- the overall experiment length (e.g., the amount of time the 5G CNF 902 endures controlled environment features while being monitored) is set by the load duration, a configurable parameter that may range from seconds to days.
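- The worker-scaling behavior described above ("the higher the load, the more workers are spawned") can be sketched with a simple sizing rule; the per-worker capacity constant is an assumed value for illustration, not CAROT's actual sizing logic:

```python
import math

# Hypothetical sizing rule: each load worker sustains a fixed request rate.
REQUESTS_PER_WORKER = 200  # assumed capacity of one worker (requests per second)

def workers_needed(load_rps: int) -> int:
    """Spawn enough workers to support the user-chosen load level."""
    if load_rps <= 0:
        return 0
    return math.ceil(load_rps / REQUESTS_PER_WORKER)

# The higher the configured load, the more workers are spawned:
assert workers_needed(150) == 1
assert workers_needed(500) == 3
```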
- the infrastructure impairment injection module 904 supports a configurable range of cloud infrastructure conditions that can be applied to the environment where the target application is deployed, provided by chaos engineering tools. Impairment features can be combined, or can be completely dismissed to represent ideal conditions.
- Observations 826 are collected from the application 816 (e.g. using tracing) and from the infrastructure, then packaged and labelled with unique IDs that are maintained in the framework's API persistence layer for future reference.
- the infrastructure may comprise node-exporter and cAdvisor via Prometheus.
- FIG. 10 shows the general workflow of the digital twin component.
- the workflow shown in FIG. 10 can be related to the descriptions provided above.
- Three modules manage and operate the workflow/component 1000, namely the API module 1001, the operator module 1003, and the engine module 1005.
- the API module 1001 is an open-API endpoint for all tasks related to requests, e.g. submission, updates, cancellations, etc.
- the API module 1001 holds the persistence layer of the component.
- the operator module 1003 manages the automatic scaling of the K8s worker clusters.
- the engine module 1005 handles the orders stored in the persistence layer, and coordinates deployments and terminations for the application and the environment features.
- the engine module 1005 handles observation collections and packaging.
- This component 1000 is designed to scale vertically and horizontally, allowing for the simultaneous execution of experiments in large numbers, provided the computational resources are available. As a reference, the component 1000 has handled tens of thousands of short-duration experiments.
- At 1002, the system determines whether a target application 816 (e.g. a 5G CNF 902) is internally administered. If the system determines at 1002 that the 5G CNF is not internally administered and is externally administered, then at 1004 the system provides endpoint details such as URLs or labels. If the system determines at 1002 that the 5G CNF is internally administered, then at 1006 the system selects a target 5G CNF including replicas. At 1008, the system selects 5G CNF features including computing features and instances. From items 1004 and 1008, the method transitions to 1010. At 1010, the system selects 5G CNF load including intensity and duration. At 1012, the system selects environmental conditions such as type, intensity, and periodicity. At 1014, the system determines whether the 5G CNF is internally administered.
- If at 1014 the system determines that the 5G CNF is not internally administered, the method transitions to 1016. If at 1014 the system determines that the 5G CNF is internally administered, the method transitions to 1018.
- the 5G CNF is deployed, e.g. using a helm chart or template.
- environmental conditions are deployed, e.g. using a helm chart or template.
- the load is deployed, e.g. using a helm chart or template.
- the system waits for the experiment to complete.
- the system collects, packages, and labels observations 826 such as metrics.
- one or more cause effect inference components process(es) the observations 826.
- the 5G CNF, load and chaos deployments are terminated, e.g. using a helm chart or template.
- the method ends.
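- The branching workflow of FIG. 10 described above can be sketched as plain Python control flow. The function and step names are illustrative, and the mapping of the unnumbered deployment steps after decision 1014 is our assumption:

```python
def run_experiment(internally_administered: bool, endpoint_details=None):
    """Sketch of the FIG. 10 digital-twin workflow; returns the ordered steps taken."""
    steps = []
    if internally_administered:                    # 1002: internally administered?
        steps += ["select_target_cnf",             # 1006: target 5G CNF incl. replicas
                  "select_cnf_features"]           # 1008: computing features, instances
    else:
        assert endpoint_details, "external targets need URLs/labels"  # 1004
        steps.append("provide_endpoint_details")
    steps.append("select_load")                    # 1010: intensity and duration
    steps.append("select_environment")             # 1012: type, intensity, periodicity
    if internally_administered:                    # 1014: internally administered?
        steps.append("deploy_cnf")                 # deploy e.g. via helm chart/template
    steps += ["deploy_environment",                # deploy environmental conditions
              "deploy_load",                       # deploy the load
              "wait_for_completion",               # wait for the experiment to complete
              "collect_package_label_observations",# observations 826 (metrics)
              "cause_effect_inference",            # process observations 826
              "terminate_deployments"]             # terminate CNF, load, chaos
    return steps
```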
- This component applies causal reasoning techniques to the observations captured by the digital-twin module to produce application configuration optimization and performance insight discoveries.
- Causal reasoning breaks from traditional machine learning approaches in that it attempts to answer the why behind the decision-making process, effectively addressing computationally the counterfactual what-if question: e.g., how much better would the latency KPI be if vCPU resource type X were used instead for this application?
- Causal reasoning is a step towards artificial general intelligence (AGI), and it is pragmatically applied by the examples described herein to the software configuration and performance optimization problem.
- This component is built upon four modules, with reference to the cause effect inference component 1100 shown in FIG. 11, including generate and collect observational data (1104, 1116, 402), formulating the inputs (1103, 1122, 1128), performing causal inference (1102, 1110, 1112, 1114, 1118), and causal discovery (1124, 1126), as well as design optimization 410.
- the first step in the process is to collect observational data 1108, at least partially using module 1103.
- this observational data 1108 is automatically collected from the digital-twin component.
- the data 1108 is generated and stored in tabular form (1122), such that each record corresponds to an experiment performed (1116) and each has associated with it a set of attributes.
- These sets of attributes correspond to the nodes in the directed acyclic graph (DAG) in which causal effects inference calculations are carried out. Attributes can be related to configuration (e.g. cloud computing settings), control (e.g. chaos features), or observable metrics, and combinations thereof.
- As shown in the figure, the chaos framework 402 includes domain name system (DNS) impairment 801, kernel breakdown 803, total breakdown 818, network stress 822, computational resource stress 820, HTTP stress 805 (e.g. internet stress), time impairment 807, web service (WS) chaos or breakdown 809, and input/output (I/O) impairment 824.
- the chaos framework 402 may also include network load and application programming interface load.
- the run experiments block (1116) sets up the chaos conditions (1107), runs an experiment (402), observes by collecting data (1105), then repeats this process many times.
- the collective observational data is stored (1115) into the dataset (1122).
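- The tabular dataset (1122) described above, one row per experiment with configuration, control, and observable columns, might look like the following sketch; column names and values are invented for illustration:

```python
import csv
import io

# Hypothetical experiment records: one row per experiment (1116). Columns mix
# configuration (node_type), control/chaos (cpu_stress), and observable KPIs
# (latency_ms); these attributes correspond to DAG nodes.
rows = [
    {"experiment_id": 1, "node_type": "low",    "cpu_stress": 0.0, "latency_ms": 42.0},
    {"experiment_id": 2, "node_type": "low",    "cpu_stress": 0.8, "latency_ms": 97.0},
    {"experiment_id": 3, "node_type": "medium", "cpu_stress": 0.0, "latency_ms": 23.0},
    {"experiment_id": 4, "node_type": "medium", "cpu_stress": 0.8, "latency_ms": 55.0},
]

# Store the collected observations (1115) in tabular form (1122):
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
writer.writeheader()
writer.writerows(rows)
dataset_csv = buf.getvalue()
```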
- Module 1104 includes chaos metrics 404 and chaos framework 402.
- the process requires a set of observational data 1122, generated 1115 at least partially with experiments 1116, and a causal graph (one of causal graphs 1128), e.g. the DAG, which can be formulated using domain expert knowledge.
- the causal graphs are generated (1127) using at least ambiguity removal 1126.
- Designers of DAGs carefully define the edges interconnecting the nodes. An edge implies there might be a causal relationship from the parent node to the child node, where, for example, the parent defines the cause 1130 and the child defines the effect 1132, and the effect 1132 may be one or more KPIs.
- edges are unidirectional by nature, and the acyclic name implies that there cannot be any cycle in the graph structure. Cycles are considered unhelpful in the causal inference analysis, as a cycle makes it nearly impossible to distinguish between cause and effect.
- the strength of causal influence between two nodes need not be defined in the design phase. During the design phase, what is simply defined is that the causation might exist. When in doubt, the recommendation is to add an edge, as omitting one is considered a very strong assumption. In essence, the causal inference function is to analyze and calculate this strength quantitatively between a change of a cause and its causal impact on an effect attribute. This is described in more detail in the following point.
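- The DAG conventions described above (unidirectional edges from cause to effect, no cycles) can be sketched in pure Python with a cycle check; the node and edge names are illustrative domain assumptions, not nodes of CAROT's actual graphs:

```python
# Minimal DAG sketch: edges point from a probable cause (parent) to an effect (child).
edges = {
    "node_type":   ["throughput", "latency_kpi"],  # configuration -> observables/KPIs
    "cpu_stress":  ["latency_kpi"],                # chaos control -> KPI
    "throughput":  ["latency_kpi"],                # observable -> KPI
    "latency_kpi": [],                             # effect node (a KPI)
}

def is_acyclic(graph: dict) -> bool:
    """DFS three-color check: a causal graph used for inference must have no cycles."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}
    def visit(n):
        color[n] = GRAY
        for child in graph[n]:
            if color[child] == GRAY:          # back edge => cycle found
                return False
            if color[child] == WHITE and not visit(child):
                return False
        color[n] = BLACK
        return True
    return all(visit(n) for n in graph if color[n] == WHITE)

assert is_acyclic(edges)  # a valid DAG for causal inference
```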
- causal inference (1102, 1110, 1112, 1114, 1118) - this module is responsible for processing raw observational data 1122 with input from the DAG, e.g. with use of one or more of causal graphs 1128, to perform causal inference 406.
- Metrics are reflected by causal effects 1110, which may include ATE (average treatment effect), ATT (average treatment effect on the treated), conditional ATE, ITE (individual treatment effect), and mediation analysis, e.g. NDE (natural direct effect) and NIE (natural indirect effect).
- Any configuration or control node can be a treatment candidate, and any value within that node can be used as control (as a base) while some other value is considered the treated value; for instance, by examining the service's throughput (e.g. a DAG outcome node) average treatment effect (e.g. ATE) when modifying the K8s cluster worker node type (e.g. a treatment node) where the application 816 (5G CNF 902) is deployed from a low-powered CPU to a medium-powered one.
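- The ATE example above can be sketched with a naive difference-in-means estimator. This simple form is only valid when no confounders remain (plausible in controlled digital-twin experiments); the framework's identification step handles the general case. All data values below are invented for illustration:

```python
from statistics import mean

# Hypothetical experiment records: treatment = K8s worker node type,
# outcome = service throughput (a DAG outcome node).
records = [
    {"node_type": "low",    "throughput": 180.0},
    {"node_type": "low",    "throughput": 200.0},
    {"node_type": "medium", "throughput": 320.0},
    {"node_type": "medium", "throughput": 300.0},
]

def ate(records, treatment, control_value, treated_value, outcome):
    """Naive average treatment effect: mean(outcome | treated) - mean(outcome | control).
    Assumes no remaining confounders, as in a controlled digital-twin experiment."""
    treated = [r[outcome] for r in records if r[treatment] == treated_value]
    control = [r[outcome] for r in records if r[treatment] == control_value]
    return mean(treated) - mean(control)

# Effect on throughput of moving the worker node from a low- to a medium-powered CPU:
effect = ate(records, "node_type", "low", "medium", "throughput")
assert effect == 120.0  # mean(320, 300) - mean(180, 200)
```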
- This section of the design typically involves, as the first step, formulating the estimand (refer to identification 1114) using do-calculus (refer to 1106) or some other causal inference technique, with input 1111 from the causal graphs 1128 and input 1113 from the column/row data 1122.
- This allows counterfactual scenarios to be formulated (the reasoning aspect), which then allows the next step to invoke a statistical analysis technique 1112 to calculate the causal inference result metrics described herein.
- An output 1119 of ML and statistical analysis 1112 is provided to robustness check 1118, and another output 1121 of ML and statistical analysis 1112 is provided to causal effects module 1110.
- the output 1123 of the identification 1114 is provided to ML and statistical analysis 1112.
- Item 1102 includes both causal inference 406 and causal discovery 408.
- the DAG 1128 (the one or more DAGs 1128) can be 1) derived using the causal discovery process (1124, 1125, 1126, 1127), or 2) developed by a human expert.
- the DAG can be generated semi-automatically using a causal discovery (CD) process 1124, with input 1125 from observational data 1122. Many of these algorithms can be used to infer a graph by using observational data 1122 as input. However, given the state-of-the-art (SOTA) techniques in this domain, the resulting graphs are not necessarily DAGs in nature, as edges might lack direction or display other types of ambiguity. Expert knowledge is usually still applied to assist in removing ambiguities 1126 in the graph to formulate the DAG. The causal discovery process might generate many candidate graphs, where expert knowledge is applied to select the closest one. The resulting causal-discovery-generated DAG can be used as part of the regular causal inference process. In addition, this DAG can be used for root cause analysis purposes. In certain use cases, the causal discovery function can be used standalone, beyond being just an assistive function to the causal inference process.
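- The ambiguity-removal step described above might be sketched as follows, where expert knowledge orients the undirected edges a causal-discovery algorithm leaves unresolved; the edge names and rule format are hypothetical illustrations:

```python
# A causal-discovery algorithm may return a skeleton mixing resolved (directed)
# and ambiguous (undirected) edges; expert knowledge then orients the latter.
skeleton = [
    ("cpu_stress", "latency_kpi", "directed"),   # CD resolved this edge's direction
    ("node_type", "throughput", "undirected"),   # ambiguous: needs expert knowledge
]

# Expert rules: (cause, effect) pairs the domain expert asserts.
expert_orientations = {("node_type", "throughput")}

def remove_ambiguity(skeleton, expert_orientations):
    """Return DAG edges, orienting undirected skeleton edges via expert knowledge."""
    dag_edges = []
    for a, b, kind in skeleton:
        if kind == "directed":
            dag_edges.append((a, b))
        elif (a, b) in expert_orientations:
            dag_edges.append((a, b))
        elif (b, a) in expert_orientations:
            dag_edges.append((b, a))
        else:
            raise ValueError(f"edge {a}-{b} remains ambiguous")
    return dag_edges

dag = remove_ambiguity(skeleton, expert_orientations)
```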
- Design optimization - this final step includes assessing the resulting causal inference in search for validation or dismissal of assumptions established during the design phase.
- the design task may be fulfilled manually in CAROT.
- the design optimization can be tightly integrated to CI/CD pipelines to streamline the entire process.
- A randomized controlled trial (RCT) is performed using the digital twin component, testing one treatment effect at a time.
- the causal inference process described remains the most advantageous when considering possible permutations of treatment effects.
- the causal inference process is an effective process if the objective is to understand many specific treatment combinations.
- the RCT setup is used as a verification function to spot-check the causal inference results, and is carried out to specifically study a treatment-outcome pair and compare results to those produced by the causal inference analysis.
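- The spot-check role of the RCT described above can be illustrated with a simple tolerance comparison between the two estimates; the 10% tolerance and function shape are assumptions for illustration, not CAROT's actual criterion:

```python
def spot_check(ci_ate: float, rct_ate: float, rel_tol: float = 0.10) -> bool:
    """Accept the causal-inference ATE if it agrees with the RCT result for the
    same treatment-outcome pair, within a relative tolerance."""
    if rct_ate == 0:
        return abs(ci_ate) <= rel_tol
    return abs(ci_ate - rct_ate) / abs(rct_ate) <= rel_tol

# Agreement within 10% verifies the causal inference result; a larger
# discrepancy suggests revisiting the DAG or the observational data.
assert spot_check(118.0, 120.0)
assert not spot_check(80.0, 120.0)
```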
- Interface 415 is used to provide information from the chaos framework 402 to experiments 1116.
- FIG. 13 shows the workflow diagram 1300 across the individual modules of the cause effect inference component 1100.
- Observational data is generated and collected (1108), which includes generating digital twin observations (1104) and formatting (1116) the observations as rows in a table, with columns representing experiment features and KPIs.
- a determination is made as to whether causal discovery is applied. If at 1302 it is determined that causal discovery is applied, the method transitions to causal discovery 1304, and if at 1302 it is determined that causal discovery is not applied, the method transitions to formulating the input 1122.
- Optional causal discovery 1304 includes auto DAG generation 1124 and ambiguity removal 1126, which ambiguity removal 1126 may be a human-assisted process. Formulating the input 1122 includes generating a causal graph (1128) using domain knowledge.
- causal inference 1102 includes identification 1114, ML and statistical analysis 1112, robustness check 1118 which may include refutations, and generating causal effects 1110 such as ATE, CATE, and causal tree generation.
- Design optimization 410 is performed following causal inference 1102, followed by ending the process at 1306.
- the causal reasoning system for operational twin described herein provides several advantages and technical effects.
- the system's unique design allows cause effect insights to be inferred from environments that very closely resemble production conditions by creating digital twins. This provides several advantages that help improve overall application development cycles and quality, specifically confirming or dismissing application (e.g. 5G CNF) design assumptions regarding ideal and impaired environments, including performance, robustness, and application configuration setups.
- the system also provides functionality for confirming or dismissing application (e.g. 5G CNF) design assumptions regarding platform requirements, such as computational and networking requirements.
- the system additionally includes functionality for validating root cause assumptions in zero-touch management and troubleshooting, and identifying potential application (e.g. 5G CNF) issues long before they are encountered in production environments.
- the examples described herein relate to software development, agile methodologies, application architecture for the deployment (e.g. CI/CD) and testing of applications (e.g. 5G CNFs) on customer production environments.
- the examples described herein center on the application of causal reasoning (ML) techniques to 5G CNF software testing under a CI/CD pipeline, and what-if scenario analysis capability, the ability to address counterfactual questions and to provide the equivalent of treatment effect estimations for performing causal inference.
- FIG. 14 shows a block diagram of one possible and nonlimiting example in which the examples may be practiced.
- a user equipment (UE) 110, a radio access network (RAN) node 170, and network element(s) 190 are illustrated.
- the user equipment (UE) 110 is in wireless communication with a wireless network 100.
- a UE is a wireless device that can access the wireless network 100.
- the UE 110 includes one or more processors 120, one or more memories 125, and one or more transceivers 130 interconnected through one or more buses 127.
- Each of the one or more transceivers 130 includes a receiver, Rx, 132 and a transmitter, Tx, 133.
- the one or more buses 127 may be address, data, or control buses, and may include any interconnection mechanism, such as a series of lines on a motherboard or integrated circuit, fiber optics or other optical communication equipment, and the like.
- the one or more transceivers 130 are connected to one or more antennas 128.
- the one or more memories 125 include computer program code 123.
- the UE 110 includes a module 140, comprising one of or both parts 140-1 and/or 140-2, which may be implemented in a number of ways.
- the module 140 may be implemented in hardware as module 140-1, such as being implemented as part of the one or more processors 120.
- the module 140-1 may be implemented also as an integrated circuit or through other hardware such as a programmable gate array.
- the module 140 may be implemented as module 140-2, which is implemented as computer program code 123 and is executed by the one or more processors 120.
- the one or more memories 125 and the computer program code 123 may be configured to, with the one or more processors 120, cause the user equipment 110 to perform one or more of the operations as described herein.
- the UE 110 communicates with RAN node 170 via a wireless link 111.
- the RAN node 170 in this example is a base station that provides access by wireless devices such as the UE 110 to the wireless network 100.
- the RAN node 170 may be, for example, a base station for 5G, also called New Radio (NR).
- the RAN node 170 may be a NG-RAN node, which is defined as either a gNB or an ng-eNB.
- a gNB is a node providing NR user plane and control plane protocol terminations towards the UE, and connected via the NG interface (such as connection 131) to a 5GC (such as, for example, the network element(s) 190).
- the ng-eNB is a node providing E-UTRA user plane and control plane protocol terminations towards the UE, and connected via the NG interface (such as connection 131) to the 5GC.
- the NG-RAN node may include multiple gNBs, which may also include a central unit (CU) (gNB-CU) 196 and distributed unit(s) (DUs) (gNB-DUs), of which DU 195 is shown.
- the DU 195 may include or be coupled to and control a radio unit (RU).
- the gNB-CU 196 is a logical node hosting radio resource control (RRC), SDAP and PDCP protocols of the gNB or RRC and PDCP protocols of the en-gNB that control the operation of one or more gNB-DUs.
- the gNB-CU 196 terminates the F1 interface connected with the gNB-DU 195.
- the F1 interface is illustrated as reference 198, although reference 198 also illustrates a link between remote elements of the RAN node 170 and centralized elements of the RAN node 170, such as between the gNB-CU 196 and the gNB-DU 195.
- the gNB-DU 195 is a logical node hosting RLC, MAC and PHY layers of the gNB or en-gNB, and its operation is partly controlled by gNB-CU 196.
- One gNB-CU 196 supports one or multiple cells.
- One cell may be supported with one gNB-DU 195, or one cell may be supported/shared with multiple DUs under RAN sharing.
- the gNB-DU 195 terminates the F1 interface 198 connected with the gNB-CU 196.
- the DU 195 is considered to include the transceiver 160, e.g., as part of a RU, but some examples of this may have the transceiver 160 as part of a separate RU, e.g., under control of and connected to the DU 195.
- the RAN node 170 may also be an eNB (evolved NodeB) base station, for LTE (long term evolution), or any other suitable base station or node.
- the RAN node 170 includes one or more processors 152, one or more memories 155, one or more network interfaces (N/W I/F(s)) 161, and one or more transceivers 160 interconnected through one or more buses 157.
- Each of the one or more transceivers 160 includes a receiver, Rx, 162 and a transmitter, Tx, 163.
- the one or more transceivers 160 are connected to one or more antennas 158.
- the one or more memories 155 include computer program code 153.
- the CU 196 may include the processor(s) 152, memory(ies) 155, and network interfaces 161. Note that the DU 195 may also contain its own memory/memories and processor(s), and/or other hardware, but these are not shown.
- the RAN node 170 includes a module 150, comprising one of or both parts 150-1 and/or 150-2, which may be implemented in a number of ways.
- the module 150 may be implemented in hardware as module 150-1, such as being implemented as part of the one or more processors 152.
- the module 150-1 may be implemented also as an integrated circuit or through other hardware such as a programmable gate array.
- the module 150 may be implemented as module 150-2, which is implemented as computer program code 153 and is executed by the one or more processors 152.
- the one or more memories 155 and the computer program code 153 are configured to, with the one or more processors 152, cause the RAN node 170 to perform one or more of the operations as described herein.
- the functionality of the module 150 may be distributed, such as being distributed between the DU 195 and the CU 196, or be implemented solely in the DU 195.
- the one or more network interfaces 161 communicate over a network such as via the links 176 and 131.
- Two or more gNBs 170 may communicate using, e.g., link 176.
- the link 176 may be wired or wireless or both and may implement, for example, an Xn interface for 5G, an X2 interface for LTE, or other suitable interface for other standards.
- the one or more buses 157 may be address, data, or control buses, and may include any interconnection mechanism, such as a series of lines on a motherboard or integrated circuit, fiber optics or other optical communication equipment, wireless channels, and the like.
- the one or more transceivers 160 may be implemented as a remote radio head (RRH) 195 for LTE or a distributed unit (DU) 195 for gNB implementation for 5G, with the other elements of the RAN node 170 possibly being physically in a different location from the RRH/DU 195, and the one or more buses 157 could be implemented in part as, for example, fiber optic cable or other suitable network connection to connect the other elements (e.g., a central unit (CU), gNB-CU 196) of the RAN node 170 to the RRH/DU 195.
- Reference 198 also indicates those suitable network link(s).
- each cell performs functions, but it should be clear that equipment which forms the cell may perform the functions.
- the cell makes up part of a base station. That is, there can be multiple cells per base station. For example, there could be three cells for a single carrier frequency and associated bandwidth, each cell covering one-third of a 360 degree area so that the single base station’s coverage area covers an approximate oval or circle.
- each cell can correspond to a single carrier and a base station may use multiple carriers. So if there are three 120 degree cells per carrier and two carriers, then the base station has a total of 6 cells.
- the wireless network 100 may include a network function or functions 190 that may include core network functionality, and which provides connectivity via a link or links 181 with a further network, such as a telephone network and/or a data communications network (e.g., the Internet).
- Such core network functionality for 5G may include location management function(s) (LMF(s)) and/or access and mobility management function(s) (AMF(s)) and/or user plane function(s) (UPF(s)) and/or session management function(s) (SMF(s)).
- Such core network functionality for LTE may include MME (Mobility Management Entity)/SGW (Serving Gateway) functionality.
- the RAN node 170 is coupled via a link 131 to the network function 190.
- the link 131 may be implemented as, e.g., an NG interface for 5G, or an S1 interface for LTE, or other suitable interface for other standards.
- the network function 190 includes one or more processors 175, one or more memories 171, and one or more network interfaces (N/W I/F(s)) 180, interconnected through one or more buses 185.
- the one or more memories 171 include computer program code 173.
- the wireless network 100 may implement network virtualization, which is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network.
- Network virtualization involves platform virtualization, often combined with resource virtualization.
- Network virtualization is categorized as either external, combining many networks, or parts of networks, into a virtual unit, or internal, providing network-like functionality to software containers on a single system. Note that the virtualized entities that result from the network virtualization are still implemented, at some level, using hardware such as processors 152 or 175 and memories 155 and 171, and also such virtualized entities create technical effects.
- the computer readable memories 125, 155, and 171 may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, non-transitory memory, transitory memory, fixed memory and removable memory.
- the computer readable memories 125, 155, and 171 may be means for performing storage functions.
- the processors 120, 152, and 175 may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on a multi-core processor architecture, as nonlimiting examples.
- the processors 120, 152, and 175 may be means for performing functions, such as controlling the UE 110, RAN node 170, network function(s) 190, and other functions as described herein.
- the various embodiments of the user equipment 110 can include, but are not limited to, cellular telephones such as smart phones, tablets, personal digital assistants (PDAs) having wireless communication capabilities, portable computers having wireless communication capabilities, image capture devices such as digital cameras having wireless communication capabilities, gaming devices having wireless communication capabilities, music storage and playback appliances having wireless communication capabilities, Internet appliances permitting wireless Internet access and browsing, tablets with wireless communication capabilities, head mounted displays such as those that implement virtual/augmented/mixed reality, as well as portable units or terminals that incorporate combinations of such functions.
- UE 110, RAN node 170, and/or network function(s) 190 (and associated memories, computer program code and modules) may be configured to implement (e.g. in part) the methods described herein, including a causal reasoning system for operational twin (CAROT) for development and operation of 5G CNFs.
- computer program code 123, module 140-1, module 140-2, and other elements/features shown in FIG. 14 of UE 110 may implement user equipment related aspects of the methods described herein.
- RAN node 170 may implement gNB/TRP related aspects of the methods described herein, such as for a target gNB or a source gNB.
- Computer program code 173 and other elements/features shown in FIG. 14 of network function(s) 190 may be configured to implement network function/element related aspects of the methods described herein, such as for an OAM node.
- FIG. 15 is an example apparatus 1500, which may be implemented in hardware, configured to implement the examples described herein.
- the apparatus 1500 comprises at least one processor 1502 (e.g. an FPGA and/or CPU), at least one memory 1504 including computer program code 1505, wherein the at least one memory 1504 and the computer program code 1505 are configured to, with the at least one processor 1502, cause the apparatus 1500 to implement circuitry, a process, component, module, or function (collectively control 1506) to implement the examples described herein, including a causal reasoning system for operational twin (CAROT) for development and operation of 5G CNFs.
- the memory 1504 may be a non-transitory memory, a transitory memory, a volatile memory (e.g. RAM), or a non-volatile memory (e.g. ROM).
- the apparatus 1500 optionally includes a display and/or I/O interface 1508 that may be used to display aspects or a status of the methods described herein (e.g., as one of the methods is being performed or at a subsequent time), or to receive input from a user such as by using a keypad, camera, touchscreen, touch area, microphone, biometric recognition, one or more sensors, etc.
- the apparatus 1500 includes one or more communication interfaces (I/F(s)) 1510 e.g. one or more network (N/W) interface(s).
- the communication I/F(s) 1510 may be wired and/or wireless and communicate over the Internet/other network(s) via any communication technique.
- the communication I/F(s) 1510 may comprise one or more transmitters and one or more receivers.
- the communication I/F(s) 1510 may comprise standard well-known components such as an amplifier, filter, frequency-converter, (de)modulator, and encoder/decoder circuitries and one or more antennas.
- the apparatus 1500 to implement the functionality of control 1506 may be UE 110, RAN node 170 (e.g. gNB), network element(s) 190 or any of the other examples described herein.
- processor 1502 may correspond to processor(s) 120, processor(s) 152 and/or processor(s) 175, memory 1504 may correspond to memory(ies) 125, memory(ies) 155 and/or memory(ies) 171
- computer program code 1505 may correspond to computer program code 123, module 140-1, module 140-2, and/or computer program code 153, module 150-1, module 150-2, and/or computer program code 173, and communication I/F(s) 1510 may correspond to transceiver 130, antenna(s) 128, transceiver 160, antenna(s) 158, N/W I/F(s) 161, and/or N/W I/F(s) 180.
- apparatus 1500 may not correspond to any of UE 110, RAN node 170, or network function(s) 190, as apparatus 1500 may be
- the apparatus 1500 may also be distributed throughout the network (e.g. 100) including within and between apparatus 1500 and any network element (such as a network control function (NCE) 190 and/or the RAN node 170 and/or the UE 110).
- Interface 1512 enables data communication between the various items of apparatus 1500, as shown in FIG. 15.
- the interface 1512 may be one or more buses such as address, data, or control buses, and may include any interconnection mechanism, such as a series of lines on a motherboard or integrated circuit, fiber optics or other optical communication equipment, and the like.
- Computer program code 1505, including control 1506 may comprise object-oriented software configured to pass data and messages between objects within computer program code 1505.
- the apparatus 1500 need not comprise each of the features mentioned, or may comprise other features as well.
- FIG. 16 shows a schematic representation of non-volatile memory media 1600a (e.g. computer disc (CD) or digital versatile disc (DVD)) and 1600b (e.g. universal serial bus (USB) memory stick) storing instructions and/or parameters 1602 which, when executed by a processor, allow the processor to perform one or more of the steps of the methods described herein.
- FIG. 17 is an example method 1700 to implement the example embodiments described herein.
- the method includes operating a replicate of one or more cloud native network functions.
- the method includes generating observational data of the replicate of the one or more cloud native network functions, the observational data generated based on a plurality of operating conditions of the one or more cloud native network functions.
- the method includes applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- Method 1700 may be performed with CAROT 201, apparatus 1100, or apparatus 1500.
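The three operations of method 1700 (operate a replicate, generate observational data under a plurality of operating conditions, apply a causal reasoning function) can be sketched as follows. This is a minimal illustration only: the function names, the set of operating conditions, and the toy latency model are assumptions made for the example and do not appear in the specification.

```python
import random

# Illustrative operating conditions under which the replicated CNF is run.
OPERATING_CONDITIONS = ["network_load", "api_load", "dns_impairment"]

def operate_replicate(condition: str, seed: int) -> dict:
    """Run the replicated CNF under one operating condition and record one
    observation: the applied cause and a measured effect (a KPI)."""
    rng = random.Random(seed)
    # Toy model: network load adds a fixed latency penalty plus noise.
    latency = 10.0 + (5.0 if condition == "network_load" else 0.0) + rng.random()
    return {"condition": condition, "latency_ms": latency}

def generate_observational_data(n_runs: int) -> list[dict]:
    """Cycle through the operating conditions to build observational data."""
    return [
        operate_replicate(OPERATING_CONDITIONS[i % len(OPERATING_CONDITIONS)], seed=i)
        for i in range(n_runs)
    ]

def causal_reasoning(observations: list[dict]) -> dict:
    """Toy stand-in for the causal reasoning function: compare the mean
    effect per observed cause to flag candidate causal relationships."""
    by_cause: dict[str, list[float]] = {}
    for obs in observations:
        by_cause.setdefault(obs["condition"], []).append(obs["latency_ms"])
    return {cause: sum(v) / len(v) for cause, v in by_cause.items()}

data = generate_observational_data(30)
effects = causal_reasoning(data)
```

In this sketch the elevated mean latency under `network_load` is what the causal reasoning step would surface as a candidate cause-effect relationship.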
- FIG. 18 is an example method 1800 to implement the example embodiments described herein.
- the method includes operating a replicate of one or more target applications.
- the method includes generating observational data of the replicate of the one or more target applications, the observational data generated based on a plurality of operating conditions of the one or more target applications.
- the method includes applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more target applications and the at least one observed effect of the one or more target applications.
- Method 1800 may be performed with CAROT 201, apparatus 1100, or apparatus 1500.
- FIG. 19 is an example method 1900 to implement the example embodiments described herein.
- the method includes selecting one or more cloud native network functions.
- the method includes selecting at least one feature of the one or more cloud native network functions.
- the method includes selecting at least one environmental condition of the one or more cloud native network functions.
- the method includes selecting a load of the one or more cloud native network functions, the load comprising an intensity and duration of processing of the one or more cloud native network functions subject to the at least one environmental condition.
- the method includes performing at least one experiment with the one or more cloud native network functions, the experiment based on the at least one feature, the load, and the at least one environmental condition of the one or more cloud native network functions.
- the method includes collecting observational data from the at least one experiment.
- the method includes applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- Method 1900 may be performed with CAROT 201, apparatus 1100, or apparatus 1500.
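The experiment selection in method 1900 (a feature, an environmental condition, and a load defined by intensity and duration) can be sketched as a full-factorial sweep over the chosen factors. The factor values and the `run_experiment` stub below are hypothetical, invented for the illustration:

```python
import itertools

# Hypothetical experiment factors for one cloud native network function.
features = ["http2", "retry_policy"]
environmental_conditions = ["cpu_stress", "io_impairment"]
loads = [{"intensity": 100, "duration_s": 60},
         {"intensity": 500, "duration_s": 60}]

def run_experiment(feature, condition, load):
    """Stub: deploy the CNF with `feature` enabled, inject `condition`,
    drive `load`, and return one observation row (made-up error model)."""
    return {"feature": feature, "condition": condition, **load,
            "error_rate": 0.01 * load["intensity"] / 100}

# One experiment per combination of feature, environmental condition, and load.
observational_data = [
    run_experiment(f, c, l)
    for f, c, l in itertools.product(features, environmental_conditions, loads)
]
```

The resulting rows are the observational data handed to the causal reasoning function in the final step of the method.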
- Example 1 An apparatus including: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: operate a replicate of one or more cloud native network functions; generate observational data of the replicate of the one or more cloud native network functions, the observational data generated based on a plurality of operating conditions of the one or more cloud native network functions; and apply a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- Example 2 The apparatus of example 1, wherein the apparatus is caused to: design the one or more cloud native network functions based on the analyzed causality between the at least one observed cause and the at least one observed effect of the one or more cloud native network functions.
- Example 3 The apparatus of example 2, wherein the one or more cloud native network functions is designed during or after a testing phase of a continuous integration and continuous deployment pipeline, and prior to release of the one or more cloud native network functions in a production environment, the release being of the continuous integration and continuous deployment pipeline.
- Example 4 The apparatus of any of examples 1 to 3, wherein the apparatus is caused to: operate or configure the one or more cloud native network functions during or after a testing phase of a continuous integration and continuous deployment pipeline, and prior to release of the one or more cloud native network functions in a production environment, the release being of the continuous integration and continuous deployment pipeline.
- Example 5 The apparatus of any of examples 1 to 4, wherein the one or more cloud native network functions comprises a software application for a 5G network.
- Example 6 The apparatus of any of examples 1 to 5, wherein the replicate comprises a digital twin of the one or more cloud native network functions.
- Example 7 The apparatus of any of examples 1 to 6, wherein the at least one observed cause comprises an operating attribute of the one or more cloud native network functions, wherein configuration of the one or more cloud native network functions comprises a type of operating attribute.
- Example 8 The apparatus of any of examples 1 to 7, wherein the at least one observed effect comprises at least one performance indicator of the one or more cloud native network functions.
- Example 9 The apparatus of any of examples 1 to 8, wherein the at least one observed effect comprises at least one feature, a load, or at least one environmental condition of the one or more cloud native network functions.
- Example 10 The apparatus of any of examples 1 to 9, wherein analyzing the causality between the at least one observed cause and the at least one observed effect of the one or more cloud native network functions comprises at least one of: determining an existence of a relationship between the at least one observed cause and the at least one observed effect; determining a magnitude of the relationship between the at least one observed cause and the at least one observed effect; or determining a way in which the at least one observed cause and the at least one observed effect are related.
- Example 11 The apparatus of any of examples 1 to 10, wherein the plurality of operating conditions comprises at least one operating condition of the one or more cloud native network functions, the at least one operating condition comprising at least one of: network load; application programming interface load; domain name system impairment; kernel breakdown; total breakdown; network stress; computational resource stress; time impairment; web service breakdown; or input output impairment.
- Example 12 The apparatus of any of examples 1 to 11, wherein the apparatus is further caused to: form at least one graph that represents the causality between the at least one observed cause and the at least one observed effect of the one or more cloud native network functions.
- Example 13 The apparatus of example 12, wherein the at least one graph comprises a directed acyclic graph.
- Example 14 The apparatus of any of examples 12 to 13, wherein the apparatus is further caused to: form the at least one graph using domain expert knowledge to remove at least one ambiguity related to causality between the at least one observed cause and the at least one observed effect of the one or more cloud native network functions.
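Examples 12 to 14 represent the causality between observed causes and effects as a graph, in particular a directed acyclic graph (DAG). A minimal stdlib-only sketch (the variable names are illustrative, not from the specification) stores such a graph as an adjacency list and verifies the acyclicity the DAG representation relies on:

```python
# Cause -> effect edges of a hypothetical causal graph; an effect
# (latency_ms) may itself be a cause of a further effect (error_rate).
causal_graph = {
    "network_load": ["latency_ms"],
    "cpu_stress": ["latency_ms", "error_rate"],
    "latency_ms": ["error_rate"],
    "error_rate": [],
}

def is_dag(graph: dict[str, list[str]]) -> bool:
    """Depth-first search with three-colour marking: returns True exactly
    when the graph contains no directed cycle."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {node: WHITE for node in graph}

    def visit(node: str) -> bool:
        colour[node] = GREY
        for succ in graph.get(node, []):
            if colour.get(succ, WHITE) == GREY:
                return False  # back edge found -> directed cycle
            if colour.get(succ, WHITE) == WHITE and not visit(succ):
                return False
        colour[node] = BLACK
        return True

    return all(visit(n) for n in graph if colour[n] == WHITE)
```

Domain expert knowledge (Example 14) would then be applied by adding, removing, or orienting edges in this structure to resolve ambiguities the data alone cannot.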
- Example 15 The apparatus of any of examples 1 to 14, wherein the apparatus is further caused to: generate the observational data as a table, wherein a row of the table comprises an observation, and a column of the table comprises a cause attribute, an effect attribute, or an attribute comprising both a cause and effect.
- Example 16 The apparatus of any of examples 1 to 15, wherein the apparatus is further caused to: generate the observational data as a table, wherein a first column of the table comprises at least a portion of operating configurations of the one or more cloud native network functions, and a second column of the table comprises the at least one observed effect of the one or more cloud native network functions.
- Example 17 The apparatus of any of examples 1 to 16, wherein the apparatus is further caused to: generate the observational data as a table, wherein the table comprises an experimental group and a control group, wherein the experimental group comprises a collection of experiments in which at least one operating attribute is set to a specific value that is to be studied, and wherein the control group comprises a collection of experiments in which operating attributes are randomized.
- Example 18 The apparatus of example 17, wherein the apparatus is further caused to: determine an average treatment effect of the one or more cloud native network functions when the at least one operating attribute is set to the specific value.
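The average treatment effect of Example 18 can be estimated as the mean observed effect in the experimental group (the operating attribute pinned to the studied value) minus the mean in the control group (operating attributes randomized). The KPI values below are made-up illustration data, not measurements from the specification:

```python
# Hypothetical KPI observations (e.g. throughput) for the two groups.
experimental_group = [12.1, 11.8, 12.4, 12.0]  # attribute set to studied value
control_group = [10.2, 10.5, 9.9, 10.1]        # attributes randomized

def average_treatment_effect(treated: list[float], control: list[float]) -> float:
    """Difference of group means: a positive value indicates the studied
    attribute value improves the KPI relative to the randomized baseline."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) - mean(control)

ate = average_treatment_effect(experimental_group, control_group)
```

Conditional and individual treatment effects (Example 19) refine the same quantity by stratifying on covariates or per observation.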
- Example 19 The apparatus of any of examples 1 to 18, wherein the apparatus is further caused to: determine a magnitude of a relationship between the at least one observed cause and the at least one observed effect; wherein the magnitude of a relationship comprises at least one of: an average treatment effect, a conditional average treatment effect, an individual treatment effect, a natural direct effect, or a natural indirect effect.
- Example 20 The apparatus of any of examples 1 to 19, wherein applying the causal reasoning function comprises performing a do-calculus to replace a do operator with at least one conditional probability of the at least one observed effect of the one or more cloud native network functions given the at least one observed cause, the at least one conditional probability used to infer the causality between the at least one observed cause and the at least one observed effect.
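The do-operator replacement of Example 20 can be illustrated with the backdoor adjustment, one standard do-calculus rewrite: P(effect | do(cause)) becomes a sum of conditional probabilities of the effect given the cause and a confounder, weighted by the confounder's distribution. The binary data and variable roles here are invented for the example:

```python
from collections import Counter

# Rows of (cause X, confounder Z, effect Y), all binary and made up.
rows = [(1, 0, 1), (1, 0, 1), (1, 1, 0), (0, 0, 0),
        (0, 1, 0), (0, 1, 1), (1, 1, 1), (0, 0, 0)]

def p_effect_do_cause(data, x):
    """Backdoor adjustment: P(Y=1 | do(X=x)) = sum_z P(Y=1 | X=x, Z=z) P(z)."""
    z_counts = Counter(z for _, z, _ in data)
    n = len(data)
    total = 0.0
    for z, nz in z_counts.items():
        # Empirical conditional probability P(Y=1 | X=x, Z=z).
        matching = [y for xi, zi, y in data if xi == x and zi == z]
        if matching:
            total += (sum(matching) / len(matching)) * (nz / n)
    return total
```

Comparing `p_effect_do_cause(rows, 1)` with `p_effect_do_cause(rows, 0)` yields an interventional contrast that, unlike a raw conditional probability, is not biased by the confounder.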
- Example 21 The apparatus of any of examples 1 to 20, wherein applying the causal reasoning function comprises performing a statistical analysis of the at least one observed cause and the at least one observed effect of the one or more cloud native network functions.
- Example 22 The apparatus of any of examples 1 to 21, wherein applying the causal reasoning function comprises applying machine learning to analyze the causality between the at least one observed cause and the at least one observed effect of the one or more cloud native network functions.
- Example 23 The apparatus of any of examples 1 to 22, wherein the apparatus is further caused to: validate or dismiss at least one design or configuration assumption related to the at least one observed cause or the at least one observed effect of the one or more cloud native network functions, based on the application of the causal reasoning function.
- Example 24 The apparatus of any of examples 1 to 23, wherein the apparatus is further caused to: determine at least one operating condition of the one or more cloud native network functions to be suboptimal, based on the application of the causal reasoning function.
- Example 25 The apparatus of any of examples 1 to 24, wherein the apparatus is further caused to: perform a root cause analysis to determine at least one cause of a fault of operation of the one or more cloud native network functions.
- Example 26 The apparatus of example 25, wherein the apparatus is further caused to: operate or configure the one or more cloud native network functions using at least one result of the root cause analysis to complement the application of the causal reasoning function.
- Example 27 The apparatus of any of examples 1 to 26, wherein the apparatus is further caused to: determine whether the one or more cloud native network functions is an external application or an internal application.
- Example 28 The apparatus of any of examples 1 to 27, wherein the one or more cloud native network functions is in containerized software form.
- Example 29 An apparatus including: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: operate a replicate of one or more target applications; generate observational data of the replicate of the one or more target applications, the observational data generated based on a plurality of operating conditions of the one or more target applications; and apply a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more target applications and the at least one observed effect of the one or more target applications.
- Example 30 The apparatus of example 29, wherein the apparatus is caused to: design the one or more target applications based on the analyzed causality between the at least one observed cause and the at least one observed effect of the one or more target applications.
- Example 31 The apparatus of example 30, wherein the one or more target applications is designed during or after a testing phase of a continuous integration and continuous deployment pipeline, and prior to release of the one or more target applications in a production environment, the release being of the continuous integration and continuous deployment pipeline.
- Example 32 The apparatus of any of examples 29 to 31, wherein the apparatus is caused to: operate or configure the one or more target applications during or after a testing phase of a continuous integration and continuous deployment pipeline, and prior to release of the one or more target applications in a production environment, the release being of the continuous integration and continuous deployment pipeline.
- Example 33 The apparatus of any of examples 29 to 32, wherein the one or more target applications comprises a software application for a 5G network.
- Example 34 The apparatus of any of examples 29 to 33, wherein the replicate comprises a digital twin of the one or more target applications.
- Example 35 The apparatus of any of examples 29 to 34, wherein the at least one observed cause comprises an operating attribute of the one or more target applications, wherein configuration of the one or more target applications comprises a type of operating attribute.
- Example 36 The apparatus of any of examples 29 to 35, wherein the at least one observed effect comprises at least one performance indicator of the one or more target applications.
- Example 37 The apparatus of any of examples 29 to 36, wherein the at least one observed effect comprises at least one feature, a load, or at least one environmental condition of the one or more target applications.
- Example 38 The apparatus of any of examples 29 to 37, wherein analyzing the causality between the at least one observed cause and the at least one observed effect of the one or more target applications comprises at least one of: determining an existence of a relationship between the at least one observed cause and the at least one observed effect; determining a magnitude of the relationship between the at least one observed cause and the at least one observed effect; or determining a way in which the at least one observed cause and the at least one observed effect are related.
- Example 39 The apparatus of any of examples 29 to 38, wherein the plurality of operating conditions comprises at least one operating condition of the one or more target applications, the at least one operating condition comprising at least one of: network load; application programming interface load; domain name system impairment; kernel breakdown; total breakdown; network stress; computational resource stress; time impairment; web service breakdown; or input output impairment.
- Example 40 The apparatus of any of examples 29 to 39, wherein the apparatus is further caused to: form at least one graph that represents the causality between the at least one observed cause and the at least one observed effect of the one or more target applications.
- Example 41 The apparatus of example 40, wherein the at least one graph comprises a directed acyclic graph.
- Example 42 The apparatus of any of examples 40 to 41, wherein the apparatus is further caused to: form the at least one graph using domain expert knowledge to remove at least one ambiguity related to causality between the at least one observed cause and the at least one observed effect of the one or more target applications.
- Example 43 The apparatus of any of examples 29 to 42, wherein the apparatus is further caused to: generate the observational data as a table, wherein a row of the table comprises an observation, and a column of the table comprises a cause attribute, an effect attribute, or an attribute comprising both a cause and effect.
- Example 44 The apparatus of any of examples 29 to 43, wherein the apparatus is further caused to: generate the observational data as a table, wherein a first column of the table comprises at least a portion of operating configurations of the one or more target applications, and a second column of the table comprises the at least one observed effect of the one or more target applications.
- Example 45 The apparatus of any of examples 29 to 44, wherein the apparatus is further caused to: generate the observational data as a table, wherein the table comprises an experimental group and a control group, wherein the experimental group comprises a collection of experiments in which at least one operating attribute is set to a specific value that is to be studied, and wherein the control group comprises a collection of experiments in which operating attributes are randomized.
- Example 46 The apparatus of example 45, wherein the apparatus is further caused to: determine an average treatment effect of the one or more target applications when the at least one operating attribute is set to the specific value.
- Example 47 The apparatus of any of examples 29 to 46, wherein the apparatus is further caused to: determine a magnitude of a relationship between the at least one observed cause and the at least one observed effect; and wherein the magnitude of a relationship comprises at least one of: an average treatment effect, a conditional average treatment effect, an individual treatment effect, a natural direct effect, or a natural indirect effect.
- Example 48 The apparatus of any of examples 29 to 47, wherein applying the causal reasoning function comprises performing a do-calculus to replace a do operator with at least one conditional probability of the at least one observed effect of the one or more target applications given the at least one observed cause, the at least one conditional probability used to infer the causality between the at least one observed cause and the at least one observed effect.
- Example 49 The apparatus of any of examples 29 to 48, wherein applying the causal reasoning function comprises performing a statistical analysis of the at least one observed cause and the at least one observed effect of the one or more target applications.
- Example 50 The apparatus of any of examples 29 to 49, wherein applying the causal reasoning function comprises applying machine learning to analyze the causality between the at least one observed cause and the at least one observed effect of the one or more target applications.
- Example 51 The apparatus of any of examples 29 to 50, wherein the apparatus is further caused to: validate or dismiss at least one design or configuration assumption related to the at least one observed cause or the at least one observed effect of the one or more target applications, based on the application of the causal reasoning function.
- Example 52 The apparatus of any of examples 29 to 51, wherein the apparatus is further caused to: determine at least one operating condition of the one or more target applications to be suboptimal, based on the application of the causal reasoning function.
- Example 53 The apparatus of any of examples 29 to 52, wherein the apparatus is further caused to: perform a root cause analysis to determine at least one cause of a fault of operation of the one or more target applications.
- Example 54 The apparatus of example 53, wherein the apparatus is further caused to: operate or configure the one or more target applications using at least one result of the root cause analysis to complement the application of the causal reasoning function.
- Example 55 The apparatus of any of examples 29 to 54, wherein the apparatus is further caused to: determine whether the one or more target applications is an external application or an internal application.
- Example 56 The apparatus of any of examples 29 to 55, wherein the one or more target applications is in containerized software form.
- Example 57 The apparatus of any of examples 29 to 56, wherein the one or more target applications comprises a cloud native network function.
- Example 58 The apparatus of example 57, wherein the cloud native network function comprises a 5G cloud native network function.
- Example 59 The apparatus of any of examples 57 to 58, wherein the cloud native network function is in containerized software form.
- Example 60 An apparatus including: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: select one or more cloud native network functions; select at least one feature of the one or more cloud native network functions; select at least one environmental condition of the one or more cloud native network functions; select a load of the one or more cloud native network functions, the load comprising an intensity and duration of processing of the one or more cloud native network functions subject to the at least one environmental condition; perform at least one experiment with the one or more cloud native network functions, the experiment based on the at least one feature, the load, and the at least one environmental condition of the one or more cloud native network functions; collect observational data from the at least one experiment; and apply a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- Example 61 The apparatus of example 60, wherein the at least one observed cause comprises an operating attribute of the one or more cloud native network functions, wherein configuration of the one or more cloud native network functions comprises a type of operating attribute.
- Example 62 The apparatus of any of examples 60 to 61, wherein the at least one observed effect comprises at least one performance indicator of the one or more cloud native network functions.
- Example 63 The apparatus of any of examples 60 to 62, wherein the at least one observed effect comprises the at least one feature, the load, or the at least one environmental condition of the one or more cloud native network functions.
- Example 64 The apparatus of any of examples 60 to 63, wherein the apparatus is further caused to: design the one or more cloud native network functions based on the analyzed causality between the at least one observed cause and the at least one observed effect of the one or more cloud native network functions.
- Example 65 The apparatus of any of examples 60 to 64, wherein the apparatus is further caused to: determine whether the one or more cloud native network functions is an external application or an internal application.
- Example 66 The apparatus of example 65, wherein the apparatus is further caused to: provide an endpoint of the one or more cloud native network functions, in response to determining that the one or more cloud native network functions is an external application.
- Example 67 The apparatus of example 66, wherein the endpoint comprises a uniform resource locator.
- Example 68 The apparatus of any of examples 65 to 67, wherein the apparatus is further caused to: deploy the one or more cloud native network functions, in response to determining that the one or more cloud native network functions is an internal application; wherein the at least one experiment is performed with the one or more cloud native network functions.
- Example 69 The apparatus of example 68, wherein the apparatus is further caused to: wrap the one or more cloud native network functions within a helm template during the application of the causal reasoning function.
- Example 70 The apparatus of any of examples 60 to 69, wherein the environmental condition of the one or more cloud native network functions comprises at least one of: network load; application programming interface load; domain name system impairment; kernel breakdown; total breakdown; network stress; computational resource stress; time impairment; web service breakdown; or input output impairment.
- Example 71 The apparatus of any of examples 60 to 70, wherein the apparatus is further caused to: form at least one graph that represents the causality between the at least one observed cause and the at least one observed effect of the one or more cloud native network functions.
- Example 72 The apparatus of any of examples 60 to 71, wherein the apparatus is further caused to: determine a magnitude of a relationship between the at least one observed cause and the at least one observed effect; and wherein the magnitude of a relationship comprises at least one of: an average treatment effect, a conditional average treatment effect, an individual treatment effect, a natural direct effect, or a natural indirect effect.
- Example 73 The apparatus of any of examples 60 to 72, wherein the one or more cloud native network functions is in containerized software form.
- Example 74 The apparatus of any of examples 60 to 73, wherein the one or more cloud native network functions is a fifth generation (5G) cloud native network function.
- Example 75 A method including: operating a replicate of one or more cloud native network functions; generating observational data of the replicate of the one or more cloud native network functions, the observational data generated based on a plurality of operating conditions of the one or more cloud native network functions; and applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- Example 76 A method including: operating a replicate of one or more target applications; generating observational data of the replicate of the one or more target applications, the observational data generated based on a plurality of operating conditions of the one or more target applications; and applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more target applications and the at least one observed effect of the one or more target applications.
- Example 77 A method including: selecting one or more cloud native network functions; selecting at least one feature of the one or more cloud native network functions; selecting at least one environmental condition of the one or more cloud native network functions; selecting a load of the one or more cloud native network functions, the load comprising an intensity and duration of processing of the one or more cloud native network functions subject to the at least one environmental condition; performing at least one experiment with the one or more cloud native network functions, the experiment based on the at least one feature, the load, and the at least one environmental condition of the one or more cloud native network functions; collecting observational data from the at least one experiment; and applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- Example 78 An apparatus including: means for operating a replicate of one or more cloud native network functions; means for generating observational data of the replicate of the one or more cloud native network functions, the observational data generated based on a plurality of operating conditions of the one or more cloud native network functions; and means for applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- Example 79 An apparatus including: means for operating a replicate of one or more target applications; means for generating observational data of the replicate of the one or more target applications, the observational data generated based on a plurality of operating conditions of the one or more target applications; and means for applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more target applications and the at least one observed effect of the one or more target applications.
- Example 80 An apparatus including: means for selecting one or more cloud native network functions; means for selecting at least one feature of the one or more cloud native network functions; means for selecting at least one environmental condition of the one or more cloud native network functions; means for selecting a load of the one or more cloud native network functions, the load comprising an intensity and duration of processing of the one or more cloud native network functions subject to the at least one environmental condition; means for performing at least one experiment with the one or more cloud native network functions, the experiment based on the at least one feature, the load, and the at least one environmental condition of the one or more cloud native network functions; means for collecting observational data from the at least one experiment; and means for applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- Example 81 A non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations, the operations including: operating a replicate of one or more cloud native network functions; generating observational data of the replicate of the one or more cloud native network functions, the observational data generated based on a plurality of operating conditions of the one or more cloud native network functions; and applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
- Example 82 A non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations, the operations including: operating a replicate of one or more target applications; generating observational data of the replicate of the one or more target applications, the observational data generated based on a plurality of operating conditions of the one or more target applications; and applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more target applications and the at least one observed effect of the one or more target applications.
- Example 83 A non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable with the machine for performing operations, the operations including: selecting one or more cloud native network functions; selecting at least one feature of the one or more cloud native network functions; selecting at least one environmental condition of the one or more cloud native network functions; selecting a load of the one or more cloud native network functions, the load comprising an intensity and duration of processing of the one or more cloud native network functions subject to the at least one environmental condition; performing at least one experiment with the one or more cloud native network functions, the experiment based on the at least one feature, the load, and the at least one environmental condition of the one or more cloud native network functions; collecting observational data from the at least one experiment; and applying a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
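The experiment-driven flow of Examples 77, 80, and 83 (select a feature, an environmental condition, and a load; run the experiment; collect observations; apply causal reasoning) can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in: `run_experiment`, the synthetic latency model, and the simple difference-of-means effect estimate are illustrative assumptions, not the patented implementation or any specific causal-inference library.

```python
# Illustrative sketch (hypothetical names and synthetic data) of running a
# treated vs. control experiment on a CNF replicate and estimating the
# causal effect of an environmental impairment on an observed metric.
import random
import statistics

random.seed(0)  # make the synthetic experiment repeatable

def run_experiment(treated: bool, load: float, duration: int) -> list:
    """Simulate latency samples from a CNF replicate under a given load.

    When `treated` is True an environmental impairment (e.g. CPU
    throttling) is applied; here it simply shifts the mean latency.
    """
    base = 10.0 + 2.0 * load          # latency grows with load intensity
    effect = 5.0 if treated else 0.0  # impairment adds ~5 ms on average
    return [random.gauss(base + effect, 1.0) for _ in range(duration)]

def average_treatment_effect(load: float, duration: int = 200) -> float:
    """Difference in mean latency between treated and control runs."""
    treated = run_experiment(True, load, duration)
    control = run_experiment(False, load, duration)
    return statistics.mean(treated) - statistics.mean(control)

ate = average_treatment_effect(load=0.5)
print(f"estimated effect of impairment on latency: {ate:.1f} ms")
```

Because the load and environmental condition are held fixed across the treated and control runs, the difference of means here is a (naive) estimate of the treatment effect; a real causal reasoning function would additionally adjust for confounding operating conditions observed in the data.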
- References to a ‘computer’, ‘processor’, etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential or parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), signal processing devices and other processing circuitry.
- References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.
- The memory(ies) as described herein may be implemented using any suitable data storage technology, such as semiconductor-based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, non-transitory memory, transitory memory, fixed memory and removable memory.
- The memory(ies) may comprise a database for storing data.
- Circuitry may refer to the following: (a) hardware circuit implementations, such as implementations in analog and/or digital circuitry, and (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
- Circuitry would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware.
- Circuitry would also cover, for example and if applicable to the particular element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or another network device.
- DSP digital signal processor
- eNB evolved Node B, e.g., an LTE base station
- EN-DC E-UTRAN new radio - dual connectivity
- en-gNB node providing NR user plane and control plane protocol terminations towards the UE, and acting as a secondary node in EN-DC
- E-UTRA evolved universal terrestrial radio access, i.e., the LTE radio access technology
- FPGA field-programmable gate array
- gNB base station for 5G/NR, i.e., a node providing NR user plane and control plane protocol terminations towards the UE, and connected via the NG interface to the 5GC
- UE user equipment, e.g., a wireless, typically mobile device
- X2 network interface between RAN nodes and between RAN and the core network
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Quality & Reliability (AREA)
- General Physics & Mathematics (AREA)
- Computer Networks & Wireless Communication (AREA)
- Computing Systems (AREA)
- Signal Processing (AREA)
- Mathematical Physics (AREA)
- Computer Hardware Design (AREA)
- Evolutionary Computation (AREA)
- Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
Disclosed is an apparatus comprising: at least one processor; and at least one memory storing instructions that, when executed by the at least one processor, cause the apparatus at least to: operate a replicate of one or more cloud native network functions; generate observational data of the replicate of the one or more cloud native network functions, the observational data generated based on a plurality of operating conditions of the one or more cloud native network functions; and apply a causal reasoning function using the observational data to analyze causality between at least one observed cause of at least one observed effect of the one or more cloud native network functions and the at least one observed effect of the one or more cloud native network functions.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2022/047205 WO2024085872A1 (fr) | 2022-10-20 | 2022-10-20 | Causal reasoning system for operational twin (CAROT) for 5G CNF development and operation |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2022/047205 WO2024085872A1 (fr) | 2022-10-20 | 2022-10-20 | Causal reasoning system for operational twin (CAROT) for 5G CNF development and operation |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024085872A1 true WO2024085872A1 (fr) | 2024-04-25 |
Family
ID=84360485
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2022/047205 WO2024085872A1 (fr) | Causal reasoning system for operational twin (CAROT) for 5G CNF development and operation | 2022-10-20 | 2022-10-20 |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2024085872A1 (fr) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200371857A1 (en) * | 2018-11-25 | 2020-11-26 | Aloke Guha | Methods and systems for autonomous cloud application operations |
US11468348B1 (en) * | 2020-02-11 | 2022-10-11 | Amazon Technologies, Inc. | Causal analysis system |
- 2022-10-20 WO PCT/US2022/047205 patent/WO2024085872A1/fr unknown
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200371857A1 (en) * | 2018-11-25 | 2020-11-26 | Aloke Guha | Methods and systems for autonomous cloud application operations |
US11468348B1 (en) * | 2020-02-11 | 2022-10-11 | Amazon Technologies, Inc. | Causal analysis system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11483218B2 (en) | Automating 5G slices using real-time analytics | |
US10528454B1 (en) | Intelligent automation of computer software testing log aggregation, analysis, and error remediation | |
WO2017144432A1 (fr) | Vérification de nuage et automatisation de tests | |
Rosa et al. | Take your vnf to the gym: A testing framework for automated nfv performance benchmarking | |
Abbas et al. | IBNSlicing: Intent-based network slicing framework for 5G networks using deep learning | |
CN106776280A (zh) | 可配置性能测试装置 | |
US11354593B2 (en) | Techniques to generate network simulation scenarios | |
US20180314549A1 (en) | Operational micro-services design, development, deployment | |
CN105975396A (zh) | 一种自动化测试用例生成方法与系统 | |
Trivedi et al. | A fully automated deep packet inspection verification system with machine learning | |
Soldani et al. | 5G AI-enabled automation | |
CN110825589B (zh) | 用于微服务系统的异常检测方法及其装置和电子设备 | |
WO2022197604A1 (fr) | Procédés, systèmes et supports lisibles par ordinateur pour la génération de scénario d'essai de réseau autonome | |
CN105808422B (zh) | 一种基于网络的软件测试方法、客户端及待测试设备 | |
WO2024085872A1 (fr) | Causal reasoning system for operational twin (CAROT) for 5G CNF development and operation | |
Huynh‐Van et al. | SD‐IoTR: an SDN‐based Internet of Things reprogramming framework | |
Mendonça et al. | Assessing performance and energy consumption in mobile applications | |
CN107305524A (zh) | 压力测试方法及系统 | |
Varghese et al. | Can commercial testing automation tools work for iot? A case study of selenium and node-red | |
Tiloca et al. | SEA++: A framework for evaluating the impact of security attacks in OMNeT++/INET | |
WO2022116427A1 (fr) | Procédé et appareil de test, dispositif électronique et support d'enregistrement | |
US20230071504A1 (en) | Multi-client orchestrated automated testing platform | |
Aykurt et al. | NETLLMBENCH: A Benchmark Framework for Large Language Models in Network Configuration Tasks | |
US20240012833A1 (en) | Systems and methods for seamlessly updating and optimizing a digital system | |
US20240147268A1 (en) | "real feel" autonomous network test solution system, method, device, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22808911 Country of ref document: EP Kind code of ref document: A1 |