WO2024004174A1 - Automatic profile generation on test machine - Google Patents

Automatic profile generation on test machine Download PDF

Info

Publication number
WO2024004174A1
Authority
WO
WIPO (PCT)
Prior art keywords
configuration
data
ticket
obtaining
measuring device
Prior art date
Application number
PCT/JP2022/026386
Other languages
French (fr)
Japanese (ja)
Inventor
仁 中里
光広 朽津
紗季 田中
遥 堀内
啓佑 高見
Original Assignee
楽天モバイル株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 楽天モバイル株式会社 filed Critical 楽天モバイル株式会社
Priority to PCT/JP2022/026386 priority Critical patent/WO2024004174A1/en
Publication of WO2024004174A1 publication Critical patent/WO2024004174A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software

Definitions

  • the present disclosure relates to generating profiles for testing and evaluation of radio access networks on a virtualized infrastructure.
  • the virtualization of the core network (CN) is progressing, and following this, the virtualization of the radio access network (RAN) is attracting attention.
  • the functions of the RAN's Central Unit (CU) and Distributed Unit (DU) can be virtualized into virtualized CUs (vCUs) and virtualized DUs (vDUs), respectively.
  • TiDD ticket-driven development
  • BTS bug tracking system
  • TiDD follows the principle that all work and issues in software development are developed after filing a ticket.
  • BTS is a system for registering project bugs and tracking their modification status. More specifically, a ticket is issued as a bug registration, and the modification status of the project is tracked using the ticket.
  • Redmine Non-Patent Document 1
  • Redmine (Non-Patent Document 1), for example, is known as a tool for a BTS.
  • RAN testing and evaluation is becoming more complex due to multi-vendoring, where RUs, DUs, and CUs are provided by multiple different vendors. Furthermore, due to multi-vendor technology, it is becoming more difficult to identify the location of quality deterioration during software updates.
  • the present disclosure is devised to solve at least one of the problems of the prior art, and provides a method for creating a profile for testing and evaluating a radio access network on a virtualization infrastructure (test machine).
  • a management device according to the present disclosure includes a processor that executes: configuring a first measuring device for a first configuration on a virtualization platform; obtaining first data for the first configuration with the first measuring device; providing a second configuration in which one device in the first configuration is replaced with a simulator; configuring a second measuring device for the second configuration; obtaining second data for the second configuration with the second measuring device; and obtaining a difference profile between the first data and the second data.
  • the test evaluation system includes the management device and a ticket management system.
  • the ticket management system includes a processor that executes: including the sent quality evaluation of the first configuration in a release process ticket; and learning the relationship between the software version and the number of bugs for the one device using the release process ticket.
  • a test evaluation method includes: providing a first configuration for calculating performance data on a virtualization platform; configuring a first measuring device for the first configuration; obtaining first data for the first configuration with the first measuring device; providing a second configuration in which one device in the first configuration is replaced with a simulator; configuring a second measuring device for the second configuration; obtaining second data for the second configuration with the second measuring device; and obtaining a difference profile between the first data and the second data.
  • FIG. 1 is a diagram illustrating an example of a wireless communication system to which a method or a management device according to the present disclosure is applied.
  • FIG. 2 is a diagram illustrating an example of the operation of test evaluation by the management device according to the embodiment.
  • FIG. 3 is a schematic diagram showing an example of the management device according to the embodiment.
  • FIG. 4 is a diagram illustrating an example of application of the differential profile according to the embodiment.
  • FIG. 5 is a sequence diagram showing an example of test evaluation according to the embodiment.
  • FIG. 6 is a flowchart illustrating an example of the test evaluation method according to the embodiment.
  • FIG. 1 shows an example of a wireless communication system to which a method or a management device according to the present disclosure is applied.
  • the wireless communication system 1 in FIG. 1 includes a radio access network (RAN) 200 and a core network (CN) 500.
  • the RAN 200 includes a Central Unit (CU) 210, a Distributed Unit (DU) 220, and a Radio Unit (RU) 230, and implements the functions of a base station.
  • RU 230 can communicate with user equipment (UE) 300.
  • the CU 210 is connected to the CN 500.
  • the functions of the DU 220 and the CU 210 are virtualized on a virtualization infrastructure and are constructed as a virtualized DU (vDU) 220 and a virtualized CU (vCU) 210, respectively.
  • vDU virtualized DU
  • vCU virtualized CU
  • connections between devices (CU, DU, or RU elements) in the RAN 200 and connections with the CN 500 follow specifications standardized by the O-RAN Alliance (O-RAN) or the like.
  • O-RAN O-RAN Alliance
  • the link between the RU 230 and the vDU 220 is called the fronthaul
  • the link between the vDU 220 and the vCU 210 is called the midhaul
  • the link between the vCU 210 and the CN 500 is called the backhaul.
  • the management device 100 can connect to the vCU 210, vDU 220, and RU 230 and control them.
  • the management device 100 may be an orchestrator (especially an E2E orchestrator).
  • testing and evaluation of the RAN is complicated because devices provided by different vendors are connected. Furthermore, due to multi-vendor technology, it has become more difficult to identify the location where quality deterioration occurs when updating software in each virtualized device.
  • the RAN 200 (see the upper part of FIG. 2), which is the target of test evaluation, will be described.
  • the RAN 200 to be evaluated (hereinafter referred to as "first configuration" or "performance calculation configuration") includes a CU 210, a DU 220, and an RU 230 to be evaluated.
  • a UE simulator 305 capable of transmitting and receiving test data is used as the UE.
  • a CN simulator (core simulator) 505 capable of transmitting and receiving test data is used as a CN.
  • a measuring device 400 (also referred to as a "first measuring device") for collecting signals and the like within the RAN 200 is used.
  • the measuring device 400 includes a radio wave capture 440, a fronthaul capture 430, a midhaul capture 420, and a backhaul capture 410.
  • the radio wave capture 440 captures radio waves in order to analyze the quality of the radio waves from the RU 230 to the UE simulator 305.
  • Fronthaul capture 430 collects signals between RU 230 and DU 220.
  • Midhaul capture 420 collects signals between DU 220 and CU 210.
  • Backhaul capture 410 collects signals between CU 210 and CN simulator 505.
  • for the RAN 200, the management device 100 analyzes the signals and radio waves collected by the measuring device 400 and generates test result data 610 (also referred to as "first data").
  • the test result data for the performance calculation configuration is compared with the test result data for a configuration whose operation details have been confirmed in advance (referred to as the "second configuration” or “theoretical value calculation configuration").
  • the RAN 205 in the lower part of FIG. 2 is a configuration for calculating a theoretical value for test evaluation of the DU 220 (theoretical value calculation configuration).
  • the DU 220 to be evaluated in the RAN 200 in the upper part of FIG. 2 is replaced with a DU simulator 225 whose operation contents have been confirmed in advance. Since the CU 210, RU 230, UE simulator 305, and CN simulator 505 are the same as the RAN 200, their explanations will be omitted.
  • the same measuring device 400 as in the RAN 200 (referred to as the "second measuring device") can be used to collect signals and the like for test evaluation in the RAN 205.
  • test result data 615 also referred to as "second data"
  • the CU 210 and RU 230, the UE simulator 305, and the CN simulator 505 are common to the RAN 205 that is the theoretical value calculation configuration (second configuration) and the RAN 200 that is the actual performance calculation configuration (first configuration).
  • one device in the RAN 200 (the performance calculation configuration), namely the DU 220, is replaced in the RAN 205 (the theoretical value calculation configuration) with a simulator whose operation has been confirmed in advance (the DU simulator 225). Therefore, by taking the difference between the test result data 610 for the RAN 200 and the test result data 615 for the RAN 205, the influence of the DU 220 can be calculated.
  • the management device 100 performs a test evaluation of the performance calculation configuration according to the test scenario 180 and creates test result data 610.
  • the test scenario 180 may be an electronic file readable by the management device 100 that is prepared in advance to specify the content of the test evaluation.
  • the test scenario 180 may specify a device (DU in the example of FIG. 2) to be tested and evaluated.
  • test scenario 180 may be provided by ticket management system 700.
  • the management device 100 configures a theoretical value calculation configuration by replacing the device to be evaluated in the performance calculation configuration with a simulator according to the test scenario 180, performs test evaluation, and creates test result data 615.
  • the test scenario 180 specifies the DU 220 as the element to be evaluated.
  • the DU 220 in the performance calculation configuration is replaced by the DU simulator 225.
  • the replacement of the DU 220 by the DU simulator 225 can be automatically performed by software without any human intervention.
  • the management device 100 uses the measuring device 400 to collect data on the operation of each device in the performance calculation configuration and the theoretical value calculation configuration, analyzes the data, and stores it as test result data 610 and test result data 615, respectively.
  • the management device 100 further creates a difference profile 650 from the difference between the test result data 610 and the test result data 615.
  • the management device 100 determines the normality of the operation caused by incorporating the DU 220 into the RAN 200 based on the difference profile. This determination result is called quality evaluation for the RAN 200.
  • the quality evaluation is sent from the management device 100 to the ticket management system 700. Thereby, the ticket management system 700 can write the quality evaluation for the RAN 200 in the ticket. Tickets will be discussed later.
  • a difference profile can be automatically obtained based on the given test scenario 180.
  • the performance calculation configuration and the theoretical value calculation configuration are provided on the virtualization platform (test machine), and no human intervention is required from configuring the measuring devices for each configuration until the test result data is obtained.
  • FIG. 3 is a schematic diagram illustrating a configuration example of the management device 100 for performing RAN test evaluation according to the embodiment.
  • Management device 100 includes a transmitting/receiving section 110 and a processing section 120.
  • the management device 100 may further include a configuration not shown in FIG. 3.
  • the transmitting/receiving unit 110 transmits and receives data to and from the CU 210, DU 220, RU 230, DU simulator 225, measuring instrument 400, and ticket management system (bug tracking system) 700 in FIG.
  • the transmitting/receiving unit 110 may also be configured to transmit and receive data with the UE simulator 305 or the CN simulator 505.
  • the processing unit 120 includes a processor 122 and a memory 124. Note that there may be one or more of each.
  • the processing unit 120 may further include storage 126.
  • the processing unit 120 operates the transmitting/receiving unit 110, and can perform data processing as the management device 100 using the processor 122 and memory 124.
  • a test scenario 180 may be stored in the storage 126.
  • the processor 122 of the processing unit 120 can execute operations in the management device 100 in order to perform the RAN test evaluation described with reference to FIG. 2.
  • FIG. 4 shows an example of application of the difference (difference profile 650) of test result data (see reference numerals 610 and 615 in FIG. 2) to the actual value calculation configuration and the theoretical value calculation configuration.
  • the difference profile 650 is associated with a bug tracking system (BTS) ticket 660.
  • ticket 660 may include a quality assessment of a new release of DU 220.
  • the ticket 660 may be a release process ticket that includes a quality evaluation regarding the software version upgrade related to the DU 220. New release tickets and release process tickets can be used in ticket-driven development (TiDD).
  • FIG. 4 shows an example of the contents written on the ticket.
  • Information regarding automatic generation (Production Automation) of the RAN is written in the ticket.
  • the information may include some of the following examples.
  • “1vCU/4DU/3ORU” indicates that the RAN consists of one vCU, four DUs, and three ORUs (RUs with O-RAN specifications).
  • O-RAN certification limitations (O-RAN Certification Limited). For example, restrictions such as supporting only IPv6.
  • the type of virtualized COTS (Commercial Off The Shelf) hardware. For example, information about FPGA support.
  • the ticket contains information about the monitoring system.
  • Information about the monitoring system may include definitions of monitoring conditions.
  • the definition of the monitoring condition includes, for example, a threshold value indicating a permissible value for the measurement data, a monitoring number indicating the number of data items to be monitored, or information regarding monitoring items.
  • FIG. 5 is a sequence diagram for test evaluation when upgrading the DU software by the test evaluation system including the management device 100 and the ticket management system 700 according to the embodiment.
  • a ticket management system (bug tracking system) 700 allocates tickets to version upgrades of DUs and manages bug handling and the like.
  • the ticket management system 700 issues a ticket and sends the generated test scenario (see reference numeral 180 in FIG. 2) to the management device 100 when new software for DU is released.
  • the management device 100 instructs the virtualization infrastructure to generate a performance calculation configuration (simply referred to as "configuration A") based on the test scenario.
  • configuration A includes a RAN 200 having a CU 210, an upgraded DU 220, and an RU 230.
  • a measuring instrument 400 (simply referred to as “measuring instrument #A") is set for the performance calculation configuration.
  • Measuring device #A acquires measurement data in configuration A and sends it to management device 100.
  • the management device 100 creates test result data (see reference numeral 610 in FIG. 2) from the data.
  • the management device 100 instructs the virtualization infrastructure to generate a theoretical value calculation configuration (simply referred to as "configuration B") based on the test scenario.
  • Configuration B includes a RAN 205 with a CU 210, a DU simulator 225, and an RU 230, as shown in FIG.
  • the management device 100 sets the measuring device 400 (simply referred to as “measuring device #B”) for the theoretical value calculation configuration.
  • the DU of configuration A can be automatically replaced by the DU simulator of configuration B using software.
  • Measuring device #B acquires data in configuration B and sends it to management device 100.
  • the management device 100 creates test result data (see reference numeral 615 in FIG. 2).
  • the management device 100 further creates a difference profile (see 650 in FIG. 2) from the test result data for configuration A (see 610 in FIG. 2) and the test result data for configuration B (see 615 in FIG. 2).
  • the management device 100 determines the normality of the operation associated with the version upgrade of the DU based on the difference profile, and sends the determination result (quality evaluation) to the ticket management system 700.
  • the ticket management system 700 writes the sent determination result in the ticket.
  • the ticket management system 700 may learn the relationship between the software version of the DU and the number of bugs based on the contents of the ticket.
  • the ticket management system 700 may include one or more processors that execute: including the submitted quality evaluation in a release process ticket; and learning the relationship between the software version and the number of bugs for the one device using the release process ticket.
  • a test evaluation method 1000 according to the embodiment will be described with reference to FIG. 6. This method includes the following steps.
  • a first configuration for calculating performance data is provided (1010).
  • An example of the first configuration is the performance calculation configuration shown in FIG. 2.
  • a first measuring device for the first configuration is configured (1020).
  • An example of the first measuring device is the measuring device 400 for the performance calculation configuration shown in FIG. 2.
  • the first measuring device obtains first data for the first configuration (1030).
  • An example of the first data is the test result data 610 in FIG. 2.
  • a second configuration is provided in which one device in the first configuration is replaced with a simulator (1040).
  • An example of the second configuration is the theoretical value calculation configuration in which the DU 220 in the performance calculation configuration in FIG. 2 is replaced with the DU simulator 225.
  • a second measuring device for the second configuration is configured (1050).
  • An example of the second measuring device is the measuring device 400 for the theoretical value calculation configuration shown in FIG. 2.
  • the second measuring device obtains second data for the second configuration (1060).
  • An example of the second data is the test result data 615 in FIG. 2.
  • a difference profile between the first data and the second data is obtained (1070).
  • An example of the difference profile is the difference profile 650 in FIG. 2.
  • the above steps (1010 to 1070) are not limited to execution in the above order as long as a difference profile between the first data and the second data can be obtained.
  • for example, the second data may be obtained using the second configuration and the second measuring device before the first data is obtained using the first configuration and the first measuring device.
  • test evaluation method 1000 may further include the following steps.
  • Quality evaluation of a first configuration corresponding to a software version for one device is performed according to the difference profile (1080).
  • an example of the quality evaluation of the first configuration is the number of monitored items that are not recognized as normal operation (that is, bugs) according to the thresholds giving the permissible values for the measured data in the monitoring conditions of FIG. 4. An example of a software version for one device is a DU software version.
  • a quality evaluation of the first configuration is included in the release process ticket (1090). For example, the number of bugs for each release version may be written on a ticket in a bug tracking system (BTS).
  • BTS bug tracking system
  • the relationship between the version and the number of bugs is learned (1100). For example, the number of bugs for each release version written in tickets may be aggregated, and the relationship between the release version and the number of bugs may be learned by machine learning.
  • the present disclosure also includes a program for causing a system to execute the above-described test evaluation method 1000.
  • the program may be provided recorded on a computer-readable non-transitory storage medium.
  • the performance calculation configuration and the theoretical value calculation configuration are provided on the virtualization platform, and the test result data is obtained by setting measuring instruments for each of the performance calculation configuration and the theoretical value calculation configuration. There is no need for human intervention.
  • connection means a logical connection for communication.
  • RU connected to vDU means that the vDU and RU are logically connected so that they can communicate.
  • the vDU and RU do not necessarily have to be physically directly connected by a physical cable or the like, and a plurality of devices or wireless communication may be interposed between the vDU and RU.
  • [1] a management device comprising a processor that executes: configuring a first measuring device for a first configuration on a virtualization infrastructure; obtaining first data for the first configuration with the first measuring device; providing a second configuration in which one device in the first configuration is replaced with a simulator; configuring a second measuring device for the second configuration; obtaining second data for the second configuration with the second measuring device; and obtaining a difference profile between the first data and the second data.
  • [2] the management device according to [1], wherein the processor further executes: evaluating the quality of the first configuration according to the difference profile; and sending the quality evaluation of the first configuration to a ticket management system.
  • a test evaluation system comprising: a management device comprising a processor that executes: configuring a first measuring device for a first configuration on a virtualization infrastructure; obtaining first data for the first configuration with the first measuring device; providing a second configuration in which one device in the first configuration is replaced with a simulator; configuring a second measuring device for the second configuration; obtaining second data for the second configuration with the second measuring device; obtaining a difference profile between the first data and the second data; evaluating the quality of the first configuration corresponding to a software version for the one device according to the difference profile; and sending the quality evaluation of the first configuration to a ticket management system; and a ticket management system comprising a processor that executes: including the sent quality evaluation of the first configuration in a release process ticket; and learning the relationship between the software version and the number of bugs for the one device using the release process ticket.
  • [5] a test evaluation method including: providing a first configuration for calculating performance data on a virtualization platform; configuring a first measuring device for the first configuration; obtaining first data for the first configuration with the first measuring device; providing a second configuration in which one device in the first configuration is replaced with a simulator; configuring a second measuring device for the second configuration; obtaining second data for the second configuration with the second measuring device; and obtaining a difference profile between the first data and the second data.
  • [6] the test evaluation method according to [5], further including: evaluating the quality of the first configuration corresponding to a software version for the one device according to the difference profile; including the quality evaluation of the first configuration in a release process ticket; and learning the relationship between the version and the number of bugs based on the contents of the release process ticket.
  • 1 wireless communication system, 100 management device, 110 transmitting/receiving unit, 120 processing unit, 122 processor, 124 memory, 126 storage, 180 test scenario, 200, 205 radio access network (RAN), 210 CU, 220 DU, 225 DU simulator, 230 RU, 300 user equipment (UE), 305 UE simulator, 400 measuring device, 410 backhaul capture, 420 midhaul capture, 430 fronthaul capture, 440 radio wave capture, 500 core network (CN), 505 CN simulator, 610, 615 test result data, 650 difference profile, 660 ticket, 700 ticket management system, 1000 test evaluation method

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)
  • Stored Programmes (AREA)

Abstract

A test evaluation method according to the present disclosure includes: providing, on a virtualization platform, a first configuration for calculating actual performance data; configuring a first measuring device for the first configuration; obtaining first data for the first configuration with the first measuring device; providing a second configuration in which one device in the first configuration has been replaced by a simulator; configuring a second measuring device for the second configuration; obtaining second data for the second configuration with the second measuring device; and obtaining a difference profile between the first data and the second data. The method may also include: evaluating the quality of the first configuration corresponding to a software version for the one device, in accordance with the difference profile; including the quality evaluation of the first configuration in a release process ticket; and learning the relationship between the version and the number of bugs on the basis of the contents of the release process ticket.

Description

Automatic profile generation on a test machine
The present disclosure relates to generating profiles for testing and evaluating radio access networks on a virtualization infrastructure.
In fifth-generation mobile systems (5G systems), virtualization of the core network (CN) is progressing, and attention is now turning to virtualization of the radio access network (RAN).

The functions of the RAN's Central Unit (CU) and Distributed Unit (DU) can be virtualized into a virtualized CU (vCU) and a virtualized DU (vDU), respectively.
Furthermore, with the openness brought by O-RAN compliance, multi-vendor deployment is progressing, in which the RAN's CUs, DUs, and Radio Units (RUs) are provided by multiple different vendors.
Meanwhile, for large-scale systems, including multi-vendor systems, there is a methodology called ticket-driven development (TiDD). TiDD is a development style in which work is divided into tasks that are assigned to and managed as tickets in a bug tracking system (BTS). TiDD follows the principle that all work and issues in software development are started only after filing a ticket.

A BTS is a system for registering project bugs and tracking their fix status. More specifically, a ticket is issued when a bug is registered, and the fix status is tracked using the ticket. Redmine (Non-Patent Document 1), for example, is known as a BTS tool.
Multi-vendor deployment, in which RUs, DUs, and CUs are provided by multiple different vendors, is making RAN testing and evaluation more complex. It is also making it more difficult to identify where quality degradation occurs during software updates.
The present disclosure was devised to solve at least one of these problems of the prior art, and provides a method for creating a profile for testing and evaluating a radio access network on a virtualization infrastructure (test machine).
A management device according to the present disclosure includes a processor that executes: configuring a first measuring device for a first configuration on a virtualization platform; obtaining first data for the first configuration with the first measuring device; providing a second configuration in which one device in the first configuration is replaced with a simulator; configuring a second measuring device for the second configuration; obtaining second data for the second configuration with the second measuring device; and obtaining a difference profile between the first data and the second data.
A test evaluation system according to the present disclosure includes the management device and a ticket management system. The ticket management system includes a processor that executes: including the sent quality evaluation of the first configuration in a release process ticket; and learning the relationship between the software version and the number of bugs for the one device using the release process ticket.
A test evaluation method according to the present disclosure includes: providing, on a virtualization platform, a first configuration for calculating actual performance data; configuring a first measuring device for the first configuration; obtaining first data for the first configuration with the first measuring device; providing a second configuration in which one device in the first configuration is replaced with a simulator; configuring a second measuring device for the second configuration; obtaining second data for the second configuration with the second measuring device; and obtaining a difference profile between the first data and the second data.
FIG. 1 is a diagram illustrating an example of a wireless communication system to which a method or a management device according to the present disclosure is applied.
FIG. 2 is a diagram illustrating an example of the test evaluation operation performed by the management device according to the embodiment.
FIG. 3 is a schematic diagram showing an example of the management device according to the embodiment.
FIG. 4 is a diagram illustrating an application example of the difference profile according to the embodiment.
FIG. 5 is a sequence diagram showing an example of test evaluation according to the embodiment.
FIG. 6 is a flowchart illustrating an example of the test evaluation method according to the embodiment.
Hereinafter, an embodiment of the present disclosure will be described in detail with reference to the drawings.
FIG. 1 shows an example of a wireless communication system to which a method or a management device according to the present disclosure is applied. The wireless communication system 1 in FIG. 1 includes a radio access network (RAN) 200 and a core network (CN) 500.

The RAN 200 includes a Central Unit (CU) 210, a Distributed Unit (DU) 220, and a Radio Unit (RU) 230, and implements the functions of a base station. In the RAN 200, the RU 230 can communicate with user equipment (UE) 300, and the CU 210 is connected to the CN 500.

The functions of the DU 220 and the CU 210 are virtualized on a virtualization infrastructure and are constructed as a virtualized DU (vDU) 220 and a virtualized CU (vCU) 210, respectively.

In the following description, unless the distinction is necessary, DU and vDU may be written interchangeably, as may CU and vCU.
Furthermore, connections between devices (CU, DU, or RU elements) in the RAN 200 and connections with the CN 500 are assumed to follow specifications standardized by the O-RAN Alliance (O-RAN) or the like. The link between the RU 230 and the vDU 220 is called the fronthaul, the link between the vDU 220 and the vCU 210 is called the midhaul, and the link between the vCU 210 and the CN 500 is called the backhaul.

Because the connections between devices are standardized, the RU 230, vDU 220, and vCU 210 in the RAN 200 of FIG. 1 can be provided by multiple vendors (multi-vendor deployment).

The management device 100 connects to the vCU 210, vDU 220, and RU 230 and can control them. The management device 100 may be an orchestrator (in particular, an E2E orchestrator).
For a multi-vendor RAN 200 as in FIG. 1, testing and evaluating the RAN is complicated because devices provided by different vendors are interconnected.

Multi-vendor deployment also makes it more difficult to identify where quality degradation occurs when the software of each virtualized device is updated.
An example of the operation of the management device 100 for executing RAN test evaluation according to the embodiment will be described with reference to FIG. 2. First, the RAN 200 that is the target of the test evaluation (see the upper part of FIG. 2) will be described.

In the upper part of FIG. 2, the RAN 200 to be evaluated (hereinafter, the "first configuration" or "performance calculation configuration") includes the CU 210, DU 220, and RU 230 under evaluation. A UE simulator 305 capable of transmitting and receiving test data is used as the UE, and a CN simulator (core simulator) 505 capable of transmitting and receiving test data is used as the CN.
For the test evaluation, a measuring device 400 (also called the "first measuring device") is used to collect signals and the like within the RAN 200. The measuring device 400 includes a radio wave capture 440, a fronthaul capture 430, a midhaul capture 420, and a backhaul capture 410. The radio wave capture 440 captures radio waves in order to analyze the quality of the radio waves from the RU 230 to the UE simulator 305. The fronthaul capture 430 collects signals between the RU 230 and the DU 220. The midhaul capture 420 collects signals between the DU 220 and the CU 210. The backhaul capture 410 collects signals between the CU 210 and the CN simulator 505.

For the RAN 200, the management device 100 analyzes the signals and radio waves collected by the measuring device 400 and produces test result data 610 (also called the "first data"). The test result data for the performance calculation configuration is compared with the test result data for a configuration whose behavior has been confirmed in advance (the "second configuration" or "theoretical value calculation configuration").
The RAN 205 in the lower part of FIG. 2 is a configuration for calculating theoretical values for the test evaluation of the DU 220 (the theoretical value calculation configuration).

In the RAN 205, the DU 220 under evaluation in the RAN 200 of the upper part of FIG. 2 is replaced with a DU simulator 225 whose behavior has been confirmed in advance. The CU 210, RU 230, UE simulator 305, and CN simulator 505 are the same as in the RAN 200, so their descriptions are omitted.

The same measuring device 400 as in the RAN 200 (called the "second measuring device") can also be used to collect signals and the like for test evaluation in the RAN 205.
For the RAN 205, the signals and radio waves collected by the measuring device 400 are analyzed to produce test result data 615 (also called the "second data").

The RAN 205 (the theoretical value calculation configuration, or second configuration) and the RAN 200 (the performance calculation configuration, or first configuration) share the CU 210, RU 230, UE simulator 305, and CN simulator 505. The one device that differs, the DU 220 of the RAN 200, is replaced in the RAN 205 with a simulator whose behavior has been confirmed in advance (the DU simulator 225).

Therefore, by taking the difference between the test result data 610 for the RAN 200 and the test result data 615 for the RAN 205, the influence of the DU 220 can be calculated.
The management device 100 performs the test evaluation of the performance calculation configuration according to a test scenario 180 and creates the test result data 610. The test scenario 180 may be an electronic file, prepared in advance and readable by the management device 100, that specifies the content of the test evaluation. For example, the test scenario 180 may specify the device to be tested and evaluated (the DU in the example of FIG. 2). In particular, the test scenario 180 may be supplied by the ticket management system 700.
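The disclosure does not fix a concrete format for the test scenario 180; as one possible reading, it could look like the following sketch, in which every field name is an assumption made for illustration.

```python
# Hypothetical contents of a test scenario 180. The only requirements
# stated in the disclosure are that the file be readable by the
# management device 100 and identify the device under evaluation;
# all field names below are assumptions.
test_scenario_180 = {
    "target_device": "DU",            # device to replace with a simulator
    "target_version": "du-sw-2.1.0",  # hypothetical software version under test
    "deployment": "1vCU/4DU/3ORU",    # RAN deployment configuration
    "captures": ["radio", "fronthaul", "midhaul", "backhaul"],
    "monitoring": {                   # permissible values for measured data
        "dl_throughput_mbps": {"min": 900.0},
        "ul_latency_ms": {"max": 10.0},
    },
}
```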
According to the test scenario 180, the management device 100 constructs the theoretical value calculation configuration by replacing the device under evaluation in the performance calculation configuration with a simulator, performs the test evaluation, and creates the test result data 615.
In the example of FIG. 2, the test scenario 180 designates the DU 220 as the element under evaluation. In this case, the DU 220 of the performance calculation configuration is replaced with the DU simulator 225.

In particular, because the DU in the performance calculation configuration is virtualized on the virtualization infrastructure, replacing the DU 220 with the DU simulator 225 can be done automatically in software, without human intervention.

Using the measuring device 400, the management device 100 collects data on the operation of each device in the performance calculation configuration and the theoretical value calculation configuration, analyzes the data, and stores it as the test result data 610 and the test result data 615, respectively.
The management device 100 further creates a difference profile 650 from the difference between the test result data 610 and the test result data 615. Based on the difference profile, the management device 100 determines, among other things, whether the RAN 200 operates normally with the DU 220 incorporated. This determination result is called the quality evaluation of the RAN 200. The quality evaluation is sent from the management device 100 to the ticket management system 700.

The ticket management system 700 can thereby record the quality evaluation of the RAN 200 in a ticket. Tickets are discussed later.
With the management device 100, a difference profile can thus be obtained automatically from a given test scenario 180. That is, no human intervention is required from providing the performance calculation configuration and the theoretical value calculation configuration on the virtualization platform (test machine), through configuring the measuring devices for each configuration, to obtaining the test result data.
FIG. 3 is a schematic diagram illustrating a configuration example of the management device 100 for performing RAN test evaluation according to the embodiment. The management device 100 includes a transmitting/receiving unit 110 and a processing unit 120. The management device 100 may further include components not shown in FIG. 3.

The transmitting/receiving unit 110 transmits and receives data to and from the CU 210, DU 220, RU 230, DU simulator 225, measuring device 400, and ticket management system (bug tracking system) 700 of FIG. 2. The transmitting/receiving unit 110 may also be configured to exchange data with the UE simulator 305 or the CN simulator 505.
The processing unit 120 includes a processor 122 and a memory 124; there may be one or more of each. The processing unit 120 may further include storage 126. The processing unit 120 operates the transmitting/receiving unit 110 and can perform the data processing of the management device 100 using the processor 122 and the memory 124. The test scenario 180 may be stored in the storage 126.
The processor 122 of the processing unit 120 can execute the operations of the management device 100 in order to perform the RAN test evaluation described with reference to FIG. 2.
FIG. 4 shows an application example of the difference (the difference profile 650) between the test result data for the performance calculation configuration and the theoretical value calculation configuration (see reference numerals 610 and 615 in FIG. 2).

In FIG. 4, the difference profile 650 is associated with a ticket 660 of the bug tracking system (BTS). In particular, the ticket 660 can be a ticket containing the quality evaluation of a new release of the DU 220. Alternatively, the ticket 660 can be a release process ticket containing the quality evaluation of a software version upgrade of the DU 220. New release tickets and release process tickets can be used in ticket-driven development (TiDD).
FIG. 4 also shows an example of the contents written in a ticket.

A ticket records information about the automatic generation (production automation) of the RAN. This information may include some of the following:

(1) The RAN deployment configuration. For example, "1vCU/4DU/3ORU" indicates that the RAN consists of one vCU, four DUs, and three ORUs (O-RAN-compliant RUs).

(2) RAN thresholds. For example, information on beam IDs or the number of MIMO (Multiple Input Multiple Output) layers.

(3) O-RAN certification limitations (O-RAN Certification Limited). For example, restrictions such as supporting only IPv6.

(4) The type of virtualized COTS (Commercial Off The Shelf) hardware. For example, information about FPGA support.
A ticket also records information about the monitoring system, which may include definitions of monitoring conditions. A monitoring condition definition includes, for example, thresholds giving the permissible values for the measured data, a monitoring count giving the number of data items to be monitored, or information on the monitored items.
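To make the ticket contents concrete, the following is a minimal sketch of a ticket 660 carrying the kinds of information described above; every field name is an assumption, since the disclosure only enumerates the categories of information.

```python
# Hypothetical BTS ticket 660 modeled on the FIG. 4 description.
# All field names are illustrative assumptions.
ticket_660 = {
    "id": "RELEASE-PROCESS-0001",
    "production_automation": {
        "deployment": "1vCU/4DU/3ORU",                 # (1) deployment configuration
        "ran_thresholds": {"beam_ids": 64,             # (2) RAN thresholds
                           "mimo_layers": 4},
        "oran_certification_limited": ["IPv6 only"],   # (3) certification limitations
        "cots_type": {"fpga_support": True},           # (4) virtualized COTS type
    },
    "monitoring": {                                    # monitoring condition definitions
        "thresholds": {"dl_throughput_mbps": {"min": 900.0},
                       "ul_latency_ms": {"max": 10.0}},
        "monitoring_count": 2,                         # number of monitored items
    },
    "quality_evaluation": None,  # later filled in by the management device 100
}
```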
FIG. 5 is a sequence diagram of the test evaluation performed when the DU software is upgraded, by the test evaluation system that includes the management device 100 and the ticket management system 700 according to the embodiment. The ticket management system (bug tracking system) 700 assigns a ticket to each DU version upgrade and manages bug handling and the like.

When new DU software is released, the ticket management system 700 issues a ticket and sends the generated test scenario (see reference numeral 180 in FIG. 2) to the management device 100.
Based on the test scenario, the management device 100 instructs the virtualization infrastructure to generate the performance calculation configuration (simply, "configuration A"). As shown in FIG. 2, configuration A includes the RAN 200 with the CU 210, the upgraded DU 220, and the RU 230. A measuring device 400 (simply, "measuring device #A") is configured for the performance calculation configuration.

Measuring device #A acquires measurement data for configuration A and sends it to the management device 100, which creates test result data from it (see reference numeral 610 in FIG. 2).
Based on the test scenario, the management device 100 instructs the virtualization infrastructure to generate the theoretical value calculation configuration (simply, "configuration B"). As shown in FIG. 2, configuration B includes the RAN 205 with the CU 210, the DU simulator 225, and the RU 230. The management device 100 also configures a measuring device 400 (simply, "measuring device #B") for the theoretical value calculation configuration.

As described above, because the DU is virtualized on the virtualization infrastructure, the DU of configuration A can be replaced with the DU simulator of configuration B automatically, in software.

Measuring device #B acquires data for configuration B and sends it to the management device 100, which creates test result data from it (see reference numeral 615 in FIG. 2).

Although the sequence of FIG. 5 has been described in the order of generating configuration A, configuring measuring device #A and acquiring its data, then generating configuration B, configuring measuring device #B and acquiring its data, the steps need not be executed in this order. For example, the configuration B steps may be executed before the configuration A steps.
The management device 100 then creates the difference profile (see 650 in FIG. 2) from the test result data for configuration A (see 610 in FIG. 2) and the test result data for configuration B (see 615 in FIG. 2). Based on the difference profile, the management device 100 determines, among other things, whether operation is normal after the DU version upgrade, and sends the determination result (the quality evaluation) to the ticket management system 700. The ticket management system 700 records the received determination result in the ticket. A sketch of this end-to-end sequence appears below.
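As a compact, self-contained reading of the FIG. 5 sequence from the management device's point of view, the following sketch stubs out the interactions with the virtualization infrastructure and the measuring devices; every function name and value here is an assumption, not the patented implementation.

```python
# Self-contained sketch of the FIG. 5 sequence. The helpers stand in
# for the virtualization infrastructure and the measuring devices.

def provision(scenario: dict, use_simulator: bool) -> str:
    # Ask the virtualization infrastructure for configuration A or B.
    return "B" if use_simulator else "A"

def acquire(configuration: str) -> dict[str, float]:
    # Stand-in for configuring a measuring device and acquiring its data.
    samples = {"A": {"dl_throughput_mbps": 912.0},   # measuring device #A
               "B": {"dl_throughput_mbps": 950.0}}   # measuring device #B
    return samples[configuration]

def run_test_evaluation(scenario: dict) -> dict[str, float]:
    data_610 = acquire(provision(scenario, use_simulator=False))  # first data
    data_615 = acquire(provision(scenario, use_simulator=True))   # second data
    # Difference profile 650: per-metric difference of the two data sets.
    return {k: data_610[k] - data_615[k]
            for k in data_610.keys() & data_615.keys()}

print(run_test_evaluation({"target_device": "DU"}))
```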
The ticket management system 700 may learn the relationship between the DU software version and the number of bugs based on the contents of the tickets.

The ticket management system 700 may include one or more processors that execute: including the received quality evaluation in a release process ticket; and learning the relationship between the software version and the number of bugs for one device using the release process tickets.
A test evaluation method 1000 according to the embodiment will be described with reference to FIG. 6. The method includes the following steps.

A first configuration for calculating actual performance data is provided (1010). An example of the first configuration is the performance calculation configuration of FIG. 2.

A first measuring device is configured for the first configuration (1020). An example of the first measuring device is the measuring device 400 for the performance calculation configuration of FIG. 2.

First data for the first configuration is obtained with the first measuring device (1030). An example of the first data is the test result data 610 of FIG. 2.
A second configuration is provided in which one device in the first configuration is replaced with a simulator (1040). An example of the second configuration is the theoretical value calculation configuration in which the DU 220 of the performance calculation configuration of FIG. 2 is replaced with the DU simulator 225.

A second measuring device is configured for the second configuration (1050). An example of the second measuring device is the measuring device 400 for the theoretical value calculation configuration of FIG. 2.

Second data for the second configuration is obtained with the second measuring device (1060). An example of the second data is the test result data 615 of FIG. 2.
A difference profile between the first data and the second data is obtained (1070). An example of the difference profile is the difference profile 650 of FIG. 2.

The above steps (1010 to 1070) need not be executed in this order, as long as the difference profile between the first data and the second data can be obtained. For example, the second data may be obtained with the second configuration and the second measuring device before the first data is obtained with the first configuration and the first measuring device.
The test evaluation method 1000 may further include the following steps.

The quality of the first configuration, corresponding to a software version for one device, is evaluated according to the difference profile (1080). As an example, the quality evaluation of the first configuration can be the number of monitored items that are not recognized as normal operation (that is, bugs) according to the thresholds giving the permissible values for the measured data in the monitoring conditions of FIG. 4. An example of a software version for one device is a DU software version. A sketch of this counting appears below.
A quality evaluation of the first configuration is included in the release process ticket (1090). For example, the number of bugs for each release version may be written on a ticket in a bug tracking system (BTS).
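 Purely as an illustration of steps (1080) and (1090), the sketch below counts monitored items that fall outside their permissible thresholds and records the count in a ticket-like record. The item names, threshold values, and ticket fields are all assumptions, not values taken from FIG. 4.

```python
# Hypothetical bug counting for step (1080): a monitored item whose measured
# value falls outside its permissible range (cf. the monitoring conditions of
# FIG. 4) is counted as a bug. Item names and thresholds are assumptions.

thresholds = {"throughput_mbps": (900.0, None),   # (lower bound, upper bound)
              "latency_ms": (None, 5.0)}
measured = {"throughput_mbps": 870.0, "latency_ms": 4.2}

def count_bugs(measured: dict, thresholds: dict) -> int:
    bugs = 0
    for item, (low, high) in thresholds.items():
        value = measured[item]
        if (low is not None and value < low) or (high is not None and value > high):
            bugs += 1   # not recognized as normal operation
    return bugs

# Step (1090): record the count per release version in a BTS ticket-like record.
ticket = {"release_version": "du-2.3.1", "bug_count": count_bugs(measured, thresholds)}
```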
 The relationship between the version and the number of bugs is learned based on the contents of the release process ticket (1100). For example, the number of bugs recorded in the tickets for each release version may be aggregated, and the relationship between release version and bug count may be learned by machine learning.
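 The disclosure leaves the learning method open. As one assumed possibility, bug counts could be aggregated per release version from the tickets and a least-squares trend fitted over them; the ticket fields and version numbering below are hypothetical.

```python
# Hypothetical sketch of step (1100): aggregate bug counts per release version
# from release process tickets and fit a simple least-squares trend line.
from collections import defaultdict

tickets = [  # ticket fields and values are illustrative assumptions
    {"release_version": 1, "bug_count": 9},
    {"release_version": 2, "bug_count": 7},
    {"release_version": 2, "bug_count": 1},
    {"release_version": 3, "bug_count": 4},
]

totals = defaultdict(int)
for t in tickets:
    totals[t["release_version"]] += t["bug_count"]   # aggregate per version

xs = sorted(totals)                       # release versions
ys = [totals[v] for v in xs]              # total bugs per version
mean_x = sum(xs) / len(xs)
mean_y = sum(ys) / len(ys)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def predicted_bugs(version: int) -> float:
    return slope * version + intercept    # expected bug count for a version

print(predicted_bugs(4))                  # trend estimate for the next release
```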
 The present disclosure also includes a program for causing a system to execute the test evaluation method 1000 described above. The program may be provided recorded on a computer-readable, non-transitory storage medium.
 As described above, according to the present disclosure, the performance calculation configuration and the theoretical value calculation configuration are provided on the virtualization platform, and no human intervention is required from setting up the measuring devices for each configuration through obtaining the test result data.
 The present disclosure is not limited to the embodiments described above, and also includes various modifications in which components are added to, deleted from, or substituted in the configurations described above. The embodiments can also be combined in various ways.
 Note that the term "connection" used in this description means a logical connection for communication. For example, "an RU connected to a vDU" means that the vDU and the RU are logically connected so that they can communicate. The vDU and the RU do not necessarily have to be physically connected directly by a cable or the like, and a plurality of devices or wireless links may be interposed between them.
 The present disclosure further includes the following aspects.
[1] A management device comprising a processor that executes:
  configuring a first measuring device for a first configuration on a virtualization platform;
  obtaining first data for the first configuration with the first measuring device;
  providing a second configuration in which one device in the first configuration is replaced with a simulator;
  configuring a second measuring device for the second configuration;
  obtaining second data for the second configuration with the second measuring device; and
  obtaining a difference profile between the first data and the second data.
[2] The management device according to [1], wherein the processor further executes:
  evaluating the quality of the first configuration according to the difference profile; and
  sending the quality evaluation of the first configuration to a ticket management system.
[3] The management device according to [2], wherein the quality evaluation of the first configuration corresponds to a software version for the one device.
[4] A test evaluation system including:
 a management device comprising a processor that executes:
  configuring a first measuring device for a first configuration on a virtualization platform;
  obtaining first data for the first configuration with the first measuring device;
  providing a second configuration in which one device in the first configuration is replaced with a simulator;
  configuring a second measuring device for the second configuration;
  obtaining second data for the second configuration with the second measuring device;
  obtaining a difference profile between the first data and the second data;
  evaluating, according to the difference profile, the quality of the first configuration corresponding to a software version for the one device; and
  sending the quality evaluation of the first configuration to a ticket management system; and
 the ticket management system comprising a processor that executes:
  including the sent quality evaluation of the first configuration in a release process ticket; and
  learning, using the release process ticket, the relationship between the software version and the number of bugs for the one device.
[5] A test evaluation method including:
  providing, on a virtualization platform, a first configuration for calculating performance data;
  configuring a first measuring device for the first configuration;
  obtaining first data for the first configuration with the first measuring device;
  providing a second configuration in which one device in the first configuration is replaced with a simulator;
  configuring a second measuring device for the second configuration;
  obtaining second data for the second configuration with the second measuring device; and
  obtaining a difference profile between the first data and the second data.
[6] The test evaluation method according to [5], further including:
  evaluating, according to the difference profile, the quality of the first configuration corresponding to a software version for the one device;
  including the quality evaluation of the first configuration in a release process ticket; and
  learning the relationship between the version and the number of bugs based on the contents of the release process ticket.
1    Wireless communication system
100  Management device
110  Transmitting/receiving unit
120  Processing unit
122  Processor
124  Memory
126  Storage
180  Test scenario
200, 205  Radio access network (RAN)
210  CU
220  DU
225  DU simulator
230  RU
300  User terminal (UE)
305  UE simulator
400  Measuring device
410  Backhaul capture
420  Midhaul capture
430  Fronthaul capture
440  Radio wave capture
500  Core network (CN)
505  CN simulator
610, 615  Test result data
650  Difference profile
660  Ticket
700  Ticket management system
1000 Test evaluation method

Claims (6)

  1.  A management device comprising a processor that executes:
      configuring a first measuring device for a first configuration on a virtualization platform;
      obtaining first data for the first configuration with the first measuring device;
      providing a second configuration in which one device in the first configuration is replaced with a simulator;
      configuring a second measuring device for the second configuration;
      obtaining second data for the second configuration with the second measuring device; and
      obtaining a difference profile between the first data and the second data.
  2.  The management device according to claim 1, wherein the processor further executes:
      evaluating the quality of the first configuration according to the difference profile; and
      sending the quality evaluation of the first configuration to a ticket management system.
  3.  The management device according to claim 2, wherein the quality evaluation of the first configuration corresponds to a software version for the one device.
  4.  A test evaluation system including:
      a management device comprising a processor that executes:
        configuring a first measuring device for a first configuration on a virtualization platform;
        obtaining first data for the first configuration with the first measuring device;
        providing a second configuration in which one device in the first configuration is replaced with a simulator;
        configuring a second measuring device for the second configuration;
        obtaining second data for the second configuration with the second measuring device;
        obtaining a difference profile between the first data and the second data;
        evaluating, according to the difference profile, the quality of the first configuration corresponding to a software version for the one device; and
        sending the quality evaluation of the first configuration to a ticket management system; and
      the ticket management system comprising a processor that executes:
        including the sent quality evaluation of the first configuration in a release process ticket; and
        learning, using the release process ticket, the relationship between the software version and the number of bugs for the one device.
  5.  A test evaluation method including:
      providing, on a virtualization platform, a first configuration for calculating performance data;
      configuring a first measuring device for the first configuration;
      obtaining first data for the first configuration with the first measuring device;
      providing a second configuration in which one device in the first configuration is replaced with a simulator;
      configuring a second measuring device for the second configuration;
      obtaining second data for the second configuration with the second measuring device; and
      obtaining a difference profile between the first data and the second data.
  6.  The test evaluation method according to claim 5, further including:
      evaluating, according to the difference profile, the quality of the first configuration corresponding to a software version for the one device;
      including the quality evaluation of the first configuration in a release process ticket; and
      learning the relationship between the version and the number of bugs based on the contents of the release process ticket.
PCT/JP2022/026386 2022-06-30 2022-06-30 Automatic profile generation on test machine WO2024004174A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/026386 WO2024004174A1 (en) 2022-06-30 2022-06-30 Automatic profile generation on test machine

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/026386 WO2024004174A1 (en) 2022-06-30 2022-06-30 Automatic profile generation on test machine

Publications (1)

Publication Number Publication Date
WO2024004174A1 true WO2024004174A1 (en) 2024-01-04

Family

ID=89382550

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/026386 WO2024004174A1 (en) 2022-06-30 2022-06-30 Automatic profile generation on test machine

Country Status (1)

Country Link
WO (1) WO2024004174A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000322283A (en) * 1999-05-06 2000-11-24 Fujitsu Ltd Fault detecting method of electronic computer
JP2005020420A (en) * 2003-06-26 2005-01-20 Sumitomo Electric Ind Ltd Passive optical network testing apparatus
JP2005149043A (en) * 2003-11-14 2005-06-09 Nec Corp Transaction processing system and method, and program
JP2017135653A (en) * 2016-01-29 2017-08-03 富士通株式会社 Test device, network system, and test method
JP2021117666A (en) * 2020-01-24 2021-08-10 株式会社デンソー Code inspection tool and code inspection method

Similar Documents

Publication Publication Date Title
US10705808B2 (en) Software defined network controller
CN111124850A (en) MQTT server performance testing method, system, computer equipment and storage medium
US7908531B2 (en) Networked test system
CN102298365B (en) Method for automatically identifying and managing spaceflight measurement and control earth station device change
EP3886367A1 (en) Automating 5g slices using real-time analytics
CN108959059B (en) Test method and test platform
CN109634843A (en) A kind of distributed automatization method for testing software and platform towards AI chip platform
US20130139130A1 (en) System and method for performance assurance of applications
CN105094783A (en) Method and device for testing Android application stability
CN109656820A (en) Intelligent automation test macro based on CBTC
CN112241360A (en) Test case generation method, device, equipment and storage medium
CN111092767B (en) Method and device for debugging equipment
WO2022142931A1 (en) Network device inspection method, apparatus, and device, and storage medium
Kanstrén et al. Architectures and experiences in testing IoT communications
CN114915643A (en) Configuration method, device, equipment and medium of railway signal centralized monitoring system
CN112527247A (en) LED display control system simulation method, device and system
WO2024004174A1 (en) Automatic profile generation on test machine
CN112506772B (en) Web automatic test method, device, electronic equipment and storage medium
US11829335B1 (en) Using machine learning to provide a single user interface for streamlines deployment and management of multiple types of databases
WO2023276039A1 (en) Server management device, server management method, and program
CN113094266B (en) Fault testing method, platform and equipment for container database
CN113986758A (en) Regression testing method and device, electronic equipment and computer readable medium
CN112395176A (en) Method, device, system, equipment, processor and storage medium for testing distributed cloud storage performance
CN114578786A (en) Vehicle test system
CN113535575A (en) Benchmark testing method and device for basic environment of software and hardware product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22949447

Country of ref document: EP

Kind code of ref document: A1