WO2024113128A1 - A code branch managing system for comparing and/or updating a master code branch with a new code branch - Google Patents

A code branch managing system for comparing and/or updating a master code branch with a new code branch

Info

Publication number
WO2024113128A1
WO2024113128A1 (PCT/CN2022/134865)
Authority
WO
WIPO (PCT)
Prior art keywords
code branch
test
code
branch
configuration information
Prior art date
Application number
PCT/CN2022/134865
Other languages
French (fr)
Inventor
Ziyuan Liu
Shijie Zhang
Dong Sun
Original Assignee
Huawei Cloud Computing Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Cloud Computing Technologies Co., Ltd. filed Critical Huawei Cloud Computing Technologies Co., Ltd.
Priority to PCT/CN2022/134865 priority Critical patent/WO2024113128A1/en
Publication of WO2024113128A1 publication Critical patent/WO2024113128A1/en

Definitions

  • the disclosure relates to managing code branches.
  • the disclosure proposes a code branch managing system, and a corresponding method for operating the code branch managing system.
  • the code branch managing system comprises a user interface, a code base, and a test server, and is configured to compare and/or update a master code branch with a new code branch.
  • multiple code branches may need to be generated on the basis of a master code branch.
  • developers need to integrate said code branches into the master code branch. If not carefully inspected, the fusion of new code branches can introduce code defects and degrade the performance of the entire code system.
  • the code quality of software should be ensured when multiple people develop the same set of code at the same time.
  • collaborative code development efficiency may be very low.
  • code defects may be incorporated into the master code branch and may be accumulated. These code errors may cause various operational errors in the future, and may be difficult to locate.
  • an objective of this disclosure is to provide a code testing system for efficiently comparing and/or updating a master code branch with one or more new code branches. Another objective is to provide an automated code testing system to effectively improve team code development efficiency and/or to support simultaneous automated performance comparison of multiple code branches.
  • a first aspect of this disclosure provides a method of operating a code branch managing system for comparing and/or updating a master code branch with a new code branch
  • the managing system comprises a user interface, a code base, and a test server
  • the method comprises: obtaining, with the user interface, configuration information, providing, with the user interface, the configuration information to the test server, obtaining, with the test server, the new code branch from the code base based on the configuration information, determining, with the test server, a configured test based on the new code branch and the configuration information, and determining, with the test server, a result file using the configured test.
  • the code managing system may comprise a mobile device, wherein the mobile device may comprise at least one of the user interface, the code base, and the test server.
  • the mobile device may comprise the user interface, wherein the code base and the test server may be comprised in one or more external devices.
  • the processing requirements for the mobile device may be efficiently reduced.
  • the configuration information comprises at least one of a code branch information, a data selection information, and a running mode information.
  • the test server may obtain the new code branch from the code base based on the code branch information.
  • the test server may determine the configured test based on the new code branch and the running mode information and/or based on the data selection information.
  • determining the configured test comprises: generating an executable file by code compilation using the new code branch and the configuration information, for example the running mode information, and/or generating a running script for the configured test based on the configuration information, for example based on the running mode information.
  • determining the result file comprises: obtaining a test dataset from a predetermined test databank based on the configuration information, for example based on the data selection information, and executing the executable file based on the running script and the test dataset to generate a log or the result file.
  • the data selection information may comprise indication information for a test dataset of the test databank.
  • the test databank may comprise one or more test datasets, which may be predetermined.
  • determining the result file further comprises: extracting and/or analysing information from the log to generate the result file.
  • the method further comprises: determining, with the test server, a performance change between the master code branch and the new code branch by comparing the result file with a baseline data.
  • the method may further comprise summarizing the performance change to generate one or more test results.
  • the test server may automatically determine whether the new code branch is preferred over the master code branch.
  • the method further comprises: selecting the baseline data from a predetermined baseline dataset based on the configuration information, for example based on the data selection information.
  • the baseline dataset may be based on the master code branch and/or the configuration information, for example the data selection information and/or the code branch information.
  • the result file is a first result file associated with the new code branch, and the baseline data is comprised in a second result file associated with the master code branch.
  • the method further comprises: determining the performance change based on one or more performance indicators.
  • the method further comprises: summarizing, for example visualizing, the performance change based on the one or more performance indicators to generate one or more test results.
  • the performance changes may be summarized with at least one of: one or more scores, one or more graphs, one or more bar charts, and one or more box charts.
  • determining whether to merge the new code branch and the master code branch may be efficient, for example, by determining if one score is larger than another score and/or above a threshold, or if a characteristic value of a graph is above another characteristic value of another graph and/or above a threshold.
  • the one or more performance indicators are determined based on the configuration information, for example based on the running mode information.
  • the result file comprises one or more test results indicating a performance change between the master code branch and the new code branch.
  • the performance change may be indicated by one or more scores.
  • determining whether to merge the new code branch and the master code branch may be efficient, for example, by determining if one score is larger than another score and/or above a threshold.
  • the method further comprises: providing, with the test server, the one or more test results to the user interface and/or to the code base.
  • the user interface comprises a display
  • the method further comprises: displaying, with the user interface, the one or more test results on the display.
  • the one or more test results may be displayed and/or visualized for a user.
  • the method further comprises: determining, with the test server, whether the new code branch should be merged with the master code branch, based on the one or more test results, and merging, if the new code branch should be merged with the master code branch, with the code base, the new code branch and the master code branch.
  • the code branch can be automatically and efficiently updated.
  • the one or more test results comprise one or more scores
  • determining whether the new code branch should be merged with the master code branch comprises: comparing the one or more scores to one or more threshold values.
  • the one or more threshold values may be predetermined, based on the configuration information, and/or based on the baseline data.
  • the one or more threshold values may be based on at least one of: the running mode information, the code branch information, and the data selection information.
  • the method further comprises: providing, with the test server, notification information to the user interface to indicate if the new code branch was merged with the master code branch.
  • a user may be notified.
  • a user may accept the merger or revoke the merger.
  • the method further comprises: modifying, if the new code branch was merged with the master code branch, with the test server, the baseline data based on the result file.
  • the baseline data may be automatically updated.
  • it may be automatically determined whether another code branch may be merged with the current master code branch based on the modified baseline data.
  • a second aspect of this disclosure provides a code branch managing system for comparing and/or updating a master code branch with a new code branch
  • the managing system comprises a user interface, a code base, and a test server, wherein the user interface is configured to obtain configuration information, and to provide the configuration information to the test server, wherein the test server is configured to obtain the new code branch from the code base based on the configuration information, determine a configured test based on the new code branch and the configuration information, and determine a result file based on the configured test.
  • the code managing system comprises a mobile device, and wherein the mobile device comprises at least one of the user interface, the code base, and the test server.
  • a third aspect of this disclosure provides a computer program product comprising a program code for performing, when the program code is executed on a computer, the method according to any one of the implementation forms of the first aspect.
  • the system of the second aspect may have implementation forms that correspond to the implementation forms of the method of the first aspect.
  • the system of the second aspect and its implementation forms achieve the advantages and effects described above for the method of the first aspect and its respective implementation forms.
  • “code repository” and “code base” may be used interchangeably.
  • FIG. 1 shows a managing system according to an embodiment of this disclosure.
  • FIG. 2 shows an exemplary code branch managing system architecture according to an embodiment of this disclosure.
  • FIG. 3 shows an exemplary test result comprising a trajectory visualization of a mapping result according to an embodiment of this disclosure.
  • FIG. 4 shows a visualization of exemplary test results according to an embodiment of this disclosure.
  • FIG. 5 shows a method according to an embodiment of this disclosure.
  • FIG. 1 shows a code branch managing system 100 according to an embodiment of this disclosure.
  • the managing system 100 is configured to compare and/or update a master code branch 100a with a new code branch 100b.
  • the managing system 100 comprises a user interface 101, a code base 102, and a test server 103.
  • the user interface 101 is configured to obtain configuration information 104, for example, based on input information from a user, and to provide the configuration information 104 to the test server 103.
  • the test server 103 is configured to obtain the new code branch 100b from the code base 102 based on the configuration information 104.
  • the test server 103 is further configured to determine a configured test 105 based on the new code branch 100b and the configuration information 104, and to determine a result file 106 using the configured test 105.
  • FIG. 2 shows an exemplary code branch managing system architecture according to an embodiment of this disclosure.
  • the managing system 100 comprises three modules: the user interface 101, the code base 102 (can also be referred to as a code repository) , and the test server 103.
  • the three modules can be deployed on a same computer or device, or can be distributed on different computers or devices.
  • said device or devices may be mobile phones.
  • the test server may be comprised in a dedicated processing hardware.
  • a user may initiate a test configuration by entering configuration information 104 in the user interface 101.
  • the configuration information 104 may include, but is not limited to, code branch information, data selection information, and running mode information.
  • the configuration information 104 may be sent to the test server 103 for running a corresponding configured test 105.
  • the test server 103 may be configured to obtain the corresponding code branch, for example the new code branch 100b, from the code base 102 to run the configured test 105 to generate a result file 106.
  • the result file 106 may comprise one or more test results 107 indicating a performance change between the master code branch 100a and the new code branch 100b.
  • said performance change may be indicated with a summarization and/or visualisation of the performance change.
  • the test server 103 may be configured to determine a performance change between the master code branch 100a and the new code branch 100b by comparing the result file 106 with a baseline data. Further, said performance change may be summarized, for example visualized, based on one or more performance indicators to generate one or more test results 107.
  • test server 103 may be configured to send the one or more test results 107 of the configured test 105 to the user interface 101 and code repository 102. After receiving the one or more test results 107, a user may determine whether to merge the tested code branch 100b into the master code branch 100a. Alternatively or additionally, the test server may automatically determine whether to merge the tested code branch 100b into the master code branch 100a.
  • test parameters for example running mode, test data, and/or code branch information
  • Said test parameters may describe the most important characteristics of the test 105 that is to be conducted, for example which branch of code is tested, which result files 106 to compare, or using which metrics, for example performance indicators, to evaluate result files 106 and generate the one or more test results 107.
  • These test characteristics may be user-defined and can be extended based on their requirements. When these test characteristics are assigned to their corresponding values, the configured test 105 may start running.
  • the test server 103 may be configured to obtain the code branch to be tested 100b from the code base 102. Said code branch may be used to generate executable files through code compilation. Based on the configuration information 104, a corresponding running script may be generated. A code running module may extract the corresponding test data, execute the executable file, and generate a log. Then, a log analysis module may extract and analyse the important information of this operation, and generate a corresponding result file 106 for subsequent performance evaluation. A performance evaluation module may compare the result file 106 with a latest baseline data, find the performance changes, and summarize, for example display or visualize, said performance changes based on one or more performance indicators, for example, in a result visualization module.
  • the result visualization module may generate scores, graphs, bar charts, and/or box charts.
  • the one or more test results 107 of said performance evaluations may be transmitted back to the test user interface 101 and to the code repository 102 and/or may be used to automatically merge the new code branch 100b and the master code branch 100a. Developers or users may observe the one or more test results 107 and determine whether to merge the tested code branch 100b into the master code branch 100a.
  • the user inputs important test parameters, such as running mode, test data, etc.
  • important test parameters such as running mode, test data, etc.
  • two different code branches are compared.
  • FIG. 3 shows an exemplary test result comprising a trajectory visualization of a mapping result according to an embodiment of this disclosure.
  • the managing system may extract trajectory information of the current running.
  • the performance evaluation module may calculate an error between the trajectory information and a recorded ground truth value.
  • the error metrics may be, for example, relative position error (RPE) and absolute translation error (ATE) , wherein RPE is an error of an inter-frame variation between a real trajectory and an estimated trajectory and ATE is the absolute trajectory error that directly measures the difference between the ground truth and estimated trajectory points.
  • RPE relative position error
  • ATE absolute translation error
  • the estimated pose sequence may be P_1, P_2, ..., P_n
  • the ground truth pose sequence may be Q_1, Q_2, ..., Q_n
  • the time interval may be Δ.
  • the RPE of the i-th frame may be defined as follows:
  • the ATE of the i-th frame may be defined as follows:
  • the results 106 of said calculation may be compared with the latest baseline data to find the performance changes, and said changes may be displayed in the result visualization module, as shown in FIG. 4.
  • FIG. 4 shows a visualization of exemplary test results according to an embodiment of this disclosure.
  • the one or more test results 107 may be transmitted back to the user interface 101 as well as to the code base 102.
  • the user may view the one or more test result 107 and determine whether to merge the new code branch 100b into the master code branch 100a.
  • the managing system may be configured to automatically determine based on the one or more test results 107 whether to merge the code branch 100b into the master code branch 100a.
  • FIG. 5 shows a method 200 according to an embodiment of this disclosure.
  • the method 200 may be performed by the managing system 100.
  • the managing system 100 comprises a user interface 101, a code base 102, and a test server 103.
  • the method 200 comprises a step 201 of obtaining, with the user interface 101, configuration information 104.
  • the method 200 comprises a step 202 of providing, with the user interface 101, the configuration information 104 to the test server 103.
  • the method 200 comprises a step 203 of obtaining, with the test server 103, the new code branch 100b from the code base 102 based on the configuration information 104.
  • the method 200 comprises a step 204 of determining, with the test server 103, a configured test 105 based on the new code branch 100b and the configuration information 104. Further, the method 200 comprises a step 205 of determining, with the test server 103, a result file 106 using the configured test 105.
  • This disclosure provides a design scheme for high efficiency software testing.
  • This scheme can be applied to the testing of any software and algorithm system. Unlike general-purpose web interface testing, this test scheme allows developers or users to customize the system performance they want to test. For example, developers or users can define baseline data and test characteristics.
  • the test system can be used to test software performance automatically, which increases the development efficiency for the case of multiple developers or users working on the same code set and reduces the software performance degradation caused by new code merges.
  • the managing system 100 may comprise a processor.
  • the user interface 101 and/or test server 103 may comprise a processor.
  • the processor may be configured to perform, conduct or initiate the various operations of the managing system 100 described herein.
  • the processor may comprise hardware and/or may be controlled by software.
  • the hardware may comprise analog circuitry or digital circuitry, or both analog and digital circuitry.
  • the digital circuitry may comprise components such as application-specific integrated circuits (ASICs) , field-programmable gate arrays (FPGAs) , digital signal processors (DSPs) , or multi-purpose processors.
  • the managing system 100 may further comprise memory circuitry, which stores one or more instruction (s) that can be executed by the processor, in particular under control of the software.
  • the memory circuitry may comprise a non-transitory storage medium storing executable software code which, when executed by the processor, causes the various operations of the managing system 100 to be performed.
  • the managing system 100 may comprise one or more processors and a non-transitory memory connected to the one or more processors.
  • the non-transitory memory may carry executable program code which, when executed by the one or more processors, causes the managing system 100 to perform, conduct or initiate the operations or methods described herein.

Landscapes

  • Stored Programmes (AREA)

Abstract

The disclosure relates to managing code branches. The disclosure proposes a code branch managing system, and a corresponding method for operating the code branch managing system. The code branch managing system comprises a user interface, a code base, and a test server and is configured to compare and/or update a master code branch with a new code branch. The method comprises obtaining, with the user interface, configuration information, providing, with the user interface, the configuration information to the test server, obtaining, with the test server, the new code branch from the code base based on the configuration information, determining, with the test server, a configured test based on the new code branch and the configuration information, and determining, with the test server, a result file using the configured test.

Description

A CODE BRANCH MANAGING SYSTEM FOR COMPARING AND/OR UPDATING A MASTER CODE BRANCH WITH A NEW CODE BRANCH TECHNICAL FIELD
The disclosure relates to managing code branches. The disclosure proposes a code branch managing system, and a corresponding method for operating the code branch managing system. The code branch managing system comprises a user interface, a code base, and a test server, and is configured to compare and/or update a master code branch with a new code branch.
BACKGROUND
In the process of developing a same set of software codes at the same time, multiple code branches may need to be generated on the basis of a master code branch. After developing and testing code branches, developers need to integrate said code branches into the master code branch. If not carefully inspected, the fusion of new code branches can introduce code defects and degrade the performance of the entire code system.
Conventional support systems for code updates include basic test standards that are defined for simple development tasks such as web page development. Generally, standard interfaces are tested and the service logic is checked. However, these conventional types of test systems cannot be customized for in-depth algorithm development, and cannot meet the test requirements of multiple team members jointly developing the same algorithm code.
SUMMARY
The code quality of software should be ensured when multiple people develop the same set of code at the same time. However, without a complete test system design, collaborative code development efficiency may be very low. As new code branches are incorporated into a master code branch, code defects may be introduced into the master code branch and may accumulate. These code errors may cause various operational errors in the future, and may be difficult to locate.
In view of the above, an objective of this disclosure is to provide a code testing system for efficiently comparing and/or updating a master code branch with one or more new code branches. Another objective is to provide an automated code testing system to effectively improve team code development efficiency and/or to support simultaneous automated performance comparison of multiple code branches.
These and other objectives are achieved by this disclosure as described in the enclosed independent claims. Advantageous implementations are further defined in the dependent claims.
A first aspect of this disclosure provides a method of operating a code branch managing system for comparing and/or updating a master code branch with a new code branch, wherein the managing system comprises a user interface, a code base, and a test server, and wherein the method comprises: obtaining, with the user interface, configuration information, providing, with the user interface, the configuration information to the test server, obtaining, with the test server, the new code branch from the code base based on the configuration information, determining, with the test server, a configured test based on the new code branch and the configuration information, and determining, with the test server, a result file using the configured test.
Thus, new software and/or new code branches can be efficiently tested.
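For illustration, a minimal end-to-end sketch of the method in Python is given below. Every function name, the repository URL, the dictionary layout of the configuration information and the result-file format are assumptions made for this sketch only and are not part of the disclosure.

```python
import json
import subprocess
from pathlib import Path


def obtain_new_code_branch(code_base: str, branch: str, workspace: Path) -> Path:
    """Step 203: fetch the branch under test from the code base (a git URL or path)."""
    subprocess.run(["git", "clone", "--branch", branch, code_base, str(workspace)],
                   check=True)
    return workspace


def determine_configured_test(workspace: Path, config: dict) -> dict:
    """Step 204: combine the new code branch and the configuration information
    into a description of the test to be run."""
    return {
        "workspace": str(workspace),
        "dataset": config["data_selection_information"],
        "mode": config["running_mode_information"],
    }


def determine_result_file(configured_test: dict) -> Path:
    """Step 205: run the configured test and persist a result file (stubbed here)."""
    result_file = Path(configured_test["workspace"]) / "result.json"
    result_file.write_text(json.dumps({"configured_test": configured_test}))
    return result_file


if __name__ == "__main__":
    # Steps 201/202: configuration information obtained by the user interface
    # and provided to the test server (hard-coded here for illustration).
    configuration_information = {
        "code_branch_information": "feature/new-mapping",
        "data_selection_information": "indoor_sequence_01",
        "running_mode_information": "trajectory_accuracy",
    }
    workspace = obtain_new_code_branch("https://example.org/repo.git",
                                       configuration_information["code_branch_information"],
                                       Path("workspace"))
    configured_test = determine_configured_test(workspace, configuration_information)
    print("result file:", determine_result_file(configured_test))
```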
The code managing system may comprise a mobile device, wherein the mobile device may comprise at least one of the user interface, the code base, and the test server.
For example, the mobile device may comprise the user interface, wherein the code base and the test server may be comprised in one or more external devices. Thus, the processing requirements for the mobile device may be efficiently reduced.
In an implementation form of the first aspect, the configuration information comprises at least one of a code branch information, a data selection information, and a running mode information.
The test server may obtain the new code branch from the code base based on the code branch information.
The test server may determine the configured test based on the new code branch and the running mode information and/or based on the data selection information.
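For example, the three pieces of configuration information could be grouped in a single structure that the user interface hands to the test server; the field names and example values in this sketch are assumptions.

```python
from dataclasses import dataclass


@dataclass
class ConfigurationInformation:
    """Illustrative container for the configuration information (names assumed)."""
    code_branch_information: str     # e.g. name of the new code branch to be tested
    data_selection_information: str  # e.g. identifier of a test dataset in the test databank
    running_mode_information: str    # e.g. which configured test / performance indicators to run


config = ConfigurationInformation(
    code_branch_information="feature/improved-mapping",
    data_selection_information="indoor_sequence_01",
    running_mode_information="trajectory_accuracy",
)
```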
In a further implementation form of the first aspect, determining the configured test comprises: generating an executable file by code compilation using the new code branch and the configuration information, for example the running mode information, and/or generating a running script for the configured test based on the configuration information, for example based on the running mode information.
In a further implementation form of the first aspect, determining the result file comprises: obtaining a test dataset from a predetermined test databank based on the configuration information, for example based on the data selection information, and executing the executable file based on the running script and the test dataset to generate a log or the result file.
The data selection information may comprise indication information for a test dataset of the test databank. The test databank may comprise one or more test datasets, which may be predetermined.
In a further implementation form of the first aspect, determining the result file further comprises: extracting and/or analysing information from the log to generate the result file.
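A sketch of how the executable could be executed with the running script and a test dataset, and how the resulting log could be reduced to a result file, is given below; the log line format, the regular expression and the file layout are illustrative assumptions.

```python
import json
import re
import subprocess
from pathlib import Path


def execute_configured_test(running_script: Path, dataset_dir: Path, log_file: Path) -> Path:
    """Run the compiled executable via its running script and capture a log."""
    with log_file.open("w") as log:
        subprocess.run(["bash", str(running_script), str(dataset_dir)],
                       stdout=log, stderr=subprocess.STDOUT, check=True)
    return log_file


# Assumed log line format: "pose <frame_index> <x> <y> <z>"
POSE_LINE = re.compile(r"^pose\s+(\d+)\s+(\S+)\s+(\S+)\s+(\S+)$")


def analyse_log(log_file: Path, result_file: Path) -> Path:
    """Extract the information of interest from the log and write the result file."""
    poses = []
    for line in log_file.read_text().splitlines():
        match = POSE_LINE.match(line.strip())
        if match:
            frame, x, y, z = match.groups()
            poses.append({"frame": int(frame),
                          "position": [float(x), float(y), float(z)]})
    result_file.write_text(json.dumps({"estimated_trajectory": poses}, indent=2))
    return result_file
```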
In a further implementation form of the first aspect, the method further comprises: determining, with the test server, a performance change between the master code branch and the new code branch by comparing the result file with a baseline data.
The method may further comprise summarizing the performance change to generate one or more test results.
The test server may automatically determine whether the new code branch is preferred over the master code branch.
In a further implementation form of the first aspect, the method further comprises: selecting the baseline data from a predetermined baseline dataset based on the configuration information, for example based on the data selection information.
The baseline dataset may be based on the master code branch and/or the configuration information, for example the data selection information and/or the code branch information.
In a further implementation form of the first aspect, the result file is a first result file associated with the new code branch, and the baseline data is comprised in a second result file associated with the master code branch.
In a further implementation form of the first aspect, the method further comprises: determining the performance change based on one or more performance indicators.
In a further implementation form of the first aspect, the method further comprises: summarizing, for example visualizing, the performance change based on the one or more performance indicators to generate one or more test results.
The performance changes may be summarized with at least one of: one or more scores, one or more graphs, one or more bar charts, and one or more box charts.
Thus, determining whether to merge the new code branch and the master code branch may be efficient, for example, by determining if one score is larger than another score and/or above a threshold, or if a characteristic value of a graph is above another characteristic value of another graph and/or above a threshold.
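As an illustration, comparing a result file with baseline data and summarizing the performance change per indicator could look like the following sketch; the indicator names, the JSON layout and the lower-is-better convention are assumptions.

```python
import json
from pathlib import Path


def performance_change(result_file: Path, baseline_file: Path) -> dict:
    """Relative change per performance indicator (negative = improvement,
    assuming lower indicator values are better, e.g. trajectory errors)."""
    new = json.loads(result_file.read_text())["indicators"]
    base = json.loads(baseline_file.read_text())["indicators"]
    return {name: (new[name] - base[name]) / base[name]
            for name in new if name in base}


def summarize(changes: dict) -> None:
    """Text-based bar-chart style summary of the performance changes."""
    for name, change in sorted(changes.items()):
        bar = "#" * min(int(abs(change) * 100), 40)
        verdict = "improved" if change < 0 else "degraded"
        print(f"{name:>12}: {change:+7.2%} {verdict} {bar}")


# Example usage with hypothetical indicator values:
#   result.json   -> {"indicators": {"RPE": 0.021, "ATE": 0.135}}
#   baseline.json -> {"indicators": {"RPE": 0.024, "ATE": 0.130}}
#   summarize(performance_change(Path("result.json"), Path("baseline.json")))
```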
In a further implementation form of the first aspect, the one or more performance indicators are determined based on the configuration information, for example based on the running mode information.
In a further implementation form of the first aspect, the result file comprises one or more test results indicating a performance change between the master code branch and the new code branch.
The performance change may be indicated by one or more scores. Thus, determining whether to merge the new code branch and the master code branch may be efficient, for example, by determining if one score is larger than another score and/or above a threshold.
In a further implementation form of the first aspect, the method further comprises: providing, with the test server, the one or more test results to the user interface and/or to the code base.
In a further implementation form of the first aspect, the user interface comprises a display, and wherein the method further comprises: displaying, with the user interface, the one or more test results on the display.
The one or more test results may be displayed and/or visualized for a user.
In a further implementation form of the first aspect, the method further comprises: determining, with the test server, whether the new code branch should be merged with the master code branch, based on the one or more test results, and merging, if the new code branch should be merged with the master code branch, with the code base, the new code branch and the master code branch.
Thus, the code branch can be automatically and efficiently updated.
In a further implementation form of the first aspect, the one or more test results comprise one or more scores, and wherein determining whether the new code branch should be merged with the master code branch comprises: comparing the one or more scores to one or more threshold values.
The one or more threshold values may be predetermined, based on the configuration information, and/or based on the baseline data.
The one or more threshold values may be based on at least one of: the running mode information, the code branch information, and the data selection information.
In a further implementation form of the first aspect, the method further comprises: providing, with the test server, notification information to the user interface to indicate if the new code branch was merged with the master code branch.
Thus, a user may be notified. Thus, a user may accept the merger or revoke the merger.
In a further implementation form of the first aspect, the method further comprises: modifying, if the new code branch was merged with the master code branch, with the test server, the baseline data based on the result file.
Thus, the baseline data may be automatically updated. Thus, it may be automatically determined whether another code branch may be merged with the current master code branch based on the modified baseline data.
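One possible realization of the automatic merge decision and the subsequent baseline update is sketched below; the score/threshold convention, the git commands and the file names are assumptions rather than the disclosed implementation.

```python
import shutil
import subprocess
from pathlib import Path


def should_merge(scores: dict, thresholds: dict) -> bool:
    """Merge only if every score reaches its (predetermined) threshold value."""
    return all(scores.get(name, float("-inf")) >= threshold
               for name, threshold in thresholds.items())


def merge_and_update_baseline(repo: Path, new_branch: str, master_branch: str,
                              result_file: Path, baseline_file: Path) -> None:
    """Merge the tested branch into the master branch and adopt the new
    result file as the baseline data for future comparisons."""
    subprocess.run(["git", "-C", str(repo), "checkout", master_branch], check=True)
    subprocess.run(["git", "-C", str(repo), "merge", "--no-ff", new_branch], check=True)
    shutil.copy(result_file, baseline_file)


scores = {"overall": 0.93}       # hypothetical test result scores
thresholds = {"overall": 0.90}   # hypothetical predetermined threshold values
if should_merge(scores, thresholds):
    print("the new code branch would be merged and the baseline data updated")
```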
A second aspect of this disclosure provides a code branch managing system for comparing and/or updating a master code branch with a new code branch, the managing system comprises a user interface, a code base, and a test server, wherein the user interface is configured to obtain configuration information, and to provide the configuration information to the test server, wherein the test server is configured to obtain the new code branch from the code base based on the configuration information, determine a configured test based on the new code branch and the configuration information, and determine a result file based on the configured test.
In an implementation form of the second aspect, the code managing system comprises a mobile device, and wherein the mobile device comprises at least one of the user interface, the code base, and the test server.
A third aspect of this disclosure provides a computer program product comprising a program code for performing, when the program code is executed on a computer, the method according to any one of the implementation forms of the first aspect.
The system of the second aspect may have implementation forms that correspond to the implementation forms of the method of the first aspect. The system of the second aspect and its implementation forms achieve the advantages and effects described above for the method of the first aspect and its respective implementation forms.
Further, in this disclosure the phrase “code repository” and “code base” may be used interchangeably.
It has to be noted that all devices, elements, units and means described in the disclosure could be implemented in software or hardware elements or any kind of combination thereof. All steps which are performed by the various entities described in the disclosure as well as the functionalities described to be performed by the various entities are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities. Even if, in the following description of specific embodiments, a specific functionality or step to be performed by external entities is not reflected in the description of a specific detailed element of that entity which performs that specific step or functionality, it should be clear to a skilled person that these methods and functionalities can be implemented in respective software or hardware elements, or any kind of combination thereof.
BRIEF DESCRIPTION OF DRAWINGS
The above described aspects and implementation forms will be explained in the following description of specific embodiments in relation to the enclosed drawings, in which
FIG. 1 shows a managing system according to an embodiment of this disclosure.
FIG. 2 shows an exemplary code branch managing system architecture according to an embodiment of this disclosure.
FIG. 3 shows an exemplary test result comprising a trajectory visualization of a mapping result according to an embodiment of this disclosure.
FIG. 4 shows a visualization of exemplary test results according to an embodiment of this disclosure.
FIG. 5 shows a method according to an embodiment of this disclosure.
DETAILED DESCRIPTION OF EMBODIMENTS
FIG. 1 shows a code branch managing system 100 according to an embodiment of this disclosure. The managing system 100 is configured to compare and/or update a master code branch 100a with a new code branch 100b. The managing system 100 comprises a user interface 101, a code base 102, and a test server 103. The user interface 101 is configured to obtain configuration information 104, for example, based on input information from a user, and to provide the configuration information 104 to the test server 103. The test server 103 is configured to obtain the new code branch 100b from the code base 102 based on the configuration information 104. The test server 103 is further configured to determine a configured test 105 based on the new code branch 100b and the configuration information 104, and to determine a result file 106 using the configured test 105.
FIG. 2 shows an exemplary code branch managing system architecture according to an embodiment of this disclosure.
The managing system 100 comprises three modules: the user interface 101, the code base 102 (can also be referred to as a code repository) , and the test server 103. The three modules can be deployed on a same computer or device, or can be distributed on different computers or devices. For example, said device or devices may be mobile phones. In another example, the test server may be comprised in a dedicated processing hardware. A user may initiate a test configuration by entering configuration information 104 in the user interface 101. The configuration information 104 may include, but is not limited to, code branch information, data selection information, and running mode information. The configuration information 104 may be sent to the test server 103 for running a corresponding configured test 105. After receiving a command, for example based on the configuration information and/or running mode information, the test server 103 may be configured to obtain the corresponding code branch, for example the new code branch 100b, from the code base 102 to run the configured test 105 to generate a result file 106. The result file 106 may comprise one or more test results 107 indicating a performance change between the master code branch 100a and the new code branch 100b. For example, said performance change may be indicated with a summarization and/or visualisation of the performance change. Alternatively, the test server 103 may be configured to determine a performance change between the master code branch 100a and the new code branch 100b by comparing the result file 106 with a baseline data. Further, said performance change may be summarized, for example visualized, based on one or more performance indicators to generate one or more test results 107.
Further, the test server 103 may be configured to send the one or more test results 107 of the configured test 105 to the user interface 101 and code repository 102. After receiving the one or more test results 107, a user may determine whether to merge the tested code branch 100b into the master code branch 100a. Alternatively or additionally, the test server may automatically determine whether to merge the tested code branch 100b into the master code branch 100a.
The user interface 101 can be implemented in multiple forms, such as command line and Jenkins. In the user interface 101, test parameters, for example running mode, test data, and/or code branch information, can be entered. Said test parameters may describe the most important characteristics of the test 105 that is to be conducted, for example which branch of code is tested, which result files 106 to compare, or using which metrics, for example performance indicators, to evaluate result files 106 and generate the one or more test results 107. These test characteristics may be user-defined and can be extended based on their requirements. When these test characteristics are assigned to their corresponding values, the configured test 105 may start running.
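As an illustration, a simple command-line form of the user interface could collect these test parameters as follows; the option names and defaults are assumptions for this sketch.

```python
import argparse


def parse_configuration_information(argv=None) -> argparse.Namespace:
    """Collect the test parameters that configure a test run."""
    parser = argparse.ArgumentParser(description="Configure a code branch test")
    parser.add_argument("--code-branch", required=True,
                        help="name of the new code branch to be tested")
    parser.add_argument("--test-data", required=True,
                        help="identifier of the test dataset in the test databank")
    parser.add_argument("--running-mode", default="trajectory_accuracy",
                        help="which configured test / performance indicators to use")
    parser.add_argument("--baseline", default="latest",
                        help="which baseline data to compare against")
    return parser.parse_args(argv)


# Example:
#   python configure_test.py --code-branch feature/new-mapping --test-data indoor_sequence_01
```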
After receiving the test instruction, the test server 103 may be configured to obtain the code branch to be tested 100b from the code base 102. Said code branch may be used to generate executable files through code compilation. Based on the configuration information 104, a corresponding running script may be generated. A code running module may extract the corresponding test data, execute the executable file, and generate a log. Then, a log analysis module may extract and analyse the important information of this operation, and generate a corresponding result file 106 for subsequent performance evaluation. A performance evaluation module may compare the result file 106 with a latest baseline data, find the performance changes, and summarize, for example display or visualize, said performance changes based on one or more performance indicators, for example, in a result visualization module. The result visualization module may generate scores, graphs, bar charts, and/or box charts. The one or more test results 107 of said performance evaluations may be transmitted back to the test user interface 101 and to the code repository 102 and/or may be used to automatically merge the new code branch 100b and the master code branch 100a. Developers or users may observe the one or more test results 107 and determine whether to merge the tested code branch 100b into the master code branch 100a.
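The compilation and running-script generation steps could, for example, be sketched as follows; the CMake-based build, the executable name and the script template are assumptions about one possible project layout, not the disclosed implementation.

```python
import subprocess
from pathlib import Path


def compile_branch(workspace: Path) -> Path:
    """Generate the executable files from the checked-out code branch.
    A CMake project is assumed; any build system could be substituted."""
    build_dir = workspace / "build"
    build_dir.mkdir(exist_ok=True)
    subprocess.run(["cmake", "-S", str(workspace), "-B", str(build_dir)], check=True)
    subprocess.run(["cmake", "--build", str(build_dir)], check=True)
    return build_dir / "mapping_app"  # assumed name of the built executable


def generate_running_script(executable: Path, config: dict, script_path: Path) -> Path:
    """Write a running script for the configured test from the configuration information."""
    script_path.write_text(
        "#!/bin/bash\n"
        "DATASET_DIR=$1\n"
        f"{executable} --mode {config['running_mode_information']} "
        "--input \"$DATASET_DIR\" > run.log\n"
    )
    script_path.chmod(0o755)
    return script_path
```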
The user inputs important test parameters, such as running mode, test data, etc. Here two different code branches are compared.
FIG. 3 shows an exemplary test result comprising a trajectory visualization of a mapping result according to an embodiment of this disclosure.
After running the log analysis module, the managing system may extract trajectory information of the current running. The performance evaluation module may calculate an error between the trajectory information and a recorded ground truth value. The error metrics may be, for example, relative position error (RPE) and absolute translation error (ATE) , wherein RPE is an error of an inter-frame variation between a real trajectory and an estimated trajectory and ATE is the absolute trajectory error that directly measures the difference between the ground truth and estimated trajectory points. The estimated pose sequence may be P_1, P_2, ..., P_n, the ground truth pose sequence may be Q_1, Q_2, ..., Q_n, and the time interval may be Δ. The RPE of the i-th frame may be defined as follows:
RPE_i := (Q_i^{-1} Q_{i+Δ})^{-1} (P_i^{-1} P_{i+Δ})
The ATE of the i-th frame may be defined as follows:
ATE_i := Q_i^{-1} P_i
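Using 4x4 homogeneous pose matrices for P_i and Q_i, the two error metrics can be computed per frame as in the following sketch, which follows the standard formulation; reporting each error as the norm of its translation part is an assumption about how the per-frame errors are reduced to scalars.

```python
import numpy as np


def relative_position_error(P: np.ndarray, Q: np.ndarray, delta: int = 1) -> np.ndarray:
    """Per-frame RPE: ((Q_i^-1 Q_{i+delta})^-1 (P_i^-1 P_{i+delta})), reported as the
    norm of its translation part.  P and Q are (n, 4, 4) homogeneous pose arrays."""
    errors = []
    for i in range(len(P) - delta):
        dQ = np.linalg.inv(Q[i]) @ Q[i + delta]  # ground-truth motion over delta frames
        dP = np.linalg.inv(P[i]) @ P[i + delta]  # estimated motion over delta frames
        E = np.linalg.inv(dQ) @ dP               # relative error transform
        errors.append(np.linalg.norm(E[:3, 3]))
    return np.asarray(errors)


def absolute_translation_error(P: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Per-frame ATE: Q_i^-1 P_i, reported as the norm of its translation part
    (any rigid-body alignment of P onto Q is assumed to be applied beforehand)."""
    errors = []
    for i in range(len(P)):
        F = np.linalg.inv(Q[i]) @ P[i]
        errors.append(np.linalg.norm(F[:3, 3]))
    return np.asarray(errors)


# Example with two identical trajectories (all errors are zero):
poses = np.tile(np.eye(4), (5, 1, 1))
print(relative_position_error(poses, poses), absolute_translation_error(poses, poses))
```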
After the calculation is complete, the results 106 of said calculation may be compared with the latest baseline data to find the performance changes, and said changes may be displayed in the result visualization module, as shown in FIG. 4.
FIG. 4 shows a visualization of exemplary test results according to an embodiment of this disclosure.
The one or more test results 107 may be transmitted back to the user interface 101 as well as to the code base 102. The user may view the one or more test results 107 and determine whether to merge the new code branch 100b into the master code branch 100a. Alternatively or additionally, the managing system may be configured to automatically determine based on the one or more test results 107 whether to merge the code branch 100b into the master code branch 100a.
FIG. 5 shows a method 200 according to an embodiment of this disclosure. The method 200 may be performed by the managing system 100. Generally, the managing system 100 comprises a user interface 101, a code base 102, and a test server 103. The method 200 comprises a step 201 of obtaining, with the user interface 101, configuration information 104. Further, the method 200 comprises a step 202 of providing, with the user interface 101, the configuration information 104 to the test server 103. Further, the method 200 comprises a step 203 of obtaining, with the test server 103, the new code branch 100b from the code base 102 based on the configuration information 104. Further, the method 200 comprises a step 204 of determining, with the test server 103, a configured test 105 based on the new code branch 100b and the configuration information 104. Further, the method 200 comprises a step 205 of determining, with the test server 103, a result file 106 using the configured test 105.
This disclosure provides a design scheme for high-efficiency software testing. This scheme can be applied to the testing of any software or algorithm system. Unlike general-purpose web interface testing, this test scheme allows developers or users to customize the system performance they want to test. For example, developers or users can define baseline data and test characteristics. The test system can be used to test software performance automatically, which increases the development efficiency when multiple developers or users work on the same code set and reduces the software performance degradation caused by new code merges.
The managing system 100 may comprise a processor. For example, the user interface 101 and/or test server 103 may comprise a processor.
Generally, the processor may be configured to perform, conduct or initiate the various operations of the managing system 100 described herein. The processor may comprise hardware and/or may be controlled by software. The hardware may comprise analog circuitry or digital circuitry, or both analog and digital circuitry. The digital circuitry may comprise components such as application-specific integrated circuits (ASICs) , field-programmable gate arrays (FPGAs) , digital signal processors (DSPs) , or multi-purpose processors. The managing system 100 may further comprise memory circuitry, which stores one or more instruction (s) that can be executed by the processor, in particular under control of the software. For instance, the memory circuitry may comprise a non-transitory storage medium storing executable software code which, when executed by the processor, causes the various operations of the managing system 100 to be performed. In one embodiment, the managing system 100 may comprise one or more processors and a non-transitory memory connected to the one or more processors. The non-transitory memory may carry executable program code which, when executed by the one or more processors, causes the managing system 100 to perform, conduct or initiate the operations or methods described herein.
The disclosure has been described in conjunction with various embodiments as examples as well as implementations. However, other variations can be understood and effected by persons skilled in the art practicing the claimed matter, from a study of the drawings, this disclosure and the independent claims. In the claims as well as in the description the word “comprising” does not exclude other elements or steps and the indefinite article “a” or “an” does not exclude a plurality. A single element or other unit may fulfill the functions of several entities or items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used in an advantageous implementation.

Claims (21)

  1. A method (200) of operating a code branch managing system (100) for comparing and/or updating a master code branch (100a) with a new code branch (100b) ,
    wherein the managing system (100) comprises a user interface (101) , a code base (102) , and a test server (103) , and
    wherein the method comprises:
    obtaining (201) , with the user interface (101) , configuration information (104) ,
    providing (202) , with the user interface (101) , the configuration information (104) to the test server (103) ,
    obtaining (203) , with the test server (103) , the new code branch (100b) from the code base (102) based on the configuration information (104) ,
    determining (204) , with the test server (103) , a configured test (105) based on the new code branch (100b) and the configuration information (104) , and
    determining (205) , with the test server (103) , a result file (106) using the configured test (105) .
  2. The method (200) according to claim 1,
    wherein the configuration information (104) comprises at least one of a code branch information, a data selection information, and a running mode information.
  3. The method (200) according to any of the preceding claims,
    wherein determining (204) the configured test (105) comprises:
    generating an executable file by code compilation using the new code branch (100b) and the configuration information (104) , and/or
    generating a running script for the configured test (105) based on the configuration information (104) .
  4. The method (200) according to claim 3,
    wherein determining (205) the result file (106) comprises:
    obtaining a test dataset from a predetermined test databank based on the configuration information (104) , and
    executing the executable file based on the running script and the test dataset to generate a log or the result file (106) .
  5. The method (200) according to claim 4,
    wherein determining (205) the result file (106) further comprises:
    extracting and/or analysing information from the log to generate the result file (106) .
  6. The method (200) according to any of the preceding claims,
    wherein the method (200) further comprises:
    determining, with the test server (103) , a performance change between the master code branch (100a) and the new code branch (100b) by comparing the result file (106) with a baseline data.
  7. The method (200) according to claim 6,
    wherein the method (200) further comprises:
    selecting the baseline data from a predetermined baseline dataset based on the configuration information (104) .
  8. The method (200) according to claim 6 or 7,
    wherein the result file (106) is a first result file associated with the new code branch (100b) , and
    wherein the baseline data is comprised in a second result file associated with the master code branch (100a) .
  9. The method according to one of the claims 6 to 8,
    wherein the method further comprises:
    determining the performance change based on one or more performance indicators.
  10. The method (200) according to claim 9,
    wherein the method (200) further comprises:
    summarizing the performance change based on the one or more performance indicators to generate one or more test results.
  11. The method (200) according to claim 9 or 10,
    wherein the one or more performance indicators are determined based on the configuration information (104) .
  12. The method (200) according to one of the claims 1 to 5,
    wherein the result file (106) comprises one or more test results indicating a performance change between the master code branch (100a) and the new code branch (100b) .
  13. The method (200) according to one of the claims 10 to 12,
    wherein the method (200) further comprises:
    providing, with the test server (103) , the one or more test results to the user interface (101) and/or to the code base (102) .
  14. The method (200) according to claim 13,
    wherein the user interface (101) comprises a display, and
    wherein the method (200) further comprises:
    displaying, with the user interface (101) , the one or more test results on the display.
  15. The method (200) according to one of the claims 10 to 14,
    wherein the method (200) further comprises:
    determining, with the test server (103) , whether the new code branch (100b) should be merged with the master code branch (100a) , based on the one or more test results, and
    merging, if the new code branch (100b) should be merged with the master code branch (100a) , with the code base (102) , the new code branch (100b) and the master code branch (100a) .
  16. The method (200) according to claim 15,
    wherein the one or more test results comprise one or more scores, and
    wherein determining whether the new code branch (100b) should be merged with the master code branch (100a) comprises:
    comparing the one or more scores to one or more threshold values.
  17. The method (200) according to claim 15 or 16,
    wherein the method (200) further comprises:
    providing, with the test server (103) , notification information to the user interface (101) to indicate if the new code branch (100b) was merged with the master code branch (100a) .
  18. The method (200) according to one of the claims 15 to 17,
    wherein the method (200) further comprises:
    modifying, if the new code branch (100b) was merged with the master code branch (100a) , with the test server (103) , the baseline data based on the result file (106) .
  19. A code branch managing system (100) for comparing and/or updating a master code branch (100a) with a new code branch (100b) ,
    wherein the managing system (100) comprises a user interface (101) , a code base (102) , and a test server (103) ,
    wherein the user interface (101) is configured to obtain configuration information (104) , and to
    provide the configuration information (104) to the test server (103) ,
    wherein the test server (103) is configured to obtain the new code branch (100b) from the code base (102) based on the configuration information (104) , determine a configured test (105) based on the new code branch (100b) and the configuration information (104) , and determine a result file (106) using the configured test (105) .
  20. The code branch managing system (100) according to claim 19,
    wherein the code managing system (100) comprises a mobile device, and
    wherein the mobile device comprises at least one of the user interface (101) , the code base (102) , and the test server (103) .
  21. A computer program product comprising a program code for performing, when the program code is executed on a computer, the method (200) according to any one of the claims 1 to 18.
PCT/CN2022/134865 2022-11-29 2022-11-29 A code branch managing system for comparing and/or updating a master code branch with a new code branch WO2024113128A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/134865 WO2024113128A1 (en) 2022-11-29 2022-11-29 A code branch managing system for comparing and/or updating a master code branch with a new code branch

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/134865 WO2024113128A1 (en) 2022-11-29 2022-11-29 A code branch managing system for comparing and/or updating a master code branch with a new code branch

Publications (1)

Publication Number Publication Date
WO2024113128A1 true WO2024113128A1 (en) 2024-06-06

Family

ID=91322798

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/134865 WO2024113128A1 (en) 2022-11-29 2022-11-29 A code branch managing system for comparing and/or updating a master code branch with a new code branch

Country Status (1)

Country Link
WO (1) WO2024113128A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105468507A (en) * 2014-09-03 2016-04-06 腾讯科技(深圳)有限公司 Branch fulfillment detection method and apparatus
CN109656621A (en) * 2018-12-19 2019-04-19 睿驰达新能源汽车科技(北京)有限公司 A kind of method and device merging code
CN111382049A (en) * 2018-12-28 2020-07-07 阿里巴巴集团控股有限公司 Code submitting method and device and electronic equipment
CN115080382A (en) * 2022-04-29 2022-09-20 曙光信息产业股份有限公司 Code testing method, device, equipment and medium


Similar Documents

Publication Publication Date Title
JP7398068B2 (en) software testing
CN110377704B (en) Data consistency detection method and device and computer equipment
Ang et al. Revisiting the practical use of automated software fault localization techniques
US9703683B2 (en) Software testing coverage
US20210012219A1 (en) Dynamic generation of rule and logic statements
CN115630036A (en) Error information processing method, apparatus, device, storage medium and program product
US20160162539A1 (en) Computer executable method of generating analysis data and apparatus performing the same and storage medium for the same
Bajammal et al. A survey on the use of computer vision to improve software engineering tasks
CN110874364B (en) Query statement processing method, device, equipment and storage medium
CN113342692B (en) Automatic test case generation method and device, electronic equipment and storage medium
WO2024113128A1 (en) A code branch managing system for comparing and/or updating a master code branch with a new code branch
CN113760891A (en) Data table generation method, device, equipment and storage medium
CN115481025A (en) Script recording method and device for automatic test, computer equipment and medium
US20190265954A1 (en) Apparatus and method for assisting discovery of design pattern in model development environment using flow diagram
CN115934548A (en) Statement level software defect positioning method and system based on information retrieval
CN112783762B (en) Software quality assessment method, device and server
CN115310011A (en) Page display method and system and readable storage medium
US11119761B2 (en) Identifying implicit dependencies between code artifacts
CN110347577B (en) Page testing method, device and equipment thereof
EP3671467A1 (en) Gui application testing using bots
Andonov et al. logs2graphs: Data-driven graph representation and visualization of log data
US11907111B2 (en) Database troubleshooting with automated functionality
US11782817B2 (en) Aiding diagnosis of errors in code
CN114969759B (en) Asset security assessment method, device, terminal and medium of industrial robot system
US20210304070A1 (en) Machine learning model operation management system, operation management method, and computer readable recording medium