CN107704392B - Test case processing method and server


Info

Publication number
CN107704392B
CN107704392B (application CN201710924534.7A)
Authority
CN
China
Prior art keywords
test
fault
code
failure
library
Prior art date
Legal status
Active
Application number
CN201710924534.7A
Other languages
Chinese (zh)
Other versions
CN107704392A (en)
Inventor
王晓锋
李建新
崔成
Current Assignee
Shenzhen Huawei Cloud Computing Technology Co ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN201710924534.7A
Publication of CN107704392A
Application granted
Publication of CN107704392B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/36 Preventing errors by testing or debugging software
    • G06F11/3668 Software testing
    • G06F11/3672 Test management
    • G06F11/3684 Test management for test design, e.g. generating new test cases


Abstract

The embodiment of the application discloses a test case processing method and a server. The method comprises: receiving a description file sent by a master console, where the description file describes a structured flow of one test case or a group of test cases; determining the test operations of a target test case according to the description file; determining a test flow of the test operations according to the description file; calling the test methods corresponding to the test operations from a test library set, in which those test methods are stored; and generating the target test case according to the test flow and the test methods. Because a test operation is realized by calling a test method, and complex flows between test operations are realized through the test flow, test cases with complex structures such as branches and loops can be created. Since the test methods are reusable, the development flow of test cases is simplified and the development efficiency of test cases is improved.

Description

Test case processing method and server
Technical Field
The present application relates to the field of software development, and in particular to a test case processing method and a server.
Background
Against the background of cloud computing, the scale and complexity of distributed systems are increasing day by day. Ensuring that highly available software is delivered within ever shorter development cycles has become a serious challenge, and reliability testing is increasingly important as a key link in ensuring software quality. To complete reliability testing efficiently and with high quality, faults that may affect system reliability must be simulated, and the reliability of the system must be tested in combination with its services. In the test acceptance stage of system development, abundant test cases are required to verify system quality, which places higher requirements on the efficiency and quality with which reliability test cases are prepared.
At present, test cases are generally written manually for different systems under test: case developers must be familiar with a programming language, complete case development according to the requirements of the service scenario, and on that basis compile, debug, and release the cases so that they can be loaded and used by a test tool. Alternatively, cases are written in a table language.
However, manually writing test cases requires reliability testers to have development experience and master a programming language. Since case programming is itself code development, errors are easily introduced in the process, affecting the evaluation of reliability test results; productivity is low and maintenance is difficult. Table-language writing can only produce simple cases that execute specified operations in sequence; it does not support complex cases containing branches, loops, concurrency, and other flows, cannot meet reliability test requirements, can only be stored in a proprietary text format and executed by a proprietary interpreter, and is difficult to port and reuse.
Disclosure of Invention
The embodiment of the application provides a test case processing method and a server, to solve the problem that neither manually written nor automatically generated test cases can currently meet the requirements of reliability testing.
A description file sent by a master console is received, where the description file describes a structured flow of one test case or a group of test cases. The test operations of a target test case and the test flow of those operations can thus be determined from the description file, and the test methods corresponding to the test operations can then be called from a test library set, in which the test methods corresponding to the test operations are stored. Finally, the target test case is generated according to the test flow and the test methods.
It can be seen that, since which test operations and which test flow are needed can be determined directly from the structured flow in the description file, code containing those test operations and that test flow can be generated to complete the writing of the test case. For the test operations, the embodiment of the application provides a test library set in which test methods corresponding to the test operations are stored; a test method is, in essence, a function. The embodiment of the application realizes a test operation by calling a test method and realizes complex flows between test operations through the test flow, so test cases with complex structures such as branches and loops can be created. Because the test methods are reusable, the development flow of test cases is simplified and development efficiency is improved.
In some embodiments, the test library set may include a fault library, a service library, and a monitoring library, which play different roles when generating a target test case. Specifically, fault test methods are stored in the fault library; interfaces corresponding to the services of the system under test are stored in the service library; and the monitoring library may be used to monitor the system under test while a test case is executed. The test operation includes a library method test operation, the library method test operation includes a fault test operation, and the test method includes a fault test method; the fault test operation corresponds to the fault test method. In this case, calling the test method corresponding to the test operation from the test library set includes: determining fault modes according to the description file, where different fault scenarios are stored in the fault library and each fault scenario corresponds to at least one fault mode; and then calling, from the fault library, the fault test methods corresponding to the fault test operation according to the fault modes, where each fault mode corresponds to one fault test method and the fault test methods stored in the fault library may be implemented in the same or different computer languages. The division of fault scenarios may be a tree structure: a root node containing all fault scenarios, refined sub-scenarios as branch nodes, and finally leaf nodes, that is, fault modes, which are instantiable faults. This enhances the realizability of the method of the embodiment of the application.
In some embodiments, each fault mode includes a fault injection interface and a fault expectation interface: the fault injection interface provides a reusable fault injection method for the fault mode, and the fault expectation interface provides fault expectation judgment logic for the fault mode. For each fault mode, when a test case is executed, the fault injection interface is called to inject the fault so that the system under test enters the fault state; the fault expectation interface is then called to perform a fault expectation test, which first monitors certain information of the system under test and then compares that information with the fault expectation value of the fault mode, thereby determining whether the system under test is abnormal. This enhances the realizability of the method of the embodiment of the application.
In some embodiments, the fault expectation value of the fault mode may be an indicator including a service key performance indicator (KPI) class indicator and a system state class indicator, and the fault expectation interface includes both a fault expectation interface of the service KPI class and a fault expectation interface of the system state class. The fault expectation interface of the service KPI class determines the fault expectation of the service KPI class through the service damage value when the fault is injected into the system under test; the service KPI class indicator includes a preset service damage threshold. The fault expectation interface of the system state class determines the fault expectation of the system state class by judging whether the system state and system behavior conform to a preset system state and preset system behavior; the system state class indicator includes the preset system state or the preset system behavior. This enhances the realizability of the method of the embodiment of the application.
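By way of illustration only, the following minimal Python sketch shows how such a fault mode might bundle the two interfaces; the class and method names (FaultMode, inject, expect_kpi, expect_state) are assumptions for exposition, not interfaces defined by the application:

class FaultMode:
    """A fault mode bundling a fault injection interface and fault expectation interfaces.
    Assumes the thresholds/expectations have been configured for the mode."""

    def __init__(self, name, kpi_damage_threshold, expected_state):
        self.name = name
        self.kpi_damage_threshold = kpi_damage_threshold  # preset service damage threshold
        self.expected_state = expected_state              # preset system state/behavior

    def inject(self, target):
        """Fault injection interface: reusable injection method for this mode."""
        raise NotImplementedError("each instantiable fault mode supplies its injection")

    def expect_kpi(self, damage_samples):
        """Service KPI class fault expectation: compare the observed service
        damage values against the preset damage threshold."""
        worst_damage = max(damage_samples) if damage_samples else 0.0
        return worst_damage <= self.kpi_damage_threshold

    def expect_state(self, observed_state):
        """System state class fault expectation: check whether the observed
        state/behavior conforms to the preset expectation."""
        return observed_state == self.expected_state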
In some embodiments, generating the target test case may mean generating several pieces of code and assembling them into the target test case. Specifically: creating first code that calls the fault injection interface in a synchronous manner; creating second code that calls the service function of the system under test corresponding to the fault mode; creating third code that calls the fault expectation interface of the service KPI class in an asynchronous manner; and/or creating fourth code that calls the fault expectation interface of the system state class in an asynchronous manner; and creating fifth code that monitors the fault mode by calling a monitoring interface of the monitoring library. Finally, the target test case can be generated according to the test steps and the created first, second, fifth, and third and/or fourth code segments. The generated target test case can thus cover three situations: testing only the service KPI indicator, testing only the system state indicator, or testing both. This enhances the realizability of the method of the embodiment of the application.
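The following is a minimal, hypothetical sketch of how a generator might assemble these code segments; the helper names (fault_lib, service_lib, monitor_lib, async_call) are assumptions, and the emitted strings stand in for the first to fifth codes:

def generate_target_case(fault_mode, test_kpi=True, test_state=True):
    """Assemble the first..fifth code segments into one target test case script."""
    segments = [
        # first code: synchronous call of the fault injection interface
        f"inject_result = fault_lib.{fault_mode}.inject(target)",
        # second code: call the service function corresponding to the fault mode
        f"service_lib.run_service(target, '{fault_mode}')",
        # fifth code: monitor the fault mode via the monitoring library interface
        f"monitor = monitor_lib.watch(target, '{fault_mode}')",
    ]
    if test_kpi:    # third code: asynchronous service KPI expectation call
        segments.append(f"kpi_future = async_call(fault_lib.{fault_mode}.expect_kpi)")
    if test_state:  # fourth code: asynchronous system state expectation call
        segments.append(f"state_future = async_call(fault_lib.{fault_mode}.expect_state)")
    return "\n".join(segments)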
In some embodiments, the method further covers two cases. First, if the call to the fault injection interface fails, code generated by an end operation is executed; the end operation is used to end code creation, and this code also belongs to the target test case. Second, if the fault expectation interface of the service KPI class or of the system state class is called asynchronously, the end operation must wait for those operations to complete before it can run; the end operation therefore generates code that waits for the asynchronous calls to return, and this code also belongs to the target test case. This enhances the realizability of the method of the embodiment of the application.
In some embodiments, in addition to the generation process described above, the target test case may also be executed.
In some embodiments, if the target test case includes a fault test operation, then when the target test case is executed, the fault injection interface of the fault mode corresponding to the fault test operation is first called to inject the fault mode into the system under test; then, within a first preset time window, samples indicated by the service KPI are collected at intervals of a first preset duration; finally, the sample set collected within the preset time window is analyzed to generate a fault expectation result for the service KPI class of the fault mode. That is, for a fault test operation, two interfaces, the fault injection interface and the fault expectation interface, need to be called to execute the target test case and generate a fault expectation result.
In some embodiments, analyzing the sample set collected within the preset time window to generate the fault expectation result for the service KPI class of the fault mode may cover three situations. First, at least one of a mean value, an extreme value, and a percentile of the samples in the sample set is calculated; when at least one of these exceeds the corresponding preset threshold (mean threshold, extreme value threshold, or percentile threshold), this serves as a basic judgment criterion for determining that the system under test is abnormal, and a fault expectation result may be generated. Second, the fluctuation of the samples is analyzed; when the fluctuation amplitude of the samples is larger than a preset fluctuation amplitude, the system under test is determined to be abnormal, as a judgment criterion of service abnormality, and a fault expectation result is generated. Third, the variation trend of the samples is analyzed; when the trend exceeds a preset variation trend, the system under test is determined to be abnormal, as a judgment criterion of service abnormality, and a fault expectation result is generated. Each of the three modes can determine whether the system under test is abnormal independently, and any combination of the three can also be used. This enhances the expandability of the method of the embodiment of the application.
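The three judgment modes can be sketched in Python as follows; all thresholds are assumed configuration values, and the statistical choices (95th percentile, least-squares slope) are merely one possible instantiation:

import statistics

def kpi_abnormal(samples, mean_max, extreme_max, p95_max, swing_max, trend_max):
    """Return True if any of the three judgment modes flags a service abnormality.
    Assumes at least two samples were collected in the time window."""
    # mode 1: statistical thresholds (mean / extreme value / percentile)
    p95 = statistics.quantiles(samples, n=20)[18]          # ~95th percentile
    stat_hit = (statistics.mean(samples) > mean_max
                or max(samples) > extreme_max
                or p95 > p95_max)
    # mode 2: fluctuation amplitude across the sample window
    swing_hit = (max(samples) - min(samples)) > swing_max
    # mode 3: variation trend, estimated with a least-squares slope
    n, xs = len(samples), range(len(samples))
    slope = ((n * sum(i * s for i, s in zip(xs, samples)) - sum(xs) * sum(samples))
             / (n * sum(i * i for i in xs) - sum(xs) ** 2))
    trend_hit = abs(slope) > trend_max
    return stat_hit or swing_hit or trend_hit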
In some embodiments, fault modes also include fault expectations of the system state class in addition to fault expectations of the service KPI class. In this case, when the target test case is executed, the fault injection interface of the fault mode corresponding to the fault test operation is first called to inject the fault mode into the system under test; then, system state information is collected at a first moment, where the first moment is the moment at which a second preset time window has elapsed from the current moment. After the system state information is collected, it can be judged: specifically, when the system state information exceeds the preset system state and system behavior expectation, a service abnormality is determined and a fault expectation result is generated. This enhances the realizability of the method of the embodiment of the application.
In some embodiments, for a library method test operation, calling the test method corresponding to the test operation from the test library set includes: first, extracting the library method test operation, the input parameter information, and the output parameter information from the structured flow of the description file; then calling the test method corresponding to the library method test operation in the test library set in a function call manner. In this case, the target test case may be generated according to the test flow, using the input parameter information and output parameter information as the parameter information of the called function. This enhances the realizability of the method of the embodiment of the application.
In some embodiments, extracting the library method, input parameter information, and output parameter information from the structured flow of the description file may specifically be: determining the number of loop iterations of the library method and the input and output parameter information of the library method according to the structured flow; calling the library method in the test library set in a function call manner, and generating inner-loop calling code corresponding to the number of iterations. When the target test case is generated, two code-generation situations arise: first, when the call is determined to be a synchronous call, code is created that waits in the current thread for the call result to return; second, when the call is determined to be an asynchronous call, code for the asynchronous call is generated according to thread management information and signal processing information. Finally, the target test case is generated from the code generated for the synchronous or asynchronous call together with the input and output parameter information used as function parameter information. This enhances the realizability of the method of the embodiment of the application.
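A minimal sketch of such code generation, assuming hypothetical helper names (run_in_new_thread, notify_main_thread_on_return); the emitted snippets stand in for the synchronous, asynchronous, and inner-loop code described above:

def gen_library_call(method, args, loops=1, asynchronous=False):
    """Emit test-case code for one library method test operation."""
    call = f"{method}({', '.join(args)})"
    if asynchronous:
        # asynchronous call: run in a new thread, signal the main thread on return
        body = (f"future = run_in_new_thread(lambda: {call})\n"
                f"notify_main_thread_on_return(future)")
    else:
        # synchronous call: the current thread waits for the call result
        body = f"result = {call}  # blocks until the call returns"
    if loops > 1:
        # inner-loop calling code for the configured number of iterations
        indented = "\n".join("    " + line for line in body.splitlines())
        body = f"for _ in range({loops}):\n{indented}"
    return body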
In some embodiments, besides the test operations in the test library set, the description file itself may carry code fragments. These may also be test operations, but not reusable test operations from the test library set. Because a code fragment needs to be placed into the target test case, some processing must be performed on it. Specifically, the method may further include: first obtaining the code fragment from the description file, then checking the syntax of the code fragment and analyzing its semantics. The target test case is then generated according to the test flow, the code fragment, and the test methods; that is, the code fragment is included in the information required for generation. This enhances the expandability of the method of the embodiment of the application.
In some embodiments, determining the test flow of the test operations from the description file includes determining the flow direction of the test flow, where the flow direction includes a forward direction and a reverse direction, and generating code for the test flow. A forward flow direction indicates that the subsequent test operation is executed after the current test operation; a reverse flow direction indicates that after the current test operation is executed, the flow may return to one or more previous test operations, which can constitute a loop. The method of the embodiment of the application can therefore support complex test flows, enhancing its realizability.
In some embodiments, when the test flow of a test operation has at least two parallel branches, the flow direction of each branch needs to be determined. Specifically, determining the flow direction of the test operation may be determining the flow direction of each branch of the test flow and generating the code for the flow direction of each branch.
In some embodiments, certain conditions need to be checked on the overall flow; for example, all flow directions must merge into the same test operation for a correct test case to be generated, so flow-direction convergence needs to be detected. The specific detection process may be: first obtaining the flow direction information of the test flow, and then, when every flow direction of the test flow fans into the same converged test operation, creating the code of the test flow. Creating this code means that all flows flow into the same test operation. This enhances the realizability of the embodiment of the application.
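A minimal sketch of the convergence check, under the assumption that each flow direction is described by the test operation it finally fans into:

def check_convergence(flows):
    """flows: list of (branch_id, target_operation) pairs describing where each
    fanned-out flow direction ends. All must fan into one test operation."""
    targets = {target for _, target in flows}
    if len(targets) != 1:
        # flows diverge: test case creation fails and an error is returned
        raise ValueError(f"flow directions fan into {len(targets)} operations: {targets}")
    return targets.pop()   # the single converged test operation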
In some embodiments, if calling a test method corresponding to a test operation in the test library set fails, or some test flow fails, the end operation returns test case generation failure information. The failure information includes information about the failed test method call, for example whether the call failed or the flow directions could not merge into the same test operation, so that a developer can determine why test case generation failed. This enhances the expandability of the embodiment of the application.
A fourth aspect of the present application provides a server, where the server includes at least one unit configured to execute the test case processing method provided in the first aspect or any implementation manner of the first aspect.
Yet another aspect of the present application provides a computer-readable storage medium having program code stored therein which, when executed by a computer, causes the computer to perform the methods of the above aspects. The storage medium includes, but is not limited to, a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD).
Drawings
FIG. 1 is a schematic diagram of an open source automated test framework use case editing system;
FIG. 2 is a schematic diagram of a reliability testing system of a distributed software system according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an architecture of a use case generator in the reliability testing system according to an embodiment of the present application;
FIG. 4 is a diagram of an embodiment of a method for processing test cases according to an embodiment of the present application;
FIG. 5 is a graphical representation of a test case structuring process;
FIG. 6 is a schematic structural diagram of a fault scenario module according to an embodiment of the present application;
FIG. 7 is a test case script generated by the test case processing method according to the embodiment of the present application;
FIG. 8 is a test case script generated by the test case processing method according to the embodiment of the present application;
FIG. 9 is a diagram of one embodiment of a server according to an embodiment of the present application;
FIG. 10 is a diagram of one embodiment of a server according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a test case processing method and a server, to solve the problem that neither manually written nor automatically generated test cases can currently meet the requirements of reliability testing.
To help those skilled in the art better understand the scheme of the present application, the embodiments of the present application are described below with reference to the accompanying drawings.
The terms "first," "second," "third," "fourth," and the like in the description and in the claims of the present application and in the drawings described above, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances such that the embodiments described herein may be practiced otherwise than as specifically illustrated or described herein. Furthermore, the terms "comprises," "comprising," or "having," and any variations thereof, are intended to cover non-exclusive inclusions, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Reliability testing is a key link in ensuring software quality and becomes ever more important as software complexity grows. In particular, in the test acceptance stage of system development, abundant test cases are required to verify system quality, which places higher requirements on the efficiency and quality of arranging reliability test cases. Current test cases are mainly produced by manually writing program code, so case developers must be familiar with a programming language, complete the development of specific test cases according to the requirements of the business scenario, and on that basis compile, debug, and release the test cases so that they can finally be loaded and used by a test tool.
For example, referring to FIG. 1, FIG. 1 is a schematic diagram of a use case editing system of an open source automated test framework (Robot Framework), which arranges test cases based on reusable test methods without requiring mastery of a general programming language. As shown in FIG. 1, the upper half is the flow required for writing a test case, including items such as documentation, setup, teardown, timeout, template, and tags, each of which requires the developer to fill in manually. The table in the lower half represents a plurality of test steps executed serially, where each row represents one test step giving an operation keyword and its arguments; after completion, the table is stored as text and executed by a background interpreter. For example, step 1 in the first row of the table is open browser, with arguments the firefox browser and the website "http://www.google.de", that is, open the website "http://www.google.de" with the firefox browser; step 2 is select window, with argument "main", that is, select the main window; step 3 is set selenium speed to 2; step 4 is maximize window, that is, maximize the selected window; step 5 is input text, with arguments q and "codentric GmBH"; step 6 is click button, with argument "btnG"; step 7 is page should contain, with argument "codentric GmBH"; step 8 is capture screenshot. This completes the writing process of one test case: the above steps 1 to 8 are the eight serial steps of one test case.
However, the above approach uses manual programming. On one hand, generating a case by programming requires the reliability tester to have development experience and master a programming language; since case programming is itself code development, errors are easily introduced, affecting the evaluation of reliability test results. On another hand, creating cases by manual programming is inefficient. Furthermore, the intent of a case generated by programming can only be understood through documents or comments, making it hard to maintain. As for the table-language approach, on one hand it can only compose simple test cases that execute specified operations in sequence; it does not support complex cases containing branches, loops, concurrency, and other flows, and cannot meet reliability test requirements. On the other hand, such a case can only be saved in a proprietary text format and executed by a proprietary interpreter, and is difficult to port and reuse.
For a complex distributed software system, the difficulty of implementing reliability tests and of writing and maintaining test cases arises mainly from the following aspects:
First, the technical threshold is high: the case writer must not only understand the functions and business flows of the target system under test, but also master a general programming language such as Python, as well as reliability domain knowledge such as fault injection and fault expectation analysis. Second, case writing is inefficient: the prior art composes test cases slowly, and for a complex distributed software system, hundreds of test cases often take a great deal of time. Finally, case management and maintenance are difficult: test cases written with the prior art vary in style, are hard to maintain, and are poorly reusable.
Therefore, because it relies on manual programming and a table language, the open source automated test framework cannot be used well to write reliability test cases for a complex distributed software system, and cannot solve the difficulties of implementing reliability tests and of writing and maintaining test cases.
In view of the difficulty of implementing reliability tests of complex distributed software systems and of writing and maintaining test cases, the embodiment of the application provides a test case processing method that solves these difficulties by writing test cases automatically. A reliability test case (hereinafter simply a test case) in the embodiment of the application is automatically composed by a reliability testing apparatus; for its specific architecture, refer to FIG. 2, which is a schematic diagram of a reliability testing system of a distributed software system according to an embodiment of the present application. As shown in FIG. 2, the reliability testing apparatus in the system may include the following logic entities: a master console, a case generator, a case executor, a test monitor, a test library set, and a test agent. The master console, case generator, case executor, and test monitor may be deployed centrally on a single physical server or distributed across a physical server cluster; the embodiment of the application does not limit the deployment manner. The test monitoring module can be connected to the nodes of the target system under test, and the test library set can communicate with the test agent through technologies such as remote procedure call.
It should be noted that, for engineering considerations such as performance and reliability, the case executor may be deployed to a dedicated case execution cluster in a multi-instance manner to meet the requirement of large-scale concurrent testing. The test library set is essentially a set of function libraries and is usually deployed on the physical servers where the case generator and case executor are located. The test agent is deployed on each physical node of the distributed system under test as testing requires.
The master console provides a unified test control entry. A user can create and execute a test case through the master console, observe the test process through the test monitoring module, and analyze the reliability problems of the target system under test.
A test library is a group of reusable test methods; each test method is essentially a function, or a collection of several functions, and is a basic unit for constructing test cases. Code in different languages may be stored in the test library set. In the reliability testing apparatus provided in the embodiment of the present application, test cases are constructed on a group of test libraries, called the test library set. Test libraries can be semantically divided into three categories: a service library, a fault library, and a monitoring library.
The service library is a collection of callable interfaces to the service functions of the system under test and is the entry point for starting the service processes of the system under test. For example, in a virtual distributed storage system, the virtual volume management function is provided in the form of a volume operation test library, offering functions such as creating, mounting, unmounting, reading, writing, and deleting virtual volumes.
The fault library is a test library specific to reliability testing. A large number of fault mode libraries have been accumulated in reliability engineering practice, and constructing fault-injection-based test cases for these fault modes is a basic method of reliability testing. Existing fault mode libraries serve only as references for activities such as reliability test design and cannot achieve code-level propagation and reuse of reliability testing capability. The fault library in the embodiment of the application is an instantiated fault mode library that encapsulates fault injection, fault monitoring, and fault expectation judgment capabilities for the fault modes, and is the key to constructing test cases. The fault library classifies fault scenarios based on reliability engineering domain knowledge accumulated through negative analysis methods such as fault analysis and test analysis, for example a classification based on a tree structure. Each class of fault scenario may provide several fault modes, each fault mode provides reusable interfaces for fault injection capability and fault expectation judgment capability, and a test case script calls them in the form of the fault library to reproduce the fault.
The monitoring library is used to acquire the real-time state of the system under test. A test case calls a method in the monitoring library to acquire real-time data of a preset KPI or group of KPI indicators and, together with the fault expectation judgment capability of the fault library, determines the fault injection test result.
The general library is a collection of common public methods, used to further improve the reuse of code that can be shared.
The specific execution process of the reliability testing system in the embodiment of the application is as follows. The master console generates or acquires a description file describing a structured flow; specifically, the master console may provide an editing interface for the structured flow, with which a user edits and outputs the structured flow description file of one test case or a group of test cases. After the file is imported into the case generator, the case generator automatically generates an executable test case script or script set: it generates the test case according to the structured flow in the description file, selecting the appropriate test methods from the test library set according to the requirements of the system under test. A test method is embodied as a function or function set, and at execution time the script links to the test library set and uses the corresponding function or function set to realize the test method. After the test case is generated, the case executor executes the generated reliability test case script. When a reliability test case is executed, the test methods in the test library set communicate with the test agent through technologies such as remote procedure call, and the test agent performs operations such as service control, fault injection, and data acquisition management. Meanwhile, the test monitoring module collects and stores data from the target system under test, according to preset indicators, through the test agents installed on its nodes. The test agent is a logic entity installed on each node of the target system under test and realizes the specific operations of service control, fault injection, indicator data acquisition, and so on. For a fault-type test, whether the target system under test produces the expected effect when the fault occurs can be determined by comparing the collected data with the fault expectation data; if not, the system under test is returned to its developers for modification. Likewise, for code fragment tests and library method tests, if the test fails, the system under test is returned to its developers for modification.
It should be noted that the test cases or target test cases in the embodiment of the present application have multiple types, for example execution files in different formats or scripts in different computer languages; the test case scripts appearing later in the embodiment of the present application should be regarded as one presentation form of a test case.
Please refer to FIG. 3 and FIG. 4. FIG. 3 is a schematic structural diagram of the case generator in the reliability testing system according to an embodiment of the present application, and FIG. 4 is a diagram of an embodiment of the test case processing method according to an embodiment of the present application. In FIG. 3, the input of the case generator is a description file with a structured flow, which comes from the master console, and the output is an executable case script generated for the case executor. The case generator comprises a parser and a converter. The parser parses the structured flow in the description file, where the structured flow includes information about how the test is to be performed: which test steps are executed synchronously, which asynchronously, and which concurrently. The parser can process files in formats such as extensible markup language (XML), domain specific language (DSL), and JavaScript Object Notation (JSON), or, as needed, a customized proprietary format. The converter provides a corresponding processing plug-in for each kind of basic unit in the structured flow; each plug-in translates a basic unit into executable test script in a target language according to predetermined rules, and in a common embodiment the target language is typically Python (an object-oriented interpreted programming language). The converter may specifically include a start plug-in, a step plug-in, a flow-direction plug-in, a branch plug-in, a concurrency plug-in, and an end plug-in; of course, the division is not limited to these plug-ins, as long as the structured flow can be converted into an executable script. The start plug-in mainly extracts text information from the description file and generates case description code in the form of target-language comments; this code is descriptive and does not run when the executable script is executed. The step plug-in generates the code of the various library method tests, code fragment tests, and fault tests. The flow-direction, branch, and concurrency plug-ins mainly map the structured flow onto the steps and flow directions generated by the step plug-in. The end plug-in generates cleanup code to avoid resource leaks and, if no test method call from the test library set failed during generation, returns that the test case script was generated successfully.
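A hypothetical sketch of the converter's plug-in dispatch; the plug-in classes and unit fields below are illustrative assumptions, not the patented implementation:

class Plugin:
    def emit(self, unit):
        raise NotImplementedError

class StartPlugin(Plugin):
    def emit(self, unit):
        # case description emitted as target-language comments (non-running)
        return f"# {unit.get('text', '')}"

class StepPlugin(Plugin):
    def emit(self, unit):
        # code for a library method test, code fragment test, or fault test
        return unit["code"]

PLUGINS = {"start": StartPlugin(), "step": StepPlugin()}

def convert(structured_flow):
    """Translate each basic unit of the parsed flow into target-language code."""
    return "\n".join(PLUGINS[u["kind"]].emit(u) for u in structured_flow)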
As shown in fig. 4, the method may include:
401. Receive the description file sent by the master console.
The description file describes a structured flow of one test case or a group of test cases. The description file can be obtained directly from the master console, where it may have been written by the user on the console's graphical interface, or the console may have obtained it from another device or a network. The description file can have various formats, such as XML, DSL, and JSON. For example, referring to FIG. 5, FIG. 5 is a schematic diagram of the structured flow of a test case, where each block is a test operation. In FIG. 5, after the start, four branches are executed in parallel: the first branch has three test operations and the other three branches each have one. After all branches finish, they fan into the same test operation, subsequent test operations are then executed in sequence, and finally three end operations are executed to release the resources occupied during test case execution.
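Purely for illustration, a structured-flow description mirroring FIG. 5 might look like the following Python data; every field name here is an assumption, not a file format defined by the application:

# A hypothetical structured-flow description mirroring FIG. 5.
description = {
    "case": "volume_fault_case",
    "flow": [
        {"kind": "start", "text": "reliability case for the volume service"},
        {"kind": "concurrency", "branches": [
            ["step_a1", "step_a2", "step_a3"],   # first branch: three operations
            ["step_b"], ["step_c"], ["step_d"],  # three single-operation branches
        ], "fan_in": "step_merge"},              # all branches converge here
        {"kind": "step", "name": "step_next"},
        {"kind": "end", "cleanup": True},        # release occupied resources
    ],
}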
402. Determine the test operations of the target test case according to the description file.
The test operations include library method test operations, that is, operations whose test methods all come from the test library set. Besides library method test operations, a test operation may also correspond to a code fragment carried in the description file; of course, a code fragment may not necessarily form a complete test operation and may be other content in the generated test case script. Library method test operations and code fragments are described separately below:
For code fragments carried in the description file, the case generator of the embodiment of the present application detects and analyzes them before adding them to the target test case. For example, after syntax checking and certain semantic analysis of the code fragment under test, code containing the fragment may be generated, and this part of the code may appear in the target test case script. The semantic analysis may include: checking whether variables in the code fragment conflict with global variables, and if so, failing the automatic creation of the test script and returning an error; checking whether the code fragment references unidentifiable global variables, and if so, failing the automatic creation of the test script and returning an error; and checking whether the code fragment references a preceding unit of the structured flow, such as the output parameter of an asynchronously called test library method, and if so, generating code that waits for the method call to return.
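A minimal sketch of this syntax check and semantic analysis, assuming Python code fragments and using the standard ast module; the specific rules encoded below are the ones listed above:

import ast
import builtins

def check_fragment(fragment, known_globals):
    """Syntax-check and semantically analyze a code fragment before it joins
    the target test case (assumed logic)."""
    tree = ast.parse(fragment)          # syntax check: raises SyntaxError on error
    names = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    assigned = {n.id for n in ast.walk(tree)
                if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
    if assigned & known_globals:
        raise ValueError("fragment variables conflict with global variables")
    unknown = names - assigned - known_globals - set(dir(builtins))
    if unknown:
        raise ValueError(f"fragment references unidentifiable globals: {unknown}")
    return tree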
Library method test operations mainly target common or frequently used test operations in the test libraries. Of course, what counts as common may differ between systems under test, and library method test operations may also include test operations specific to a given system under test.
It should be noted that, for library method test operations, whether common test operations or test operations specific to the system under test, generally only one interface needs to be used; but some library method test operations, such as fault test operations, require calling both a fault injection interface and a fault expectation interface. Fault test operations are explained below.
Referring to FIG. 6, FIG. 6 is a schematic structural diagram of the fault scenario module according to an embodiment of the present application. The fault scenario module provides fault injection and fault expectation analysis capabilities. It classifies fault scenarios into a tree structure: all faults form the root fault scenario, various fault scenarios serve as sub-scenarios (for example, a resource fault scenario), and each sub-scenario may in turn have its own sub-scenarios, until the tree is subdivided into concrete instantiable fault modes, such as a memory leak fault mode. The fault scenario module of the embodiment of the application may comprise a scenario analysis submodule, which analyzes which fault scenario a specific fault mode belongs to; a fault injection submodule for fault injection; and a fault expectation submodule for fault expectation analysis. A fault mode may have corresponding indicators with preset value ranges, and the indicators can be divided into two types according to the angle of measurement: service KPI class indicators, which correspond to the angle of service damage and use the fault expectation of the degree of service damage to test the system under test; and system state class indicators, which correspond to the angle of whether the system state is the expected state and use the fault expectation of the preset system state change to test the system under test. The fault scenario module is also connected to a fault library in which the fault injection interfaces, the fault expectations of the service KPI class, and the fault expectations of the system state class are stored. When the fault scenario module executes a specific fault mode, it calls the corresponding fault injection interface, service KPI class fault expectation, and system state class fault expectation from the fault library.
403. Determine the test steps of the test operation according to the description file.
The test steps are the steps of each branch, that is, flow direction, as shown in FIG. 5. Of course, for a given fault mode, the called test method may comprise multiple functions, and both synchronous and asynchronous calls among these functions can serve as test steps. A synchronous call must wait in the current thread for the return result, otherwise the calling thread is suspended; the functions called synchronously may have no dependency on one another. An asynchronous call does not return a result immediately; the thread making the asynchronous call can continue executing other functions until the asynchronous call completes. In some cases, a function may need to reference the output parameter of a previous function as its input. It can be seen that, with multiple branches, when they merge into the next test operation and that operation requires the execution results of all branches as its inputs, a waiting condition arises at the merge point, because the branches do not execute at exactly the same speed (some branches have more test operations and take longer), and the merged test operation executes only after all branches complete.
Specifically, flow directions in the test flow of a test operation are processed by the flow-direction plug-in of FIG. 3. If the flow direction is forward, the flow basic unit it points to is selected and processed by the corresponding plug-in; if the flow direction is reverse, that is, the unit points to a predecessor, code is generated to transfer the program flow to the code corresponding to the basic unit pointed to. Meanwhile, if a test operation fans out multiple steps or branches at the same time, that is, there are multiple flow arrows, it is processed by the concurrency plug-in of FIG. 3.
Branches of a test operation are processed by the branch plug-in of FIG. 3. The branch plug-in extracts the binary judgment logic of the branch unit in the structured flow and generates branch code supported by the system under test, and the flow-direction plug-in of FIG. 3 handles each branch. The branch plug-in must perform the necessary syntactic analysis of the judgment logic, referring to the semantic analysis process in the code fragment test operation, to avoid introducing syntax errors.
For concurrent flows, the concurrency plug-in of FIG. 3 obtains the fan-out flow direction information from the upstream flow-direction plug-in and performs the following processing. It checks each flow direction to ensure that all flow directions finally fan into the same convergence step or branch unit (called the convergence unit); if not, creating the test script automatically fails and an error is returned. For each flow direction, it generates the code leading up to the convergence unit, creates a separate thread, and hands the subsequent steps or branches of that flow direction to the corresponding plug-ins, ensuring that the code they generate executes in the new thread. Before processing the convergence unit, it generates code that waits for all steps in every flow to finish executing. Note that if there is an asynchronous call in a flow, the code must wait for the call to return, and if the return fails, code is generated to divert the flow to the code generated by the end plug-in. The convergence unit itself is processed by its corresponding plug-in; it may be a test operation into which the respective branches converge, as in FIG. 5.
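The runtime behavior of the code the concurrency plug-in emits can be sketched as follows; the branch and convergence callables are assumptions for illustration:

import threading

def run_concurrent_branches(branches, convergence):
    """One thread per fanned-out branch, then a join before the convergence
    unit runs."""
    threads = [threading.Thread(target=branch) for branch in branches]
    for t in threads:
        t.start()                    # each flow direction runs in its own thread
    for t in threads:
        t.join()                     # wait for every flow to finish executing
    return convergence()             # the single convergence unit runs last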
404. Call the test method corresponding to the test operation in the test library set.
The test library set stores the test methods corresponding to the test operations. For a library method test operation that calls an interface, step 404 may specifically include:
Extract the library method test operation, the input parameter information, and the output parameter information from the structured flow of the description file.
The output parameters of the test method corresponding to the library method test operation are automatically used as global variables of the case script program and can be used in subsequent basic units of the structured flow.
Call the test method of the corresponding library method test operation in the test library set in a function call manner.
When the library method test case is generated, the test method of the corresponding library method test operation in the test library set is called in a function call manner to generate the code.
It should be noted that, for the invocation of a library method, besides the standard function call semantics, the number of calls and the calling manner may be constrained to match the requirements of reliability testing. The number of calls is the number of times the library method is to be executed; once determined, inner-loop calling code for the specified number of iterations can be generated automatically, realizing repeated invocation of the method. The function calling manner includes synchronous calls and asynchronous calls. For a synchronous call, the constructed code waits in the current thread for the call result to return. For an asynchronous call, code is generated automatically using features such as thread management and signal handling: the library method is called in a new thread, and the execution result of the asynchronously called library method is notified to the main thread through a signal. The points at which the main thread waits for the asynchronous call to return can be: 1. before the next synchronous library method is executed; 2. at the next branch whose judgment logic references the output parameter of the asynchronous call; 3. at the next arbitrary code fragment that references the output parameter of the asynchronous call; 4. if none of these three waiting points exists, at the end flow, waiting before the end flow finishes executing. Whether the call is synchronous or asynchronous, whenever a library method call returns a failure, code must be generated for the calling procedure and the execution flow diverted to the code produced by the end plug-in (that is, the end flow).
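A minimal sketch of the asynchronous calling pattern described above, using a thread plus an event to signal the main thread; the names are assumptions:

import threading

def async_call(library_method, *args):
    """Run a library method in a new thread and signal the main thread on
    completion."""
    done = threading.Event()
    result = {}

    def worker():
        try:
            result["value"] = library_method(*args)
        except Exception as exc:          # a failed call is recorded, too
            result["error"] = exc
        done.set()                        # signal the main thread

    threading.Thread(target=worker, daemon=True).start()
    return done, result

# At one of the waiting points the main thread blocks until the call returns:
# done.wait(); if "error" in result, divert to the end-flow cleanup code.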
For a fault test operation, any fault mode requires fault injection and fault expectation. For fault injection, the test case script injects the corresponding fault mode into the target object of the system under test through the fault injection interface; the fault injection interface generally simulates the fault by software, for example restarting the operating system to simulate a node power-off. For fault expectation, the monitoring library is used to monitor the behavior the system under test produces after fault injection, and the result is compared with the preset service KPI or system state indicator to determine whether the system under test is abnormal.
Service KPI indicators are defined from the perspective of whether the service is damaged; the test case calls the service KPI fault expectation interface to judge whether the current service is normal. The interface acquires the service KPI indicator from the target system under test and judges whether the service is normal. The KPI indicator type and the judgment logic are bound to the fault mode and built into the interface implementation code. Typical service KPI indicators include the error rate, latency, and throughput of request processing. The specific implementation of the interface comprises the following steps, sketched in code after the list:
1. Set a time window T1;
2. Let the current time be ct; during the period (ct, ct + T1), acquire the current value of the service KPI indicator at a preset sampling interval and record it into a time series data list L;
3. After the last sample is taken, process the list L with general numerical analysis techniques as follows:
3a. Statistical analysis: calculate the mean, extreme values, percentiles, and so on, and judge whether the service is abnormal based on thresholds;
3b. Anomaly detection: check whether the KPI fluctuates sharply, and judge whether the service is abnormal on that basis;
3c. Trend analysis: judge whether the KPI shows a trend change using techniques such as regression analysis, and judge whether the service is abnormal based on the trend.
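A minimal sketch of steps 1 to 3 above, reusing the kpi_abnormal judgment sketched earlier; read_kpi, the window, and the sampling interval are assumed parameters:

import time

def kpi_fault_expectation(read_kpi, window_t1=60.0, interval=5.0, **thresholds):
    """Sample the service KPI over (ct, ct + T1), then analyze the list L.
    read_kpi is an assumed callable returning the current KPI value."""
    samples = []                              # time series data list L
    deadline = time.time() + window_t1
    while time.time() < deadline:
        samples.append(read_kpi())            # step 2: periodic sampling
        time.sleep(interval)
    # step 3: statistical / fluctuation / trend analysis (see kpi_abnormal above)
    return kpi_abnormal(samples, **thresholds)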
It can be seen that the implementation of this interface is a detection process performed while the system under test executes the test case and the fault occurs in it. If, during detection, the system under test does not produce the corresponding KPI indicator change or perform the corresponding operation according to the fault expectation after the fault occurs, it is determined that the system under test has a functional abnormality, and the result of the functional abnormality is returned, indicating that a developer needs to make modifications.
For example, suppose a first functional entity and a second functional entity interact over a network: the first functional entity sends an information stream, such as web page content (text, pictures, animation, and the like), to the second functional entity. If a packet-loss fault is injected that causes the system under test to drop 10% of packets, the integrity of the web page content is monitored. Suppose the fault expectation is that a 10% packet-loss rate corresponds to 95% of the web page content being displayed. If, over several monitoring periods, only 80% of the web page content is displayed, far below the expected 95%, the system under test is considered not to meet the fault expectation; the function is judged abnormal and a result is returned.
System state class indicators are defined from the perspective of whether the system state and behavior conform to expectations; the test case calls the system state class fault expectation interface to judge whether the system state is normal. The interface obtains behavior and state data from the system under test and determines whether they meet expectations. In particular, some system state data, such as CPU usage, can be obtained directly from the test monitoring module in the test apparatus, which in turn collects the data through the test agent installed on the nodes of the system under test. Typical system state class fault expectations include whether the system raises a fault-related alarm, and whether an active/standby redundant system switches to the standby node after the active node fails. The interface is implemented as follows (a sketch is given after the list):
1) setting a time window T2;
2) setting the current time as ct, acquiring the system state information at the time point ct + T2, and judging whether the system state and behavior meet expectations according to preset logic;
3) returning the judgment result.
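A corresponding sketch, with get_state and is_expected standing in for the monitoring data source and the preset decision logic of the fault mode, might be:

    import time

    def expect_system_state(get_state, is_expected, window_t2=30.0):
        # Step 2: wait until ct + T2, then fetch the state once, e.g. through
        # the test monitoring module and the test agent on the tested node.
        time.sleep(window_t2)
        state = get_state()  # alarms, active/standby role, CPU usage, ...
        return is_expected(state)  # step 3: return the judgment result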
It can be seen that the implementation of this interface is likewise a detection process performed while the system under test executes the test case and the injected fault is present. During detection, if the system under test does not produce the system state change that the fault expectation predicts after the fault occurs, the system under test is judged to have a functional abnormality, and a functional-abnormality result is returned, indicating that developers need to make modifications.
For another example, consider a memory leak fault. The fault manifests as the remaining memory decreasing steadily even though the memory occupied by the system is essentially fixed. The fault expectation is that the system releases memory once the remaining memory falls to a certain value. If, after the memory leak fault is injected, the system under test does not release memory when the remaining memory falls to that value, but drops below it, or never performs the release operation, the system under test is judged to have a functional abnormality and a functional-abnormality result is returned.
For another example, for nodes with an active/standby function, when a fault is injected that brings the active node down, whether the system switches from the active node to the standby node within a preset time period is monitored. The fault expectation is that the switchover occurs within a first time period. If the system under test does not perform the switchover within the first time period, or never performs it, the system under test is judged to have a functional abnormality and a functional-abnormality result is returned.
405. And generating the target test case according to the test flow and the test method.
After the test flow is determined and the test methods are called, the target test case can be generated from this information. For a library method test operation, the corresponding test method in the test library set is called in a function-call manner, a loop of the specified number of iterations is generated according to the number of invocations, and the invocation mode, such as synchronous or asynchronous calling, is determined. Finally, code is generated that embodies the function call and the number of loop iterations.
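An illustrative sketch of this generation step follows; the template text and the end_flow hook in the emitted snippet are assumptions, since the patent does not fix a template language:

    SYNC_CALL_TEMPLATE = (
        "for _ in range({times}):          # generated inner loop\n"
        "    ret = {method}({args})        # synchronous library call\n"
        "    if not ret:                   # failure diverts to the end flow\n"
        "        return end_flow(ret)\n"
    )

    def emit_sync_call(method, args, times=1):
        # Emit the code fragment for one library method test operation.
        return SYNC_CALL_TEMPLATE.format(times=times, method=method, args=args)

    # e.g. emit_sync_call("create_volume", "size_gb=10", times=3)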
For a fault test operation, the fault injection and fault expectation code of each test case is generated as follows. Code is created that calls the fault injection interface synchronously to inject the fault; if the interface returns a failure, the generated code diverts the flow to the end unit and executes the code generated by the end plug-in. Because fault injection completes quickly, the fault injection interface can be called synchronously. The fault expectation interface of the service KPI class is called asynchronously, and since an asynchronous call must be awaited at the end of the structured flow, the code that waits for its return is generated by the end plug-in; the fault expectation interface of the system state class is likewise called asynchronously and awaited in the same way.
It should be noted that, for a fault test operation, if calling the fault injection interface or a fault expectation interface fails, the operation is handled like a library method test operation: the calling code is generated and the execution flow is diverted to the code generated by the end plug-in.
It should be noted that, besides the code generated for library method test operations, code segments defined in the description file may also be added to the target test case script. Before such a segment is added, however, it must undergo a syntax check and a degree of semantic analysis; once these pass, code containing the segment is generated and may appear in the target test case. The semantic analysis may include: checking whether variables in the code segment conflict with global variables, and if so, failing the automatic creation of the test script and returning an error; checking whether the code segment references unidentifiable global variables, and if so, failing the automatic creation of the test script and returning an error; and checking whether the code segment references an output parameter of an asynchronously called test library method in a preceding unit of the structured flow, and if so, generating code that waits for that method call to return.
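Assuming the test case scripts are Python, the syntax check and the two variable checks could be sketched with the standard ast module; the helper name and the problem strings are illustrative:

    import ast
    import builtins

    def check_code_segment(segment, known_globals):
        # Syntax check plus the two variable checks described above; an
        # empty list means the segment may be inserted into the script.
        try:
            tree = ast.parse(segment)
        except SyntaxError as err:
            return ["syntax error: %s" % err]
        assigned = {n.id for n in ast.walk(tree)
                    if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Store)}
        used = {n.id for n in ast.walk(tree)
                if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)}
        problems = ["conflicts with a global variable: " + v
                    for v in sorted(assigned & known_globals)]
        unknown = used - assigned - known_globals - set(dir(builtins))
        problems += ["unidentifiable global variable: " + v
                     for v in sorted(unknown)]
        return problems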
It should be noted that creating the code part of the test case may include:
creating first code that calls the fault injection interface in a synchronous manner;
creating a second code for calling the service function of the tested system corresponding to the failure mode;
creating a third code for calling a failure anticipation interface of the service KPI class in an asynchronous calling mode; and/or,
creating fourth code of a failure anticipation interface for calling the system state class in an asynchronous calling mode;
and creating fifth code for monitoring the failure mode by calling a monitoring interface of a monitoring library.
It can be seen that generating the target test case may involve three cases. In the first case, only the fault expectation of the service KPI class is included, and the target test case may comprise the first code, the second code, the third code and the fifth code. In the second case, only the fault expectation of the system state class is included, and the target test case may comprise the first code, the second code, the fourth code and the fifth code. In the third case, the fault expectations of both the service KPI class and the system state class are included, and the target test case may comprise the first, second, third, fourth and fifth codes.
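Reusing the sketches above, a generated target test case for the first case (service KPI expectation only) might read as follows; end_flow, sample_error_rate, run_business_traffic and start_failure_monitor are placeholders:

    def target_test_case():
        # First code: call the fault injection interface synchronously.
        if not inject_node_power_off("node-1"):
            return end_flow(failed="fault injection call failed")

        # Third code: call the service KPI fault expectation interface
        # asynchronously; its return is awaited in the end flow.
        kpi_check = AsyncCall(expect_service_kpi, sample_error_rate)

        # Second code: drive the business function tied to the fault mode.
        run_business_traffic()

        # Fifth code: monitor the fault mode through the monitoring library.
        start_failure_monitor("node_power_off")

        # End flow (code generated by the end plug-in): wait for the
        # asynchronous expectation, clean up, and report the result.
        return end_flow(kpi_ok=kpi_check.wait())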
The end plug-in is the last unit of the whole structured flow and mainly performs ending operations. The end plug-in generates cleanup code to avoid resource leaks. In addition, as described above, the end plug-in handles the following situations:
if synchronous calling failure of the test method exists in the preorder flow, generating a code, returning the execution failure of the whole test case, and attaching the information of calling failure of the test library, including the name of the test method, the entry parameter and the return value.
If an asynchronous test library call that has not yet returned exists in the preceding flow, code is generated that waits for and judges the return code; if the call failed, the execution of the whole test case returns failure, attaching the test library call failure information, including the test method name, input parameters and return value. In particular, as indicated above, the return values of calls to the fault expectation interfaces of the service KPI class and the system state class are both awaited and processed here.
If no test library method call failed, the end plug-in generates code that returns success for the execution of the test case.
It should be noted that different ending logics may be designed for the end plug-in. One logic waits until all preceding flows, including asynchronous and synchronous calls, have completed, collects all results returned during test case generation, and returns test case generation failure information containing all results of the preceding flows. The other logic jumps to the end plug-in as soon as any call fails and returns test case generation failure information containing only the result of that failed call. Either logic can be chosen according to the actual situation: for a system under test about to go live, failure information is generated immediately once a problem is found, so the problem can be handled quickly; if the system needs comprehensive testing, detection proceeds by waiting for all preceding flows, including asynchronous and synchronous calls, to complete, collecting all returned results during test case generation, and then returning the failure information.
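The two policies can be sketched as follows; the result shapes and the cleanup_resources helper are assumptions:

    def end_flow_fail_fast(failed_result):
        # First logic: jump here on the first failed call and report only
        # that failure, so problems surface immediately.
        cleanup_resources()  # generated cleanup code, avoids resource leaks
        return {"status": "failed", "detail": failed_result}

    def end_flow_collect_all(sync_results, pending_async_calls):
        # Second logic: wait for every pending asynchronous call, then
        # report all collected results for a comprehensive test run.
        results = list(sync_results)
        results += [call.wait() for call in pending_async_calls]
        cleanup_resources()
        failed = [r for r in results if not r.get("ok", True)]
        return {"status": "failed" if failed else "passed",
                "results": results}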
For example, after the structured flow shown in fig. 5 is processed according to steps 401 to 405 shown in fig. 4, the test case script shown in fig. 7 and fig. 8 is generated; fig. 7 and fig. 8 are schematic diagrams of the test case script generated by the test case processing method according to the embodiment of the present application, and together they form the test case script for the structured flow shown in fig. 5.
The above has described the test case processing method of the embodiment of the present application; the following describes a server of the embodiment of the present application. Referring to fig. 9, fig. 9 is a diagram of an embodiment of a server in the embodiment of the present application, where the server may include:
a transceiver module 901, configured to receive a description file sent by a master console, where the description file is used to describe a structured flow of one or a group of test cases;
a processing module 902, configured to determine, according to the description file, a test operation of a target test case and a test flow of the test operation;
the processing module 902 is further configured to invoke a test method corresponding to the test operation in a test library set, and generate the target test case according to the test flow and the test method, where the test method corresponding to the test operation is stored in the test library set.
The transceiver module 901 implements step 401 in fig. 4, and the processing module 902 can implement steps 402 to 405 in fig. 4, and the specific functions of the transceiver module 901 and the processing module 902 may refer to the embodiment shown in fig. 4, which is not described herein again.
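The module split can be pictured with a minimal sketch; parse_test_operations, parse_test_flow and assemble_script are placeholder helpers, not the patent's implementation:

    class TestCaseServer:
        # Sketch of the server of fig. 9: a transceiver module that receives
        # the description file and a processing module that builds the case.
        def __init__(self, test_library_set):
            self.test_library_set = test_library_set  # fault/service/monitoring

        def on_description_file(self, description_file):
            # Transceiver module 901: receive the structured-flow description.
            return self.generate_target_test_case(description_file)

        def generate_target_test_case(self, description_file):
            # Processing module 902: steps 402 to 405 of fig. 4.
            operations = parse_test_operations(description_file)  # step 402
            flow = parse_test_flow(description_file)              # step 403
            methods = [self.test_library_set[op] for op in operations]  # 404
            return assemble_script(flow, methods)                 # step 405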
Optionally, the test library set includes a fault library, a service library and a monitoring library, the test operation includes a library method test operation, the library method test operation includes a fault test operation, and the test method includes a fault test method; the fault test operation corresponds to the fault test method, and the processing module 902 is specifically configured to:
determining fault modes according to the description file, wherein different fault scenes are stored in the fault library, and each fault scene corresponds to at least one fault mode;
and calling fault testing methods corresponding to the fault testing operation from the fault library according to the fault modes, wherein each fault mode corresponds to one fault testing method, and the fault testing methods stored in the fault library are realized by adopting the same or different computer languages.
The functions of the processing module 902 can refer to the embodiment shown in fig. 4, and are not described herein again.
Optionally, each of the failure modes includes a failure injection interface having a failure injection capability and a failure anticipation interface providing failure anticipation judgment, where the failure injection interface is configured to provide a reusable failure injection method for the failure mode, and the failure anticipation interface is configured to provide a failure anticipation judgment method of failure anticipation judgment logic for the failure mode.
For a detailed description of the failure mode, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, indexes are set in the fault library corresponding to the fault mode, the indexes include service key performance indicator (KPI) class indexes and system state class indexes, and the fault expected interface includes a fault expected interface of the service KPI class and a fault expected interface of the system state class; the fault expectation interface of the service KPI class is used for determining the fault expectation of the service KPI class through a service damage value when the fault is injected into the tested system, and the service KPI class index comprises a preset service damage threshold value; the system state class fault expectation interface is used for determining the fault expectation of the system state class by judging whether the system state and the system behavior conform to the preset system state and the preset system behavior, and the system state class index comprises the preset system state or the preset system behavior.
For a detailed description of the index, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, the processing module 902 is specifically configured to:
creating first code that calls the fault injection interface in a synchronous manner;
creating a second code for calling the service function of the tested system corresponding to the failure mode;
creating a third code for calling a failure anticipation interface of the service KPI class in an asynchronous calling mode; and/or,
creating fourth code of a failure anticipation interface for calling the system state class in an asynchronous calling mode;
creating a fifth code for monitoring the failure mode by calling a monitoring interface of a monitoring library;
and generating the target test case according to the test step, the created first code, the created second code, the created fifth code, and the created third code and/or the created fourth code.
For a detailed description of the processing module 902, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, the processing module 902 is further configured to:
if the calling of the fault injection interface fails, executing a code generated by an ending operation, wherein the ending operation is used for ending code creation; or,
if the failure anticipation interface of the service KPI class or the failure anticipation interface of the system state class is called in an asynchronous calling mode, the ending operation generates a code waiting for the return of the asynchronous calling.
For a detailed description of the processing module 902, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, the processing module 902 is further configured to:
and executing the target test case.
For a detailed description of the processing module 902, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, the target test case includes a failure test operation, and the processing module 902 is specifically configured to:
calling a fault injection interface of a fault mode corresponding to the fault test operation to inject the fault mode into the tested system;
collecting samples of the service KPI indicator at intervals of a first preset time length within a first preset time window;
and analyzing the sample set collected in the preset time window to generate a fault expected result of the service KPI of the fault mode.
For a detailed description of the processing module 902, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, the processing module 902 is specifically configured to:
calculating at least one of a mean value, an extreme value and a percentile of the samples in the sample set, and determining that the service is abnormal and generating a fault expected result when the at least one of the mean value, the extreme value and the percentile exceeds at least one of a preset mean value threshold, an extreme value threshold and a percentile threshold; or,
analyzing the fluctuation of the samples, determining that the service is abnormal when the fluctuation amplitude of the samples is larger than a preset fluctuation amplitude, and generating a fault expected result; or,
and analyzing the variation trend of the samples, determining that the service is abnormal when the variation trend exceeds a preset variation trend, and generating a fault expected result.
For a detailed description of the processing module 902, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, the target test case includes a failure test operation, and the processing module 902 is specifically configured to:
calling a fault injection interface of a fault mode corresponding to the fault test operation to inject the fault mode into a tested system;
collecting the system state information at a first moment, wherein the first moment is the moment when a second preset time window passes from the current moment;
and when the system state information exceeds the preset system state and system behavior expectation, determining that the service is abnormal and generating a fault expectation result.
For a detailed description of the processing module 902, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, the test operation includes a library method test operation, and the processing module 902 is specifically configured to:
extracting library method test operation, parameter entering information and parameter exiting information from the structured flow of the description file;
calling a test method of the test operation of the corresponding library method in the test library set in a function calling mode;
the processing module is specifically configured to:
and generating the target test case by taking the input parameter information and the output parameter information as parameter information of the calling function according to the test flow.
For a detailed description of the processing module 902, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, the processing module 902 is specifically configured to:
determining the library method, the cycle number of the library method, the parameter entering information and the parameter exiting information according to the structured flow;
the processing module is specifically configured to:
generating an internal loop calling code corresponding to the loop times according to the loop times of the library method;
the processing module is specifically configured to:
when the call is determined to be a synchronous call, creating a code waiting for the return of a call result in the current thread; or,
when the call is determined to be asynchronous call, generating a code for asynchronous call according to the thread management information and the signal processing information in the asynchronous call;
and generating the target test case according to the code generated by the synchronous call or the asynchronous call and by taking the input parameter information and the output parameter information as the function parameter information.
For a detailed description of the processing module 902, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, the processing module 902 is further configured to:
acquiring a code segment from the description file;
detecting the grammar of the code segment and analyzing the semantics of the code segment;
the processing module is specifically configured to:
and generating the target test case according to the test flow, the code segment and the code test method.
For a detailed description of the processing module 902, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, the processing module 902 is specifically configured to:
determining a flow direction of a test flow of the test operation, the flow direction including a forward direction and a reverse direction, and generating code of the test flow.
For a detailed description of the processing module 902, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, when the test flow of the test operation has at least two parallel branches, the processing module 902 is specifically configured to:
determining a flow direction of each branch of a test flow of the test operation, and generating code of the flow direction of each branch of the test flow.
For a detailed description of the processing module 902, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, the processing module 902 is further configured to:
acquiring information of the flow direction of the test flow;
creating a code for a test flow when each flow direction of the test flow is fanned into the same converged test operation.
For a detailed description of the processing module 902, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Optionally, the processing module 902 is further configured to:
and when calling the test method corresponding to the test operation in the test library set fails, or generating the target test case according to the test flow and the test method fails, returning test case generation failure information, wherein the test case generation failure information comprises the test method call failure information.
For a detailed description of the processing module 902, reference may be made to the embodiment shown in fig. 4, which is not described herein again.
Having described the server of the embodiments of the present application, the structure of the server of the embodiments of the present application is described below. Referring to fig. 10, fig. 10 is a diagram of an embodiment of the server of the embodiments of the present application, where the server 10 may include at least one processor 1002, at least one transceiver 1001 and a memory 1003 connected to one another. The server of the embodiments of the present application may have more or fewer components than those shown in fig. 10, may combine two or more components, or may have a different configuration or arrangement of components, and each component may be implemented in hardware, software, or a combination of hardware and software including one or more signal processing and/or application specific integrated circuits.
Specifically, for the embodiment shown in fig. 9, the processor 1002 can implement the function of the processing module 902 in the embodiment shown in fig. 9, the transceiver 1001 can implement the function of the transceiver module 901 of the server in the embodiment shown in fig. 9, and the memory 1003 is used to store program instructions; the test case processing method in the embodiment shown in fig. 4 is implemented by executing the program instructions.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website, computer, server, or data center to another website, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium can be any available medium that a computer can access, or a data storage device, such as a server or a data center, integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (34)

1. A method for processing a test case is characterized by comprising the following steps:
receiving a description file sent by a master console, wherein the description file is used for describing a structured flow of one or a group of test cases;
determining the test operation of the target test case according to the description file;
determining a test flow of the test operation according to the description file;
calling a test method corresponding to the test operation in a test library set, wherein the test method corresponding to the test operation is stored in the test library set;
generating the target test case according to the test flow and the test method;
acquiring information of the flow direction of the test flow;
creating a code for a test flow when each flow direction of the test flow is fanned into the same converged test operation.
2. The method for processing the test case according to claim 1, wherein the test library set comprises a fault library, a service library and a monitoring library, the test operation comprises a library method test operation, the library method test operation comprises a fault test operation, and the test method comprises a fault test method; the fault test operation corresponds to the fault test method, and the calling of the test method corresponding to the test operation in the test library set comprises:
determining fault modes according to the description file, wherein different fault scenes are stored in the fault library, and each fault scene corresponds to at least one fault mode;
and calling fault testing methods corresponding to the fault testing operation from the fault library according to the fault modes, wherein each fault mode corresponds to one fault testing method, and the fault testing methods stored in the fault library are realized by adopting the same or different computer languages.
3. The method for processing the test case according to claim 2, wherein each of the failure modes includes a failure injection interface having a failure injection capability and a failure anticipation interface providing failure anticipation judgment, the failure injection interface is configured to provide a reusable failure injection method for the failure mode, and the failure anticipation interface is configured to provide a failure anticipation judgment method of failure anticipation judgment logic for the failure mode.
4. The method for processing the test case according to claim 3, wherein indexes are set in the fault library corresponding to the fault modes, the indexes include service key performance indicator (KPI) class indexes and system state class indexes, and the fault expected interfaces include a fault expected interface of the service KPI class and a fault expected interface of the system state class; the fault expectation interface of the service KPI class is used for determining the fault expectation of the service KPI class through a service damage value when the fault is injected into the tested system, and the service KPI class index comprises a preset service damage threshold value; the system state class fault expectation interface is used for determining the fault expectation of the system state class by judging whether the system state and the system behavior conform to the preset system state and the preset system behavior, and the system state class index comprises the preset system state or the preset system behavior.
5. The method for processing the test case according to claim 4, wherein the generating the target test case according to the test flow and the test method comprises:
creating first code that calls the fault injection interface in a synchronous manner;
creating a second code for calling the service function of the tested system corresponding to the failure mode;
creating a third code for calling a failure anticipation interface of the service KPI class in an asynchronous calling mode; and/or,
creating fourth code of a failure anticipation interface for calling the system state class in an asynchronous calling mode;
creating a fifth code for monitoring the failure mode by calling a monitoring interface of a monitoring library;
and generating the target test case according to the test steps, the created first code, the created second code, the created fifth code, the created third code and/or the created fourth code.
6. The method for processing the test case according to claim 5, wherein the method further comprises:
if the calling of the fault injection interface fails, executing a code generated by an ending operation, wherein the ending operation is used for ending code creation; or,
if the failure anticipation interface of the service KPI class or the failure anticipation interface of the system state class is called in an asynchronous calling mode, the ending operation generates a code waiting for the return of the asynchronous calling.
7. The method for processing the test case according to any one of claims 4 to 6, wherein the method further comprises:
and executing the target test case.
8. The method for processing the test case according to claim 7, wherein the target test case includes a failure test operation, and the executing the target test case includes:
calling a fault injection interface of a fault mode corresponding to the fault test operation to inject the fault mode into the tested system;
collecting samples of the service KPI indicator at intervals of a first preset time length within a first preset time window;
and analyzing the sample set collected in the preset time window to generate a fault expected result of the service KPI of the fault mode.
9. The method for processing the test case according to claim 8, wherein the analyzing the sample set collected in the preset time window and generating the expected failure result of the KPI class comprises:
calculating at least one of a mean value, an extreme value and a percentile of the samples in the sample set, and determining that the service is abnormal and generating a fault expected result when the at least one of the mean value, the extreme value and the percentile exceeds at least one of a preset mean value threshold, an extreme value threshold and a percentile threshold; or,
analyzing the fluctuation of the samples, determining that the service is abnormal when the fluctuation amplitude of the samples is larger than a preset fluctuation amplitude, and generating a fault expected result; or,
and analyzing the variation trend of the samples, determining that the service is abnormal when the variation trend exceeds a preset variation trend, and generating a fault expected result.
10. The method for processing the test case according to claim 7, wherein the target test case includes a failure test operation, and the executing the target test case includes:
calling a fault injection interface of a fault mode corresponding to the fault test operation to inject the fault mode into a tested system;
collecting the system state information at a first moment, wherein the first moment is the moment when a second preset time window passes from the current moment;
and when the system state information exceeds the preset system state and system behavior expectation, determining that the service is abnormal and generating a fault expectation result.
11. The method for processing the test case according to claim 1, wherein the test operation includes a library method test operation, and the invoking of the test method corresponding to the test operation in the test library set includes:
extracting library method test operation, parameter entering information and parameter exiting information from the structured flow of the description file;
calling a test method of the test operation of the corresponding library method in the test library set in a function calling mode;
the generating the target test case according to the test flow and the test method comprises:
and generating the target test case by taking the input parameter information and the output parameter information as parameter information of the calling function according to the test flow.
12. The method for processing the test case according to claim 11, wherein the extracting of the library method test operation, the parameter entering information and the parameter exiting information from the structured flow of the description file comprises:
determining the library method, the cycle number of the library method, the parameter entering information and the parameter exiting information according to the structured flow;
the calling of the test method of the test operation of the corresponding library method in the test library set in a function calling mode comprises the following steps:
generating an internal loop calling code corresponding to the loop times according to the loop times of the library method;
the generating the target test case by using the parameter entering information and the parameter exiting information as the function parameter information includes:
when the call is determined to be a synchronous call, creating a code waiting for the return of a call result in the current thread; or,
when the call is determined to be asynchronous call, generating a code for asynchronous call according to the thread management information and the signal processing information in the asynchronous call;
and generating the target test case according to the code generated by the synchronous call or the asynchronous call and by taking the input parameter information and the output parameter information as the function parameter information.
13. The method for processing the test case according to claim 1, wherein the method further comprises:
acquiring a code segment from the description file;
detecting the grammar of the code segment and analyzing the semantics of the code segment;
the generating the target test case according to the test flow and the test method comprises:
and generating the target test case according to the test flow, the code segment and the test method.
14. The method for processing the test case according to any one of claims 1 to 6 and 11 to 13, wherein the determining the test flow of the test operation according to the description file includes:
determining a flow direction of a test flow of the test operation, the flow direction including a forward direction and a reverse direction, and generating code of the test flow.
15. The method for processing the test case according to claim 14, wherein when the test flow of the test operation has at least two parallel branches, the determining the flow direction of the test flow of the test operation comprises:
determining a flow direction of each branch of a test flow of the test operation, and generating code of the flow direction of each branch of the test flow.
16. The method for processing the test case according to any one of claims 1 to 6 and 11 to 13, wherein the method further comprises:
and when calling the test method corresponding to the test operation in the test library set fails, or generating the target test case according to the test flow and the test method fails, returning test case generation failure information, wherein the test case generation failure information comprises the test method calling failure information.
17. A server, comprising:
the system comprises a receiving and sending module, a test case and a test data processing module, wherein the receiving and sending module is used for receiving a description file sent by a master control console, and the description file is used for describing a structured flow of one or a group of test cases;
the processing module is used for determining the test operation of the target test case and the test flow of the test operation according to the description file;
the processing module is further configured to call a test method corresponding to the test operation in a test library set, and generate the target test case according to the test flow and the test method, where the test method corresponding to the test operation is stored in the test library set;
the processing module is further configured to:
acquiring information of the flow direction of the test flow;
creating a code for a test flow when each flow direction of the test flow is fanned into the same converged test operation.
18. The server according to claim 17, wherein the set of test libraries includes a failure library, a service library, and a monitoring library, the test operations include library method test operations, the library method test operations include failure test operations, and the test methods include failure test methods; the fault test operation corresponds to the fault test method, and the processing module is specifically configured to:
determining fault modes according to the description file, wherein different fault scenes are stored in the fault library, and each fault scene corresponds to at least one fault mode;
and calling fault testing methods corresponding to the fault testing operation from the fault library according to the fault modes, wherein each fault mode corresponds to one fault testing method, and the fault testing methods stored in the fault library are realized by adopting the same or different computer languages.
19. The server according to claim 18, wherein each of the failure modes comprises a failure injection interface having failure injection capability and a failure anticipation interface providing failure anticipation judgment, the failure injection interface being configured to provide a reusable failure injection method for the failure mode, and the failure anticipation interface being configured to provide a failure anticipation judgment method of failure anticipation judgment logic for the failure mode.
20. The server according to claim 19, wherein indexes are set in the fault library corresponding to the fault modes, the indexes include service key performance indicator (KPI) class indexes and system state class indexes, and the fault anticipation interfaces include a fault anticipation interface of the service KPI class and a fault anticipation interface of the system state class; the fault expectation interface of the service KPI class is used for determining the fault expectation of the service KPI class through a service damage value when the fault is injected into the tested system, and the service KPI class index comprises a preset service damage threshold value; the system state class fault expectation interface is used for determining the fault expectation of the system state class by judging whether the system state and the system behavior conform to the preset system state and the preset system behavior, and the system state class index comprises the preset system state or the preset system behavior.
21. The server according to claim 20, wherein the processing module is specifically configured to:
creating first code that calls the fault injection interface in a synchronous manner;
creating a second code for calling the service function of the tested system corresponding to the failure mode;
creating a third code for calling a failure anticipation interface of the service KPI class in an asynchronous calling mode; and/or,
creating fourth code of a failure anticipation interface for calling the system state class in an asynchronous calling mode;
creating a fifth code for monitoring the failure mode by calling a monitoring interface of a monitoring library;
and generating the target test case according to the test steps, the created first code, the created second code, the created fifth code, the created third code and/or the created fourth code.
22. The server according to claim 21, wherein the processing module is further configured to:
if the calling of the fault injection interface fails, executing a code generated by an ending operation, wherein the ending operation is used for ending code creation; or,
if the failure anticipation interface of the service KPI class or the failure anticipation interface of the system state class is called in an asynchronous calling mode, the ending operation generates a code waiting for the return of the asynchronous calling.
23. The server according to any one of claims 20 to 22, wherein the processing module is further configured to:
and executing the target test case.
24. The server according to claim 23, wherein the target test case includes a failure test operation, and the processing module is specifically configured to:
calling a fault injection interface of a fault mode corresponding to the fault test operation to inject the fault mode into the tested system;
collecting samples of the service KPI indicator at intervals of a first preset time length within a first preset time window;
and analyzing the sample set collected in the preset time window to generate a fault expected result of the service KPI of the fault mode.
25. The server according to claim 24, wherein the processing module is specifically configured to:
calculating at least one of a mean value, an extreme value and a percentile of the samples in the sample set, and determining that the service is abnormal and generating a fault expected result when the at least one of the mean value, the extreme value and the percentile exceeds at least one of a preset mean value threshold, an extreme value threshold and a percentile threshold; or,
analyzing the fluctuation of the samples, determining that the service is abnormal when the fluctuation amplitude of the samples is larger than a preset fluctuation amplitude, and generating a fault expected result; or,
and analyzing the variation trend of the samples, determining that the service is abnormal when the variation trend exceeds a preset variation trend, and generating a fault expected result.
26. The server according to claim 23, wherein the target test case includes a failure test operation, and the processing module is specifically configured to:
calling a fault injection interface of a fault mode corresponding to the fault test operation to inject the fault mode into a tested system;
collecting the system state information at a first moment, wherein the first moment is the moment when a second preset time window passes from the current moment;
and when the system state information exceeds the preset system state and system behavior expectation, determining that the service is abnormal and generating a fault expectation result.
27. The server according to claim 17, wherein the test operation comprises a library method test operation, and the processing module is specifically configured to:
extracting library method test operation, parameter entering information and parameter exiting information from the structured flow of the description file;
calling a test method of the test operation of the corresponding library method in the test library set in a function calling mode;
and generating the target test case by taking the input parameter information and the output parameter information as parameter information of the calling function according to the test flow.
28. The server according to claim 27, wherein the processing module is specifically configured to:
determining the library method, the cycle number of the library method, the parameter entering information and the parameter exiting information according to the structured flow;
generating an internal loop calling code corresponding to the loop times according to the loop times of the library method;
when the call is determined to be a synchronous call, creating a code waiting for the return of a call result in the current thread; or,
when the call is determined to be asynchronous call, generating a code for asynchronous call according to the thread management information and the signal processing information in the asynchronous call;
and generating the target test case according to the code generated by the synchronous call or the asynchronous call and by taking the input parameter information and the output parameter information as the function parameter information.
29. The server according to claim 17, wherein the processing module is further configured to:
acquiring a code segment from the description file;
detecting the grammar of the code segment and analyzing the semantics of the code segment;
and generating the target test case according to the test flow, the code segment and the test method.
30. The server according to any one of claims 17 to 22 and 27 to 29, wherein the processing module is specifically configured to:
determining a flow direction of a test flow of the test operation, the flow direction including a forward direction and a reverse direction, and generating code of the test flow.
31. The server according to claim 30, wherein when the test flow of the test operation has at least two parallel branches, the processing module is specifically configured to:
determining a flow direction of each branch of a test flow of the test operation, and generating code of the flow direction of each branch of the test flow.
32. The server according to any one of claims 17 to 22 and 27 to 29, wherein the processing module is further configured to:
and when calling the test method corresponding to the test operation in the test library set fails, or generating the target test case according to the test flow and the test method fails, returning test case generation failure information, wherein the test case generation failure information comprises the test method calling failure information.
33. A computer-readable storage medium comprising instructions that, when executed on a computer, cause the computer to perform the method for processing test cases of any of claims 1-16.
34. A computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of processing test cases according to any one of claims 1 to 16.
CN201710924534.7A 2017-09-30 2017-09-30 Test case processing method and server Active CN107704392B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710924534.7A CN107704392B (en) 2017-09-30 2017-09-30 Test case processing method and server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710924534.7A CN107704392B (en) 2017-09-30 2017-09-30 Test case processing method and server

Publications (2)

Publication Number Publication Date
CN107704392A CN107704392A (en) 2018-02-16
CN107704392B true CN107704392B (en) 2021-05-18

Family

ID=61184534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710924534.7A Active CN107704392B (en) 2017-09-30 2017-09-30 Test case processing method and server

Country Status (1)

Country Link
CN (1) CN107704392B (en)

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108563579B (en) * 2018-04-23 2021-06-18 Suzhou Keda Technology Co., Ltd. White-box testing method, device, system and storage medium
CN110399284A (en) * 2018-04-24 2019-11-01 China Mobile (Hangzhou) Information Technology Co., Ltd. Test case writing and execution method and device
CN108763089B (en) * 2018-05-31 2022-04-22 New H3C Security Technologies Co., Ltd. Test method, device and system
CN109344061B (en) * 2018-09-25 2022-09-16 Advanced New Technologies Co., Ltd. Method, device, equipment and system for detecting interface anomalies
CN111427760B (en) * 2019-01-09 2024-05-28 Alibaba Group Holding Ltd. Page test method, device, equipment and storage medium
CN109800167B (en) * 2019-01-17 2022-06-21 Wangsu Science & Technology Co., Ltd. Test method, test client and test system
CN109977017B (en) * 2019-03-28 2022-09-02 Beijing Fenbi Lantian Technology Co., Ltd. System performance test case screening method and system
CN110334003A (en) * 2019-05-22 2019-10-15 Liang Junjie Flow design method and related device
CN110727432B (en) * 2019-10-08 2022-04-12 Alipay (Hangzhou) Information Technology Co., Ltd. Risk injection method and system based on target injection object
CN110825618B (en) * 2019-10-10 2024-01-26 Tianhang Changying (Jiangsu) Technology Co., Ltd. Method and related device for generating test cases
CN111522728A (en) * 2019-12-31 2020-08-11 Alipay Labs (Singapore) Pte. Ltd. Method for generating automated test cases, electronic device and readable storage medium
CN113326159B (en) * 2020-02-29 2023-02-03 Huawei Technologies Co., Ltd. Method, apparatus, system and computer-readable storage medium for fault injection
CN111967123B (en) * 2020-06-30 2023-10-27 China Automotive Data Co., Ltd. Method for generating simulation test cases in simulation testing
CN111835590A (en) * 2020-07-03 2020-10-27 Unicloud Technology Co., Ltd. Automated interface test architecture and test method for cloud host products
CN111865719A (en) * 2020-07-17 2020-10-30 Suzhou Inspur Intelligent Technology Co., Ltd. Automated testing method and device for switch fault injection
CN111813605A (en) * 2020-07-20 2020-10-23 Beijing Baidu Netcom Science and Technology Co., Ltd. Disaster recovery method, platform, electronic device, and medium
CN112051835A (en) * 2020-09-08 2020-12-08 Zhejiang Supcon Technology Co., Ltd. DCS redundancy function testing method and device
CN112363908A (en) * 2020-09-16 2021-02-12 Beike Technology Co., Ltd. Asynchronous interface test method, system, electronic device and storage medium
CN112286741A (en) * 2020-10-09 2021-01-29 Hunan Zoomlion Intelligent Technology Co., Ltd. Hardware testing method and device, electronic equipment and storage medium
CN112162927B (en) * 2020-10-13 2024-04-26 NetEase (Hangzhou) Network Co., Ltd. Testing method, medium, device and computing equipment for a cloud computing platform
CN112214411B (en) * 2020-10-20 2024-05-14 Tencent Technology (Shenzhen) Co., Ltd. Disaster recovery system testing method, device, equipment and storage medium
CN112463609B (en) * 2020-11-30 2024-02-09 Chongqing Changan Automobile Co., Ltd. Function test method, device, controller and computer-readable storage medium for lateral control faults of a control system
CN113419952B (en) * 2021-06-22 2023-06-27 China United Network Communications Group Co., Ltd. Cloud service management scenario testing device and method
CN113722214B (en) * 2021-08-16 2024-05-03 Shanghai Chuangmi Shulian Intelligent Technology Development Co., Ltd. Test method, test equipment and test system
CN113704769A (en) * 2021-08-25 2021-11-26 Shenzhen Zhongbo Kechuang Information Technology Co., Ltd. Safety monitoring method, device, equipment and storage medium for a talent management system
CN114116452A (en) * 2021-10-29 2022-03-01 Beijing Dajia Internet Information Technology Co., Ltd. Test case generation method and device, electronic equipment and storage medium
CN115048293A (en) * 2022-06-07 2022-09-13 China Electric Power Research Institute Co., Ltd. Method and system for testing electric energy meter applications on an embedded operating system
CN115242804B (en) * 2022-06-10 2023-07-21 Henan Xinda Wangyu Technology Co., Ltd. Method for detecting random numbers of a mimicry executor
CN116541312B (en) * 2023-07-06 2023-09-22 GAC Aion New Energy Automobile Co., Ltd. Continuous integration test method and system for automotive software
CN116990699B (en) * 2023-07-24 2024-02-06 Beijing Sunwayworld Science and Technology Co., Ltd. New energy battery detection method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1744054A (en) * 2004-08-31 2006-03-08 China UnionPay Co., Ltd. Automated test auxiliary system and corresponding automated software test method
CN102880474A (en) * 2012-10-09 2013-01-16 Wuxi Jiangnan Institute of Computing Technology Test method for parallel source code generation, compilation and driven execution
CN103186460A (en) * 2011-12-30 2013-07-03 Kingdee Software (China) Co., Ltd. Method, device and system for generating test case scripts
CN105306272A (en) * 2015-11-10 2016-02-03 China Construction Bank Corp. Method and system for collecting fault scenario information of an information system

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102110048B (en) * 2009-12-28 2014-07-09 International Business Machines Corp. Regression test selection method and device for framework-based application programs

Also Published As

Publication number Publication date
CN107704392A (en) 2018-02-16

Similar Documents

Publication Publication Date Title
CN107704392B (en) Test case processing method and server
JP7371141B2 (en) Tools and methods for real-time dataflow programming languages
US10102113B2 (en) Software test automation systems and methods
US11169902B2 (en) Techniques for evaluating collected build metrics during a software build process
Nguyen et al. GUITAR: an innovative tool for automated testing of GUI-driven software
US9727407B2 (en) Log analytics for problem diagnosis
Lai A survey of communication protocol testing
US8381184B2 (en) Dynamic test coverage
US9519571B2 (en) Method for analyzing transaction traces to enable process testing
US20110145653A1 (en) Method and system for testing complex machine control software
US9697104B2 (en) End-to end tracing and logging
CN106227654B (en) A kind of test platform
CN103186463B (en) Determine the method and system of the test specification of software
Sun et al. Fault localisation for WS-BPEL programs based on predicate switching and program slicing
CN113505895B (en) Machine learning engine service system, model training method and configuration method
Mao et al. FAUSTA: scaling dynamic analysis with traffic generation at WhatsApp
Abbad-Andaloussi On the relationship between source-code metrics and cognitive load: A systematic tertiary review
CN114840410A (en) Test analysis method and device, computer equipment and storage medium
Matsumoto et al. Service oriented framework for mining software repository
US20220206773A1 (en) Systems and methods for building and deploying machine learning applications
Lima et al. An approach for automated scenario-based testing of distributed and heterogeneous systems
CN115454702A (en) Log fault analysis method and device, storage medium and electronic equipment
Merayo et al. Passive testing of communicating systems with timeouts
Lima et al. Automated testing of distributed and heterogeneous systems based on UML sequence diagrams
Rajarathinam et al. Test suite prioritisation using trace events technique

Legal Events

Date Code Title Description

PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right
  Effective date of registration: 20220215
  Address after: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province
  Patentee after: Huawei Cloud Computing Technology Co.,Ltd.
  Address before: 518129 Bantian HUAWEI headquarters office building, Longgang District, Guangdong, Shenzhen
  Patentee before: HUAWEI TECHNOLOGIES Co.,Ltd.
TR01 Transfer of patent right
  Effective date of registration: 20221212
  Address after: 518000 Huawei Headquarters Office Building 101, Wankecheng Community, Bantian Street, Longgang District, Shenzhen, Guangdong
  Patentee after: Shenzhen Huawei Cloud Computing Technology Co.,Ltd.
  Address before: 550025 Huawei cloud data center, jiaoxinggong Road, Qianzhong Avenue, Gui'an New District, Guiyang City, Guizhou Province
  Patentee before: Huawei Cloud Computing Technology Co.,Ltd.