CN117130945B - Test method and device - Google Patents

Test method and device Download PDF

Info

Publication number
CN117130945B
Authority
CN
China
Prior art keywords
test
failure
fault
data
preset
Prior art date
Legal status
Active
Application number
CN202311403362.0A
Other languages
Chinese (zh)
Other versions
CN117130945A (en)
Inventor
陈林博
颜挺进
何支军
陈带军
焦振海
陈心亮
Current Assignee
China Securities Depository And Clearing Corp ltd
Original Assignee
China Securities Depository And Clearing Corp ltd
Priority date
Filing date
Publication date
Application filed by China Securities Depository And Clearing Corp ltd
Priority to CN202311403362.0A
Publication of CN117130945A
Application granted
Publication of CN117130945B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3684 Test management for test design, e.g. generating new test cases
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • G06F 11/3692 Test management for test results analysis
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 Computing systems specially adapted for manufacturing

Abstract

The invention discloses a testing method and device, relating to the field of computer technology. The method comprises: acquiring a test object so as to select a corresponding reference test case from a reference test library; disassembling the reference test case to obtain a plurality of reference data, and calling a preset fault model to generate the fault data corresponding to each reference datum; for each fault datum: replacing the corresponding reference datum in the reference test case to generate a corresponding fault test case, inputting the fault test case to the test object, and monitoring the corresponding test result; and calling a preset classification model to determine the failure type corresponding to each test result and calculating the corresponding failure characteristic value. The embodiments of the invention can thereby solve the technical problem of the low processing efficiency of existing software robustness testing methods.

Description

Test method and device
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a testing method and apparatus.
Background
At present, robustness testing of software systems is applied in a very wide range of scenarios: various extreme trigger events can be simulated during the development stage of a software system, helping to judge whether the software system at its current stage is ready for delivery.
In the process of implementing the present invention, the inventors found at least the following problems in the prior art:
existing software robustness testing methods generally test the corresponding software operation feedback based on fault injection; however, when generating the corresponding fault test cases, the prior art typically suffers from non-uniform offsets of the injected values and low generation efficiency, which is also unfavorable for the subsequent analysis of software operation data and the localization of software defects.
Disclosure of Invention
In view of the above, embodiments of the present invention provide a testing method and device, which can solve the technical problem of the low processing efficiency of existing software robustness testing methods.
To achieve the above object, according to one aspect of the embodiments of the present invention, there is provided a testing method, comprising: in response to acquiring a test object, selecting a corresponding reference test case from a reference test library;
disassembling the reference test case to obtain a plurality of reference data, and calling a preset fault model to generate the fault data corresponding to each reference datum;
for each fault datum: replacing the corresponding reference datum in the reference test case to generate a corresponding fault test case, inputting the fault test case to the test object, and monitoring the corresponding test result;
and calling a preset classification model to determine the failure type corresponding to each test result, and calculating the corresponding failure characteristic value.
Optionally, calling a preset fault model to generate the fault data corresponding to each reference datum includes:
using the preset fault model, for each reference datum:
judging the data type to which the reference datum belongs;
in response to the reference datum belonging to a first type, determining to perform conversion processing on the reference datum using a preset pseudo-random processing method;
in response to the reference datum belonging to a second type, determining to perform conversion processing on the reference datum using a preset step-change processing method;
so as to obtain the corresponding fault datum.
Optionally, inputting to the test object includes:
determining the plurality of parameter input interfaces of the test object according to the plurality of reference data and the fault data included in the fault test case, so as to inject the fault test case into the test object accordingly.
Optionally, monitoring the corresponding test result includes:
invoking a preset monitoring tool to instrument the test object, so as to implant corresponding monitoring logic into the test object;
and acquiring program running data of the test object based on the monitoring logic, and analyzing them to obtain the corresponding program crash information as the test result.
Optionally, invoking a preset classification model to determine the failure type corresponding to each test result includes:
for each test result: calling the preset classification model to analyze and obtain the corresponding program crash propagation path and program return value, and matching a target failure type among a plurality of preset failure types according to the program crash propagation path and the program return value;
the plurality of failure types include: system crash failure, restart failure, failed failure, silent failure, interference failure, interference success failure, and no failure.
Optionally, after matching the target failure type among the preset plurality of failure types, the method includes:
in response to determining that the target failure type is the failed failure type or the no-failure type, marking the test result as passed;
in response to determining that the target failure type is neither the failed failure type nor the no-failure type, marking the test result as failed;
determining the test unit corresponding to the test result, wherein the test unit is a function or a module included in the test object;
and recording the test result mark and the corresponding test unit into a test record table.
Optionally, calculating the corresponding failure characteristic value includes:
for each test unit:
querying the corresponding total number of tests and number of passed tests according to the test record table, and calculating the corresponding test pass ratio;
querying the importance weight corresponding to the test unit, and multiplying the importance weight by the test pass ratio to obtain the failure characteristic value of the test unit;
and summing the failure characteristic values corresponding to the plurality of test units to obtain the failure characteristic value corresponding to the test object.
In addition, the invention also provides a testing device, comprising: an acquisition module, configured to acquire a test object so as to select a corresponding reference test case from a reference test library; a processing module, configured to disassemble the reference test case to obtain a plurality of reference data, and to call a preset fault model to generate the fault data corresponding to each reference datum; an injection module, configured to, for each fault datum: replace the corresponding reference datum in the reference test case to generate a corresponding fault test case, input the fault test case to the test object, and monitor the corresponding test result; and an analysis module, configured to call a preset classification model to determine the failure type corresponding to each test result, and to calculate the corresponding failure characteristic value.
One embodiment of the above invention has the following advantages or benefits. By acquiring the test object and selecting the corresponding reference test case from the reference test library, a data source for generating a large amount of fault injection data is obtained. By disassembling the reference test case into a plurality of reference data and calling a preset fault model to generate the fault data corresponding to each reference datum, effective and recognizable fault injection data are obtained for every reference datum. Meanwhile, for each fault datum, replacing the corresponding reference datum in the reference test case to generate a fault test case, inputting it to the test object, and monitoring the corresponding test result accomplishes the fault injection and the tracking of the program crash process, yielding the basis for judging program robustness. In addition, by calling a preset classification model to determine the failure type corresponding to each test result and calculating the corresponding failure characteristic value, the invention accurately quantifies the robustness of the program, so that whether the program is ready for delivery can be judged rapidly.
Further effects of the above-described non-conventional alternatives are described below in connection with the embodiments.
Drawings
The drawings are included to provide a better understanding of the invention and are not to be construed as unduly limiting the invention. Wherein:
FIG. 1 is a schematic diagram of the main flow of a test method according to a first embodiment of the present invention;
FIG. 2 is a data relationship diagram of fault data according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a software testing framework according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of the main flow of a test method according to a second embodiment of the present invention;
FIG. 5 is a schematic diagram of the main flow of a test method according to a third embodiment of the present invention;
FIG. 6 is a schematic diagram of the main modules of a test device according to an embodiment of the invention;
FIG. 7 is an exemplary system architecture diagram in which embodiments of the present invention may be applied;
fig. 8 is a schematic diagram of a computer system suitable for use in implementing an embodiment of the invention.
Detailed Description
Exemplary embodiments of the present invention will now be described with reference to the accompanying drawings, in which various details of the embodiments are included to facilitate understanding and are to be considered merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the invention. Likewise, descriptions of well-known functions and constructions are omitted below for clarity and conciseness.
Fig. 1 is a schematic diagram of the main flow of a test method according to a first embodiment of the present invention, and as shown in fig. 1, the test method includes:
step S101, obtaining a test object so as to select and obtain a corresponding reference test case from a reference test library.
In an embodiment, through the processing of this step, a reference test case for performing a software test on the test object is obtained from the reference database. The reference test case is a test data resource that has been verified by testing and enables the test object to operate normally, and it provides the data support for generating fault injection data in the present technical solution. As shown in fig. 2, the processing of this step effectively expands the reference test data included in the reference data (the P1 part in the figure) into a large amount of associated fault test data (the P4 and P5 parts in the figure), thereby solving the prior-art problem of the low generation efficiency of fault test data. The P1 area corresponds to test data that satisfy the current normal functional requirements of the tested object; the P2 area corresponds to data that satisfy additional functional requirements of the tested object; the test data in the part of P2 that does not overlap P1 represent data regarded as fault input by the current tested object; the P4 area represents fault data generated by a pseudo-random processing method on the basis of the P2 area data; and the P5 area represents fault data generated on the basis of the P2 area data by using boundary values of the parameter types and drift amounts of fixed length.
In the embodiment, if a plurality of reference test cases corresponding to the test object are screened from the reference database, the technical solution of the present application can run a corresponding automated software test for each reference test case, so that sufficient test results are obtained and the test object can be comprehensively compared and analyzed.
Step S102, disassembling the reference test case to obtain a plurality of reference data, and calling a preset fault model to generate the fault data corresponding to each reference datum.
In an embodiment, the processing of this step efficiently utilizes the plurality of reference data corresponding to the test object. Establishing and using a fault model ensures the unified and standardized generation of the plurality of fault data, so that the offset and the offset direction of the fault data can be effectively constrained and a large amount of fault data conforming to expectations can be obtained.
In some embodiments, in order to perform the corresponding conversion processing on each reference datum according to its data type, and thus avoid producing fault data that the tested object cannot recognize as valid input, a preset fault model may be invoked to generate the fault data corresponding to each reference datum, specifically including: using the preset fault model, for each reference datum: judging the data type to which the reference datum belongs; in response to the reference datum belonging to the first type, performing conversion processing on it using a preset pseudo-random processing method; and in response to the reference datum belonging to the second type, performing conversion processing on it using a preset step-change processing method, so as to obtain the corresponding fault datum. By way of example, the data types described above may include integer, floating point, character, and so on; the first type may be a character-type field, and the second type a numeric field; the pseudo-random processing methods may include the direct method, the inverse transform method, the acceptance-rejection method, and the like. Through the processing of this step, each reference datum and its fault datum are guaranteed to share the same data type, so that when fault injection is performed with the fault data, the corresponding parameter interface can successfully accept it.
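As a minimal sketch of the two conversion branches just described (all names below are illustrative and not taken from the patent), a type-dependent fault model could look as follows, assuming character-type fields take the pseudo-random branch and numeric fields take the step-change branch:

```python
import random
import string

def generate_fault_datum(datum):
    """Convert one reference datum into a fault datum of the same data type.

    Character-type fields (the 'first type') are perturbed pseudo-randomly;
    numeric fields (the 'second type') are shifted by a fixed step, so the
    fault datum remains recognizable to the interface that receives it.
    """
    if isinstance(datum, str):                  # first type: character field
        if not datum:
            return random.choice(string.printable)
        # pseudo-random processing: overwrite one randomly chosen position
        pos = random.randrange(len(datum))
        return datum[:pos] + random.choice(string.printable) + datum[pos + 1:]
    if isinstance(datum, bool):                 # bool is a subtype of int: flip it
        return not datum
    if isinstance(datum, (int, float)):         # second type: numeric field
        # step-change processing: drift by a fixed-length offset
        step = 1 if isinstance(datum, int) else 0.1
        return datum + random.choice((-step, step))
    return datum                                # other types pass through unchanged
```

Keeping each fault datum in the same data type as its reference datum is what lets the corresponding parameter interface accept it as a syntactically valid but semantically faulty input.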
Step S103, for each fault datum: replacing the corresponding reference datum in the reference test case to generate a corresponding fault test case, inputting the fault test case to the test object, and monitoring the corresponding test result.
In the embodiment, the processing of this step completes multiple rounds of fault-injection-based software testing of the tested object while ensuring that each software test contains exactly one piece of fault injection data. This makes it convenient to compare against the other test results and to directionally analyze and locate the defects of the tested object (i.e., the tested software).
In some embodiments, in order to introduce the fault test case correctly and in order into the plurality of parameter input interfaces of the tested object during each fault injection, when inputting the fault test case to the tested object, the plurality of parameter input interfaces corresponding to the tested object may be determined according to the plurality of reference data and the fault data included in the fault test case, so as to inject the fault test case into the object accordingly. In a further embodiment, the reference database may be queried for the interface corresponding to each reference datum, and the interface corresponding to each fault datum may then be determined from the one-to-one correspondence between the reference data and the fault data, so that the reference data and the fault data included in each fault test case are injected correctly.
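Assuming, for illustration only, that a reference test case can be represented as a mapping from parameter interface names to reference data, the single-replacement strategy of step S103 might be sketched as follows (generate_fault_datum is the sketch from above):

```python
def build_fault_cases(reference_case, fault_model):
    """Yield one fault test case per reference datum.

    Each generated case differs from the reference case in exactly one
    field, so every test run carries a single piece of fault injection
    data and any failure can be attributed to that field directly.
    """
    for interface, reference_datum in reference_case.items():
        fault_case = dict(reference_case)       # all other fields stay valid
        fault_case[interface] = fault_model(reference_datum)
        yield interface, fault_case

# usage: each fault case is injected through the same parameter
# interfaces as the reference case it was derived from
reference = {"account_id": "A1024", "amount": 100, "rate": 0.05}
for interface, case in build_fault_cases(reference, generate_fault_datum):
    print(interface, case)
```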
In some embodiments, in order to monitor the whole course of a program crash event of the tested object directionally and completely, and thus comprehensively mine the program bugs of the tested object, when monitoring the corresponding test result, a preset monitoring tool is invoked to instrument the tested object so that corresponding monitoring logic is implanted into it; program running data of the test object are then acquired based on the monitoring logic and analyzed to obtain the corresponding program crash information as the test result. In a further embodiment, a crash analysis method may be used to taint-mark the running process of the test object and to inspect the collected program crash information after the run ends.
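As a stand-in for the instrumentation step (a real deployment would rely on a probe- or bytecode-level monitoring tool; the wrapper below merely assumes a callable test object), the implanted monitoring logic might capture the two signals the later classification consumes, the program return value and an approximation of the crash propagation path:

```python
import traceback

def instrument(test_object):
    """Wrap a callable test object with monitoring logic.

    On a normal run the wrapper records the program return value; on a
    crash it records the exception's error code (if any) and the
    traceback, used here as a rough proxy for the crash propagation path.
    """
    def monitored(fault_case):
        try:
            return {"return_value": test_object(**fault_case),
                    "crash_path": None}
        except Exception as exc:                # crash: capture the propagation path
            return {"return_value": getattr(exc, "code", None),
                    "crash_path": traceback.format_exc()}
    return monitored
```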
Step S104, calling a preset classification model to determine the failure type corresponding to each test result, and calculating the corresponding failure characteristic value.
In the embodiment, the processing of this step allows each test result to be analyzed in a targeted manner, so that the potential program defects of the test object are comprehensively exposed and the test object can then be optimized accordingly, improving its robustness.
In some embodiments, in order to determine the corresponding influence range from the overall feedback of the tested object to the fault injection, and thereby the failure type of the corresponding test result, invoking the preset classification model to determine the failure type corresponding to each test result may be performed as follows. For each test result: call the preset classification model to analyze and obtain the corresponding program crash propagation path and program return value, and match a target failure type among a plurality of preset failure types according to the program crash propagation path and the program return value. The plurality of failure types include: system crash failure, restart failure, failed failure, silent failure, interference failure, interference success failure, and no failure.
The system crash failure may be defined as an event in which the service processing system (i.e., the test object) crashes and must be restarted to resume operation. The restart failure may be defined as an event in which the tested object does not return during operation and the job hangs, cannot respond, and must be forcibly closed. The failed failure may be defined as an abnormal event during operation that causes the program to exit abnormally and return an error code, where the error code exactly corresponds to the error or abnormal event caused by the illegal test value. The silent failure may be defined as an event in which the operation of the test object is abnormal but no corresponding error code is returned by the system, so the test result shows no abnormal event. The interference failure may be defined as an event in which an abnormality occurs while the test object runs a certain module under test and an error code is returned, but the error code does not correspond to the actual abnormal condition. The interference success failure may be defined as an event in which the test object operates normally but its return value or output value is erroneous. No failure may be defined as an event in which the test object operates normally and the return value or output value is normal.
In some embodiments, in order to record in detail the local defects of the test object exposed by each test result, after the target failure type is matched among the preset plurality of failure types: in response to determining that the target failure type is the failed failure type or the no-failure type, the test result is marked as passed; in response to determining that the target failure type is neither the failed failure type nor the no-failure type, the test result is marked as failed; and the test unit corresponding to the test result is determined, where the test unit is a function or a module included in the test object, so that the test result mark and the corresponding test unit are recorded into the test record table.
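Combining the two preceding steps, a minimal sketch of the marking and recording logic follows; the type names match the definitions above, while the classification itself is reduced to a deliberately crude rule, since the patent does not specify the classification model in code:

```python
PASSING_TYPES = {"failed failure", "no failure"}

def classify(result):
    """Crude stand-in for the preset classification model: map the monitored
    (crash path, return value) pair to one of the preset failure types."""
    if result["crash_path"] is None:
        return "no failure" if result["return_value"] is not None else "silent failure"
    return "failed failure" if result["return_value"] is not None else "system crash failure"

def mark_and_record(record_table, unit, failure_type):
    """Mark the result as passed only for the failed-failure and no-failure
    types, then append the mark to the unit's row in the test record table."""
    mark = "pass" if failure_type in PASSING_TYPES else "fail"
    record_table.setdefault(unit, []).append(mark)
    return mark
```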
In some embodiments, in order to avoid reusing fault test cases that have already passed, which would cause the subsequent calculation to produce an erroneous failure characteristic value, each fault test case that passes the test may be recorded into a test optimization database and compared with the test case to be used before each software test; in response to determining that the test case to be used is not a repeat, the corresponding software test processing is performed.
In some embodiments, in order to perform corresponding quantitative statistics on the robustness of the test object according to the test results, calculating the corresponding failure characteristic value may include, for each test unit: querying the corresponding total number of tests and number of passed tests according to the test record table, and calculating the corresponding test pass ratio; querying the importance weight corresponding to the test unit, and multiplying the importance weight by the test pass ratio to obtain the failure characteristic value of the test unit; and summing the failure characteristic values corresponding to the plurality of test units to obtain the failure characteristic value corresponding to the test object. Through this processing, whether the test object reaches the delivery standard can be judged rapidly and accurately according to its final failure characteristic value. In a further embodiment, as shown in fig. 3, a test report corresponding to the test object may be generated from the failure types of the plurality of test units (MuT, module under test, in the figure; i.e., each test unit), the test result marks, and the failure characteristic value of the test object, so as to support code improvement of the test object in downstream service processing.
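The quantification just described reduces to a weighted sum of per-unit pass ratios. A direct transcription (the table layout and the weights are assumptions made for illustration):

```python
def failure_characteristic_value(record_table, importance_weights):
    """Compute the failure characteristic value of the test object.

    Per test unit: pass ratio = passed tests / total tests, scaled by the
    unit's importance weight; the object-level value is the sum over all
    units, yielding one robustness score to compare against the delivery bar.
    """
    total = 0.0
    for unit, marks in record_table.items():
        pass_ratio = marks.count("pass") / len(marks)
        total += importance_weights[unit] * pass_ratio
    return total

# usage: two test units with different importance weights
table = {"parse_order": ["pass", "pass", "fail"], "settle": ["pass", "fail"]}
weights = {"parse_order": 0.6, "settle": 0.4}
print(failure_characteristic_value(table, weights))  # roughly 0.6*(2/3) + 0.4*(1/2) = 0.6
```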
Fig. 4 is a schematic diagram of the main flow of a test method according to a second embodiment of the present invention, the test method including:
step S401, obtaining a test object to select and obtain a corresponding reference test case from the reference test library.
Step S402, disassembling the reference test case to obtain a plurality of reference data.
Step S403, determining the data type to which each piece of reference data belongs.
Step S404, calling the fault model corresponding to each data type to perform conversion processing on each reference datum.
Step S405, obtaining the fault data corresponding to each reference datum.
Step S406, for each fault datum: replacing the corresponding reference datum in the reference test case to generate a corresponding fault test case.
Step S407, calling a preset monitoring tool to instrument the test object, so as to implant corresponding monitoring logic into the test object.
Step S408, a plurality of fault test cases are input to the test object one by one.
Step S409, acquiring program running data of the test object based on the monitoring logic, and analyzing to obtain corresponding program crash information, so as to serve as a test result corresponding to each fault test case.
Step S410, a preset classification model is called to analyze and obtain a program crash propagation path and a program return value corresponding to each test result.
Step S411, according to the program crash propagation path and the program return value, matching to obtain a target failure type corresponding to each test result.
In step S412, in response to determining that the target failure type is the failed failure type or the no-failure type, the test result is marked as passed.
In step S413, in response to determining that the target failure type is neither the failed failure type nor the no-failure type, the test result is marked as failed.
Step S414, determining the test unit corresponding to the test result, so as to record the test result mark and the corresponding test unit into the test record table.
Step S415, calculating the corresponding failure characteristic value of the test object according to the test record table.
Preferably, for each test unit: the corresponding total number of tests and number of passed tests are queried according to the test record table and the corresponding test pass ratio is calculated; the importance weight corresponding to the test unit is queried and multiplied by the test pass ratio to obtain the failure characteristic value of the test unit; and the failure characteristic values corresponding to the plurality of test units are summed to obtain the failure characteristic value corresponding to the test object.
Fig. 5 is a schematic diagram of the main flow of a test method according to a third embodiment of the present invention, the test method including:
step S501, a test object is obtained so as to select and obtain a corresponding reference test case from a reference test library.
Step S502, the reference test case is disassembled to obtain a plurality of reference data, and a preset fault model is called to generate fault data corresponding to each reference data.
Step S503, for each fault datum: replacing the corresponding reference datum in the reference test case to generate a corresponding fault test case.
Step S504, calling a preset monitoring tool to instrument the test object, so as to implant corresponding monitoring logic into the test object.
Step S505, a plurality of fault test cases are input to the test object one by one.
Step S506, program operation data of the test object are obtained based on the monitoring logic, and corresponding program crash information is obtained through analysis to serve as a test result corresponding to each fault test case.
Step S507, a preset classification model is called to determine failure types corresponding to each test result.
Step S508, calculating corresponding failure characteristic values.
Fig. 6 is a schematic diagram of the main modules of a test apparatus according to an embodiment of the present invention. As shown in fig. 6, the test apparatus 600 includes an acquisition module 601, a processing module 602, an injection module 603, and an analysis module 604. The acquisition module 601 is configured to acquire a test object so as to select a corresponding reference test case from the reference test library; the processing module 602 is configured to disassemble the reference test case to obtain a plurality of reference data, and to call a preset fault model to generate the fault data corresponding to each reference datum; the injection module 603 is configured to, for each fault datum: replace the corresponding reference datum in the reference test case to generate a corresponding fault test case, input the fault test case to the test object, and monitor the corresponding test result; and the analysis module 604 is configured to call a preset classification model to determine the failure type corresponding to each test result, and to calculate the corresponding failure characteristic value.
In some embodiments, the processing module 602 is further configured to: when calling the preset fault model to generate the fault data corresponding to each reference datum, use the preset fault model, for each reference datum, to: judge the data type to which the reference datum belongs; in response to the reference datum belonging to the first type, perform conversion processing on it using a preset pseudo-random processing method; and in response to the reference datum belonging to the second type, perform conversion processing on it using a preset step-change processing method; so as to obtain the corresponding fault datum.
In some embodiments, the injection module 603 is further configured to: when inputting the fault test case to the test object, determine the plurality of parameter input interfaces of the test object according to the plurality of reference data and the fault data included in the fault test case, so as to inject the fault test case into the test object accordingly.
In some embodiments, the injection module 603 is further configured to: when monitoring the corresponding test result, invoke a preset monitoring tool to instrument the test object so as to implant corresponding monitoring logic into it; and acquire program running data of the test object based on the monitoring logic, analyzing them to obtain the corresponding program crash information as the test result.
In some embodiments, the analysis module 604 is further configured to: when calling the preset classification model to determine the failure type corresponding to each test result, for each test result: call the preset classification model to analyze and obtain the corresponding program crash propagation path and program return value, and match a target failure type among a plurality of preset failure types according to them; the plurality of failure types include: system crash failure, restart failure, failed failure, silent failure, interference failure, interference success failure, and no failure.
In some embodiments, the analysis module 604 is further configured to: after the target failure type is matched among the preset plurality of failure types, in response to determining that the target failure type is the failed failure type or the no-failure type, mark the test result as passed; in response to determining that the target failure type is neither the failed failure type nor the no-failure type, mark the test result as failed; determine the test unit corresponding to the test result, wherein the test unit is a function or a module included in the test object; and record the test result mark and the corresponding test unit into the test record table.
In some embodiments, the analysis module 604 is further configured to: when calculating the corresponding failure characteristic value, for each test unit: query the corresponding total number of tests and number of passed tests according to the test record table and calculate the corresponding test pass ratio; query the importance weight corresponding to the test unit and multiply it by the test pass ratio to obtain the failure characteristic value of the test unit; and sum the failure characteristic values corresponding to the plurality of test units to obtain the failure characteristic value corresponding to the test object.
It should be noted that the implementation contents of the test method and the test apparatus of the present invention correspond to each other, so the repeated contents are not described again.
Fig. 7 illustrates an exemplary system architecture 700 to which the test methods or test apparatus of embodiments of the present invention may be applied.
As shown in fig. 7, a system architecture 700 may include terminal devices 701, 702, 703, a network 704, and a server 705. The network 704 is the medium used to provide communication links between the terminal devices 701, 702, 703 and the server 705. The network 704 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
A user may interact with the server 705 via the network 704 using the terminal devices 701, 702, 703 to receive or send messages or the like. Various communication client applications can be installed on the terminal devices 701, 702, 703.
The terminal devices 701, 702, 703 may be various electronic devices having a page display screen and supporting web browsing, including but not limited to smartphones, tablets, laptop and desktop computers, and the like.
The server 705 may be a server providing various services, for example a background management server (by way of example only) that supports users of the terminal devices 701, 702, 703. The background management server may analyze and otherwise process received data such as a product information query request, and feed the processing result (e.g., target push information or product information, by way of example only) back to the terminal device.
It should be noted that the testing method provided by the embodiments of the present invention is generally executed by the server 705; accordingly, the testing apparatus is generally disposed in the server 705.
It should be understood that the number of terminal devices, networks and servers in fig. 7 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation.
Referring now to FIG. 8, there is illustrated a schematic diagram of a computer system 800 suitable for use in implementing an embodiment of the present invention. The terminal device shown in fig. 8 is only an example, and should not impose any limitation on the functions and the scope of use of the embodiment of the present invention.
As shown in fig. 8, the computer system 800 includes a Central Processing Unit (CPU) 801 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 802 or a program loaded from a storage section 808 into a Random Access Memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the computer system 800. The CPU 801, ROM 802, and RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a Cathode Ray Tube (CRT) or a Liquid Crystal Display (LCD), and a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as needed. A removable medium 811 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory is mounted on the drive 810 as needed, so that a computer program read out therefrom is installed into the storage section 808 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication section 809, and/or installed from the removable medium 811. When the computer program is executed by the Central Processing Unit (CPU) 801, the above-described functions defined in the system of the present invention are performed.
The computer readable medium shown in the present invention may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present invention, however, the computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with the computer-readable program code embodied therein. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules involved in the embodiments of the present invention may be implemented in software or in hardware. The described modules may also be provided in a processor, for example, as: a processor includes an acquisition module, a processing module, an injection module, and an analysis module. The names of these modules do not constitute a limitation on the module itself in some cases.
As another aspect, the present invention also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments, or may exist alone without being assembled into the apparatus. The computer-readable medium carries one or more programs which, when executed by the apparatus, cause the apparatus to: acquire a test object so as to select a corresponding reference test case from a reference test library; disassemble the reference test case to obtain a plurality of reference data, and call a preset fault model to generate the fault data corresponding to each reference datum; for each fault datum: replace the corresponding reference datum in the reference test case to generate a corresponding fault test case, input the fault test case to the test object, and monitor the corresponding test result; and call a preset classification model to determine the failure type corresponding to each test result, and calculate the corresponding failure characteristic value.
According to the technical scheme provided by the embodiment of the invention, the technical problem of low processing efficiency of the existing software robustness testing method can be solved.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives can occur depending upon design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.

Claims (9)

1. A method of testing, comprising:
obtaining a test object so as to select a corresponding reference test case from a reference test library;
disassembling the reference test case to obtain a plurality of reference data, and calling a preset fault model to generate the fault data corresponding to each reference datum, comprising: using the preset fault model, for each reference datum: judging the data type to which the reference datum belongs; in response to the reference datum belonging to a first type, determining to perform conversion processing on the reference datum using a preset pseudo-random processing method; and in response to the reference datum belonging to a second type, determining to perform conversion processing on the reference datum using a preset step-change processing method; so as to obtain the corresponding fault datum;
for each fault datum: replacing the corresponding reference datum in the reference test case to generate a corresponding fault test case, inputting the fault test case to the test object, and monitoring the corresponding test result;
and calling a preset classification model to determine the failure type corresponding to each test result, and calculating the corresponding failure characteristic value.
2. The method of claim 1, wherein inputting to the test object comprises:
determining the plurality of parameter input interfaces of the test object according to the plurality of reference data and the fault data included in the fault test case, so as to inject the fault test case into the test object accordingly.
3. The method of claim 1, wherein monitoring the corresponding test result comprises:
invoking a preset monitoring tool to instrument the test object, so as to implant corresponding monitoring logic into the test object;
and acquiring program running data of the test object based on the monitoring logic, and analyzing them to obtain the corresponding program crash information as the test result.
4. The method according to claim 3, wherein invoking a preset classification model to determine the failure type corresponding to each test result comprises:
for each test result: calling the preset classification model to analyze and obtain the corresponding program crash propagation path and program return value, and matching a target failure type among a plurality of preset failure types according to the program crash propagation path and the program return value;
the plurality of failure types including: system crash failure, restart failure, failed failure, silent failure, interference failure, interference success failure, and no failure.
5. The method of claim 4, wherein after the target failure type is matched among the preset plurality of failure types, the method comprises:
in response to determining that the target failure type is the failed failure type or the no-failure type, marking the test result as passed;
in response to determining that the target failure type is neither the failed failure type nor the no-failure type, marking the test result as failed;
determining the test unit corresponding to the test result, wherein the test unit is a function or a module included in the test object;
and recording the test result mark and the corresponding test unit into a test record table.
6. The method of claim 5, wherein calculating the corresponding failure characteristic value comprises:
for each test unit:
querying the corresponding total number of tests and number of passed tests according to the test record table, and calculating the corresponding test pass ratio;
querying the importance weight corresponding to the test unit, and multiplying the importance weight by the test pass ratio to obtain the failure characteristic value of the test unit;
and summing the failure characteristic values corresponding to the plurality of test units to obtain the failure characteristic value corresponding to the test object.
7. A test device, comprising:
an acquisition module, configured to acquire a test object so as to select a corresponding reference test case from a reference test library;
a processing module, configured to disassemble the reference test case to obtain a plurality of reference data and to call a preset fault model to generate the fault data corresponding to each reference datum, including: using the preset fault model, for each reference datum: judging the data type to which the reference datum belongs; in response to the reference datum belonging to the first type, determining to perform conversion processing on the reference datum using a preset pseudo-random processing method; and in response to the reference datum belonging to the second type, determining to perform conversion processing on the reference datum using a preset step-change processing method; so as to obtain the corresponding fault datum;
an injection module, configured to, for each fault datum: replace the corresponding reference datum in the reference test case to generate a corresponding fault test case, input the fault test case to the test object, and monitor the corresponding test result;
and an analysis module, configured to call a preset classification model to determine the failure type corresponding to each test result, and to calculate the corresponding failure characteristic value.
8. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-6.
9. A computer-readable medium, on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-6.
CN202311403362.0A 2023-10-26 2023-10-26 Test method and device Active CN117130945B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311403362.0A CN117130945B (en) 2023-10-26 2023-10-26 Test method and device

Publications (2)

Publication Number Publication Date
CN117130945A (en) 2023-11-28
CN117130945B (en) 2024-02-09

Family

ID=88860381

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311403362.0A Active CN117130945B (en) 2023-10-26 2023-10-26 Test method and device

Country Status (1)

Country Link
CN (1) CN117130945B (en)

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1420344A2 (en) * 2002-11-13 2004-05-19 Imbus Ag Method and device for prediction of the reliability of software programs
CN108694104A (en) * 2017-04-12 2018-10-23 北京京东尚科信息技术有限公司 A kind of interface function contrast test method, apparatus, electronic equipment and storage medium
CN109062782A (en) * 2018-06-27 2018-12-21 阿里巴巴集团控股有限公司 A kind of selection method of regression test case, device and equipment
CN111414310A (en) * 2020-04-01 2020-07-14 国网新疆电力有限公司电力科学研究院 Method and system for testing safety and stability control device of power grid capable of automatically generating test cases
CN111597122A (en) * 2020-07-24 2020-08-28 四川新网银行股份有限公司 Software fault injection method based on historical defect data mining
CN111831556A (en) * 2020-06-18 2020-10-27 中国科学院空间应用工程与技术中心 Software multi-fault decoupling and parallel positioning method and device
CN112306877A (en) * 2020-10-30 2021-02-02 山东山大电力技术股份有限公司 Power system fault operation and maintenance method and system
CN112506757A (en) * 2020-11-17 2021-03-16 中广核工程有限公司 Automatic test method, system, computer device and medium thereof
CN112527649A (en) * 2020-12-15 2021-03-19 建信金融科技有限责任公司 Test case generation method and device
CN113127331A (en) * 2019-12-31 2021-07-16 航天信息股份有限公司 Fault injection-based test method and device and computer equipment
CN113704079A (en) * 2020-05-22 2021-11-26 北京沃东天骏信息技术有限公司 Interface testing method and device based on Protobuf
CN114115168A (en) * 2020-09-01 2022-03-01 上汽通用汽车有限公司 Fault injection test system
CN114500345A (en) * 2022-01-25 2022-05-13 上海安般信息科技有限公司 Fuzzy test and diagnosis system based on custom protocol configuration
CN114510381A (en) * 2021-12-30 2022-05-17 锐捷网络股份有限公司 Fault injection method, device, equipment and storage medium
CN114741284A (en) * 2022-03-30 2022-07-12 中国电子产品可靠性与环境试验研究所((工业和信息化部电子第五研究所)(中国赛宝实验室)) Task reliability evaluation method and device, computer equipment and storage medium
CN115185832A (en) * 2022-06-25 2022-10-14 平安银行股份有限公司 Test case generation method and device, computer equipment and readable storage medium
CN115291585A (en) * 2022-07-13 2022-11-04 合众新能源汽车有限公司 Method for acquiring fault data of VCU and related device
CN115328771A (en) * 2022-08-02 2022-11-11 交控科技股份有限公司 Fault testing method, device, equipment and medium of testing tool
CN115470064A (en) * 2022-07-29 2022-12-13 重庆长安汽车股份有限公司 Security test method and device for device to be tested, electronic device and storage medium
CN115616372A (en) * 2022-08-30 2023-01-17 超聚变数字技术有限公司 Fault injection test method and system
CN115993812A (en) * 2023-01-19 2023-04-21 重庆长安新能源汽车科技有限公司 Whole vehicle fault diagnosis test method, device, system, equipment and medium
CN116204428A (en) * 2023-02-27 2023-06-02 中国建设银行股份有限公司 Test case generation method and device
CN116302766A (en) * 2022-09-09 2023-06-23 山东有人物联网股份有限公司 Fault injection testing method and device, electronic equipment and readable storage medium
CN116737538A (en) * 2023-04-13 2023-09-12 武汉铁路职业技术学院 Automatic software testing system and method for rail transit traction transmission control unit
CN116915442A (en) * 2023-06-12 2023-10-20 中国工商银行股份有限公司 Vulnerability testing method, device, equipment and medium
CN116932265A (en) * 2023-07-24 2023-10-24 中国建设银行股份有限公司 Fault simulation processing method, device, equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10942841B2 (en) * 2017-12-07 2021-03-09 Conformiq Holding LLC User assisted automated test case generation
KR20210004656A (en) * 2019-07-05 2021-01-13 현대자동차주식회사 Apparatus and control method for vehicle function test

Also Published As

Publication number Publication date
CN117130945A (en) 2023-11-28

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant