CN111782546B - Automatic interface testing method and device based on machine learning - Google Patents

Automatic interface testing method and device based on machine learning

Info

Publication number
CN111782546B
CN111782546B CN202010717743.6A
Authority
CN
China
Prior art keywords
test
api
parameters
code
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010717743.6A
Other languages
Chinese (zh)
Other versions
CN111782546A (en)
Inventor
杨羽
姚登科
王君
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Doumi Youpin Technology Development Co ltd
Original Assignee
Beijing Doumi Youpin Technology Development Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Doumi Youpin Technology Development Co ltd filed Critical Beijing Doumi Youpin Technology Development Co ltd
Priority to CN202010717743.6A priority Critical patent/CN111782546B/en
Publication of CN111782546A publication Critical patent/CN111782546A/en
Application granted granted Critical
Publication of CN111782546B publication Critical patent/CN111782546B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3684Test management for test design, e.g. generating new test cases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3688Test management for test execution, e.g. scheduling of test suites
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3692Test management for test results analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Medical Informatics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application provides a machine-learning-based automatic interface testing method and device, belonging to the technical field of software testing. The method comprises: obtaining, from a log file, operation records generated by the system under test at least in the production stage; obtaining the input parameters and output parameters of an API interface in the operation record, together with the API request protocol used between receiving the input parameters and returning the output parameters; generating API interface test code according to the input parameters, the output parameters and the API request protocol; and executing the API interface test code to determine whether the system under test operates normally. The device comprises an operation record acquisition module, a parameter acquisition module, a test code generation module and a test code execution module, corresponding to the steps of the method. The method and device automatically generate test code from the interface parameters and adjust the test process accordingly, greatly improving the stability of system testing.

Description

Automatic interface testing method and device based on machine learning
Technical Field
The application belongs to the technical field of software testing, and particularly relates to an automatic interface testing method and device based on machine learning.
Background
Software testing is the process of running or examining a software system by manual or automated means, with the purpose of checking whether it meets specified requirements or of clarifying the differences between expected and actual results.
Application Programming Interfaces (APIs) are becoming the center of much software development, and different systems and applications are connected through APIs to exchange data. As more and more systems move from the MVC architecture to a microservice architecture, interaction between microservices is carried out through REST APIs. API testing generally means calling a specific API through a tool or code, obtaining the actual result and verifying whether it matches the expected result. For enterprises, finding problems through API testing early in software product development is easier than performing UI testing after product development is completed.
API testing also has the advantages of lower cost and ease of automation. In the prior art there are relatively mature automated test tools for API testing, such as RestBerd, Postman and SmartBear, and using such tools can greatly improve testing efficiency.
Present large-scale network applications are divided into a front end and a back end, and the current trend is for front-end devices to be diversified (mobile phones, tablets, desktop computers and other special-purpose devices), so a unified mechanism is needed for communication between the different front-end devices and the back end. The RESTful API is currently a relatively mature design theory for Internet application APIs. An API interface usually specifies its input and output at the design stage; test developers write test code according to the interface's input and output, compare the actual output with the expected output, and judge whether the interface is correct and the system is operating normally. However, Internet enterprises currently change rapidly, which means that interfaces also change rapidly, and the test functions used to verify the correctness of the interfaces must change just as quickly. Supporting this requires a large amount of human resources and increases the difficulty of keeping the system stable, and quality is difficult to control when the project schedule is tight. In particular, during rapid iteration of an interface, the developers who change the interface often forget to notify the testers; when a tester continues to run system tests with a test program or test cases written for the previous version of the interface, significant testing oversights often occur.
Disclosure of Invention
In order to solve at least one of the above technical problems, the present application provides an automatic interface testing method and apparatus based on machine learning, in particular for API interfaces used for communication in a RESTful architecture, which can automatically generate test code and improve testing efficiency.
A first aspect of the application provides an automatic interface testing method based on machine learning, comprising the following steps: obtaining, from a log file, an operation record generated by the system under test at least in the production stage; obtaining the input parameters and output parameters of an API interface in the operation record, together with the API request protocol used between receiving the input parameters and returning the output parameters; generating API interface test code according to the input parameters, the output parameters and the API request protocol; and executing the API interface test code to determine whether the system under test operates normally.
Preferably, obtaining the operation record of the system under test includes: obtaining the operation record of the system under test through a message queue of the log system.
Preferably, obtaining the input parameters, the output parameters and the API request protocol includes: intercepting the input parameters, the output parameters and the API request protocol from the operation record through key character matching.
Preferably, generating the API interface test code according to the input parameters, the output parameters and the API request protocol includes: generating execution code for calling the system under test according to the input parameters and the API request protocol; setting actual output parameters for receiving the result of the executed code; and taking the output parameters obtained from the operation record as expected output parameters, and generating analysis code for comparing the actual output parameters with the expected output parameters.
Preferably, after executing the API interface test code, the method further includes: generating a test report reflecting the accuracy of the API interface of the system under test.
A second aspect of the present application provides an automatic interface testing apparatus based on machine learning, including: an operation record acquisition module, configured to obtain, from the log file, the operation record generated by the system under test at least in the production stage; a parameter acquisition module, configured to obtain the input parameters and output parameters of the API interface in the operation record, and the API request protocol used between receiving the input parameters and returning the output parameters; a test code generation module, configured to generate API interface test code according to the input parameters, the output parameters and the API request protocol; and a test code execution module, configured to execute the API interface test code and determine whether the system under test operates normally.
Preferably, the operation record acquisition module includes: a message queue reading unit, configured to obtain the operation record of the system under test through the message queue of the log system.
Preferably, the parameter acquisition module includes: a character matching unit, configured to intercept the input parameters, the output parameters and the API request protocol from the operation record through key character matching.
Preferably, the test code generation module includes: an execution code generating unit, configured to generate execution code for calling the system under test according to the input parameters and the API request protocol; an execution code result receiving setting unit, configured to set actual output parameters for receiving the result of the executed code; and a result comparison unit, configured to take the output parameters obtained from the operation record as expected output parameters and generate analysis code for comparing the actual output parameters with the expected output parameters.
Preferably, the automatic interface testing apparatus further includes: a test report generating module, configured to generate, after the API interface test code is executed, a test report reflecting the accuracy of the API interface of the system under test.
In a third aspect of the present application, a computer device comprises a processor, a memory, and a computer program stored on the memory and executable on the processor, the processor executing the computer program for implementing the machine learning based automatic interface testing method as described above.
In a fourth aspect of the present application, a readable storage medium stores a computer program for implementing the automatic interface testing method based on machine learning as described above when the computer program is executed by a processor.
In the entire operating process of the method and device there is no step of manually writing test code; the test process is adjusted automatically according to the interface parameters, which greatly improves the stability of system testing.
Drawings
FIG. 1 is a flow chart of a preferred embodiment of the present application for a machine learning based automatic interface testing method.
FIG. 2 is a timing diagram of the embodiment of FIG. 1 of the present application.
Fig. 3 is an architecture diagram of a preferred embodiment of the present application for a machine learning based automatic interface testing apparatus.
Detailed Description
In order to make the implementation objects, technical solutions and advantages of the present application clearer, the technical solutions in the embodiments of the present application will be described in more detail below with reference to the accompanying drawings in the embodiments of the present application. In the drawings, the same or similar reference numerals denote the same or similar elements or elements having the same or similar functions throughout. The described embodiments are some, but not all embodiments of the present application. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application, and should not be construed as limiting the present application. All other embodiments obtained by a person of ordinary skill in the art without any inventive work based on the embodiments in the present application are within the scope of protection of the present application. Embodiments of the present application will be described in detail below with reference to the drawings.
The application provides an automatic interface testing method and device based on machine learning, in particular for API interfaces used for communication in a RESTful architecture, which can automatically generate test code and improve testing efficiency. Here, REST (Representational State Transfer) refers to a set of architectural constraints and principles; if an architecture satisfies the constraints and principles of REST, it is called a RESTful architecture.
The format of the RESTful architecture generally includes the url format, the parameter format and the return body format. The url format is written according to the following standard, for example:
http(s)://server.com/api-name/{version}/{domain}/{rest-convention}
In the above standard, {version} represents the version information of the api; {domain} is an area that can be used to delimit any technical concern (e.g. security, allowing only specified users to access the area) or business concern (e.g. placing the same functionality under the same prefix); {rest-convention} represents the set of agreed REST interfaces under this domain.
The parameter formats are mainly divided into a GET mode and a POST mode. When a request and response are exchanged between a client and a server, GET and POST are the two most commonly used methods: GET requests data from a specified resource, while POST submits data to be processed to the specified resource.
Under the RESTful architecture, GET takes two common formats:
URL parameters (the preferred recommendation), such as:
https://api.doumi.com/v1.1?name=test&age=20;
path parameters, such as: https://api.doumi.com/v1.1/userinfo/{id}.
POST likewise takes two common formats: submitting parameters packaged in Json format and submitting form parameters. An example of submitting Json-format parameters is as follows:
POST https://api.doumi.com/v1.1
Content-Type:application/json;charset=utf-8
{"title":"test","sub":[1,2,3]};
the form parameter submission example is as follows:
POST https://www.example.com/v1.1
Content-Type:application/x-www-form-urlencoded;charset=utf-8
title=test&des=123.
an example of a return body format for the RESTful architecture is as follows:
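The original publication shows this return body only as an embedded image. As a stand-in, a hypothetical Json return body consistent with the fields mentioned below (id, name, age, work) might look like the following; the envelope fields and the values are assumptions, not the literal contents of the figure:
{
    "code": 0,
    "message": "success",
    "data": {
        "id": 1001,
        "name": "test",
        "age": 20,
        "work": "tester"
    }
}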
the test process of the API interface generally refers to that the entry and exit of the interface are specified between systems in the project design stage; testing, researching and writing a test code according to an interface and an input parameter and an output parameter of the code, comparing the actual output parameter with an expected output parameter, and judging whether the comparison is correct, wherein for example, the input parameter adopts a name (test & age) 20 in a GET mode, and the output parameter adopts a return body format (including id, name, age, work and the like); and then deploying the test codes for automatic detection, and selectively generating a test report and an early warning report.
As described in the background art, during the above testing process the interface parameters may change, and the request method may also switch between GET and POST. It is therefore desirable to have a method that can automatically recognize interface changes and automatically generate the corresponding test code.
According to a first aspect of the present application, there is provided an automatic interface testing method based on machine learning, as shown in fig. 1, mainly including:
and step S1, acquiring the operation record generated by the system under test in at least the production link from the log file.
And step S2, acquiring the input parameters and the output parameters of the API interface in the operation record and the API request protocol in the process of returning the input parameters to the output parameters.
And step S3, generating an API interface test code according to the input parameters, the output parameters and the API request protocol.
And step S4, executing the API interface test code, and determining whether the system to be tested normally operates.
In step S1, test data including test cases are obtained from log files. This not only ensures that the current form of the interface and the corresponding test cases are obtained even when the interface changes, but also ensures the accuracy of the interface and the test cases. As shown in fig. 2, the operation record of step S1 is mainly an operation record from the research and development process, but it may also be an operation record from other testing processes. This guarantees the reliability of the source, avoids the situation in which the interface changes without the tester being aware of it, and solves the problem in the prior art that test efficiency is low because code must be written manually to identify whether the interface is correct.
It can be understood that in the daily development of software, logs are a necessary component of the code, and good code necessarily has good log output. The present application uses log files, through data mining of the logs, to assist software testing.
Logs are crucial for monitoring the system and locating problems in the operating environment, and attention must be paid to log output throughout system design, development and implementation. For the RESTful architecture, the operation record of step S1 is mainly stored in the message queue of the log system, and in some optional embodiments of the present application the operation record of the system under test is obtained through the message queue of the log system. For example, the data format of an operation record obtained from a RabbitMQ queue is:
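The publication shows this data format only as embedded images. Consistent with the explanation that follows (a GET request whose "query" structure carries id and name, and whose "data" structure returns the user id, telephone and group id), a hypothetical sketch of such a log record is given below; the concrete field names and values are assumptions:
{
    "method": "GET",
    "url": "https://api.doumi.com/v1.1/userinfo",
    "query": {
        "id": 1001,
        "name": "test"
    },
    "data": {
        "user_id": 1001,
        "telephone": "13800000000",
        "group_id": 7
    }
}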
In the data format representing the operation record, the code/data fragment above communicates in GET mode; it specifies the url of the server, carries input parameters including id and name, and requests the server to return a group of data, namely the user id, telephone and group id contained in the "data" structure.
Step S2 of the present application is used to parse the above data format and obtain the input parameters and output parameters of the API interface in the operation record, together with the API request protocol used between receiving the input parameters and returning the output parameters. It can be seen that the input parameters obtained in step S2 form the structure "query", the output parameters form the structure "data", and the API request protocol is the url together with the GET communication mode indicated by the "method" field.
In some optional embodiments, obtaining the input parameters, the output parameters, and the API request protocol includes: and intercepting input parameters, output parameters and an API request protocol in the running record through key character matching.
It is to be understood that the corresponding data described above may be obtained, for example, by regular expressions, character matching or the like.
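As one illustration only, a Python sketch of such key-character extraction is given below. The field names ("method", "url", "query", "data") and the log layout follow the hypothetical record sketched above, not a format prescribed by the publication:
import json
import re

def extract_api_record(log_line):
    # Locate the JSON payload embedded in a raw log line by its key characters.
    match = re.search(r'\{.*\}', log_line, re.S)
    if not match:
        return None
    record = json.loads(match.group(0))
    input_params = record.get("query", {})   # input parameters of the API interface
    output_params = record.get("data", {})   # output parameters returned by the API
    request_protocol = {                     # API request protocol: url plus GET/POST mode
        "url": record.get("url"),
        "method": record.get("method"),
    }
    return input_params, output_params, request_protocol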
The method and device obtain the log files generated during research and development in listener mode, and can obtain multiple groups of data, including test cases, in real time and in batches.
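For example, if the log system publishes operation records to a RabbitMQ queue as described above, the listener could be sketched with the pika client as follows; the queue name "test_log_queue" and the connection parameters are placeholders rather than values given in the publication:
import pika

def on_record(channel, method, properties, body):
    # Each message body is one operation record produced by the system under test.
    print("received operation record:", body.decode("utf-8"))
    channel.basic_ack(delivery_tag=method.delivery_tag)

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="test_log_queue", durable=True)
channel.basic_consume(queue="test_log_queue", on_message_callback=on_record)
channel.start_consuming()  # listen continuously so records arrive in real time and in batches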
In some optional embodiments, the generating an API interface test code according to the input parameter, the output parameter and the API request protocol in step S3 includes:
and step S31, generating an execution code for calling the tested system according to the input parameters and the API request protocol.
And step S32, setting actual output parameters for receiving the executed code.
And step S33, taking the output parameters obtained from the running records as expected output parameters, and generating analysis codes for comparing the actual output parameters with the expected output parameters.
It should be understood that, as defined for software and API interface testing, the testing process consists in verifying that the system under test meets specified requirements or in clarifying the difference between expected and actual results. Step S31 generates the test call according to the input parameters and protocol recorded in the log file, so that automatic software testing can be realized; step S32 receives the actual output produced during automatic testing; and step S33 compares the actual output with the expected output.
The above process generates execution code similar to the following (for ease of description, code comments are placed in front of the code):
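The publication shows the generated execution code only as an embedded image. A hypothetical Python sketch of what such generated code could look like, using the url and parameters that appear in the description of steps S31-S32 below (the concrete url and values are only those of that example), is:
import requests

# API request protocol and input parameters recovered from the operation record.
url = "https://vip.window.com/userinfo/v1"
params = {"team_id": "abc", "project_id": "3"}

# Call the system under test with the recorded input parameters (GET mode).
response = requests.get(url, params=params)

# Actual output parameters returned by the system under test.
actual_output = response.json()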
In steps S31-S32, response is used to receive the actual output, i.e. response = http.get("https://vip.window.com/userinfo/v1?team_id=abc&project_id=3");
in step S33, response and expect may be compared:
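This comparison code is likewise shown only as an image. A minimal sketch of the analysis code, assuming the expected output parameters taken from the log record are held in expect and the actual output in actual_output from the sketch above, might be:
# Expected output parameters, taken from the "data" structure of the operation record.
expect = {"user_id": 1001, "telephone": "13800000000", "group_id": 7}

def compare(actual_output, expect):
    # Field-by-field comparison of actual output against expected output.
    mismatches = {key: (actual_output.get(key), value)
                  for key, value in expect.items()
                  if actual_output.get(key) != value}
    return len(mismatches) == 0, mismatches

passed, mismatches = compare(actual_output, expect)
print("test passed" if passed else "test failed, mismatched fields: %s" % mismatches)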
as shown in fig. 2, after the code is generated, the code is automatically executed, actual output is continuously obtained through response, in the process, the software to be tested generates a new log system again, new input and output are generated to be used by the testing device of the application, and the whole process is continuously circulated and the machine learns and checks continuously. The design has no process of manually compiling test codes in the whole operation process, and the stability of the system is greatly improved.
It is understood that during testing, the inputs and outputs of the automatically generated code need not be taken strictly as the inputs and outputs of the log file. Because the interfaces are essentially retrieval operations, known items can be used as inputs and the unknown items fetched from the database, so the test cases composed of the input and output parameters obtained from the log can be transformed. For example, if the log file contains input AB and output CDF, the test can be transformed to input AC and check whether BDF is output, or to input DF and check whether ABC is output.
Because the amount of test case data is huge, the test case combination algorithm preferably first selects, as the input parameters of a transformed test case, the same number of parameters as the inputs in the original log file, and then increases the number of input parameters until the maximum is reached. For example, if the original log file has input AB and output CDF, the first priority is to transform the test case into input AC and check whether BDF is output; the second priority is to transform it into input ADF and check whether BC is output; and the third is to use input BCDF and check whether A is output.
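A rough Python sketch of this priority-ordered combination, representing the parameters simply as single-letter names as in the example above (an illustration of the enumeration order only, not algorithm code from the publication):
from itertools import combinations

def transformed_cases(inputs, outputs):
    # All parameters of the original log entry, e.g. inputs "AB" and outputs "CDF".
    all_params = sorted(set(inputs) | set(outputs))
    cases = []
    # Priority order: start with the same number of inputs as the original case,
    # then grow the input size until only one expected output parameter remains.
    for size in range(len(inputs), len(all_params)):
        for new_inputs in combinations(all_params, size):
            if set(new_inputs) == set(inputs):
                continue  # skip the original, untransformed case
            expected = [p for p in all_params if p not in new_inputs]
            cases.append(("".join(new_inputs), "".join(expected)))
    return cases

print(transformed_cases("AB", "CDF"))
# first entries: ('AC', 'BDF'), ('AD', 'BCF'), ... and finally ('BCDF', 'A')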
The method also includes setting a parameter f, given by the user, which represents the ratio of the number of transformations actually tested to the total number of possible transformations. For example, if a certain pair of input and output test cases yields 20 transformed cases after combination and 10 of them are selected for testing, then f is 50%. Based on this principle of testing transformed inputs and outputs, the software accuracy rate is determined by the following formula:
F-expect=[A/(A+B)]*[k+(1-k)*f];
the method comprises the following steps of A, B, K, F and F, wherein A refers to the number of test cases with correct returned results in the test process, B refers to the number of test cases with incorrect returned results in the test process, k refers to the accuracy proportion of the test cases of the original log file, 0.95 is usually adopted under the normal condition, f refers to the given combination case proportion, 50% is generally adopted as described above, and a tester gives the test cases according to the specific conditions of the test cases, for example, the test cases support various deformations, or the number of the test cases is relatively controllable, so that the value f can be properly increased, and if the test cases only support a small amount of deformations, the value f can be reduced.
The term k+(1-k)*f can be understood as mapping the value of f, with the same monotonic trend, into a narrower percentage range; for example, when k is 0.95, f is mapped into the range 95%-100%.
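As a quick numeric illustration of the formula (the counts A and B below are made-up values, not data from the publication):
def f_expect(a, b, k=0.95, f=0.5):
    # a: test cases with correct returned results, b: test cases with incorrect results,
    # k: accuracy proportion of the original log-file cases, f: proportion of combination cases tested.
    return (a / (a + b)) * (k + (1 - k) * f)

# 48 correct and 2 incorrect cases, with the default k = 0.95 and f = 50%:
print(f_expect(48, 2))  # 0.96 * 0.975 = 0.936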
In some optional embodiments, after executing the API interface test code, the method further includes generating a test report reflecting the accuracy of the API interface of the system under test.
As shown in fig. 2, the present application handles the test results through alarms. For example, if an output result is found to be inconsistent with the expected result during testing, the testers and developers are notified; alternatively, after multiple tests the accuracy rate is calculated, and the testers and developers are notified when the accuracy rate falls below a threshold.
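A minimal sketch of such an alarm rule, assuming notify() stands in for whatever notification channel (mail, instant message, etc.) a deployment would actually use, and with an accuracy threshold that the publication does not specify:
ACCURACY_THRESHOLD = 0.9  # assumed value

def notify(message):
    # Placeholder for the real notification channel to testers and developers.
    print("ALERT:", message)

def check_results(correct, incorrect, k=0.95, f=0.5):
    accuracy = (correct / (correct + incorrect)) * (k + (1 - k) * f)
    if incorrect > 0:
        notify("%d test case(s) returned results inconsistent with the expected results" % incorrect)
    if accuracy < ACCURACY_THRESHOLD:
        notify("API accuracy %.3f is below threshold %.2f" % (accuracy, ACCURACY_THRESHOLD))
    return accuracy

check_results(correct=48, incorrect=2)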
In a second aspect of the present application, there is provided an automatic interface testing apparatus based on machine learning corresponding to the above method, as shown in fig. 3, including:
and the operation record acquisition module is used for acquiring the operation record generated by the system to be tested in at least a production link from the log file.
And the parameter acquisition module is used for acquiring the input parameters and the output parameters of the API interface in the running record and the API request protocol in the process of returning the input parameters to the output parameters.
And the test code generation module is used for generating an API interface test code according to the input parameters, the output parameters and the API request protocol.
And the test code execution module is used for executing the API test code and determining whether the tested system runs normally.
In some optional embodiments, the operation record acquisition module includes:
a message queue reading unit, configured to obtain the operation record of the system under test through the message queue of the log system.
In some optional embodiments, the parameter acquisition module includes:
a character matching unit, configured to intercept the input parameters, the output parameters and the API request protocol from the operation record through key character matching.
In some optional embodiments, the test code generation module includes:
an execution code generating unit, configured to generate execution code for calling the system under test according to the input parameters and the API request protocol;
an execution code result receiving setting unit, configured to set actual output parameters for receiving the result of the executed code; and
a result comparison unit, configured to take the output parameters obtained from the operation record as expected output parameters and generate analysis code for comparing the actual output parameters with the expected output parameters.
In some optional embodiments, the automatic interface testing apparatus further includes:
a test report generating module, configured to generate, after the API interface test code is executed, a test report reflecting the accuracy of the API interface of the system under test.
A third aspect of the application provides a computer device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the processor executing the computer program for implementing the machine learning based automatic interface testing method as described above.
A fourth aspect of the present application provides a readable storage medium storing a computer program for implementing the machine learning based automatic interface testing method as described above when the computer program is executed by a processor.
In particular, according to embodiments of the present application, the processes described above with reference to the flow diagrams may be implemented as a computer software program, in particular a computer program installed on a mobile phone terminal, which is capable of interacting with a server. For example, embodiments of the present application include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. The computer storage media of the present application may be computer-readable signal media or computer-readable storage media or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present application may be implemented by software or hardware. The modules or units described may also be provided in a processor, the names of which in some cases do not constitute a limitation of the module or unit itself.
The above description is only for the specific embodiments of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present application should be covered within the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (6)

1. An automatic interface testing method based on machine learning is characterized by comprising the following steps:
obtaining, from a log file, an operation record generated by the system under test at least in the production stage;
obtaining input parameters and output parameters of an API interface in the operation record, and the API request protocol used between receiving the input parameters and returning the output parameters, wherein the input parameters, the output parameters and the API request protocol are intercepted from the operation record through key character matching;
generating API interface test code according to the input parameters, the output parameters and the API request protocol, wherein the API interface test code comprises execution code for calling the system under test according to the input parameters and the API request protocol; actual output parameters are set for receiving the result of the executed code; and the output parameters obtained from the operation record are taken as expected output parameters, and analysis code is generated for comparing the actual output parameters with the expected output parameters; and
executing the API interface test code to determine whether the system under test operates normally;
wherein the accuracy of the system under test is determined by the following formula:
F-expect= [A/(A+B)]*[k+(1-k)*f];
a refers to the number of test cases with correct returned results in the test process, B refers to the number of test cases with incorrect returned results in the test process, k is the accuracy proportion of the test cases of the original log file, the value is 0.95, f is the proportion of a given combination case, and is the proportion of the conversion number to be tested to the whole convertible test number, wherein the conversion number refers to the conversion of the test cases when the test cases consisting of input and output parameters are obtained according to the log;
k+(1-k)*f means that f is mapped, with the same monotonic trend, into the range 95%-100% when k is 0.95;
wherein executing the API interface test code comprises: automatically executing the API interface test code and continuously obtaining the actual output, the system under test generating a new log file during this process, and the whole process being executed in a loop with continuous machine learning and checking.
2. The machine learning-based automatic interface testing method of claim 1, wherein obtaining the operation record of the system under test comprises:
obtaining the operation record of the system under test through a message queue of the log system.
3. The machine learning-based automated interface testing method of claim 1, wherein executing the API interface test code further comprises:
generating a test report reflecting the accuracy of the API interface of the system under test.
4. An automatic interface testing device based on machine learning, comprising:
an operation record acquisition module, configured to obtain, from a log file, the operation record generated by the system under test at least in the production stage;
a parameter acquisition module, configured to obtain input parameters and output parameters of an API interface in the operation record, and the API request protocol used between receiving the input parameters and returning the output parameters, wherein the parameter acquisition module comprises: a character matching unit, configured to intercept the input parameters, the output parameters and the API request protocol from the operation record through key character matching;
a test code generation module, configured to generate API interface test code according to the input parameters, the output parameters and the API request protocol, wherein the test code generation module comprises: an execution code generating unit, configured to generate execution code for calling the system under test according to the input parameters and the API request protocol; an execution code result receiving setting unit, configured to set actual output parameters for receiving the result of the executed code; and a result comparison unit, configured to take the output parameters obtained from the operation record as expected output parameters and generate analysis code for comparing the actual output parameters with the expected output parameters; and
a test code execution module, configured to execute the API interface test code and determine whether the system under test operates normally, wherein the accuracy of the system under test is determined by the following formula:
F-expect= [A/(A+B)]*[k+(1-k)*f];
A refers to the number of test cases whose returned results are correct during testing, B refers to the number of test cases whose returned results are incorrect, k is the accuracy proportion of the test cases of the original log file, taking the value 0.95, and f is the given proportion of combination cases, namely the ratio of the number of transformations to be tested to the total number of possible transformations, where a transformation refers to transforming a test case composed of input and output parameters obtained from the log;
k+(1-k)*f means that f is mapped, with the same monotonic trend, into the range 95%-100% when k is 0.95;
wherein the test code execution module is configured to automatically execute the API interface test code and continuously obtain the actual output, the system under test generating a new log file during this process, and the whole process being executed in a loop with continuous machine learning and checking.
5. The machine learning-based automatic interface testing apparatus of claim 4, wherein the operation record acquisition module comprises:
a message queue reading unit, configured to obtain the operation record of the system under test through the message queue of the log system.
6. The machine learning-based automatic interface testing apparatus of claim 4, wherein the automatic interface testing apparatus further comprises:
a test report generating module, configured to generate, after the API interface test code is executed, a test report reflecting the accuracy of the API interface of the system under test.
CN202010717743.6A 2020-07-23 2020-07-23 Automatic interface testing method and device based on machine learning Active CN111782546B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010717743.6A CN111782546B (en) 2020-07-23 2020-07-23 Automatic interface testing method and device based on machine learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010717743.6A CN111782546B (en) 2020-07-23 2020-07-23 Automatic interface testing method and device based on machine learning

Publications (2)

Publication Number Publication Date
CN111782546A CN111782546A (en) 2020-10-16
CN111782546B true CN111782546B (en) 2021-10-01

Family

ID=72763947

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010717743.6A Active CN111782546B (en) 2020-07-23 2020-07-23 Automatic interface testing method and device based on machine learning

Country Status (1)

Country Link
CN (1) CN111782546B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114253867B (en) * 2022-03-02 2022-06-14 北京仁科互动网络技术有限公司 Automatic testing method, device and system based on neural network model

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112328486A (en) * 2020-11-06 2021-02-05 深圳壹账通智能科技有限公司 Interface automation test method and device, computer equipment and storage medium
CN112631694B (en) * 2020-12-11 2023-08-11 远光软件股份有限公司 API (application program interface) encapsulation calling method and system
CN113010413B (en) * 2021-02-20 2023-04-25 烽火通信科技股份有限公司 Automatic interface testing method and device
CN115776456A (en) * 2022-11-28 2023-03-10 重庆长安汽车股份有限公司 Communication protocol testing method, device, testing equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109960653A (en) * 2019-02-18 2019-07-02 天津五八到家科技有限公司 Regression testing method, device, equipment and storage medium
CN110287069A (en) * 2019-05-21 2019-09-27 平安银行股份有限公司 ESB automatic interface testing method, server and computer readable storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103313289B (en) * 2012-03-09 2016-05-11 腾讯科技(深圳)有限公司 WAP system automation test macro and method
CN105681126B (en) * 2015-12-30 2019-07-26 合一网络技术(北京)有限公司 A kind of automated testing method and system based on protocol interface
US10360087B2 (en) * 2017-10-27 2019-07-23 International Business Machines Corporation Web API recommendations based on usage in cloud-provided runtimes
CN108762742A (en) * 2018-05-18 2018-11-06 深圳壹账通智能科技有限公司 Data flow and the analysis method of service route, device, equipment and medium
CN109271313A (en) * 2018-08-13 2019-01-25 中国平安财产保险股份有限公司 Code test method, device and computer readable storage medium
CN111190808A (en) * 2018-11-14 2020-05-22 北京京东尚科信息技术有限公司 Automated testing method, system, device and computer readable storage medium
CN111209181A (en) * 2018-11-21 2020-05-29 北京京东尚科信息技术有限公司 Regression testing method, system, device and computer storage medium
CN109828900A (en) * 2018-12-14 2019-05-31 深圳壹账通智能科技有限公司 Test script automatic generation method, device, electronic equipment and storage medium
CN109783367B (en) * 2018-12-15 2023-10-27 中国平安人寿保险股份有限公司 Interface testing method, device, computer device and storage medium
CN110309064A (en) * 2019-05-30 2019-10-08 重庆金融资产交易所有限责任公司 Unit test method, device, equipment and storage medium based on log recording

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109960653A (en) * 2019-02-18 2019-07-02 天津五八到家科技有限公司 Regression testing method, device, equipment and storage medium
CN110287069A (en) * 2019-05-21 2019-09-27 平安银行股份有限公司 ESB automatic interface testing method, server and computer readable storage medium


Also Published As

Publication number Publication date
CN111782546A (en) 2020-10-16

Similar Documents

Publication Publication Date Title
CN111782546B (en) Automatic interface testing method and device based on machine learning
CN108415832B (en) Interface automation test method, device, equipment and storage medium
CN107506451B (en) Abnormal information monitoring method and device for data interaction
CN110515827A (en) Automated testing method, device, computer equipment and storage medium
CN111240940A (en) Real-time service monitoring method and device, electronic equipment and storage medium
EP4050867A2 (en) Method and apparatus of synchronizing data, electronic device and storage medium
CN113760730A (en) Automatic testing method and device
CN111831536A (en) Automatic testing method and device
CN114328250A (en) Automatic self-checking method, medium and device for software system
CN112445860B (en) Method and device for processing distributed transaction
US10803861B2 (en) Method and apparatus for identifying information
CN112910855B (en) Sample message processing method and device
CN110806967A (en) Unit testing method and device
CN113704079A (en) Interface testing method and device based on Protobuf
CN112817874A (en) User interface test method, device, equipment and medium
CN110333897A (en) A kind of interface allocation method, device, medium and electronic equipment
CN111158661A (en) System interface docking method, device, medium and electronic equipment
CN111178014A (en) Method and device for processing business process
CN113268417B (en) Task execution method and device
CN113688152B (en) System function self-checking method and device
CN113342633B (en) Performance test method and device
CN117827630A (en) Method and device for generating test script
CN117130990A (en) Data processing method and device
CN118035217A (en) Data processing method, device, electronic equipment and readable storage medium
CN116627478A (en) File data processing method and device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant