CN109344053B - Interface coverage test method, system, computer device and storage medium - Google Patents
- Publication number: CN109344053B (application CN201811019689A)
- Authority
- CN
- China
- Prior art keywords
- test
- interface
- service
- task
- data
- Prior art date
- Legal status: Active (an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F11/00—Error detection; Error correction; Monitoring
- G06F11/36—Preventing errors by testing or debugging software
- G06F11/3668—Software testing
- G06F11/3672—Test management
- G06F11/3676—Test management for coverage analysis
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The present invention relates to the field of network data processing technologies and, in particular, to an interface coverage test method, system, computer device, and storage medium. The interface coverage test method comprises the following steps: invoking a task group, classifying and stripping the tasks in the task group according to a preset time threshold, and sorting them by generation time to form a group of timed tasks; classifying, assembling, and distributing the service code of the timed tasks into a service framework, which obtains the service code and performs data processing to generate a configuration file for the interface test; and invoking the processed service code and the configuration file, executing the interface test, generating test results, and obtaining the test results to determine whether task execution succeeded. The invention performs timed screening through the task group and has the service framework organize the service code into a configuration file, thereby achieving full coverage of both API interfaces and code interfaces.
Description
Technical Field
The present invention relates to the field of network data processing technologies, and in particular, to an interface coverage test method, system, computer device, and storage medium.
Background
Software testing describes a process used to verify the correctness, integrity, security, and quality of software. In other words, software testing is an audit: a comparison between actual output and expected output. The classical definition of software testing is: running the program under specified conditions to discover program errors, measure software quality, and evaluate whether the software meets its design requirements.
Currently, in software test development, testing tools are generally used to simulate test methods running online, which improves test efficiency. The development of testing tools has greatly increased the degree of automation in software testing, freeing testers from complex, repetitive testing activities so that they can concentrate on meaningful work such as test design. By adopting automatic comparison techniques, the judgment of test-case execution results can be completed automatically, avoiding the omissions caused by manual comparison.
However, current testing tools are limited: for API (Application Programming Interface) interfaces, tools such as JMeter are generally used, while for non-standard interfaces, test code is generally written directly. A single testing tool cannot be compatible with both API interfaces and code interfaces that do not expose a standard interface; compatibility is low, manual intervention is needed for adjustment, test time is long, and efficiency is low.
Disclosure of Invention
In view of this, it is necessary to provide an interface coverage test method, system, computer device, and storage medium addressing the low compatibility between API interfaces and code interfaces in existing test processes.
An interface coverage test method comprises the following steps:
invoking a task group, classifying and stripping the tasks in the task group by the time nodes of their generation times according to a preset time threshold, and sorting by generation time to form a group of timed tasks;
classifying and assembling the service code of the timed tasks according to API data and code data, and distributing the service code into a service framework, where the service framework obtains the service code and, after data processing, generates a configuration file for the interface test;
invoking the processed service code and the configuration file, executing the interface test, generating test results by identifying data in the service framework, and writing the test results into a database;
and obtaining the test results from the database and determining whether the timed task executed successfully.
In one embodiment, the invoking of the task group, the classifying and stripping of the tasks in the task group according to a preset time threshold, and the sorting by generation time to form a group of timed tasks include:
identifying the time node of each task through the Jenkins tool, setting up a query script for the time node, and invoking the query script to connect to the Jenkins tool and monitor the time nodes of the tasks being classified and stripped; when a time node is consistent with the time threshold, classification and stripping of the task group continues, and when a time node is inconsistent with the time threshold, the time threshold is revised.
In one embodiment, the classifying, assembling, and distributing of the service code of the timed tasks into a service framework, where the service framework obtains the service code and performs data processing to generate the configuration file for the interface test, includes:
initializing the service code to obtain an interface identification code;
inputting the interface identification code into the configuration file, where the configuration file comprises the interface type, generation time, and storage location;
the service framework encapsulates the interface identification code to form an interface identification framework, and the interface identification framework is used for identifying the type of the interface.
In one embodiment, the performing of the interface test comprises:
starting the main-class main() method in the service framework to read the configuration file, and cleaning and initializing all process data;
invoking each subclass in the service framework, using the subclass setUp() method to re-clean the data already cleaned and initialized by the main-class main() method, and performing the interface test to obtain the test result of the corresponding subclass;
in the main-class main() method, integrating the test results of the subclasses using the HTMLTestRunner tool and outputting the integrated results in HTML format.
In one embodiment, the obtaining the test result from the database includes:
reading the original JMeter test script from the database, parsing the original JMeter test script, and separating out the test scene it contains;
obtaining the test scene associated with the original JMeter test script in the timed task, and substituting that associated scene for the separated scene to generate a new JMeter test script;
and identifying the data in the service framework according to the new JMeter test script to obtain a test result.
In one embodiment, while the main-class main() method reads the configuration file, it loads the subclasses to be executed into a test suite for sequential invocation;
the test suite is a set of test cases, and any one test suite comprises a group of test cases;
if any two test cases in a group have a dependency relationship in a service scenario, the two test cases are put into one test suite for execution.
In one embodiment, if the timed task executes successfully, information on the successful execution is transmitted to a client through an HTML table in the HTMLTestRunner tool, and the client acquires that information by accessing the page bearing the HTML table;
and if the timed task fails to execute, the service framework is started to regenerate a new configuration file.
An interface coverage test system comprises the following units:
a task sorting unit, for invoking a task group, classifying and stripping the tasks in the task group by the time nodes of their generation times according to a preset time threshold, and sorting by generation time to form a group of timed tasks;
a task processing unit, for classifying and assembling the service code of the timed tasks according to API data and code data and distributing the service code into a service framework, where the service framework obtains the service code and, after data processing, generates a configuration file for the interface test;
a result generating unit, for invoking the processed service code and the configuration file, executing the interface test, generating test results by identifying data in the service framework, and writing the test results into a database;
and a result judging unit, for obtaining the test results from the database and determining whether the timed task executed successfully.
A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions which, when executed by one or more of the processors, cause the one or more processors to perform the steps of the above-described interface coverage test method.
A storage medium storing computer readable instructions that, when executed by one or more processors, cause one or more of the processors to perform the steps of the interface coverage test method described above.
The interface coverage test method, system, computer device, and storage medium comprise: invoking a task group, classifying and stripping the tasks in the task group by the time nodes of their generation times according to a preset time threshold, and sorting by generation time to form a group of timed tasks; classifying and assembling the service code of the timed tasks according to API data and code data, and distributing the service code into a service framework, where the service framework obtains the service code and, after data processing, generates a configuration file for the interface test; invoking the processed service code and the configuration file, executing the interface test, generating test results by identifying data in the service framework, and writing the test results into a database; and obtaining the test results from the database and determining whether the timed task executed successfully. Aiming at the low compatibility between API interfaces and code interfaces in the existing test process and the need for manual adjustment, the invention performs timed screening through the task group and starts the service framework to organize the service code into a configuration file, thereby achieving full coverage of both API interfaces and code interfaces.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention.
FIG. 1 is an overall flow chart of an interface coverage test method of the present invention;
FIG. 2 is a schematic diagram illustrating a task ordering process in an interface coverage test method according to the present invention;
FIG. 3 is a flow chart of an interface test procedure in an interface coverage test method of the present invention;
FIG. 4 is a block diagram of an interface coverage test system of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
As used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless expressly stated otherwise, as understood by those skilled in the art. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Fig. 1 is a flowchart of an interface coverage test method according to an embodiment of the present invention, as shown in fig. 1, the interface coverage test method includes the following steps:
S1, a task group is invoked, the tasks in the task group are classified and stripped by the time nodes of their generation times according to a preset time threshold, and a group of timed tasks is formed by sorting on generation time;
the preset time threshold is set according to the duration of each task's generation and is generated automatically when tasks are stored in the database. However, an error in the generation record, or a high-concurrency event caused by many tasks being stored in the database at once, may leave the generation or end time node of a task inconsistent with the preset time threshold; the preset time threshold therefore requires further revision.
S2, the service code of the timed tasks is classified and assembled according to API data and code data and distributed into a service framework, and the service framework obtains the service code and, after data processing, generates a configuration file for the interface test;
the content of the configuration file can comprise the data type, data generation time, data storage location, and so on; once the configuration file has been generated, testing of the API data and code interfaces can be performed simply by invoking it.
S3, the processed service code and the configuration file are invoked, the interface test is performed, data in the service framework are identified, and test results are generated and written into a database;
the processing of the configuration file mainly consists of adaptively selecting its content according to the API data and/or code data to be tested. For example, if the original configuration file carries a data generation time but the API data or code data under test places no limit on test time, the data-generation-time entry is removed from the configuration file to increase its operating efficiency.
S4, the test results are obtained from the database and it is determined whether the timed task executed successfully.
If the timed execution succeeded, the interfaces of the API data and the code data have been fully covered; otherwise, steps S1 to S3 must be re-executed, and the time threshold, the configuration file, etc. modified for applicability.
According to the method, when the interface test is carried out on a group of service frameworks containing API data and code data, the API interfaces and the code interfaces can be accurately covered and adapted, so that the data test is fully automated and manual intervention is reduced.
In this embodiment, the preset time threshold is calculated from the generation time of each task in the task group, and can be set using the Apriori algorithm. Apriori requires multiple iterations: the database is scanned once and the count of each single item is tallied, and the single items meeting the minimum support are extracted into the 1-item frequent itemset L1, which serves as the basis for the next scan. The database is then scanned repeatedly: after the (k-1)-th scan generates the (k-1)-item frequent itemset Lk-1, the k-th scan first generates a k-item candidate set Ck from the itemsets in Lk-1 through a join operation, and then deletes the itemsets in Ck that do not meet the minimum support count through a pruning operation, yielding the frequent itemset Lk. The database is scanned repeatedly, each frequent itemset generating the next candidate set, until no new candidate set is generated.
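The join-and-prune loop described above can be sketched as follows, with the task database stood in for by a plain list of transactions (a minimal in-memory version, not the patent's implementation):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Return all frequent itemsets as a dict {frozenset: support count}."""
    # First scan: count single items; those meeting min_support form L1.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(frequent)
    k = 2
    while frequent:
        # Join step: build k-item candidates Ck from the (k-1)-item frequent sets.
        items = sorted({i for s in frequent for i in s})
        candidates = [frozenset(c) for c in combinations(items, k)]
        # Prune step: every (k-1)-subset of a candidate must itself be frequent.
        candidates = [c for c in candidates
                      if all(frozenset(sub) in frequent
                             for sub in combinations(c, k - 1))]
        # Rescan: count candidate support and keep Lk.
        counts = {c: sum(1 for t in transactions if c <= set(t))
                  for c in candidates}
        frequent = {c: n for c, n in counts.items() if n >= min_support}
        result.update(frequent)
        k += 1
    return result
```

For example, with four task "transactions" and a minimum support of 2, the pair {a, b} survives while the triple {a, b, c} is discarded for appearing only once.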
In the task-stripping process, the time threshold can be treated as a variable; that is, a weight is added to correct it, and when the time threshold is smaller than the task generation time, the time threshold must be increased, i.e., it can be fed into an error-correction model for training. The error-correction model can be computed using the root mean square error (RMSE), or corrected using the Granger representation theorem.
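A sketch of such a correction, assuming a simple weighted RMSE adjustment (the weight value and the rule of adjusting only when tasks overshoot the threshold are illustrative assumptions, not specified by the patent):

```python
import math

def rmse(predicted, actual):
    """Root mean square error between predicted and observed values."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual))
                     / len(actual))

def corrected_threshold(threshold, task_times, weight=0.5):
    """If any task generation time exceeds the threshold, nudge the threshold
    upward by a weighted share of the RMSE; otherwise leave it unchanged."""
    err = rmse([threshold] * len(task_times), task_times)
    overshoot = any(t > threshold for t in task_times)
    return threshold + weight * err if overshoot else threshold
```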
When the timed tasks are classified and assembled, the service framework pairs the tasks carrying API data and the tasks carrying code data into a data group by invoking the preset API data form and code data form stored in the database. During testing, the data group must be checked to confirm that it contains both API data and code data; if only one data form is found in the data group, the data acquisition module in the service framework must be corrected.
FIG. 2 is a schematic diagram of a task sorting process in an embodiment of the present invention. As shown in the drawing, the process of invoking the task group, classifying and stripping the tasks in the task group according to a preset time threshold, and forming a group of timed tasks sorted by generation time includes:
identifying the time node of each task through the Jenkins tool, setting up a query script for the time node, and invoking the query script to connect to the Jenkins tool and monitor the time nodes of the tasks being classified and stripped; when a time node is consistent with the time threshold, classification and stripping of the task group continues, and when a time node is inconsistent with the time threshold, the time threshold is revised.
Jenkins is an open-source software project: a continuous-integration tool based on Java, used to monitor continuously repeated work, with the aim of providing an open, easy-to-use software platform that makes continuous integration of software possible.
In this embodiment, when Jenkins is used to compare the time threshold with the task time nodes, the query script applied can query both the Jenkins master node and its slave nodes; specifically, the master and slave nodes are controlled by the server during the query. The server receives a node-query request sent by a front-end module, responds to it, and establishes a communication connection with the query script; by controlling the connection process between the slave nodes and the master node, the server determines their connection state. If the connection process between the master node and the slave nodes can be queried, the query script is in a normal running state; if the connection state cannot be queried, the query script is disconnected and must be reconfigured.
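A query script of this kind can read node state from Jenkins' JSON API. The payload shape below follows the `/computer/api/json` endpoint's `computer` list with `displayName` and `offline` fields; treating the script as healthy exactly when the master node's state is visible is an assumption made for illustration:

```python
import json

def node_states(payload):
    """Map each Jenkins node name to True (connected) or False (offline),
    given the JSON body returned by /computer/api/json."""
    data = json.loads(payload)
    return {c["displayName"]: not c.get("offline", True)
            for c in data["computer"]}

def query_ok(states, master="master"):
    """Per the embodiment: the query script is in a normal running state only
    if the master node's connection state was retrieved; otherwise it needs
    to be reconfigured."""
    return master in states
```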
In this embodiment, Jenkins compares the time threshold with the task time nodes, so that the time threshold can be better corrected to match the time nodes.
In one embodiment, the classifying, assembling, and distributing of the service code of the timed tasks into a service framework, where the service framework obtains the service code and performs data processing to generate the configuration file for the interface test, comprises:
initializing the service code to obtain an interface identification code;
the initialization process mainly clears and sorts useless data in the service codes, reduces the data volume in the service codes, and can reduce the total data volume in a layer of service frame so as to realize quick encapsulation.
Inputting the interface identification code into the configuration file, wherein the configuration file comprises interface types, generation time and storage positions;
the interface type comprises an API interface and a code interface, the generation time can be configured in a sequence from far to near or in a sequence from near to far, meanwhile, data in the same generation time can be divided or combined to generate a data group, and a configuration file is generated according to the generation time of the data group. The service framework encapsulates the interface identification code to form an interface identification framework, and the interface identification framework is used for identifying the type of the interface.
The recognition framework recognizes the interface mainly by recognizing the API function to recognize the API interface, and recognizes the code interface by recognizing the characteristic characters.
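A minimal sketch of this recognition step (the API-function pattern and the characteristic characters below are purely illustrative assumptions; the patent does not name the actual markers):

```python
import re

# Hypothetical markers: an "API function" naming pattern for API interfaces
# and characteristic characters for raw code interfaces.
API_PATTERN = re.compile(r"\bapi_\w+\s*\(")
CODE_MARKERS = ("#!", "def ", "class ", "{")

def classify_interface(snippet):
    """Return 'api' if an API function is recognized, 'code' if characteristic
    characters are recognized, else 'unknown'."""
    if API_PATTERN.search(snippet):
        return "api"
    if any(marker in snippet for marker in CODE_MARKERS):
        return "code"
    return "unknown"
```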
In this embodiment, the SQL language, i.e., Structured Query Language, is applied when processing the service code. SQL is a special-purpose programming language: a database query and programming language for accessing data and for querying, updating, and managing relational database systems, and also the extension of database script files. Queries are the most frequent SQL operation, whether high-level or low-level.
When SQL is applied to initialize and organize the service code, the existing service code is first cleared of historical data, a new database is then created, and the service code is added to the newly created database; a group of service codes is then subdivided by storage location according to the initial size of the primary data file, its maximum growth, and its growth rate. Subdividing a group of service codes generates several sub-service code packages, on which API interface identification is then performed.
In this embodiment, initializing and organizing the service code through SQL effectively improves the recognition rate of the API interfaces.
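The clear-create-load-subdivide sequence can be sketched with SQLite standing in for the relational database (the table and column names, and splitting by row count rather than by data-file size and growth rate, are assumptions made for illustration):

```python
import sqlite3

def rebuild_service_codes(codes):
    """Clear historical data, create a fresh database, and load the service
    codes, as the embodiment describes."""
    conn = sqlite3.connect(":memory:")  # a new database stands in for "create a new database"
    conn.execute("DROP TABLE IF EXISTS service_codes")  # clear historical data
    conn.execute("CREATE TABLE service_codes (id INTEGER PRIMARY KEY, body TEXT)")
    conn.executemany("INSERT INTO service_codes (body) VALUES (?)",
                     [(c,) for c in codes])
    conn.commit()
    return conn

def subpackages(conn, max_rows):
    """Subdivide the stored service codes into sub-service code packages of
    bounded size, ready for API interface identification."""
    rows = [r[0] for r in
            conn.execute("SELECT body FROM service_codes ORDER BY id")]
    return [rows[i:i + max_rows] for i in range(0, len(rows), max_rows)]
```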
In one embodiment, a Bat script, also referred to as a batch script, is used to input the interface identification code into the configuration file.
Batch processing means processing a batch of objects, and is generally considered a simplified scripting language; it is used in DOS and Windows systems, and a batch file has the extension .bat. The two most common kinds of batch processing today are DOS batch and PS batch: PS batch processing is scripting based on the powerful photo-editing software Photoshop, used to process pictures in batches, while DOS batch processing is based on DOS commands and automatically executes them in batches to achieve a particular operation. More complex cases require commands such as if, for, and goto to control program flow, as in high-level languages such as C or Basic. If still more complex applications must be implemented, external programs are needed, including external commands provided by the system itself and tools or software provided by third parties. Although batch files run in a command-line environment, they are not limited to command-line software: any program that can run under the system can be run from a batch file.
In this embodiment, the batch processing file by the Bat script can implement simultaneous testing of multiple API interfaces and code interfaces, so as to implement that the interface test still has stability under the high concurrency condition.
In one embodiment, shell script is used to input the interface identification code into the configuration file.
A Shell script is similar to a batch (BAT) script under Windows/DOS: a program file into which various commands are placed in advance for convenient one-time execution, mainly to make configuration and administration easier for an administrator. It is more powerful than batch processing under Windows, more efficient than programs edited with other programming tools, and it uses commands under Linux/Unix.
In the embodiment, the Shell script is used, so that the interface test can be adapted to not only Windows system, but also Linux/Unix system, and the system universality of the interface test is improved.
In this embodiment, the Bat script or Shell script re-encapsulates the sub-service code packages processed by SQL to generate code sets and brings the code sets into the configuration file for classification: first by interface type, i.e., API interface versus code interface, then by the generation time of each code set, and finally by the storage location of the code set.
The data in the code sets are arranged in the configuration file and stored according to its classification, and the service framework classifies and encapsulates the identification codes to form a layer of identification framework. When the interface test is performed, the identification framework is invoked for test identification; if a test result shows a problem, the configuration file can be changed to revise the code set.
FIG. 3 is a flow chart of an interface testing process according to one embodiment of the invention, as shown, comprising:
S201, the main-class main() method in the service framework is started to read the configuration file, and all process data are cleaned and initialized;
in the Java language, a program generally takes the main function as its entry point, and execution starts from the main function (Java applets are the exception and do not require one); this is commonly used in program testing. The signature of the main function is: public static void main(String[] args).
S202, each subclass in the service framework is invoked, the subclass setUp() method is used to re-clean the data already cleaned and initialized by the main() method, and the interface test is performed to obtain the test result of the corresponding subclass;
when the main() method is applied to data cleaning, the data generation time is identified first; that is, data are classified against the time threshold, data generated before the threshold are cleaned as historical data, unnecessary fields are removed, the range of missing values is determined, and missing content is filled in. Three filling approaches are mainly used: filling missing values from business knowledge or experience, filling from computed statistics of the same indicator (mean, median, mode, etc.), and filling from computed results of different indicators. After the cleaning is completed, the state is set to the initialized state.
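The same-indicator filling step can be sketched as follows (the function and parameter names are illustrative; `default` stands in for filling from business knowledge or from a different indicator):

```python
from statistics import mean, median, mode

def fill_missing(values, strategy="mean", default=None):
    """Fill None entries in a numeric column using the same-indicator
    statistics named in the embodiment (mean, median, mode)."""
    present = [v for v in values if v is not None]
    if strategy == "mean":
        fill = mean(present)
    elif strategy == "median":
        fill = median(present)
    elif strategy == "mode":
        fill = mode(present)
    else:
        fill = default  # business knowledge / other-indicator fallback
    return [fill if v is None else v for v in values]
```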
S203, in the main-class main() method, the test results of the subclasses are integrated using the HTMLTestRunner tool, and the integrated results are output in HTML format.
When the subclass setUp() method is applied to re-clean the data already cleaned and initialized by the main-class main() method, logic errors must be cleaned, chiefly by removing duplicates, removing unreasonable values, and correcting contradictions. Concretely, the service code is invoked in test_xxx() methods, results are compared by querying database results or by querying logs, and the service's data cleaning is performed in the test_xxx() method.
In this embodiment, the main-class main() method performs a first cleaning and initialization of the data during the interface test, and the subclass setUp() method performs a second cleaning, improving data recognition so that the encapsulated service framework is better suited to testing both API interfaces and code interfaces.
In this embodiment, the service framework preferably uses the Python unittest framework to initialize the code. The four core parts of unittest are TestCase, TestSuite, TestRunner, and TestFixture:
(1) An instance of TestCase is a test case: a complete test flow, including preparing the pre-test environment (setUp), executing the test code (run), and restoring the post-test environment (tearDown). The essence of unit testing here is that a test case is a complete test unit that can be run to verify one problem;
(2) A TestSuite collects test cases together; test suites can be nested;
(3) A TestLoader is used to load TestCases into a TestSuite;
(4) A TextTestRunner is used to execute test cases; its run(test) method executes the run(result) method of the TestSuite/TestCase;
(5) Test results are stored in a TextTestResult instance, including how many test cases were run, how many succeeded, how many failed, and so on.
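The four parts above fit together as in this minimal sketch (the test case body is a placeholder for invoking the real service code):

```python
import io
import unittest

class ServiceCase(unittest.TestCase):
    def setUp(self):
        # TestFixture: prepare the pre-test environment (the embodiment's
        # second-stage data cleaning would happen here).
        self.data = [1, 2, 3]

    def test_sum(self):
        # TestCase body: invoke the service code and compare the result.
        self.assertEqual(sum(self.data), 6)

    def tearDown(self):
        # Restore the post-test environment.
        self.data = None

def run_suite():
    # TestLoader loads TestCases into a TestSuite; suites can be nested.
    suite = unittest.TestSuite()
    suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(ServiceCase))
    # TextTestRunner.run(test) drives run(result); results land in a
    # TextTestResult recording how many cases ran, failed, and errored.
    result = unittest.TextTestRunner(stream=io.StringIO(), verbosity=0).run(suite)
    return result.testsRun, len(result.failures), len(result.errors)
```

In the patented flow, HTMLTestRunner would take the place of TextTestRunner so the integrated results are emitted as an HTML table.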
In one embodiment, the obtaining of the test results from the database comprises:
reading the original JMeter test script from the database, parsing the original JMeter test script, and separating out the test scene it contains;
In this embodiment, JMeter is a Java-based stress-testing tool developed by the Apache organization. Although originally designed for testing Web applications, it has since extended to other testing areas. It can be used to test static and dynamic resources such as static files, Java servlets, CGI scripts, Java objects, databases, and FTP servers. JMeter can simulate huge loads on a server, network, or object to test their strength and analyze overall performance under different load types. In addition, JMeter can perform functional/regression tests on an application, verifying through scripts with assertions that the program returns the expected results; for maximum flexibility, JMeter allows assertions to be created using regular expressions.
acquiring the test scene associated with the original JMeter test script in the timing task, and replacing the test scene separated from the original JMeter test script with that associated test scene to generate a new JMeter test script;
in this embodiment, the new JMeter test script is generated by replacing the test scene: the time or event relationship between the original JMeter test script and the test scene is mapped to associate the two, and any part of the test scene that does not match the original script is then supplemented into it to form the new JMeter test script.
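A minimal sketch of the scene-replacement step, assuming the scene is carried by a ThreadGroup element in the .jmx XML; the element layout, names, and replace_scene helper here are simplified illustrations, not the patent's actual script format:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal .jmx fragment; in a real JMeter script a "test
# scene" is typically carried by a ThreadGroup element in the test plan.
ORIGINAL_JMX = """<jmeterTestPlan>
  <ThreadGroup testname="old-scene">
    <stringProp name="num_threads">5</stringProp>
  </ThreadGroup>
</jmeterTestPlan>"""

def replace_scene(jmx_text: str, scene_name: str, threads: int) -> str:
    """Separate the scene from the original script and swap in the scene
    associated with the timing task, yielding a new script."""
    root = ET.fromstring(jmx_text)
    for group in root.iter("ThreadGroup"):
        group.set("testname", scene_name)             # replace the scene
        group.find("stringProp").text = str(threads)  # supplement mismatched fields
    return ET.tostring(root, encoding="unicode")

new_jmx = replace_scene(ORIGINAL_JMX, "task-scene", 10)
```

The same parse-modify-serialize pattern extends to any scene attribute that must be brought in line with the timing task.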
and identifying the data in the service framework according to the new JMeter test script to obtain a test result.
In this embodiment, the new JMeter test script identifies data in the service framework mainly by checking whether an API function exists in the data: if so, the interface is an API interface; otherwise, it is a code interface.
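The identification rule can be sketched as follows; API_MARKERS and classify_interface are assumed names introduced for illustration, not defined in the patent:

```python
# Illustrative markers that signal an API function/endpoint in a record.
API_MARKERS = ("GET ", "POST ", "/api/", "http")

def classify_interface(record: str) -> str:
    """Mark a record as an API interface when an API function signature
    is present in the data, otherwise as a code interface."""
    if any(marker in record for marker in API_MARKERS):
        return "API interface"
    return "code interface"

labels = [classify_interface(r) for r in
          ("POST /api/v1/orders", "def settle_order(order):")]
```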
In one embodiment, while the main class main() method reads the configuration file, it loads the subclasses to be executed into a test suite so that they are called in sequence; the test suite is a set of test cases, and any test suite comprises a group of test cases;
if any two test cases in a group have a dependency relationship at the business-scene level, the two test cases are put into one test suite for execution.
By combining business scenes that have a dependency relationship into a single test suite, this embodiment reduces the number of test scenes and improves the efficiency of the interface test.
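A sketch of grouping two scene-dependent cases into one suite, using hypothetical order-creation and payment cases (the class and field names are illustrative):

```python
import unittest

class CreateOrderTest(unittest.TestCase):
    """First case of a hypothetical business scene: create an order."""
    order_id = None

    def test_create(self):
        CreateOrderTest.order_id = 1001
        self.assertIsNotNone(CreateOrderTest.order_id)

class PayOrderTest(unittest.TestCase):
    """Second case: depends on the order created by the first case."""

    def test_pay(self):
        self.assertEqual(CreateOrderTest.order_id, 1001)

# Because the two cases share a business-scene dependency, they are put
# into one TestSuite and executed in order within that suite.
suite = unittest.TestSuite()
suite.addTest(CreateOrderTest("test_create"))
suite.addTest(PayOrderTest("test_pay"))
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

A TestSuite preserves insertion order, which is what makes this dependency-driven grouping workable.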
In one embodiment, if the timing task executes successfully, the success information is transmitted to a client through an HTML table produced by the HTMLTestRunner tool, and the client obtains this information by accessing the page containing the HTML table;
and if the timing task fails to execute, the service framework is started to regenerate a new configuration file.
Through the HTML table produced by the HTMLTestRunner tool, this embodiment displays the test result intuitively, so that a tester can obtain it in time.
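A minimal sketch of turning a test result into an HTML table, assuming a trivial SmokeTest case; the real HTMLTestRunner tool produces a richer report of the same shape:

```python
import unittest

class SmokeTest(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)

# Render the outcome as a simple HTML table a client page could display.
html = (
    "<table>"
    "<tr><th>Run</th><th>Failures</th><th>Errors</th></tr>"
    f"<tr><td>{result.testsRun}</td>"
    f"<td>{len(result.failures)}</td>"
    f"<td>{len(result.errors)}</td></tr>"
    "</table>"
)
```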
In one embodiment, an interface coverage test system is provided, as shown in fig. 4, comprising the following units:
the task ordering unit is used for retrieving a task group, classifying and stripping the tasks in the task group by the time nodes of their generation time according to a preset time threshold, and ordering them by generation time to form a group of timing tasks;
the task processing unit is used for classifying and assembling the service codes of the timing tasks according to API data and code data and distributing the service codes into a service framework, and the service framework obtains the service codes and generates a configuration file for interface test of the interface after data processing;
the result generating unit is used for calling the processed service codes and the configuration files, executing interface test, generating test results by identifying data in the service framework and writing the test results into a database;
and the result judging unit is used for acquiring a test result from the database and judging whether the execution of the timing task is successful or not.
In one embodiment, a computer device is provided that includes a memory and a processor, the memory having stored therein computer readable instructions that, when executed by the processor, cause the processor to perform the steps of: invoking a task group, classifying and stripping the tasks in the task group by the time nodes of their generation time according to a preset time threshold, and forming a group of timing tasks in order of generation time; classifying and assembling the service codes of the timing tasks according to API data and code data, and distributing the service codes into a service framework, wherein the service framework obtains the service codes and, after data processing, generates a configuration file for the interface test of the interface; calling the processed service codes and the configuration file, executing the interface test, generating test results by identifying data in the service framework, and writing the test results into a database; and obtaining a test result from the database and judging whether the timing task is successfully executed.
In one embodiment, a storage medium is provided storing computer-readable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of: invoking a task group, classifying and stripping the tasks in the task group by the time nodes of their generation time according to a preset time threshold, and forming a group of timing tasks in order of generation time; classifying and assembling the service codes of the timing tasks according to API data and code data, and distributing the service codes into a service framework, wherein the service framework obtains the service codes and, after data processing, generates a configuration file for the interface test of the interface; calling the processed service codes and the configuration file, executing the interface test, generating test results by identifying data in the service framework, and writing the test results into a database; and obtaining a test result from the database and judging whether the timing task is successfully executed. The storage medium may be a non-volatile storage medium.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by a program instructing the related hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include: Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disk, optical disk, and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, any combination of them that contains no contradiction should be considered within the scope of this description.
The above embodiments represent only some exemplary embodiments of the invention and are described in relative detail, but this is not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and modifications without departing from the spirit of the invention, all of which fall within the scope of the invention. Accordingly, the scope of protection of the present invention shall be determined by the appended claims.
Claims (9)
1. An interface coverage test method, comprising:
invoking a task group, classifying and stripping tasks in the task group according to a preset time threshold, generating a time node of the task after the preset time threshold is identified by a Jenkins tool, inquiring the time node by setting an inquiry script, calling the inquiry script to connect the Jenkins tool to monitor the time node of the task after classifying and stripping, continuing classifying and stripping the task group when the time node is consistent with the time threshold, and revising the time threshold when the time node is inconsistent with the time threshold;
classifying and assembling the service codes of the timing tasks according to API data and code data, and distributing the service codes into a service framework, wherein the service framework obtains the service codes and generates a configuration file for interface test of the interface after data processing;
calling the processed service codes and the configuration files, executing interface test, generating test results by identifying data in the service framework, and writing the test results into a database;
and obtaining a test result from the database and judging whether the timing task is successfully executed.
2. The method according to claim 1, wherein, in the step of classifying, assembling and distributing the service codes of the timing tasks into a service framework, the service framework obtains the service codes and performs data processing, and generates a configuration file for interface type testing, the method comprises:
initializing the service code to obtain an interface identification code;
inputting the interface identification code into the configuration file, wherein the configuration file comprises interface types, generation time and storage positions;
the service framework encapsulates the interface identification code to form an interface identification framework, and the interface identification framework is used for identifying the type of the interface.
3. The interface coverage test method of claim 1, wherein the performing the interface test comprises:
starting a main () method in the service framework to read the configuration file, and cleaning and initializing all process data;
calling each subclass in the service framework, cleaning the data obtained after the main class main () method is cleaned and initialized by adopting a subclass setup () method, and performing interface test to obtain a test result of the corresponding subclass;
in the main class main () method, the test results of the subclasses are integrated using an HTMLTestRunner tool, and the integrated results are output in HTML format.
4. The method for testing the coverage of an interface according to claim 1, wherein the step of obtaining the test result from the database comprises:
reading an original JMeter test script from the database, analyzing the original JMeter test script, and separating a test scene in the original JMeter test script;
acquiring a test scene which is correlated with the original JMeter test script in the timing task, and replacing the test scene separated from the original JMeter test script with the test scene which is correlated with the original JMeter test script to generate a new JMeter test script;
and identifying the data in the service framework according to the new JMeter test script to obtain a test result.
5. The interface coverage test method according to claim 3, wherein, in the process of reading the configuration file by the main class main () method, the main class main () method loads subclasses to be executed into a test suite for sequential call;
the test suite is a set of test cases, and any one of the test suites comprises a group of test cases;
if any two test cases in a group of test cases have a dependency relationship on a service scene, the two test cases are put into a test suite to be executed.
6. The method according to claim 1, wherein if the timing task is successfully executed, the information of the successful execution of the timing task is transmitted to a client through an HTML table in an HTMLTestRunner tool, and the client knows the information of the successful execution of the timing task by accessing a page with the HTML table;
and if the timing task fails to execute, starting the service framework to regenerate a new configuration file.
7. An interface coverage test system, comprising:
the task ordering unit is set to invoke a task group, classify and strip tasks in the task group according to a preset time threshold, identify a time node of the task after the preset time threshold is identified by a Jenkins tool, inquire the time node by setting a query script, call the query script to connect the Jenkins tool to monitor the time node of the task after classified stripping, and continuously classify and strip the task group when the time node is consistent with the time threshold, and revise the time threshold when the time node is inconsistent with the time threshold;
the task processing unit is used for classifying and assembling the service codes of the timing tasks according to the API data and the code data and distributing the service codes into a service framework, and the service framework obtains the service codes and generates a configuration file for interface test of the interface after data processing;
the result generating unit is used for calling the processed service codes and the configuration files, executing interface test, generating test results by identifying data in the service framework and writing the test results into a database;
and the result judging unit is used for acquiring a test result from the database and judging whether the execution of the timing task is successful or not.
8. A computer device comprising a memory and a processor, the memory having stored therein computer readable instructions that, when executed by one or more of the processors, cause the one or more processors to perform the steps of the interface coverage test method of any of claims 1 to 6.
9. A storage medium storing computer readable instructions which, when executed by one or more processors, cause one or more of the processors to perform the steps of the interface coverage test method of any of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811019689.7A CN109344053B (en) | 2018-09-03 | 2018-09-03 | Interface coverage test method, system, computer device and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109344053A CN109344053A (en) | 2019-02-15 |
CN109344053B true CN109344053B (en) | 2023-05-30 |
Family
ID=65292483
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811019689.7A Active CN109344053B (en) | 2018-09-03 | 2018-09-03 | Interface coverage test method, system, computer device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109344053B (en) |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110046095B (en) * | 2019-03-18 | 2024-02-02 | 天航长鹰(江苏)科技有限公司 | System integration method and device based on test flow improvement |
CN111767218B (en) * | 2020-06-24 | 2023-11-10 | 北京思特奇信息技术股份有限公司 | Automatic test method, equipment and storage medium for continuous integration |
CN111628915B (en) * | 2020-07-16 | 2022-03-25 | 安徽华速达电子科技有限公司 | Network connection state real-time monitoring and management method and system of gateway equipment |
CN112269740A (en) * | 2020-10-27 | 2021-01-26 | 知行汽车科技(苏州)有限公司 | Automatic testing method and device for automatic driving software |
CN113392026B (en) * | 2021-07-07 | 2023-12-19 | 北京智慧星光信息技术有限公司 | Interface automatic test method, system, electronic equipment and storage medium |
CN116016239B (en) * | 2023-01-03 | 2024-07-09 | 重庆长安汽车股份有限公司 | Service interface testing method, device, equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106021118A (en) * | 2016-06-20 | 2016-10-12 | 深圳充电网科技有限公司 | Test code generation method and device and test framework code execution method and device |
CN106484624A (en) * | 2016-10-21 | 2017-03-08 | 天津海量信息技术股份有限公司 | The method of testing of interface automatic test |
CN107688526A (en) * | 2017-08-25 | 2018-02-13 | 上海壹账通金融科技有限公司 | Performance test methods, device, computer equipment and the storage medium of application program |
CN107832207A (en) * | 2017-10-16 | 2018-03-23 | 深圳市牛鼎丰科技有限公司 | Interface performance test method, apparatus, storage medium and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||