CN110347596B - Test method, device, system, electronic equipment and medium - Google Patents

Test method, device, system, electronic equipment and medium

Info

Publication number
CN110347596B
CN110347596B
Authority
CN
China
Prior art keywords
test
server
test case
service
case
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910589684.6A
Other languages
Chinese (zh)
Other versions
CN110347596A (en)
Inventor
卢政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910589684.6A priority Critical patent/CN110347596B/en
Publication of CN110347596A publication Critical patent/CN110347596A/en
Application granted granted Critical
Publication of CN110347596B publication Critical patent/CN110347596B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3684Test management for test design, e.g. generating new test cases

Abstract

The invention discloses a test method, device, system, electronic device, and medium. The method comprises: extracting, from a test case library according to a configured server case execution allocation ratio, a first test case set to be executed by a test server and a second test case set to be executed by a service server, where the ratio of the numbers of test cases in the two sets conforms to the server case execution allocation ratio; and, according to the difference between the test response result set returned by the test server and the corresponding expected results and the difference between the service response result set returned by the service server and the corresponding expected results, determining the test cases to be modified and generating an instruction for adjusting the server case execution allocation ratio. This reduces the interference that testing causes to the real service functions of the service server while keeping the test efficient. It also helps uncover overlooked abnormal or edge cases, so that test cases can be revised and refined and the robustness with which the service functions are implemented is improved.

Description

Test method, device, system, electronic equipment and medium
Technical Field
The present invention relates to the field of internet communication technologies, and in particular, to a test method, device, system, electronic device, and medium.
Background
With the continuous development of internet technology, interactive processing of services using internet communication technology has become a mainstream trend. In service processing, a corresponding interface is often called to support the service, and relevant testing is required to ensure that the service function is implemented effectively.
In the prior art, testing whether calling the corresponding interface of an application can realize an expected service function is often confined to a test environment: for example, a test server executes the test cases, and debugging is performed according to the responses returned by the test server. However, blind spots may remain when testing relies only on the test server. Especially when there are few test cases, even if the test passes, it cannot be guaranteed that the service function will be implemented effectively under certain abnormal or edge conditions.
Disclosure of Invention
To address the poor test accuracy and other problems that arise when the prior art is applied to function testing, the present invention provides a test method, apparatus, system, electronic device, and medium.
In one aspect, the present invention provides a test method, the method comprising:
extracting a first test case set and a second test case set from a test case library according to a configured server case execution allocation ratio, wherein the ratio of the number of test cases in the first test case set to the number of test cases in the second test case set conforms to the server case execution allocation ratio, the first test case set is to be executed by the test server, and the second test case set is to be executed by the service server;
and determining a test case to be modified and generating a server case execution allocation ratio adjustment instruction according to the difference between the test response result set returned by the test server and the corresponding expected results and the difference between the service response result set returned by the service server and the corresponding expected results.
Another aspect provides a test apparatus, the apparatus comprising:
a test case set extraction module, configured to extract a first test case set and a second test case set from a test case library according to the configured server case execution allocation ratio, wherein the ratio of the number of test cases in the first test case set to the number of test cases in the second test case set conforms to the server case execution allocation ratio, the first test case set is to be executed by the test server, and the second test case set is to be executed by the service server;
and a result difference response module, configured to determine a test case to be modified and generate a server case execution allocation ratio adjustment instruction according to the difference between the test response result set returned by the test server and the corresponding expected results and the difference between the service response result set returned by the service server and the corresponding expected results.
Another aspect provides a system, which includes a test server, a service server, and the test apparatus as described above.
Another aspect provides an electronic device comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement the test method as described above.
Another aspect provides a computer readable storage medium having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by a processor to implement a test method as described above.
The test method, apparatus, system, electronic device, and medium of the present invention have the following technical effects:
the invention reduces the interference that testing causes to the real service functions of the service server and keeps the test efficient. It helps uncover overlooked abnormal or edge cases, so that the test cases can be revised and refined and the robustness with which the service functions are implemented is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and that other drawings can be derived from them by those skilled in the art without creative effort.
FIG. 1 is a schematic diagram of an application environment provided by an embodiment of the invention;
FIG. 2 is a flow chart of a testing method according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating sending a test case request to a test server according to an embodiment of the present invention;
FIG. 4 is a flowchart illustrating adjustment of the server case execution allocation ratio according to an embodiment of the present invention;
FIG. 5 is a flow chart illustrating updating a test case library according to an embodiment of the present invention;
FIG. 6 is a block diagram of a testing apparatus according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an interface for performing test item list setting on a target interface according to an embodiment of the present invention;
FIG. 8 is an interface diagram of a new test item provided by an embodiment of the present invention;
FIG. 9 is a schematic interface diagram of a new test item provided in an embodiment of the present invention;
FIG. 10 is a schematic diagram of a testing apparatus provided in an embodiment of the present invention;
FIG. 11 is a diagram illustrating test case execution allocation provided by an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
It should be noted that the terms "comprises" and "comprising," and any variations thereof, in the description and claims of the present invention and the above-described drawings, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or server that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Referring to fig. 1, fig. 1 is a schematic diagram of an application environment according to an embodiment of the present invention, including a test platform, a test server, and a service server, where the test platform sends a test case request to the test server and the service server, respectively, and receives corresponding response results returned by the test server and the service server. The test platform can determine the test case to be modified and adjust the proportion of the test case requests sent to the two servers according to the difference between the received response result returned by the server and the expected result of the corresponding test case. It should be noted that fig. 1 is only an example.
Specifically, in this embodiment of the present specification, the test server may be a server that tests whether the call corresponding interface can implement a certain expected service function. The test server may be an independently operating server, or a distributed server, or a server cluster composed of a plurality of servers. The test server may include a network communication unit, a processor, and memory, among others.
Specifically, in this embodiment of the present specification, the service server may refer to a server that performs real service function processing. The service server may be an independently operating server, or a distributed server, or a server cluster composed of a plurality of servers. The service server may comprise a network communication unit, a processor and a memory, etc.
In practical applications, for a service function A, when the test server acts as the service provider (the called party, which processes requests and returns responses, as opposed to the service consumer), the test server needs to simulate the processing procedure of service function A. When the service server acts as the service provider, the processing procedure that actually implements service function A exists in the service server, and that procedure may also exchange information with processing procedures that implement other real service functions.
A specific embodiment of the test method of the present invention is described below. FIG. 2 is a schematic flow chart of a test method provided by an embodiment of the present invention. This specification provides the method steps as described in the embodiments or in the flow chart, but more or fewer steps may be included based on conventional or non-inventive labor. The order of steps recited in the embodiments is merely one of many possible execution orders and does not represent the only order. In practice, a system or server product may execute the steps sequentially or in parallel (for example, in a parallel-processor or multi-threaded environment) according to the methods shown in the embodiments or figures. Specifically, as shown in fig. 2, the method may include:
S201: extracting a first test case set and a second test case set from a test case library according to the configured server case execution allocation ratio, wherein the ratio of the number of test cases in the first test case set to the number of test cases in the second test case set conforms to the server case execution allocation ratio, the first test case set is to be executed by the test server, and the second test case set is to be executed by the service server;
in the embodiment of the invention, the server case execution allocation proportion is used for specifying the quantity proportion of the corresponding test cases in the test case library allocated to the test server and the service server for executing the test. For a test case request sequence (a service is called by a target interface of a service provider) sent by a service consumer (a calling party sends a request; can belong to a corresponding test platform), the corresponding test case requests can be respectively sent to a test server and a service server according to the server case execution distribution proportion. For the current test round, the current server case execution distribution proportion is configured according to the difference feedback (the difference between the response result returned by the service provider (especially the service server) and the expected result) of the historical test round and/or the performance parameter (such as throughput) of the service server. The server use case execution allocation proportion is adjustable in the test.
The test case library provides test cases (a test case is a set of test inputs, execution conditions, and expected results designed for a particular objective, used to exercise a program path or to verify that a specific requirement is met) for a given service function against the target interface of the service provider. Specifically, each test case in the first test case set includes a corresponding test case request and an expected result. A first test case request sequence is generated from the test case requests of the first test case set and sent to the test server. Likewise, each test case in the second test case set includes a corresponding test case request and an expected result; a second test case request sequence is generated from those requests and sent to the service server.
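As a concrete illustration of this extraction step, the following sketch splits a test case library into the two sets according to a configured allocation ratio. The TestCase structure, the function name, and the 7:3 weight are illustrative assumptions, not taken from the patent itself.

```python
import random
from dataclasses import dataclass

@dataclass
class TestCase:
    request: dict   # test input sent to the target interface
    expected: dict  # expected result for this input

def split_case_library(case_library, mock_weight=0.7):
    """Split the test case library into a first set for the test (mock) server
    and a second set for the service server, according to the configured
    server case execution allocation ratio (mock_weight=0.7 means 7:3)."""
    shuffled = case_library[:]
    random.shuffle(shuffled)
    cut = round(len(shuffled) * mock_weight)
    first_set = shuffled[:cut]   # to be executed by the test (mock) server
    second_set = shuffled[cut:]  # to be executed by the service server
    return first_set, second_set

# The request sequences are then built from each set and sent to the servers.
library = [TestCase({"hello": "hello"}, {"hello": "world"}) for _ in range(100)]
first_set, second_set = split_case_library(library, mock_weight=0.7)
```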
In testing, for objects that are hard to construct or obtain, a virtual object can be created for the test; this is also called Mock testing. Correspondingly, a Mock response is a simulated response constructed by the user, and a Mock server acts as a simulated server: when a request arrives, it returns a Mock response according to preset test rules. For example, a Mock service may be created in the test server for the processing procedure of a certain service function, with the corresponding pre-configured data stored in the test server in advance. During the test, that pre-configured data can be returned to the front-end device through the Mock service in the test server.
In practical applications, a test item list (including the corresponding server case execution allocation ratio) may be configured for an API (Application Programming Interface) of a service provider. When the service consumer invokes the API of the service provider, one test item in the test item list is selected to produce the response. As shown in fig. 7 and fig. 11, the /echo interface has two test items, "service response" (the response result returned by the service server) and "Mock response" (the response result returned by the test server), with weights of 30% and 70% respectively (a server case execution allocation ratio of 3:7). When the interface needs to be called to implement a service function, a local routing SDK (Software Development Kit) distributes the requests according to these weights: about 30% of the requests are sent to the service server and about 70% to the mock server. The API may come from a microservice; the corresponding microservice architecture is a system architecture style that emphasizes loose coupling and independent deployment of services, improving the fault tolerance of the system.
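A minimal sketch of the weighted distribution performed by such a routing SDK is shown below; the endpoint URLs and the selection function are assumptions for illustration, not the actual SDK interface.

```python
import random

# Test items for the /echo interface and their console-configured weights
# (the endpoint addresses are placeholders).
TEST_ITEMS = [
    {"type": "mock response",    "endpoint": "http://mock-server/echo",    "weight": 70},
    {"type": "service response", "endpoint": "http://service-server/echo", "weight": 30},
]

def pick_endpoint(items):
    """Choose a target endpoint with probability proportional to its weight,
    mimicking how a local routing SDK could distribute test case requests."""
    total = sum(item["weight"] for item in items)
    point = random.uniform(0, total)
    for item in items:
        point -= item["weight"]
        if point <= 0:
            return item["endpoint"]
    return items[-1]["endpoint"]

# Over many calls roughly 70% of requests go to the mock server and 30% to
# the service server, matching the 3:7 allocation ratio in the example.
counts = {}
for _ in range(1000):
    endpoint = pick_endpoint(TEST_ITEMS)
    counts[endpoint] = counts.get(endpoint, 0) + 1
```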
In a specific embodiment, as shown in fig. 3, extracting the first test case set and the second test case set from the test case library according to the configured server case execution allocation ratio includes:
s301: generating a corresponding target test case set according to the configuration information and the first test case set;
the configuration information includes at least one selected from the group consisting of a response time, a response region, and a response identification. For example, the response time in the configuration information may point to a delay effect in the test response result, the response region in the configuration information may point to an effect that the test response result is displayed in a target region of the target page in a related manner, and the response identifier in the configuration information may point to an effect that the test response result carries a corresponding identification identifier (such as an HTTP status code and a JSON string; the HTTP status code is a 3-bit digital code used for representing a response state of a hypertext transfer protocol of a web server). The feature information corresponding to each test case in the first test case set may be modified according to the configuration information (for example, test input, execution conditions, and expected results corresponding to the test cases may be modified), so as to obtain a corresponding target test case set.
S302: generating a first test case request sequence according to the test case request corresponding to each test case in the target test case set;
the test case request corresponding to each test case can be extracted from the target test case set to obtain the first test case request sequence.
S303: sending the first test case request sequence to the test server;
and sending the first test case request sequence to the test server, and further receiving a test response result returned by the test server. The test server executes the test target test case set, and is beneficial to obtaining a test response result with a corresponding return effect. The execution condition of the first test case set in the test server can be more effectively determined according to the corresponding return effect, so that the test cases in the test case library can be conveniently adjusted, and the test cases can be conveniently distinguished from the service response result returned by the service server so as to improve the accuracy of subsequent statistics.
In practical applications, as shown in fig. 8 and fig. 9, when creating a new test item, the user can select the test item type "Mock response" on the corresponding interface of the test platform, set the status code (for example, an HTTP status code) to 200, set the JSON (JavaScript Object Notation, a lightweight data interchange format) string returned by the Mock, set the Mock response delay time (to deliberately simulate latency at the test server), and set the weight of the test item. As shown in fig. 10, for example, the test item configured on the console has the content {"hello": "world"} and a delay of 3000 ms, and the Mock server returns the corresponding response according to this configuration information.
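The following sketch shows one way a mock server could serve such a configured test item (status code 200, body {"hello": "world"}, 3000 ms delay). It uses Python's standard HTTP server purely for illustration; the real mock server's implementation is not described in the patent.

```python
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Mock response configuration for the /echo interface, mirroring the console
# example: status code 200, JSON body, simulated delay of 3000 ms.
MOCK_CONFIG = {
    "/echo": {"status": 200, "body": {"hello": "world"}, "delay_ms": 3000},
}

class MockHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        cfg = MOCK_CONFIG.get(self.path)
        if cfg is None:
            self.send_error(404, "no mock configured for this path")
            return
        time.sleep(cfg["delay_ms"] / 1000.0)  # intentionally simulated delay
        payload = json.dumps(cfg["body"]).encode()
        self.send_response(cfg["status"])     # configured status code
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)             # configured JSON string

    do_GET = do_POST  # serve GET requests the same way

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), MockHandler).serve_forever()
```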
S202: and determining a test case to be modified and generating a server case execution allocation proportion adjusting instruction according to the difference between the test response result set returned by the test server and the corresponding expected result and the difference between the service response result set returned by the service server and the corresponding expected result.
In a specific embodiment, as shown in fig. 4, determining the test case to be modified and generating the server case execution allocation ratio adjustment instruction according to the difference between the test response result set returned by the test server and the expected results and the difference between the service response result set returned by the service server and the expected results includes:
s401: receiving the service response result set returned by the service server;
a service response result set returned by the service server according to the second test case request sequence may be received.
S402: counting the number of the service response results meeting the difference condition in the service response result set according to the expected result of the corresponding test case in the second test case set to obtain a value to be compared;
for example, one test case in the second test case set is { "hello": word "}, the corresponding service provider returns word according to the hello sent by the service consumer, and word is the expected result of the test case. When the service server returns not the word but the ths a message in the test, the service response result and the corresponding test case expected result satisfy the difference condition. Can be paired with
And counting the number of the service response results meeting the difference condition in the service response result set to obtain the value to be compared.
S403: and when the value to be compared is smaller than a result difference threshold value, adjusting the execution allocation proportion of the current server use case.
For example, in the Nth test round, the second test case set contains 100 test cases and the result difference threshold is 30 (the threshold may also be set directly as a probability and then multiplied by the number of test cases in the second test case set). If the value to be compared is 20, the effect of the service server executing the current second test case set meets expectations, and the proportion of the number of test cases allocated to the service server for executing the test in the next test round may be increased (e.g., adjusted from 3:7 to 6:4). Of course, the result difference threshold may also be adjusted across different test rounds.
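A sketch of the counting and adjustment logic in steps S402-S403, under the assumption that inequality with the expected result is the difference condition and that the ratio is shifted in one fixed step (3:7 to 6:4); both assumptions are illustrative.

```python
def count_divergent(service_responses, expected_results):
    """Count service responses that differ from the expected results of the
    corresponding test cases in the second test case set (step S402)."""
    return sum(1 for got, want in zip(service_responses, expected_results)
               if got != want)

def adjust_allocation(service_share, mock_share, divergent_count, diff_threshold):
    """If few responses diverged, raise the service server's share for the
    next test round (step S403); the step size here is an assumption."""
    if divergent_count < diff_threshold:
        return service_share + 3, mock_share - 3   # e.g. 3:7 -> 6:4
    return service_share, mock_share

# Example from the description: 100 cases, threshold 30, 20 divergent results.
new_ratio = adjust_allocation(3, 7, divergent_count=20, diff_threshold=30)
print(new_ratio)  # (6, 4)
```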
Configuring the corresponding server case execution allocation ratio in different test rounds balances test efficiency against the interference that testing may cause to the real service functions of the service server. In particular, when the target interface corresponding to a service function is poorly specified, or no target interface corresponds to it at all, the adjustability of the server case execution allocation ratio reduces trial-and-error cost and alleviates the difficulty of simulating complex interface and return logic with a limited set of Mock response test cases.
In another specific embodiment, as shown in fig. 5, determining the test case to be modified and generating the server case execution allocation ratio adjustment instruction according to the difference between the test response result set returned by the test server and the expected results and the difference between the service response result set returned by the service server and the expected results includes:
s501: determining a test response result meeting a first difference condition in the test response result set according to a corresponding test case expected result in the first test case set so as to obtain a corresponding first test case to be modified;
and receiving a test response result set returned by the test server according to the first test case request sequence. And determining whether the difference between each test response result in the test response result set and the expected result of the corresponding test case in the first test case set meets the first difference condition or not to obtain the corresponding first to-be-modified test case. Here, for the determination of whether the first difference condition is satisfied, reference may be made to the description in step S402, and details are not repeated.
S502: determining a service response result meeting a second difference condition in the service response result set according to a corresponding test case expected result in the second test case set so as to obtain a corresponding second test case to be modified;
and receiving a service response result set returned by the service server according to the second test case request sequence. And determining whether the difference between each service response result in the service response result set and the expected result of the corresponding test case in the second test case set meets the second difference condition to obtain a corresponding second test case to be modified. Here, for the determination whether the second difference condition is satisfied, reference may be made to the description in step S402, and details are not repeated.
S503: and respectively modifying the parameters corresponding to the first test case to be modified and the parameters corresponding to the second test case to be modified, and updating the current test case library according to the modification result.
For example, for the test case {"hello": "world"}, if the service provider returns "this is a message" instead of "world" during the test, this difference feedback can be used to modify the corresponding parameters and related function definitions of the test case to be modified. Checking against the modification may reveal a possible cause of the difference: the service consumer checks the length of the value and only processes values no longer than 10 characters. When such a restriction in the code is not communicated to the testers, the related test cases may be omitted, so that anomalies appear after the service goes live and the robustness of the service is weakened.
Distributing the test cases in the test case library to both the test server and the service server for execution means that, on the one hand, testers can write Mock response test cases against the interface documents, and on the other hand, the response results returned by the service server can be used to supplement test cases that may not yet be covered. When a response returned by the service server cannot be handled by the consumer, this indicates an abnormal scenario that the Mock response test cases may not cover, and a tester can write a Mock response test case for that response. The Mock response test cases for the API thus become richer, which greatly facilitates subsequent regression testing and testing after the API is upgraded, and further strengthens service robustness.
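One possible shape of the library-update logic in steps S501-S503 is sketched below; the dict-based test case layout and the caller-supplied fix function are assumptions for illustration.

```python
def find_cases_to_modify(test_cases, responses):
    """Pair each test case (a dict with 'request' and 'expected' keys) with the
    response actually returned for it and keep the cases whose response differs
    from the expected result (steps S501/S502 for either result set)."""
    return [case for case, got in zip(test_cases, responses)
            if got != case["expected"]]

def update_case_library(case_library, cases_to_modify, fix):
    """Apply a tester-supplied fix to every flagged case and return the updated
    library (step S503); 'fix' stands in for whatever parameter change is made,
    e.g. adding coverage for over-long field values."""
    flagged = {id(case) for case in cases_to_modify}
    return [fix(case) if id(case) in flagged else case for case in case_library]

# Mirroring the description: the case expects "world", but the service server
# returned "this is a message", so the case is flagged for modification.
cases = [{"request": {"hello": "hello"}, "expected": {"hello": "world"}}]
flagged = find_cases_to_modify(cases, [{"hello": "this is a message"}])
```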
In another specific embodiment, the number of test case requests received by the test server and the number received by the service server may be counted to obtain a received request ratio. When the difference between the received request ratio and the server case execution allocation ratio exceeds a ratio difference threshold, the cause can be investigated based on the total number of test case requests sent (for example, an allocation ratio of 3:7 with only 5 test case requests in total); whether the test items created in the test platform have problems (such as the weight values or the corresponding service functions) can be checked; and whether the feature information corresponding to the first test case set was sent to the test server in advance can be checked.
In practical applications, the service consumer can call the /echo interface 100 times per second and count the proportions of service responses and Mock responses within a time window. The statistics show that the ratio of Mock responses to service responses is close to 7:3, indicating that the weighted distribution performed by the local routing is consistent with the weights configured on the console. The number of response results can be counted from traffic statistics: the service consumer can monitor the IP address (Internet Protocol Address) of the service provider it requests and determine from that IP address which server responded; the IP address of the Mock server is typically fixed.
Counting the numbers of response results and comparing their proportion with the server case execution allocation ratio helps determine whether the allocation ratio is taking effect, thereby ensuring the validity of the test.
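A small sketch of the received-ratio check described above; the 5% tolerance is an assumed value, not one given in the patent.

```python
def received_ratio_ok(mock_received, service_received,
                      mock_weight, service_weight, ratio_diff_threshold=0.05):
    """Compare the observed share of requests that reached the mock server with
    the configured weight; a gap above the threshold suggests checking the test
    item weights, the total request count, or whether the feature information
    was sent to the test server in advance."""
    total = mock_received + service_received
    if total == 0:
        return False
    observed = mock_received / total
    configured = mock_weight / (mock_weight + service_weight)
    return abs(observed - configured) <= ratio_diff_threshold

# 100 calls per second to /echo; about 70 mock responses and 30 service
# responses keeps the observed ratio close to the configured 7:3 split.
print(received_ratio_ok(mock_received=68, service_received=32,
                        mock_weight=70, service_weight=30))
```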
According to the technical solutions provided in the embodiments of this specification, the first test case set to be executed by the test server and the second test case set to be executed by the service server are determined according to the configured server case execution allocation ratio, which reduces the interference that testing causes to the real service functions of the service server and keeps the test efficient. According to the differences between the response results returned by the relevant servers and the expected results of the test cases, the test cases to be modified can be located more accurately and effectively. The response results returned by the service server help uncover overlooked abnormal or edge cases, so the test cases can be revised and refined and the robustness with which the service functions are implemented is improved. The test server offers stronger control over test case execution, making it convenient to modify and verify the test cases to be modified during the test.
An embodiment of the present invention further provides a testing apparatus, as shown in fig. 6, the apparatus includes:
the test case set extraction module 610: the method comprises the steps of obtaining a first test case set and a second test case set by extracting from a test case library according to a configured server case execution distribution proportion, wherein the proportion of the number of test cases in the first test case set to the number of test cases in the second test case set conforms to the server case execution distribution proportion, the first test case set is used for testing the execution of a server, and the second test case set is used for testing the execution of a service server.
The result difference response module 620: configured to determine the test case to be modified and generate the server case execution allocation ratio adjustment instruction according to the difference between the test response result set returned by the test server and the corresponding expected results and the difference between the service response result set returned by the service server and the corresponding expected results. Specifically, the result difference response module 620 includes: a service response result set receiving unit, configured to receive the service response result set returned by the service server; a counting unit, configured to count, according to the expected results of the corresponding test cases in the second test case set, the number of service response results in the service response result set that satisfy the difference condition, to obtain a value to be compared; and an adjusting unit, configured to adjust the current server case execution allocation ratio when the value to be compared is smaller than the result difference threshold.
In a specific embodiment, the test case set extraction module 610 includes: a configuration unit, configured to configure the server case execution allocation ratio for each target interface, where the test case library provides at least one test case corresponding to the target interface; a feature information sending unit, configured to send the feature information corresponding to the first test case set to the test server in advance; and a request sequence sending unit, configured to send a first test case request sequence corresponding to the first test case set to the test server, and send a second test case request sequence corresponding to the second test case set to the service server.
In practical applications, as shown in figs. 7-10, a user can access the console (portal) to construct test items. When the user creates, edits, deletes, queries, or views a test item on the console, the API server (a unified API management service that exposes APIs externally, e.g., to the console) is called, and the changed test item data is finally sent to the mock server and to a service registry (e.g., consul). The service registry stores service configuration information in key-value form and mainly stores the weight information of the service API, which is eventually synchronized to the microservice. The mock server mainly stores the mock responses of the service API. The mock server maintains a table whose primary key is the service name/API and whose other fields include: test item type (Integer); mock return JSON string (String); delay time (Integer); weight (Integer); and status code (Integer). Of course, instead of consul, the service registry may also use zookeeper (software that provides consistency services for distributed applications, including configuration maintenance, naming services, distributed synchronization, and group services) or other components.
As shown in fig. 11, the mock server may be provided as a common component service. The service consumer distributes 70% of the requests to the mock server and 30% of the requests to the service server. After receiving a request, the mock server processes it according to the "Mock response" test item configured by the user on the console.
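One concrete rendering of the table the mock server maintains, using the fields listed above (service name/API as the primary key; test item type, mock JSON string, delay, weight, status code). The column names and the use of SQLite are assumptions for illustration.

```python
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS mock_test_item (
    service_api  TEXT PRIMARY KEY,  -- service name/API
    item_type    INTEGER,           -- test item type
    mock_json    TEXT,              -- JSON string returned by the mock
    delay_ms     INTEGER,           -- simulated response delay
    weight       INTEGER,           -- share of requests routed to this item
    status_code  INTEGER            -- HTTP status code to return
);
"""

conn = sqlite3.connect(":memory:")
conn.execute(SCHEMA)
conn.execute(
    "INSERT INTO mock_test_item VALUES (?, ?, ?, ?, ?, ?)",
    ("demo-service//echo", 1, '{"hello": "world"}', 3000, 70, 200),
)
conn.commit()
```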
It should be noted that the apparatus embodiments and the method embodiments are based on the same inventive concept.
The embodiment of the invention also provides a test system, which comprises a test server, a service server and the test device provided by the embodiment of the device.
An embodiment of the present invention provides an electronic device, which includes a processor and a memory, where the memory stores at least one instruction, at least one program, a code set, or an instruction set, and the at least one instruction, the at least one program, the code set, or the instruction set is loaded and executed by the processor to implement the testing method provided in the foregoing method embodiment.
The memory may be used to store software programs and modules, and the processor executes various functional applications and performs data processing by running the software programs and modules stored in the memory. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, application programs needed for functions, and the like, and the data storage area may store data created according to the use of the apparatus, and the like. Further, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. Accordingly, the memory may also include a memory controller to provide the processor with access to the memory.
The electronic device may be a server. An embodiment of the present invention further provides a schematic structural diagram of the server; referring to fig. 12, the server 1200 is configured to implement the testing method provided in the foregoing embodiment. The server 1200 may vary considerably depending on configuration or performance, and may include one or more central processing units (CPUs) 1210 (e.g., one or more processors), memory 1230, and one or more storage media 1220 (e.g., one or more mass storage devices) storing application programs 1223 or data 1222. The memory 1230 and the storage medium 1220 may be transient storage or persistent storage. The program stored in the storage medium 1220 may include one or more modules, and each module may include a series of instruction operations for the server. Still further, the central processing unit 1210 may be configured to communicate with the storage medium 1220 to execute the series of instruction operations in the storage medium 1220 on the server 1200. The server 1200 may also include one or more power supplies 1260, one or more wired or wireless network interfaces 1250, one or more input/output interfaces 1240, and/or one or more operating systems 1221, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, and so on.
Embodiments of the present invention further provide a storage medium, which may be disposed in an electronic device to store at least one instruction, at least one program, a code set, or a set of instructions related to implementing a testing method in the method embodiments, where the at least one instruction, the at least one program, the code set, or the set of instructions is loaded and executed by the processor to implement the testing method provided in the method embodiments.
Alternatively, in this embodiment, the storage medium may be located in at least one network server of a plurality of network servers of a computer network. Optionally, in this embodiment, the storage medium may include, but is not limited to: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, or an optical disk, and various media capable of storing program codes.
It should be noted that: the precedence order of the above embodiments of the present invention is only for description, and does not represent the merits of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the device and electronic apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple, and reference may be made to some descriptions of the method embodiments for relevant points.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and should not be taken as limiting the scope of the present invention, which is intended to cover any modifications, equivalents, improvements, etc. within the spirit and scope of the present invention.

Claims (10)

1. A method of testing, the method comprising:
extracting, from a test case library according to a configured server case execution allocation proportion, a first test case set to be executed by the test server and a second test case set to be executed by the service server, wherein the proportion of the number of test cases in the first test case set to the number of test cases in the second test case set conforms to the server case execution allocation proportion; the test server is used for testing whether calling a corresponding interface can realize a certain expected service function, and the service server is used for performing real service function processing; when the test server acts as a service provider, the test server needs to simulate the processing procedure of the service function; when the service server acts as the service provider, the processing procedure that actually implements the service function exists in the service server, and that processing procedure can exchange information with processing procedures that implement other real service functions;
determining a test case to be modified and generating a server case execution allocation proportion adjustment instruction according to the difference between the test response result set returned by the test server and the corresponding expected results and the difference between the service response result set returned by the service server and the corresponding expected results, wherein the step of determining the test case to be modified and generating the server case execution allocation proportion adjustment instruction comprises: modifying parameters corresponding to the test case to be modified, updating the current test case library according to the modification result, and adjusting the proportion of the number of test cases allocated to the service server for executing the test in the next test round.
2. The method according to claim 1, wherein the determining a test case to be modified and generating a server case execution allocation proportion adjustment instruction according to a difference between a test response result set returned by the test server and an expected result and a difference between a service response result set returned by the service server and an expected result comprises:
receiving the service response result set returned by the service server;
counting the number of the service response results meeting the difference condition in the service response result set according to the expected result of the corresponding test case in the second test case set to obtain a value to be compared;
and when the value to be compared is smaller than a result difference threshold value, adjusting the execution allocation proportion of the current server use case.
3. The method according to claim 1, wherein the determining a test case to be modified and generating a server case execution allocation proportion adjustment instruction according to a difference between a test response result set returned by the test server and an expected result and a difference between a service response result set returned by the service server and an expected result comprises:
determining a test response result meeting a first difference condition in the test response result set according to a corresponding test case expected result in the first test case set so as to obtain a corresponding first test case to be modified;
determining a service response result meeting a second difference condition in the service response result set according to a corresponding test case expected result in the second test case set so as to obtain a corresponding second test case to be modified;
and respectively modifying the parameters corresponding to the first test case to be modified and the parameters corresponding to the second test case to be modified, and updating the current test case library according to the modification result.
4. The method of claim 1, wherein the step of extracting a first test case set and a second test case set from a test case library according to the configured server case execution allocation proportion comprises:
generating a corresponding target test case set according to the configuration information and the first test case set;
generating a first test case request sequence according to the test case request corresponding to each test case in the target test case set;
sending the first test case request sequence to the test server;
wherein the configuration information includes at least one selected from the group consisting of a response time, a response region, and a response identification.
5. A test apparatus, the apparatus comprising:
the test case set extraction module: configured to extract, from a test case library according to a configured server case execution allocation proportion, a first test case set to be executed by the test server and a second test case set to be executed by the service server, wherein the proportion of the number of test cases in the first test case set to the number of test cases in the second test case set conforms to the server case execution allocation proportion; the test server is used for testing whether calling a corresponding interface can realize a certain expected service function, and the service server is used for performing real service function processing; when the test server acts as a service provider, the test server needs to simulate the processing procedure of the service function; when the service server acts as the service provider, the processing procedure that actually implements the service function exists in the service server, and that processing procedure can exchange information with processing procedures that implement other real service functions;
a result difference response module: configured to determine a test case to be modified and generate a server case execution allocation proportion adjustment instruction according to the difference between the test response result set returned by the test server and the corresponding expected results and the difference between the service response result set returned by the service server and the corresponding expected results, comprising: modifying parameters corresponding to the test case to be modified, updating the current test case library according to the modification result, and adjusting the proportion of the number of test cases allocated to the service server for executing the test in the next test round.
6. The apparatus of claim 5, wherein the result difference response module comprises:
the service response result set receiving unit: configured to receive the service response result set returned by the service server;
a statistic unit: configured to count, according to the expected results of the corresponding test cases in the second test case set, the number of service response results in the service response result set that satisfy the difference condition, to obtain a value to be compared;
an adjusting unit: configured to adjust the current server case execution allocation proportion when the value to be compared is smaller than the result difference threshold.
7. The apparatus of claim 5, wherein the test case set extraction module comprises:
a configuration unit: configured to configure the server case execution allocation proportion for each target interface, wherein the test case library provides at least one test case corresponding to the target interface;
a feature information transmitting unit: configured to send the feature information corresponding to the first test case set to the test server in advance;
a request sequence transmitting unit: configured to send a first test case request sequence corresponding to the first test case set to the test server, and send a second test case request sequence corresponding to the second test case set to the service server.
8. A test system, characterized in that the system comprises a test server, a service server and a test apparatus according to any of claims 5-7.
9. An electronic device, comprising a processor and a memory, wherein at least one instruction, at least one program, a set of codes, or a set of instructions is stored in the memory, and wherein the at least one instruction, the at least one program, the set of codes, or the set of instructions is loaded and executed by the processor to implement the testing method according to any of claims 1-4.
10. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a test method according to any one of claims 1 to 4.
CN201910589684.6A 2019-07-02 2019-07-02 Test method, device, system, electronic equipment and medium Active CN110347596B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910589684.6A CN110347596B (en) 2019-07-02 2019-07-02 Test method, device, system, electronic equipment and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910589684.6A CN110347596B (en) 2019-07-02 2019-07-02 Test method, device, system, electronic equipment and medium

Publications (2)

Publication Number Publication Date
CN110347596A CN110347596A (en) 2019-10-18
CN110347596B true CN110347596B (en) 2022-05-20

Family

ID=68177499

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910589684.6A Active CN110347596B (en) 2019-07-02 2019-07-02 Test method, device, system, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN110347596B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110765024B (en) * 2019-10-29 2023-08-29 百度在线网络技术(北京)有限公司 Simulation test method, simulation test device, electronic equipment and computer readable storage medium
CN112882922B (en) * 2019-11-29 2023-11-17 深圳云天励飞技术有限公司 Test method and related device
CN111090589A (en) * 2019-12-19 2020-05-01 广州品唯软件有限公司 Software testing method, software testing device and readable storage medium
CN111737109A (en) * 2020-05-20 2020-10-02 山东鲸鲨信息技术有限公司 Cluster file system testing method and device
US11455237B2 (en) * 2020-06-01 2022-09-27 Agora Lab, Inc. Highly scalable system and method for automated SDK testing
CN111770004B (en) * 2020-06-26 2021-09-07 武汉众邦银行股份有限公司 HTTP (hyper text transport protocol) (S) flow content automatic verification method and storage medium
CN111858335A (en) * 2020-07-20 2020-10-30 杭州溪塔科技有限公司 Block chain SDK testing method and device
CN113760699A (en) * 2020-07-31 2021-12-07 北京沃东天骏信息技术有限公司 Server performance test system and method
CN114124761B (en) * 2020-08-31 2024-04-09 中国电信股份有限公司 Electronic device, system, method and medium for bandwidth consistency verification
CN112019558A (en) * 2020-09-03 2020-12-01 深圳壹账通智能科技有限公司 Universal baffle testing method, device, equipment and computer storage medium
CN112306785A (en) * 2020-10-30 2021-02-02 南方电网科学研究院有限责任公司 Method, device, system, equipment and medium for testing micro-service application interface
CN112583721B (en) * 2020-11-30 2023-04-18 五八到家有限公司 Service request routing method, device and medium
CN112291119B (en) * 2020-12-28 2021-04-06 腾讯科技(深圳)有限公司 Block chain network testing method, device, medium and electronic equipment
CN113760722A (en) * 2021-01-13 2021-12-07 北京京东振世信息技术有限公司 Test system and test method
CN113157586B (en) * 2021-04-30 2024-04-05 中国工商银行股份有限公司 Financial market unit test case generation method and device
CN113946511B (en) * 2021-10-15 2022-06-17 杭州研极微电子有限公司 Full-function test case set acquisition method and system

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7509538B2 (en) * 2004-04-21 2009-03-24 Microsoft Corporation Systems and methods for automated classification and analysis of large volumes of test result data
CN106980571B (en) * 2016-01-15 2020-09-08 阿里巴巴集团控股有限公司 Method and equipment for constructing test case suite
CN106776307A (en) * 2016-12-05 2017-05-31 广州唯品会信息科技有限公司 Method for testing software and system

Also Published As

Publication number Publication date
CN110347596A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN110347596B (en) Test method, device, system, electronic equipment and medium
Han et al. Evaluating blockchains for IoT
US20200137151A1 (en) Load balancing engine, client, distributed computing system, and load balancing method
US9544403B2 (en) Estimating latency of an application
CN108256118B (en) Data processing method, device, system, computing equipment and storage medium
CN110765024A (en) Simulation test method, simulation test device, electronic equipment and computer-readable storage medium
CN109618176B (en) Processing method, equipment and storage medium for live broadcast service
CN108255708B (en) Method, device, storage medium and equipment for accessing production file in test environment
CN109788029A (en) Gray scale call method, device, terminal and the readable storage medium storing program for executing of micro services
CN104793932A (en) Method and device used for software release
US9037716B2 (en) System and method to manage a policy related to a network-based service
US9596128B2 (en) Apparatus for processing one or more events
CN104954286B (en) The method and device of bandwidth allocation
US20220337493A1 (en) Report generation from testing a test application in a network-as-a-service
De Simone et al. A latency-driven availability assessment for multi-tenant service chains
CN111107011A (en) Method for detecting and generating optimal path and network acceleration system
Straesser et al. A systematic approach for benchmarking of container orchestration frameworks
CN106549827A (en) The detection method and device of network state
CN109714214A (en) A kind of processing method and management equipment of server exception
CN111698281B (en) Resource downloading method and device, electronic equipment and storage medium
CN113127335A (en) System testing method and device
CN113452575B (en) Service test method, system, device and storage medium
CN112448833A (en) Multi-management-domain communication method and device
CN109768897B (en) Server deployment method and device
US10203970B2 (en) Dynamic configuration of native functions to intercept

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant