CN116955157A - Distributed test method and device - Google Patents

Distributed test method and device

Info

Publication number
CN116955157A
CN116955157A
Authority
CN
China
Prior art keywords
test
test case
execution
server
task
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310739882.2A
Other languages
Chinese (zh)
Inventor
孙鲜艳
崔月强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Huishang Gongda Technology Co ltd
Original Assignee
Tianjin Huishang Gongda Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin Huishang Gongda Technology Co ltd filed Critical Tianjin Huishang Gongda Technology Co ltd
Priority to CN202310739882.2A priority Critical patent/CN116955157A/en
Publication of CN116955157A publication Critical patent/CN116955157A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3688Test management for test execution, e.g. scheduling of test suites
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application provides a distributed test method and device. After test configuration information and a target test case are acquired, a test case task is constructed from the target test case and the test configuration information. The test case task is then sent to a server so that the task is executed on that server, and a test result is generated from the execution result the server feeds back. The target test case is a test case in a preset test case set, and each test case is formed by splicing at least one test script fragment according to a preset logical relationship. With this method, test cases can be selected from the test case set and built into test case tasks, and those tasks can be distributed to one server for parallel execution or to a plurality of servers, removing time and location constraints, allowing different task content to be executed on different ends, and improving test efficiency.

Description

Distributed test method and device
Technical Field
The present application relates to the field of testing technologies, and in particular, to a distributed testing method and apparatus.
Background
A test case is a description of a test task to be performed on a specific software product: a document covering the test target, test environment, input data, test steps, expected results, test scripts, and so on. In other words, a test case is a set of test inputs, execution conditions, and expected results formulated for a particular goal, used to verify whether that goal meets a specific requirement.
Test cases can be written manually by a user from the test requirement documents. When executing them manually, the user invokes the software under test to turn the case's description into actions, controls each software variable according to that description, and judges whether the test passes. To save labor, time, and hardware resources, test cases can also be executed automatically: a test plan is made, test cases are designed, a test environment is built, and automated test scripts are written and executed according to the plan and the cases.
However, every test task requires the user to write test cases from the requirement documents, so the efficiency of test case generation is limited by the user's experience, comprehension, written descriptions, and other factors. Moreover, when a test task is executed, the test cases are collected into a single execution package that is invoked to produce a test result. Multiple test cases cannot be executed in a distributed manner, which reduces test efficiency.
Disclosure of Invention
The application provides a distributed test method and device to solve the problem of low test efficiency.
In a first aspect, the present application provides a distributed test method applied to a test device, where the test device is connected to at least one server, the distributed test method includes:
acquiring test configuration information;
acquiring a target test case, wherein the target test case is a test case in a preset test case set, and the test case is formed by splicing at least one test script segment according to a preset logic relationship;
constructing a test case task according to the target test case and the test configuration information;
the test case task is sent to a server, so that the test case task is executed based on the server;
and receiving an execution result of the test case task fed back by the server, and generating a test result according to the execution result.
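The five claimed steps can be sketched in a few lines; this is a minimal illustration with hypothetical names (`build_task`, `run_distributed_test`, the stub server) that are not part of the application:

```python
# Hypothetical sketch of the five claimed steps; all names are illustrative.

def build_task(config: dict, case: dict) -> dict:
    """Construct a test case task from configuration info and a target case."""
    return {"name": config["task_name"], "env": config["env"], "case": case}

def run_distributed_test(config, case, send_to_server):
    """Build a task, dispatch it, and turn the server's feedback into a result."""
    task = build_task(config, case)
    execution_result = send_to_server(task)   # server executes the task
    return {"task": task["name"], "passed": execution_result["status"] == "success"}

# A stub standing in for a real server connection; it always reports success.
result = run_distributed_test(
    {"task_name": "smoke", "env": "UAT"},
    {"case_id": 1, "script": ["step-a", "step-b"]},
    lambda task: {"status": "success"},
)
```

The stub isolates the control flow: the test device never executes cases itself, it only builds tasks and interprets what the server sends back.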
In one implementation, the step of obtaining the target test case includes:
traversing the test case set;
extracting test cases associated with the test configuration information to generate a target test case set;
and responding to the case selection instruction input by the user, and extracting the test case indicated by the case selection instruction from the target test case set to acquire the target test case.
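The traversal-then-selection flow above amounts to two filters; the following sketch assumes a simple dict representation of cases and a selection instruction carrying case ids (all field names are illustrative):

```python
def filter_cases(case_set, config):
    """Traverse the case set and keep cases whose application and
    environment match the test configuration information."""
    return [c for c in case_set
            if c["application"] == config["application"]
            and c["environment"] == config["environment"]]

case_set = [
    {"id": 1, "application": "interface automation", "environment": "UAT"},
    {"id": 2, "application": "interface automation", "environment": "PROD"},
    {"id": 3, "application": "interface automation", "environment": "UAT"},
]
target_set = filter_cases(case_set, {"application": "interface automation",
                                     "environment": "UAT"})

# A case-selection instruction then picks cases by id from the target set.
selected = [c for c in target_set if c["id"] in {3}]
```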
In one implementation there are a plurality of test case tasks, and the method further comprises:
acquiring an execution instruction input by a user, wherein the execution instruction comprises an execution attribute, and the execution attribute is single execution or parallel execution;
if the execution attribute is single execution, sending a single execution instruction to the server, so that the server responds to the single execution instruction by executing the plurality of test case tasks sequentially in a preset order;
and if the execution attribute is parallel execution, sending a parallel execution instruction to the server, so that the server responds to the parallel execution instruction by executing the plurality of test case tasks in parallel.
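The branch on the execution attribute reduces to a small dispatcher; this sketch (names assumed, not from the application) shows the only semantic difference between the two instructions, namely whether task order matters:

```python
def dispatch(tasks, attribute):
    """Build the instruction sent to the server for the given execution
    attribute: 'single' preserves the preset order, 'parallel' does not
    (modelled here as an unordered set)."""
    if attribute == "single":
        return {"instruction": "single", "order": [t["id"] for t in tasks]}
    if attribute == "parallel":
        return {"instruction": "parallel", "tasks": {t["id"] for t in tasks}}
    raise ValueError(f"unknown execution attribute: {attribute}")

tasks = [{"id": 305}, {"id": 306}, {"id": 307}]
single_cmd = dispatch(tasks, "single")
parallel_cmd = dispatch(tasks, "parallel")
```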
In one implementation, the step of sending the test case task to a server includes:
detecting the connection quantity of the server;
generating allocation information of the test case tasks according to the connection quantity, wherein the allocation information is used for indicating at least one server executing the test case tasks;
and sending the test case task to a server indicated by the allocation information according to the allocation information.
In one implementation, the method further comprises:
acquiring the working state of the server, wherein the working state is an executing state or a non-executing state;
if the working state is a non-execution state, sending a test execution instruction to the server, wherein the test execution instruction is used for indicating the server to execute the test case task;
and if the working state is the execution state, generating prompt information, wherein the prompt information is used for prompting a user that the server is in the execution state.
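The working-state check is a two-way branch; a minimal sketch, with state names and the return shape assumed for illustration:

```python
def handle_server(state):
    """Decide what the test device does based on the server's working
    state: an idle server receives the test execution instruction, a busy
    one triggers prompt information for the user instead."""
    if state == "non-executing":
        return {"action": "send_test_execution_instruction"}
    return {"action": "prompt_user",
            "message": "server is in the execution state"}
```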
In one implementation, a plurality of target test script fragments are obtained, wherein the target test script fragments are test script fragments in a preset test script set;
configuring connection parameters for each target test script segment, wherein the connection parameters are used for representing the data transfer relationship between two adjacent test script segments;
and splicing a plurality of target test script fragments according to the connection parameters to generate a test case.
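The data-transfer relationship between adjacent fragments can be modelled as a shared context that each fragment reads from and writes to. This is a sketch only; the fragment shape, the `takes` mapping, and the login/query fragments are assumptions, not the application's actual format:

```python
def splice_and_run(fragments):
    """Splice script fragments in order; each fragment's connection
    parameters ('takes') name which value from the accumulated context is
    transferred into that fragment's input."""
    context = {}
    for frag in fragments:
        inputs = {k: context[v] for k, v in frag.get("takes", {}).items()}
        context.update(frag["run"](inputs))
    return context

# Two hypothetical fragments: a login interface returning a token, and a
# query interface whose connection parameter substitutes that token in.
fragments = [
    {"run": lambda _: {"token": "abc123"}},
    {"takes": {"auth": "token"},
     "run": lambda inp: {"query_ok": inp["auth"] == "abc123"}},
]
context = splice_and_run(fragments)
```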
In one implementation, a plurality of target test script fragments are obtained, wherein the target test script fragments are test script fragments in a preset test script set;
configuring execution parameters for each target test script segment, wherein the execution parameters comprise at least one of a positioning mode, a positioning value, an action type and an action value, the positioning mode is used for representing a mode of acquiring an interface element, the positioning value is used for representing a page position operated by a user, the action type is used for representing the operation of the user, and the action value is used for representing a numerical value input by the user;
and splicing a plurality of target test script fragments according to a preset sequence and the execution parameters to generate a test case.
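The four execution parameters map naturally onto one record per spliced step. The sketch below renders configured steps into a flat script in their preset order; the field names and the rendered string format are illustrative, not the application's generated code:

```python
# Each step pairs a behavior with its execution parameters: positioning
# mode, positioning value, action type, and action value.
steps = [
    {"locate_by": "ID",    "locator": "account-input",
     "action": "input", "value": "user01"},
    {"locate_by": "ID",    "locator": "search-input",
     "action": "input", "value": "shoes"},
    {"locate_by": "XPATH", "locator": "//button[@id='go']",
     "action": "click", "value": None},
]

def render_case(steps):
    """Splice the configured steps, in their preset order, into a flat
    human-readable script (a stand-in for generated automation code)."""
    return [f"{s['action']} {s['locate_by']}={s['locator']}"
            + (f" value={s['value']}" if s["value"] is not None else "")
            for s in steps]

script = render_case(steps)
```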
In one implementation, a task execution state of the test case task sent by the server is periodically obtained, wherein the task execution state comprises unexecuted, executing, and executed;
marking the test case task according to the task execution state.
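Periodic polling plus marking can be sketched as below; the three-state set follows the text, while the dict layout is an illustrative assumption:

```python
VALID_STATES = {"unexecuted", "executing", "executed"}

def mark_tasks(tasks, reported):
    """Mark each task with the execution state the server reported in this
    polling cycle; unknown states are rejected."""
    for task_id, state in reported.items():
        if state not in VALID_STATES:
            raise ValueError(f"unknown task execution state: {state}")
        tasks[task_id]["state"] = state
    return tasks

tasks = {307: {"state": "unexecuted"}, 308: {"state": "unexecuted"}}
tasks = mark_tasks(tasks, {307: "executing", 308: "executed"})
```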
In one implementation, the step of generating a test result according to the execution result includes:
generating an execution detail table according to the execution result, wherein the execution detail table comprises a failed test case, a successful test case and a failure reason;
and calculating the passing rate according to the execution list to generate a test result, wherein the passing rate is the ratio of the successful number of the test cases to the total number of the test cases.
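The detail table and pass-rate computation are simple aggregation; a sketch with an assumed result record shape (the pass rate is successes over total, as defined above):

```python
def summarize(execution_results):
    """Build the execution detail table and compute the pass rate:
    the ratio of successful cases to the total number of cases."""
    details = [{"case": r["case"], "ok": r["ok"],
                "reason": r.get("reason", "")} for r in execution_results]
    passed = sum(1 for d in details if d["ok"])
    return {"details": details, "pass_rate": passed / len(details)}

report = summarize([
    {"case": "login", "ok": True},
    {"case": "cart-add", "ok": True},
    {"case": "cart-delete", "ok": False, "reason": "element not found"},
    {"case": "checkout", "ok": True},
])
```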
In a second aspect, the present application provides a test device comprising a communicator and a controller, wherein the communicator is configured to establish a communication connection with at least one server; the controller is configured to perform the steps of:
acquiring test configuration information;
acquiring a target test case, wherein the target test case is a test case in a preset test case set, and the test case is formed by splicing at least one test script segment according to a preset logic relationship;
constructing a test case task according to the target test case and the test configuration information;
the test case task is sent to a server, so that the test case task is executed based on the server;
and receiving an execution result of the test case task fed back by the server, and generating a test result according to the execution result.
According to the above technical scheme, a distributed test method and device are provided. After test configuration information and a target test case are acquired, a test case task is constructed from the target test case and the test configuration information; the test case task is then sent to the server so that it is executed there, and a test result is generated from the execution result the server feeds back. The target test case is a test case in a preset test case set, and each test case is formed by splicing at least one test script fragment according to a preset logical relationship. With this method, test cases can be selected from the test case set and built into test case tasks, and those tasks can be distributed to one server for parallel execution or to a plurality of servers, removing time and location constraints, allowing different task content to be executed on different ends, and improving test efficiency.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a distributed test method according to an embodiment of the application;
FIG. 2 is a schematic diagram illustrating the splicing of test cases according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating task allocation of test cases according to an embodiment of the present application;
FIG. 4 is a timing chart of the test equipment and the server according to the embodiment of the application;
FIG. 5 is a schematic diagram of a task operation page according to an embodiment of the present application;
FIG. 6 is a second diagram illustrating task allocation of test cases according to an embodiment of the present application;
FIG. 7 is a third diagram illustrating task allocation of test cases according to an embodiment of the present application;
fig. 8 is a flowchart illustrating a process of acquiring a working state of a server according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The embodiments described below do not represent all embodiments consistent with the application; they are merely examples of apparatuses and methods consistent with some aspects of the application as set forth in the claims.
The test equipment can automate test tasks by executing test cases, testing whether a specific target meets a specific requirement. A test case is a document of a test task including the test target, test environment, input data, test steps, expected results, test scripts, and so on; executing it simulates a function of the specific target to verify that the function works normally. For example, on an e-commerce platform, a test task covering the shopping-cart function can be served by test cases exercising basic operations such as adding goods, deleting goods, repeatedly adding several goods, and collecting goods; executing these cases tests whether the basic cart functions work normally. However, each test task requires the user to write test cases from a requirement document, so the efficiency of test case generation depends on the user's experience, comprehension, written descriptions, and other factors. Moreover, when the test task is executed, the test cases are collected into one execution package that is invoked to obtain a test result; with many test cases this lengthens the execution process and reduces execution efficiency.
In order to improve the test efficiency, some embodiments of the present application provide a distributed test method, which is applied to test equipment and used for executing test cases in a distributed manner, so as to improve the test efficiency. The test device capable of applying the distributed test method is connected to at least one server, and the server is configured to execute a test case, and fig. 1 is a flow chart of the distributed test method in the embodiment of the application, as shown in fig. 1, specifically including the following contents:
s100, acquiring test configuration information.
The test configuration information is configuration information for constructing test case tasks, including but not limited to task names, applications, environments, task descriptions, and the like. In order to improve user experience and facilitate user operation, the test device may provide a visual interface, i.e., a user interface, through which a user may interact and exchange information with the test device. For example, a user invokes a menu interface through a user interface provided by the test equipment, the menu interface comprises a plurality of options such as case management, task management, timing task execution and the like, the user switches to the task management interface by clicking a task management option, and the user can input test configuration information and select test cases through the task management interface so as to construct test case tasks.
S200, obtaining a target test case.
The test case tasks are generated according to the test configuration information and the test cases, wherein the target test cases are test cases in a preset test case set, namely the test equipment can prestore the test case set, the test case set comprises a plurality of test cases which are compiled in advance, and when the test case tasks are constructed, a user can call the test case set to select the corresponding target test cases.
Because the test case set comprises test cases of different types, to improve user experience the set can be screened against the test configuration information, and the test cases related to that information can be extracted for the user to select from. Thus, in some embodiments, after the test configuration information is obtained, the test case set may be traversed and the test cases associated with the configuration information extracted to generate a target test case set. For example, suppose the application name in the test configuration information is "interface automation" and the environment is a User Acceptance Test (UAT) environment. Test cases with the same application name and environment can be screened out of the test case set to generate the target test case set.
And for the acquisition of the target test cases, the target test case sets can be extracted according to the requirements of users. After the test equipment acquires the target test case set, the target test case set can be presented to a user through a user interface. After the case selection instruction input by the user is obtained, the test case indicated by the case selection instruction can be extracted from the target test case set in response to the case selection instruction, so as to obtain the target test case.
A test case is a document of a test task including the test target, test environment, input data, test steps, expected results, test scripts, and so on. For each different test task the user must write an independent test case from the requirement document, so test case generation depends on the user's experience, comprehension, and written descriptions, which lowers its efficiency. To improve this, as shown in fig. 2, a plurality of test script fragments may be edited in advance; parameters are configured on the corresponding fragments according to the test task, establishing a logical relationship between fragments, and the fragments are spliced according to that relationship to generate the test case. That is, a test case is formed by splicing at least one test script fragment according to a preset logical relationship.
It can be understood that the method for splicing test cases can separate the generation and the writing of the test cases, and for different test tasks, the independent test cases do not need to be rewritten, but the writing of the test cases is completed by splicing different test script fragments, so that the test cases for different test tasks are generated, and the efficiency of the test case generation is improved.
Test cases include Application Programming Interface (API) test cases and User Interface (UI) test cases. For API test cases, a plurality of splicing components and connection parameters can be set to complete the writing and generation of the case. Each splicing component is associated with one test script fragment, and splicing at least one component splices the underlying fragments into automated test cases for different service scenarios.
The splicing components comprise a protocol component, a database component, and a code component. The protocol component completes the request of a protocol-request-type interface, such as the HyperText Transfer Protocol (HTTP), and checks the returned values. The database component extracts corresponding values from different databases for splicing or checking. The code component acquires values through a user-defined method and completes splicing or value verification with other components. The connection parameters are generated from the parameters configured on the splicing components.
And splicing the splicing assembly, namely splicing the test script fragments, wherein the connection parameters are used for representing the data transfer relationship between two adjacent test script fragments. And configuring connection parameters for each test script segment by acquiring parameters configured by a user on the splicing assembly. When a test case is generated, a plurality of target test script fragments are obtained, connection parameters are configured for each target test script fragment through a splicing component, and the plurality of target test script fragments are spliced according to the connection parameters so as to generate the test case.
The target test script fragments are test script fragments in a preset test script set, namely the test equipment can prestore the test script set, the test script set comprises a plurality of test script fragments which are compiled in advance, and when a test case is generated, a user can call the test script set to select the corresponding target test script fragments.
The connection parameters include relevance, associated parameters, and association replacement. Relevance characterizes whether interfaces are related, i.e. the relevance configuration between splicing components; if relevance exists, the splicing component must connect to the control program for corresponding processing. An associated parameter indicates that a parameter returned by the current interface is associated with the next interface; after the parameter is fetched from the data the current interface returns, the splicing component connects to the control program to pre-store it. Association replacement characterizes parameter substitution: a parameter returned by the previous interface is substituted into the current request as an input parameter, the previous component's returned value replacing the corresponding value in the current request's data, thereby realizing the connection and data transfer between the two splicing components. That is, when splicing test script fragments, the connection values corresponding to the connection parameters can be stored and wired up according to the configuration of the next splicing component.
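The association-replacement rule described above can be shown in isolation. This is a hedged sketch: the connection-settings dict and the session example are assumptions used for illustration, not the application's actual data model:

```python
def apply_association(prev_response, request, connection):
    """If two splice components are associated, take the named parameter
    from the previous interface's response and substitute it into the
    stated field of the current request (association replacement)."""
    if not connection.get("associated"):
        return request
    param = connection["parameter"]       # value fetched from prev response
    target = connection["replace_into"]   # field replaced in current request
    patched = dict(request)
    patched[target] = prev_response[param]
    return patched

# Hypothetical example: a session id returned by a login interface is
# replaced into the next request's 'session' field.
request = apply_association(
    {"session_id": "s-42", "code": 0},
    {"path": "/order", "session": None},
    {"associated": True, "parameter": "session_id", "replace_into": "session"},
)
```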
For UI test cases, behavior components and execution parameters can be set to complete the writing and generation of the test cases. Each behavior component is associated with a test script segment, and the behavior components are spliced and combined to splice the test script segments, so that an automatic test case is generated. When the behavior components are spliced, the corresponding behavior components can be connected in series according to the steps of user behavior, operation target positioning, user behavior operation simulation execution and the like, so that automatic test cases for simulating user operation user pages in different scenes are realized.
The execution parameters are generated from the parameters configured on the behavior component and include at least one of a positioning mode, a positioning value, an action type, and an action value. The positioning mode characterizes how the interface element is obtained, e.g. by ID, NAME, or XPATH. The positioning value characterizes the page location the user operates on, e.g. an account input box. The action type characterizes the user's operation, such as clicking, sliding, or entering text. The action value characterizes the value the user enters, e.g. an account number, a search term, or a value asserting that certain text is present on the page. By simulating user behavior, corresponding execution parameters are configured for the different behavior components, which are then combined and spliced to generate the test case.
When a test case is generated, a plurality of target test script fragments are obtained, wherein the target test script fragments are test script fragments in a preset test script set, execution parameters are configured for each target test script fragment through a behavior component, and the plurality of target test script fragments are spliced according to a preset sequence and the execution parameters to generate the test case.
In this embodiment, by writing a plurality of test script fragments in advance, corresponding test script fragments can be spliced according to a preset logic relationship for test cases of different test tasks, so as to generate test cases, and improve efficiency of test case generation. Meanwhile, in order to improve user experience, codes of the test script fragments are packaged and hidden and presented in a component form, and the programming and the generation of the test cases are completed through configuration parameters and assembly and combination of the components, so that the visual management of the test cases is realized.
S300, constructing test case tasks according to the target test cases and the test configuration information.
After the target test case and the test configuration information are obtained, a test case task can be constructed from them. A plurality of test case tasks can be built, each containing one or more test cases. Instead of collecting the test cases into one execution package for execution, the embodiment of the application generates different test case tasks for different test cases, executes the tasks in a distributed manner, and collects, sorts, and analyzes the results to obtain the test result, improving test efficiency.
S400, the test case task is sent to the server so as to execute the test case task based on the server.
When executing a test case task, the task is first sent to the server. Because the test device is in communication connection with the server, a test execution instruction can be sent after the task, so that the server executes the test case task in response to that instruction.
The test case task execution modes can include various, the test case task can be executed on one server, and in order to improve the test efficiency, as shown in fig. 3, a plurality of test case tasks can be executed in parallel on one server. That is, in some embodiments, as shown in fig. 4, a user-entered execution instruction may be obtained, where the execution instruction includes an execution attribute that is either single execution or parallel execution. And according to the execution attribute indicated by the execution instruction, sending the corresponding execution instruction to the server. If the execution attribute is single execution, generating a single execution instruction, and sending the single execution instruction to the server, so that the server responds to the single execution instruction to sequentially execute a plurality of test case tasks according to a preset sequence. If the execution attribute is parallel execution, generating a parallel execution instruction and sending the parallel execution instruction to the server so that the server can execute a plurality of test case tasks in parallel in response to the parallel execution instruction.
In addition, to reduce vulnerabilities and defects in a specific target, test cases may be executed periodically to test whether the target still meets its requirements. Therefore, after the test case task is sent to the server, the test execution instruction can be sent periodically so that the server periodically executes the task. Alternatively, a timed execution instruction can be sent, instructing the server to execute the test case task periodically. For example, the test equipment sends a timed execution instruction instructing the server to execute the test case task once every 12 h, and the server executes it on that schedule in response.
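The bookkeeping behind the 12 h example can be stated as a one-liner; this is an illustrative helper (name assumed), not part of the application:

```python
def due_runs(interval_hours, elapsed_hours):
    """Number of periodic executions that should have fired after the given
    elapsed time, e.g. a 12 h interval over 36 h fires three times."""
    return int(elapsed_hours // interval_hours)
```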
Of course, the test case tasks selected by the user can be independently executed according to the user requirements. For example, as shown in fig. 5, after the test case task is built, the user may call a task operation page, where the task operation page includes the built test case task and related operation options for task execution, and the user may input different control instructions by clicking on the corresponding options, for example, the user may select the test case task to be executed by checking a check box and clicking on a "batch execution" option, so that the selected test case task may be executed in parallel, and the user may click on an "execution" option corresponding to the test case task with a task id 307, so that the test case task with a task id 307 may be executed separately.
Test case tasks may also be performed on multiple servers. That is, the test equipment can be connected with a plurality of servers, and when the test case task is executed, the test case task is distributed to a plurality of servers to be executed in parallel. Therefore, in some embodiments, before the test case task is sent to the server, the connection number of the server, that is, the number of servers currently connected with the test device, may be detected first, allocation information of the test case task is generated according to the connection number, and the test case task is sent to the server indicated by the allocation information according to the allocation information.
The allocation information is used to indicate at least one server that will execute the test case task. That is, one test case task may be executed by a single server or by several servers. For example, suppose 2 test case tasks are constructed and 3 servers are connected. As shown in fig. 6, all 3 servers may be allocated to each test case task, that is, each server executes the 2 test case tasks in parallel. Alternatively, as shown in fig. 7, 1 server may be allocated to each test case task, that is, the 2 test case tasks are sent to 2 different servers, and each server executes 1 test case task.
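The two allocation strategies of figs. 6 and 7 can be expressed as a small helper. The names and the `replicate` flag are assumptions introduced for illustration, not terms from the method.

```python
def allocate_tasks(task_ids, server_ids, replicate=False):
    """Generate allocation information: which server(s) run each task.
    replicate=True assigns every connected server to every task (fig. 6);
    replicate=False spreads tasks round-robin, one server each (fig. 7)."""
    if replicate:
        return {t: list(server_ids) for t in task_ids}
    return {t: [server_ids[i % len(server_ids)]]
            for i, t in enumerate(task_ids)}
```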
In order to keep the test case task running stably and to avoid conflicts between the server executing the test case task and the server handling other processing requests, as shown in fig. 8, before sending the test execution instruction the terminal device may send a detection instruction to the server to detect its working state, and then determine, according to the working state obtained, whether to send the test execution instruction to the server.
The working state is either an execution state or a non-execution state. The execution state means the server is processing tasks, for example handling client requests; the non-execution state means the server is running normally but not processing any task, that is, it is idle. If the working state is the non-execution state, a test execution instruction is generated and sent to the server, the test execution instruction instructing the server to execute the test case task. If the working state is the execution state, prompt information is generated to inform the user that the server is in the execution state.
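A minimal sketch of this working-state gate, with the state probe, the dispatch, and the user prompt all passed in as hypothetical callbacks:

```python
IDLE, BUSY = "non-execution", "execution"

def try_dispatch(get_state, send_execute, notify_user, task):
    """Probe the server's working state before dispatching: send the
    test execution instruction only when the server is idle; otherwise
    surface prompt information that the server is in the execution state."""
    if get_state() == IDLE:
        send_execute(task)
        return True
    notify_user("server is in execution state; task not dispatched")
    return False
```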
Of course, for a test case task executed on a timer, when the scheduled time arrives, whether to execute the task can be determined from the current working state of the server: if the server is in the non-execution state, the test case task is executed; if the server is in the execution state, the task is skipped for this cycle.
While the server executes the test case task, the task execution state reported by the server can be obtained periodically, and the test case task is marked according to that state; in other words, the task execution state is recorded in real time. When the user checks the execution state of a test case task, the state can be determined from the task's mark and the user interface is controlled to display it.
The task execution state includes unexecuted, executing, and executed. The states may be represented by different symbols, for example, the number "0" for unexecuted, the number "1" for executing, and the number "2" for executed.
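The numeric marking scheme can be captured in a pair of lookup tables; the helper names here are illustrative, not part of the method.

```python
# Numeric marks for the three task execution states described above.
STATE_MARKS = {"unexecuted": "0", "executing": "1", "executed": "2"}
MARK_STATES = {v: k for k, v in STATE_MARKS.items()}

def mark_task(task, state):
    """Record the polled execution state on the task as its mark."""
    task["mark"] = STATE_MARKS[state]
    return task

def display_state(task):
    """Resolve a task's mark back to a readable state for the UI;
    an unmarked task is treated as unexecuted."""
    return MARK_STATES[task.get("mark", "0")]
```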
S500, receiving an execution result of the test case task fed back by the server, and generating a test result according to the execution result.
After the server executes the test case task, the execution result of the test case task is fed back to the test equipment. The test device may receive the execution result, sort and analyze the execution result, and generate a test result.
For ease of user observation, the test results may be in the form of a list, i.e., in some embodiments, an execution list may be generated from the received execution results, where the execution list includes failed test cases, successful test cases, failure causes, and the like.
When the number of test case tasks is large, the execution list holds a large amount of data and is inconvenient for the user to analyze. Therefore, the execution list can be further sorted: the entries are grouped by test case task, and for each test case task the number of successful test cases, the total number of test cases, the passing rate, and the like are counted from the execution list. The passing rate is the ratio of the number of successful test cases to the total number of test cases. To make the data more intuitive, the passing rate may be presented in a chart, e.g. a pie chart or bar chart.
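The per-task grouping and passing-rate calculation can be sketched as follows, assuming each execution-list entry reduces to a (task id, case id, passed) triple; the row shape is an assumption for illustration.

```python
from collections import defaultdict

def summarize(execution_rows):
    """Group execution-list rows by test case task and compute, per
    task, the total case count, the success count, and the passing
    rate (successes divided by total cases)."""
    groups = defaultdict(lambda: {"total": 0, "passed": 0})
    for task_id, _case_id, passed in execution_rows:
        groups[task_id]["total"] += 1
        groups[task_id]["passed"] += 1 if passed else 0
    return {tid: {**g, "pass_rate": g["passed"] / g["total"]}
            for tid, g in groups.items()}
```

The resulting per-task ratios are what a pie or bar chart would then visualize.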
After the test result is obtained, it can be sent to different clients or servers according to user requirements. For example, the user may enter a designated mailbox address, and the test equipment sends the test result to the mail client corresponding to that mailbox.
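Sending the result to a designated mailbox might look like the sketch below; the SMTP host, sender address, and subject line are placeholders, not details from the method.

```python
import smtplib
from email.message import EmailMessage

def build_result_mail(report_text, to_addr, from_addr="tester@example.com"):
    """Wrap the test result text in a plain-text mail message."""
    msg = EmailMessage()
    msg["Subject"] = "Distributed test result"
    msg["From"] = from_addr
    msg["To"] = to_addr
    msg.set_content(report_text)
    return msg

def mail_test_result(report_text, to_addr, smtp_host="localhost"):
    """Hand the built message to an SMTP server (host is a placeholder)."""
    msg = build_result_mail(report_text, to_addr)
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```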
Based on the above-mentioned distributed test method, some embodiments of the present application further provide a test apparatus, as shown in fig. 1, including: a communicator and a controller. Wherein the communicator is configured to establish a communication connection with at least one server; the controller is configured to perform the following program steps:
and acquiring test configuration information.
And obtaining the target test case.
The target test cases are test cases in a preset test case set, and the test cases are formed by splicing at least one test script segment according to a preset logic relationship.
And constructing test case tasks according to the target test cases and the test configuration information.
And sending the test case task to a server to execute the test case task based on the server.
Receiving an execution result of the test case task fed back by the server, and generating a test result according to the execution result.
With the distributed test method and device described above, after the test configuration information and the target test case are obtained, a test case task is built from the target test case and the test configuration information; the test case task is then sent to the server so that it is executed based on the server, and a test result is generated from the execution result of the test case task fed back by the server. The target test case is a test case in a preset test case set, and a test case is formed by splicing at least one test script segment according to a preset logic relationship. In this way, test cases can be selected from the test case set to construct test case tasks, and the test case tasks can be distributed to one server for parallel execution or to a plurality of servers, removing time and region limitations, enabling different task contents to be executed at different ends, and improving test efficiency.
The above-provided detailed description is merely a few examples under the general inventive concept and does not limit the scope of the present application. Any other embodiments which are extended according to the solution of the application without inventive effort fall within the scope of protection of the application for a person skilled in the art.

Claims (10)

1. A distributed test method, applied to a test device, the test device being connected to at least one server, the distributed test method comprising:
acquiring test configuration information;
acquiring a target test case, wherein the target test case is a test case in a preset test case set, and the test case is formed by splicing at least one test script segment according to a preset logic relationship;
constructing a test case task according to the target test case and the test configuration information;
sending the test case task to a server, so that the test case task is executed based on the server;
and receiving an execution result of the test case task fed back by the server, and generating a test result according to the execution result.
2. The distributed test method of claim 1, wherein the step of obtaining the target test case comprises:
traversing the test case set;
extracting test cases associated with the test configuration information to generate a target test case set;
and responding to the case selection instruction input by the user, and extracting the test case indicated by the case selection instruction from the target test case set to acquire the target test case.
3. The distributed test method of claim 1, wherein the test case tasks include a plurality, the method further comprising:
acquiring an execution instruction input by a user, wherein the execution instruction comprises an execution attribute, and the execution attribute is single execution or parallel execution;
if the execution attribute is single execution, sending a single execution instruction to the server so that the server responds to the single execution instruction to sequentially execute a plurality of test case tasks according to a preset sequence;
and if the execution attribute is parallel execution, sending a parallel execution instruction to the server so that the server responds to the parallel execution instruction to execute a plurality of test case tasks in parallel.
4. The distributed test method of claim 1, wherein the step of sending the test case task to a server comprises:
detecting the connection quantity of the server;
generating allocation information of the test case tasks according to the connection quantity, wherein the allocation information is used for indicating at least one server executing the test case tasks;
and sending the test case task to a server indicated by the allocation information according to the allocation information.
5. The distributed test method of claim 1, wherein the method further comprises:
acquiring the working state of the server, wherein the working state is an executing state or a non-executing state;
if the working state is a non-execution state, sending a test execution instruction to the server, wherein the test execution instruction is used for indicating the server to execute the test case task;
and if the working state is the execution state, generating prompt information, wherein the prompt information is used for prompting a user that the server is in the execution state.
6. The distributed test method of claim 1, wherein the method further comprises:
obtaining a plurality of target test script fragments, wherein the target test script fragments are test script fragments in a preset test script set;
configuring connection parameters for each target test script segment, wherein the connection parameters are used for representing the data transfer relationship between two adjacent test script segments;
and splicing a plurality of target test script fragments according to the connection parameters to generate a test case.
7. The distributed test method of claim 1, wherein the method further comprises:
obtaining a plurality of target test script fragments, wherein the target test script fragments are test script fragments in a preset test script set;
configuring execution parameters for each target test script segment, wherein the execution parameters comprise at least one of a positioning mode, a positioning value, an action type and an action value, the positioning mode is used for representing a mode of acquiring an interface element, the positioning value is used for representing a page position operated by a user, the action type is used for representing the operation of the user, and the action value is used for representing a numerical value input by the user;
and splicing a plurality of target test script fragments according to a preset sequence and the execution parameters to generate a test case.
8. The distributed test method of claim 1, wherein the method further comprises:
periodically acquiring a task execution state of the test case task sent by the server, wherein the task execution state comprises unexecuted, executing, and executed;
marking the test case task according to the task execution state.
9. The distributed test method of claim 1, wherein the step of generating test results from the execution results comprises:
generating an execution detail table according to the execution result, wherein the execution detail table comprises a failed test case, a successful test case and a failure reason;
and calculating the passing rate according to the execution detail table to generate a test result, wherein the passing rate is the ratio of the number of successful test cases to the total number of test cases.
10. A test apparatus, comprising:
a communicator configured to establish a communication connection with at least one server;
a controller configured to:
acquiring test configuration information;
acquiring a target test case, wherein the target test case is a test case in a preset test case set, and the test case is formed by splicing at least one test script segment according to a preset logic relationship;
constructing a test case task according to the target test case and the test configuration information;
sending the test case task to a server, so that the test case task is executed based on the server;
and receiving an execution result of the test case task fed back by the server, and generating a test result according to the execution result.
CN202310739882.2A 2023-06-20 2023-06-20 Distributed test method and device Pending CN116955157A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310739882.2A CN116955157A (en) 2023-06-20 2023-06-20 Distributed test method and device


Publications (1)

Publication Number Publication Date
CN116955157A true CN116955157A (en) 2023-10-27

Family

ID=88448344



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination