CN114911712A - Visual test execution method and device, electronic equipment and readable storage medium - Google Patents

Visual test execution method and device, electronic equipment and readable storage medium

Info

Publication number
CN114911712A
CN114911712A (application CN202210674814.8A)
Authority
CN
China
Prior art keywords
test
task
executed
execution
interface
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210674814.8A
Other languages
Chinese (zh)
Inventor
熊群
徐佐
路遥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kangjian Information Technology Shenzhen Co Ltd
Original Assignee
Kangjian Information Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kangjian Information Technology Shenzhen Co Ltd filed Critical Kangjian Information Technology Shenzhen Co Ltd
Priority to CN202210674814.8A priority Critical patent/CN114911712A/en
Publication of CN114911712A publication Critical patent/CN114911712A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3684Test management for test design, e.g. generating new test cases
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/36Preventing errors by testing or debugging software
    • G06F11/3668Software testing
    • G06F11/3672Test management
    • G06F11/3688Test management for test execution, e.g. scheduling of test suites

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application discloses a visual test execution method and device, an electronic device, and a readable storage medium, relating to the technical field of testing. It aims to automatically create and execute test tasks at the application dimension, simplify the test process, and improve test coverage and test efficiency by means of a visual billboard display. The method comprises the following steps: analyzing the test code uploaded by a user at intervals of a preset test duration to obtain interface test cases and to-be-tested application information; creating test tasks to be executed according to the to-be-tested application information; when the task state of a to-be-executed test task in the at least one to-be-executed test task is queried to be an execution identifier, generating a to-be-executed configuration file for that task according to its corresponding interface test case among the at least one interface test case; and executing the interface test case corresponding to the test task based on the configuration file to be executed, and displaying the execution result of the interface test case on a visual billboard of the user terminal.

Description

Visual test execution method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of testing technologies, and in particular, to a visual test execution method, apparatus, electronic device, and readable storage medium.
Background
Testing is an important means for testers to verify a system, and writing and executing automated tests can effectively improve test efficiency. When the system under test is upgraded, automated interface cases need to be executed to verify whether the system runs normally.
In the related art, a test platform with a test function requires a worker to manually maintain or add interface cases on the platform, or the worker writes test code locally and then uploads it to the platform, so that the platform performs continuous integration of the test code and thereby tests the system.
In carrying out the present application, the applicant has found that the related art has at least the following problems:
the cases manually added by workers are limited in number and cannot be effectively chained together to simulate complex case scenarios; moreover, the number of applications to be tested is large and the interfaces of each application differ, so the interfaces of every application need to be scheduled and tallied manually. As a result, the test process is complicated, coverage is poor, and test efficiency is low.
Disclosure of Invention
In view of this, the present application provides a visual test execution method, a visual test execution device, an electronic device, and a readable storage medium, and mainly aims to solve the problems that the current test process is complicated, its coverage is poor, and its test efficiency is low.
According to a first aspect of the present application, there is provided a visual test execution method, including:
analyzing the test code uploaded by the user at intervals of a preset test duration to obtain at least one interface test case and to-be-tested application information;
creating at least one test task to be executed according to the to-be-tested application information;
when the task state of a to-be-executed test task in the at least one to-be-executed test task is inquired to be an execution identifier, generating a to-be-executed configuration file for the to-be-executed test task according to an interface test case corresponding to the to-be-executed test task in the at least one interface test case;
and executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed, and displaying the execution result of the interface test case by adopting a visual billboard of the user terminal.
Optionally, the analyzing the test code uploaded by the user every preset test duration to obtain at least one interface test case and to-be-tested application information includes:
acquiring a warehouse address configured in a database at intervals of the preset test duration, and determining a distributed code storage bin indicated by the warehouse address, wherein the distributed code storage bin is used for storing the test code uploaded by the user;
reading the distributed code storage bin, and analyzing the test codes stored in the distributed code storage bin;
acquiring a preset directory name and preset characters, and extracting, from the test code, the code files which are located under the preset directory name and whose file names end with the preset characters, to form at least one interface test case;
analyzing a tool configuration file in the test code, acquiring the name of the application to be tested in the tool configuration file, acquiring a test annotation of a preset unit test framework, and reading at least one piece of application information marked with the test annotation in the tool configuration file;
and taking the name of the application to be tested and the at least one piece of application information as the information of the application to be tested, and storing the information of the application to be tested to the database.
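The extraction step above can be sketched as follows. This is a minimal illustration only: the preset directory name (`testcases`) and the preset trailing characters (`Test.java`) are hypothetical defaults, not values fixed by the application, and a real platform would operate on the cloned distributed code storage bin rather than an arbitrary local path.

```python
import os

def collect_interface_test_cases(repo_root, preset_dir="testcases", preset_suffix="Test.java"):
    """Walk a checked-out code repository and collect the files that live
    under the preset directory name and whose file names end with the
    preset characters. Sketch only: directory name and suffix are
    illustrative, configurable values."""
    cases = []
    for dirpath, _dirnames, filenames in os.walk(repo_root):
        # only consider paths that contain the preset directory component
        if preset_dir not in dirpath.split(os.sep):
            continue
        for name in filenames:
            if name.endswith(preset_suffix):
                cases.append(os.path.join(dirpath, name))
    return sorted(cases)
```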
Optionally, the creating at least one test task to be executed according to the to-be-tested application information includes:
acquiring task creation time in the to-be-tested application information, acquiring daily task information in the to-be-tested application information when detecting that the current time point reaches the task creation time, and creating a daily scheduling task as the to-be-executed test task according to the daily task information; and/or,
continuously acquiring content issued by an application deployment system, reading at least one iteration task attribute indicated by the application deployment feedback when acquiring the application deployment feedback issued by the application deployment system, acquiring iteration task information related to the at least one iteration task attribute from the to-be-tested application information, and creating at least one iteration scheduling task matched with the at least one iteration task attribute by using the iteration task information as the at least one to-be-executed test task.
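The daily-task branch above can be illustrated with a small sketch. The dictionary field names (`create_time`, `daily`, `app_name`, `cases`) are assumptions made for illustration; the application does not prescribe a storage format for the to-be-tested application information.

```python
from datetime import datetime

def maybe_create_daily_task(app_info, now=None):
    """If the current time point has reached the configured task creation
    time, build a daily scheduling task from the daily task information.
    Field names are hypothetical, chosen only for this sketch."""
    now = now or datetime.now()
    create_at = datetime.strptime(app_info["create_time"], "%H:%M").time()
    if now.time() < create_at:
        return None  # creation time not reached yet
    daily = app_info["daily"]
    return {
        "type": "daily",
        "app": app_info["app_name"],
        "cases": daily["cases"],
        "state": "created",  # not yet updated to the execution identifier
    }
```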
Optionally, after creating at least one test task to be executed according to the information of the application to be tested, the method further includes:
for a daily scheduling task in the at least one test task to be executed, reading task execution time in task information of the daily scheduling task, and when detecting that the current time point reaches the task execution time, acquiring the execution identifier for indicating to start executing the task, and updating the task state of the daily scheduling task to the execution identifier; and/or,
continuously acquiring the content issued by the application deployment system for the iterative scheduling task in the at least one test task to be executed, and updating the task state of the iterative scheduling task to the execution identifier when receiving a task execution instruction issued by the application deployment system.
Optionally, when it is found that a task state of a to-be-executed test task in the at least one to-be-executed test task is an execution identifier, generating a to-be-executed configuration file for the to-be-executed test task according to an interface test case corresponding to the to-be-executed test task in the at least one interface test case, includes:
reading the task state of each test task to be executed at intervals of a preset scheduling duration;
when the task state of a test task to be executed in the at least one test task to be executed is found to be an execution identifier, determining an interface test case corresponding to the test task to be executed in the at least one interface test case, and reading at least one project to be tested indicated by the interface test case;
traversing the test requirement of each project to be tested in the at least one project to be tested, assembling the test codes stored in the distributed code storage bin according to the test requirement of each project to be tested, and generating a configuration file for testing for each project to be tested as a to-be-executed configuration file of the to-be-executed test task.
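The assembly step — generating a configuration file for testing for each project to be tested — can be sketched as follows. The tag layout loosely follows a TestNG-style suite file, as the detailed description later suggests; the suite-naming scheme and class names are illustrative assumptions, not prescribed by the application.

```python
from xml.etree.ElementTree import Element, SubElement, tostring

def build_suite_xml(suite_name, test_classes):
    """Assemble a minimal TestNG-style suite configuration from the test
    classes a to-be-executed task needs. Illustrative sketch of the
    'assemble test codes into a configuration file' step; a real suiteXml
    carries more attributes (parallelism, parameters, listeners)."""
    suite = Element("suite", name=suite_name)
    test = SubElement(suite, "test", name=suite_name + "-interfaces")
    classes = SubElement(test, "classes")
    for cls in test_classes:
        SubElement(classes, "class", name=cls)
    return tostring(suite, encoding="unicode")
```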
Optionally, the executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed, and displaying the execution result of the interface test case by using a visual billboard of a user terminal includes:
calling a task execution thread and executing the configuration file to be executed;
acquiring log information generated when the configuration file to be executed is executed, and recording the log information into a log file;
calling a monitoring thread, monitoring the log file based on the monitoring thread, and storing the updated content of the log file to a database at intervals of preset monitoring intervals;
when an execution ending event is detected, extracting all updated contents related to the log file from the database as logs to be output, and uploading the logs to be output to a distributed storage system, wherein the execution ending event comprises one of the completion of the execution of the configuration file to be executed, the overtime of the file running time and the receipt of an execution ending instruction;
analyzing the log to be output, determining the execution progress of the interface test case corresponding to the test task to be executed, and updating the execution progress to the execution state corresponding to the test task to be executed by taking the execution progress as the execution result of the test task to be executed;
and displaying the test task to be executed after the execution state is updated by adopting the visual billboard.
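The execute-and-monitor flow above (a task execution thread plus a monitoring thread that periodically persists newly appended log content and flushes once more on the end event) can be sketched in simplified form. Here the database and the distributed storage system are stood in for by a plain list, and the intervals are shortened; this is a sketch of the pattern, not the platform's implementation.

```python
import threading
import time

def run_with_log_monitor(task, log_store, poll_interval=0.05, timeout=5.0):
    """Run a task while a monitor thread periodically copies newly
    appended log lines into a store, then flushes the remainder when the
    execution-finished end event fires. 'task' is a dict whose 'run'
    callable appends lines to task['log'] (hypothetical shape)."""
    task["log"] = []
    done = threading.Event()
    seen = 0

    def monitor():
        nonlocal seen
        start = time.time()
        while not done.is_set() and time.time() - start < timeout:
            # persist only the content updated since the last poll
            new = task["log"][seen:]
            if new:
                log_store.extend(new)
                seen += len(new)
            time.sleep(poll_interval)
        # flush once more after the end event
        log_store.extend(task["log"][seen:])

    t = threading.Thread(target=monitor)
    t.start()
    task["run"](task["log"])   # execute the to-be-executed configuration
    done.set()                  # execution-finished end event
    t.join()
    return log_store
```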
Optionally, the method further comprises:
receiving the test code uploaded by the user, storing the test code into a distributed code storage bin, acquiring a warehouse address of the distributed code storage bin, and configuring the warehouse address in a database, wherein the test code is written by the user and marked with test annotations of a preset unit test framework; and/or,
after executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed, displaying the at least one test task to be executed, the at least one interface test case and the corresponding execution state by using the visual billboard, suspending the execution of the interface test case when detecting, based on the visual billboard, that a user requests to stop the execution of the test task, and restarting the execution of the interface test case when detecting, based on the visual billboard, that the user requests to continue executing the test task; and/or,
after the configuration file to be executed is generated, displaying the at least one test task to be executed by using the visual billboard, marking each test task which has successfully generated its configuration file to be executed, and if it is detected, based on the visual billboard, that a user requests to execute a test task whose configuration file has been successfully generated, executing the interface test case related to that test task based on the corresponding configuration file to be executed; and/or,
after executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed, acquiring a preset result notification party, and sending the execution result of the interface test case to the preset result notification party.
According to a second aspect of the present application, there is provided a visual test execution apparatus, the apparatus comprising:
the analysis module is used for analyzing the test codes uploaded by the user every preset test duration to acquire at least one interface test case and to-be-tested application information;
the creating module is used for creating at least one testing task to be executed according to the information of the application to be tested;
the generating module is used for generating a to-be-executed configuration file for the to-be-executed test task according to an interface test case corresponding to the to-be-executed test task in the at least one interface test case when the task state of the to-be-executed test task in the at least one to-be-executed test task is inquired to be an execution identifier;
and the execution module is used for executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed and displaying the execution result of the interface test case by adopting a visual billboard of the user terminal.
Optionally, the analysis module is configured to acquire a warehouse address configured in a database at intervals of the preset test duration, and determine a distributed code storage bin indicated by the warehouse address, wherein the distributed code storage bin is used to store the test code uploaded by the user; read the distributed code storage bin, and analyze the test code stored in it; acquire a preset directory name and preset characters, and extract, from the test code, the code files which are located under the preset directory name and whose file names end with the preset characters, to form at least one interface test case; analyze a tool configuration file in the test code, acquire the name of the application to be tested in the tool configuration file, acquire a test annotation of a preset unit test framework, and read at least one piece of application information marked with the test annotation in the tool configuration file; and take the name of the application to be tested and the at least one piece of application information as the to-be-tested application information and store it to the database.
Optionally, the creating module is configured to obtain task creating time in the to-be-tested application information, obtain daily task information in the to-be-tested application information when it is detected that the current time point reaches the task creating time, and create a daily scheduling task as the to-be-executed test task according to the daily task information; and/or continuously acquiring content issued by an application deployment system, reading at least one iteration task attribute indicated by the application deployment feedback when acquiring the application deployment feedback issued by the application deployment system, acquiring iteration task information related to the at least one iteration task attribute from the to-be-tested application information, and creating at least one iteration scheduling task matched with the at least one iteration task attribute by using the iteration task information as the at least one to-be-executed test task.
Optionally, the apparatus further comprises:
the updating module is used for, for the daily scheduling task in the at least one test task to be executed, reading the task execution time in the task information of the daily scheduling task, acquiring the execution identifier for indicating the start of executing the task when detecting that the current time point reaches the task execution time, and updating the task state of the daily scheduling task to the execution identifier; and/or,
the updating module is further configured to continuously obtain, for an iterative scheduling task in the at least one test task to be executed, content issued by an application deployment system, and update a task state of the iterative scheduling task to the execution identifier when receiving a task execution instruction issued by the application deployment system.
Optionally, the generating module is configured to read a task state of each to-be-executed test task every preset scheduling time; when the task state of a test task to be executed in the at least one test task to be executed is found to be an execution identifier, determining an interface test case corresponding to the test task to be executed in the at least one interface test case, and reading at least one project to be tested indicated by the interface test case; traversing the test requirement of each project to be tested in the at least one project to be tested, assembling the test codes stored in the distributed code storage bin according to the test requirement of each project to be tested, and generating a configuration file for testing for each project to be tested as a to-be-executed configuration file of the to-be-executed test task.
Optionally, the execution module is configured to invoke a task execution thread and execute the configuration file to be executed; acquiring log information generated when the configuration file to be executed is executed, and recording the log information into a log file; calling a monitoring thread, monitoring the log file based on the monitoring thread, and storing the updated content of the log file to a database at intervals of preset monitoring intervals; when an execution ending event is detected, extracting all updated contents related to the log file from the database as logs to be output, and uploading the logs to be output to a distributed storage system, wherein the execution ending event comprises one of the completion of the execution of the configuration file to be executed, the overtime of the file running time and the receipt of an execution ending instruction; analyzing the log to be output, determining the execution progress of the interface test case corresponding to the test task to be executed, and updating the execution progress to the execution state corresponding to the test task to be executed by taking the execution progress as the execution result of the test task to be executed; and displaying the test task to be executed after the execution state is updated by adopting the visual billboard.
Optionally, the apparatus further comprises:
the configuration module is used for receiving the test code uploaded by the user, storing the test code into a distributed code storage bin, acquiring the warehouse address of the distributed code storage bin, and configuring the warehouse address in a database, wherein the test code is written by the user and marked with test annotations of a preset unit test framework; and/or,
the execution module is further configured to display the at least one test task to be executed, the at least one interface test case, and the corresponding execution state by using the visual billboard after executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed, to suspend execution of the interface test case when it is detected, based on the visual billboard, that a user requests to stop execution of the test task, and to restart execution of the interface test case when it is detected, based on the visual billboard, that the user requests to continue executing the test task; and/or,
the execution module is further configured to display the at least one to-be-executed test task by using the visual billboard after the to-be-executed configuration file is generated, mark the to-be-executed test task, which successfully generates the to-be-executed configuration file, in the at least one to-be-executed test task, and execute an interface test case related to the to-be-executed test task based on the to-be-executed configuration file corresponding to the to-be-executed test task if it is detected that a user request is executed for the to-be-executed test task, which successfully generates the to-be-executed configuration file, based on the visual billboard; and/or the presence of a gas in the gas,
and the notification module is used for acquiring a preset result notification party after executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed, and sending the execution result of the interface test case to the preset result notification party.
According to a third aspect of the present application, there is provided an electronic device comprising a memory storing a computer program and a processor implementing the steps of the method of any of the first aspect when the computer program is executed.
According to a fourth aspect of the present application, there is provided a readable storage medium having stored thereon a computer program which, when executed by a processor, carries out the steps of the method of any of the above-mentioned first aspects.
By means of the above technical scheme, the visual test execution method and device, electronic equipment, and readable storage medium provided by the present application analyze the test code uploaded by a user at intervals of a preset test duration to obtain at least one interface test case and the to-be-tested application information; create at least one test task to be executed according to that information; when the task state of a to-be-executed test task in the at least one to-be-executed test task is queried to be an execution identifier, generate a to-be-executed configuration file for the task according to its corresponding interface test case; execute the interface test case based on the configuration file to be executed; and display the execution result of the interface test case on a visual billboard of the user terminal. Test tasks are thus automatically created and executed at the application dimension, testing proceeds automatically according to each application's interface situation, and the user can clearly see the test status on the visual billboard, which simplifies the test process and improves both test coverage and test efficiency.
The above description is only an overview of the technical solutions of the present application. In order that the technical means of the application may be understood more clearly and implemented in accordance with the content of the description, and in order to make the above and other objects, features, and advantages of the application more apparent, the detailed description of the application is given below.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the application. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flowchart illustrating a visual test execution method according to an embodiment of the present application;
FIG. 2A is a flow chart of another visual test execution method provided by the embodiment of the present application;
FIG. 2B is a diagram illustrating a visual test execution method according to an embodiment of the present application;
FIG. 2C is a diagram illustrating a visual test execution method according to an embodiment of the present application;
FIG. 2D is a flowchart illustrating a visual test execution method according to an embodiment of the present application;
FIG. 2E is a diagram illustrating a visual test execution method according to an embodiment of the present application;
FIG. 2F is a diagram illustrating a visual test execution method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a visual test execution device according to an embodiment of the present application;
fig. 4 shows a schematic device structure diagram of a computer apparatus according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
An embodiment of the present application provides a visual test execution method, as shown in fig. 1, the method includes:
101. analyzing the test codes uploaded by the user every preset test duration to obtain at least one interface test case and to-be-tested application information.
The visual test execution method provided by the present application can be applied to a test platform. The test platform provides an automated test function based on a hosted server, which may be an independent server or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, a Content Delivery Network (CDN), big data, and artificial intelligence platforms. The test platform can provide a user-facing front end, so that users such as testers and maintainers can write test code locally and upload it to the platform for application testing. In an optional implementation, the user can locally write the automated test code with tools such as IDEA (an integrated development environment for the Java language), using Maven plus the TestNG framework and following the automation programming specification, and then upload the written test code to the test platform.
Furthermore, since the user keeps updating the uploaded test code and application tests need to be performed continuously, a preset test duration is set in the test platform, and the platform automatically analyzes the uploaded test code at each such interval to obtain at least one interface test case and the to-be-tested application information. For example, the preset test duration may be set to 10 minutes, so that the test platform automatically pulls the automated interface test cases meeting the specification every 10 minutes and stores them in the database for use in the subsequent application test process.
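The periodic analysis described above (for example, every 10 minutes) can be sketched with a re-arming timer. The interval and the `analyze` callable are placeholders chosen for illustration; a production platform would more likely delegate this to a job scheduler.

```python
import threading

def schedule_periodic_analysis(analyze, preset_minutes=10):
    """Run the analysis of the uploaded test code immediately, then
    re-run it every preset test duration via a re-arming daemon timer.
    Returns the currently pending timer so it can be cancelled."""
    def tick():
        analyze()  # pull and parse the interface test cases meeting the spec
        timer = threading.Timer(preset_minutes * 60, tick)
        timer.daemon = True
        timer.start()
        return timer
    return tick()
```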
102. And creating at least one test task to be executed according to the information of the application to be tested.
In the embodiment of the application, after determining the test cases and which applications need to be tested, the test platform creates at least one test task to be executed at the application dimension based on the obtained to-be-tested application information. The test tasks to be executed can be daily scheduling tasks that test the application every day, or iterative scheduling tasks for the current application such as research-and-development self-test, smoke test, functional test, and regression test, so that the test tasks of all projects or teams are created in a single task-creation pass and can be executed in subsequent iterations, realizing cross-project and cross-team creation of scheduling tasks. The task type and the task number of the at least one test task to be executed are not specifically limited in this embodiment.
103. And when the task state of a to-be-executed test task in the at least one to-be-executed test task is the execution identifier, generating a to-be-executed configuration file for the to-be-executed test task according to the corresponding interface test case of the to-be-executed test task in the at least one interface test case.
In the embodiment of the application, the task state of the test task to be executed is updated to the execution identifier when the test task needs to be executed, and the test platform can continuously inquire the task state of each test task to be executed, so that the change of the task state can be found in time.
Further, when the task state of any one to-be-executed test task in the at least one to-be-executed test task is updated to the execution identifier, the to-be-executed test task is indicated to start to be executed, and therefore the test platform generates the to-be-executed configuration file for the to-be-executed test task according to the corresponding interface test case of the to-be-executed test task in the at least one interface test case. That is, the test platform queries which test cases need to be used by the test task to be executed, and assembles the configuration file of the task according to the test cases, so that the task can be executed smoothly.
In the process of actual application, the suiteXml (configuration file) of TestNG (a unit test framework) can be assembled according to the test cases corresponding to the task. The suite describes the set of test scripts to be run; its name can be user-defined, and that name appears in the final TestNG test report. The suiteXml includes the test scripts to be executed, and is used as the configuration file to be executed of the task.
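As an illustration, a minimal TestNG suite file of the kind described above might look like the following sketch; the suite, test and class names are hypothetical:

```xml
<!-- Minimal TestNG suite file; the suite name appears in the final report. -->
<suite name="order-service-interface-suite">
  <test name="order-interface-cases">
    <classes>
      <!-- Hypothetical interface-test classes pulled from the code repository -->
      <class name="com.example.order.OrderQueryIT"/>
      <class name="com.example.order.OrderCreateIT"/>
    </classes>
  </test>
</suite>
```

One suite file of this shape would be generated per task, listing exactly the cases that the task needs.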
104. Executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed, and displaying the execution result of the interface test case on the visual billboard of the user terminal.
In the embodiment of the application, after the configuration file to be executed is obtained, the interface test cases can be executed by running the configuration file to be executed, thereby testing the application. After an interface test case is executed, an execution result is generated, for example whether the case passed and what the pass rate is. In an optional embodiment, the test platform may create a result page as a visual billboard, and display the at least one interface test case, the information of the application to be tested, and the execution result in the visual billboard, so that a user can know the test condition of each project related to the application in detail.
In the method provided by the embodiment of the application, the test code uploaded by the user is parsed every preset test duration to obtain at least one interface test case and the information of the application to be tested; at least one test task to be executed is created according to the information of the application to be tested; when the task state of a test task to be executed among the at least one test task to be executed is queried to be the execution identifier, a configuration file to be executed is generated for that test task according to its corresponding interface test cases among the at least one interface test case; and the interface test cases corresponding to the test task to be executed are executed based on the configuration file to be executed, with the execution results displayed on the visual billboard of the user terminal. Test tasks are thus created and executed automatically by application dimension, the test can be carried out according to the interface condition of the application, and with the visual billboard the user can clearly see the test condition. The test process is simplified, and both the test coverage and the test efficiency are improved.
Further, as a refinement and an extension of the specific implementation of the foregoing embodiment, in order to fully illustrate the specific implementation process of the present embodiment, an embodiment of the present application provides another visual test execution method, as shown in fig. 2A, the method includes:
201. Receiving the test code uploaded by the user.
In the embodiment of the application, a user can locally write automated test code according to the automation writing specification, using the Maven + TestNG framework, through tools such as IDEA, and upload the written test code to the test platform.
The Maven + TestNG framework is a standard testing framework built on technologies such as Maven and TestNG; the automation writing specification is preset and may comprise the following three aspects:
(1) TestNG and TestNG's test annotations are used. That is, the interface automation written in the test code uploaded by the user needs to use TestNG and TestNG's test annotations, which may specifically be "description", "priority" or "enabled";
(2) in the pom (configuration) file of Maven, the monitoring (listener) of the Maven + TestNG framework is introduced, which may specifically be realized by introducing "com.pajk.test.testing.runLister" into the framework;
(3) in the config.properties file of the Maven + TestNG framework, the appname (application name) is configured to indicate which application is tested.
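As a rough illustration of items (2) and (3): a TestNG listener can be registered through the Surefire plugin's properties configuration in the pom file. In the sketch below the listener class name is the one quoted in the text, while the plugin block itself and all values are illustrative assumptions rather than the patent's actual configuration:

```xml
<!-- pom.xml fragment (sketch): register the framework's TestNG run listener -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <properties>
      <property>
        <name>listener</name>
        <value>com.pajk.test.testing.runLister</value>
      </property>
    </properties>
  </configuration>
</plugin>
```

The config.properties file of item (3) would then contain a single line such as `appname=order-service` (value hypothetical), which the platform later reads to learn which application the code tests.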
Therefore, after the user writes the test code, marked with the test annotations of the preset unit test framework, according to the automation writing specification, the test code can be uploaded to the test platform. The test platform receives the test code uploaded by the user and stores it in a distributed code storage bin such as a GIT (distributed version control system) warehouse. In order to perform subsequent tests according to the test code written by the user, the test platform acquires the warehouse address of the distributed code storage bin and configures the warehouse address in the database, so that the code in the distributed code storage bin can be conveniently pulled. It should be noted that the warehouse address only needs to be configured once, when testing is first performed based on the test code written by the user; when testing with the same test code, the warehouse address does not need to be configured again.
202. Analyzing the test codes uploaded by the user every preset test duration to obtain at least one interface test case and to-be-tested application information.
In the embodiment of the application, after the test platform completes the configuration of the warehouse address, the automated test process is started. A preset test duration is set in the test platform, and every preset test duration the test platform parses the test code uploaded by the user to obtain at least one interface test case and the information of the application to be tested. In an optional embodiment, the process of pulling the interface test cases and the information of the application to be tested is as follows:
Firstly, every preset test duration, the test platform acquires the warehouse address configured in the database and determines the distributed code storage bin indicated by the warehouse address. Then, the test platform reads the distributed code storage bin and parses the test code stored there, acquires a preset directory name and preset characters, and extracts from the test code the code located under the preset directory name whose file names end with the preset characters, forming at least one interface test case. Next, the tool configuration file in the test code is parsed, the name of the application to be tested is acquired from the tool configuration file, the test annotations of the preset unit test framework are acquired, and at least one piece of application information marked with the test annotations is read from the tool configuration file. Finally, the name of the application to be tested and the at least one piece of application information are taken together as the information of the application to be tested and stored in the database.
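The extraction step just described can be sketched in plain Java as follows; the directory and suffix values mirror the examples given later in the text, and all class and method names here are assumptions, not the platform's actual code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Stream;

/**
 * Minimal sketch of the case-extraction step: walk the pulled repository,
 * keep files under the preset directory whose names end with one of the
 * preset suffixes, and treat each such file as one interface test case.
 */
public class CaseScanner {
    static final String PRESET_DIR = "src/main/Test";             // preset directory name
    static final String[] PRESET_SUFFIXES = {"IT", "UT", "Test"}; // preset characters

    public static List<Path> scan(Path repoRoot) throws IOException {
        List<Path> cases = new ArrayList<>();
        try (Stream<Path> files = Files.walk(repoRoot)) {
            files.filter(Files::isRegularFile).forEach(p -> {
                // Strip the .java extension before checking the suffix.
                String name = p.getFileName().toString().replaceFirst("\\.java$", "");
                boolean inPresetDir = p.toString().replace('\\', '/').contains(PRESET_DIR);
                for (String suffix : PRESET_SUFFIXES) {
                    if (inPresetDir && name.endsWith(suffix)) {
                        cases.add(p);
                        break;
                    }
                }
            });
        }
        return cases;
    }
}
```

Files outside the preset directory, or whose names lack a preset suffix, are ignored even if they contain test code, which is what makes the writing specification mandatory.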
In the practical application process, the preset test duration may be 10 minutes, 15 minutes, and so on; the preset directory name may be "/src/main/Test"; and the preset characters may be "IT", "UT" and "Test". Continuing with the example of configuring the appname in the config.properties file described in step 201 above, the test platform may pull the code under the "/src/main/Test" directory of the distributed code storage bin whose file names end with "IT", "UT" or "Test" to form at least one interface test case, parse the config.properties file through Java code to obtain the appname as the name of the application to be tested, parse the TestNG annotation (test annotation) in each Java file, and store at least one piece of application information in the TestNG annotation, i.e., description, methodname, priority, enabled and the like, as the information of the application to be tested in the database.
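The metadata-extraction part of this step can be sketched as below. To keep the example self-contained, a local stub annotation stands in for TestNG's real @Test (same attribute names); the platform itself would read org.testng.annotations.Test, and every name here is an assumption:

```java
import java.io.StringReader;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;
import java.util.Properties;

/**
 * Sketch of reading appname from config.properties and reading case
 * metadata (description, priority, enabled) from test annotations.
 */
public class MetadataReader {
    /** Stub of TestNG's @Test annotation so the sketch compiles standalone. */
    @Retention(RetentionPolicy.RUNTIME)
    public @interface Test {
        String description() default "";
        int priority() default 0;
        boolean enabled() default true;
    }

    /** A hypothetical user-written interface test case. */
    public static class SampleCase {
        @Test(description = "query order by id", priority = 1, enabled = true)
        public void orderQueryIT() { }
    }

    /** Reads the appname value from config.properties-style text. */
    public static String readAppName(String propertiesText) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader(propertiesText));
        return props.getProperty("appname");
    }

    /** Reads the description attribute of an annotated case method via reflection. */
    public static String readDescription(Class<?> cls, String methodName) throws Exception {
        Method m = cls.getMethod(methodName);
        return m.getAnnotation(Test.class).description();
    }
}
```

Reflection over the annotation is what lets the platform store description, method name, priority and enabled state per case without executing any test code.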
It should be noted that, in the embodiment of the present application, a visual billboard is provided for the user. In the process of practical application, after obtaining the at least one interface test case, the test platform may display it on the visual billboard, so that the user can visually see information such as the distributed code storage bin, the preset directory name, and the number of interface test cases.
203. Creating at least one test task to be executed according to the information of the application to be tested.
In the embodiment of the application, after the at least one interface test case and the to-be-tested application information are obtained, the to-be-tested application information actually includes information such as application, engineering, case description, availability and the like corresponding to each interface test case, so that the test platform can create at least one to-be-executed test task according to the to-be-tested application information and the application dimension. The test task to be executed may be a daily scheduling task or an iterative scheduling task, and the following describes the creation processes of the two tasks respectively:
For a daily scheduling task, the test platform can acquire the task creation time in the information of the application to be tested, acquire the daily task information in that information when detecting that the current time point reaches the task creation time, and create a daily scheduling task as a test task to be executed according to the daily task information. The user can set the task creation time and the task execution time on the test platform; for example, if the task creation time set by the user is every morning, the test platform creates the daily scheduling tasks through a timed task every morning, organized by application dimension;
For an iterative scheduling task, the test platform can continuously obtain the content released by the application deployment system. When application deployment feedback issued by the application deployment system is acquired, it indicates that the application has already been released in the application deployment system and a task needs to be created to test it, so the test platform reads at least one iterative task attribute indicated by the application deployment feedback and determines which kinds of iterative scheduling tasks need to be created. Then, the iterative task information related to the at least one iterative task attribute is acquired from the information of the application to be tested, and at least one iterative scheduling task matching the at least one iterative task attribute is created using the iterative task information as at least one test task to be executed. Specifically, an iterative scheduling task may be a research-and-development self-test task, a smoke test task, a function test task, a regression test task, and so on; the task attributes of the created iterative scheduling tasks are not specifically limited in the present application.
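The iterative-task creation step can be sketched as a simple mapping from the attributes carried by the deployment feedback to the matching task information; all names and the string-keyed data shapes below are assumptions made for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/**
 * Sketch: for each task attribute in the deployment feedback (e.g. "smoke",
 * "regression"), pull the matching task info from the application-under-test
 * information and create one iterative scheduling task for it.
 */
public class IterativeTaskFactory {
    /** A created scheduling task (fields are assumptions). */
    public static class TestTask {
        public final String app;
        public final String attribute;
        public final String info;
        public TestTask(String app, String attribute, String info) {
            this.app = app;
            this.attribute = attribute;
            this.info = info;
        }
    }

    public static List<TestTask> create(String app,
                                        List<String> feedbackAttributes,
                                        Map<String, String> appTaskInfo) {
        List<TestTask> tasks = new ArrayList<>();
        for (String attr : feedbackAttributes) {
            String info = appTaskInfo.get(attr);   // iterative task info for this attribute
            if (info != null) {                    // only attributes the application has info for
                tasks.add(new TestTask(app, attr, info));
            }
        }
        return tasks;
    }
}
```

Feedback attributes with no matching task information are simply skipped, so a deployment can trigger any subset of self-test, smoke, function and regression tasks.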
204. Updating the task state of at least one test task to be executed, and reading the task state of each test task to be executed at intervals of preset scheduling time.
In the embodiment of the application, when a test task to be executed needs to be executed, this is first reflected in its task state; that is, the task state is updated to the execution identifier indicating that the task is to be executed. The test platform therefore updates the task state of whichever test task currently needs to be executed. In this embodiment, the execution identifier may be a word such as "to be executed". Since a test task to be executed may be a daily scheduling task or an iterative scheduling task, and different tasks are triggered at different times, in an optional embodiment the process of updating the task state of a test task to be executed is as follows:
for a daily scheduling task in at least one to-be-executed test task, the test platform reads task execution time in task information of the daily scheduling task, acquires an execution identifier for indicating to start executing the task when detecting that the current time point reaches the task execution time, and updates a task state of the daily scheduling task into the execution identifier;
for an iterative scheduling task in at least one test task to be executed, the test platform continuously acquires the content issued by the application deployment system, and updates the task state of the iterative scheduling task to be an execution identifier when receiving a task execution instruction issued by the application deployment system. Specifically, the test platform may monitor the content issued by the application deployment system based on an MQ (Message Queue), so as to update the task state of the test task to be executed in time.
Further, a preset scheduling duration, such as 10 seconds or 15 seconds, is set in the test platform, and every preset scheduling duration the test platform reads the task state of each test task to be executed and determines which task states have changed, so as to start executing those tasks in time. It should be noted that, because the version of the application to be tested deployed each time is different, when a test task to be executed starts to be executed, the test platform may record the version information of the current application deployment and associate it with the test condition, so that subsequent testers can maintain, regulate and control the application according to version, and traceability is ensured.
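The state-polling loop described above can be sketched with a scheduled executor; the execution-identifier string, the field names and the 10-second interval are assumptions taken from the surrounding text:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Sketch: every preset scheduling duration, read each task's state and
 * start the ones whose state carries the execution identifier.
 */
public class TaskStatePoller {
    static final String EXECUTE_MARK = "to be executed"; // the execution identifier

    public final Map<String, String> taskStates = new ConcurrentHashMap<>();

    /** One polling pass: collect the ids of tasks marked for execution. */
    public List<String> pollOnce() {
        List<String> toStart = new ArrayList<>();
        taskStates.forEach((id, state) -> {
            if (EXECUTE_MARK.equals(state)) {
                toStart.add(id);
                taskStates.put(id, "running"); // hand the task over to execution
            }
        });
        return toStart;
    }

    /** Repeat pollOnce every `seconds` seconds, as the platform does every 10 s or so. */
    public ScheduledExecutorService start(long seconds) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(this::pollOnce, 0, seconds, TimeUnit.SECONDS);
        return scheduler;
    }
}
```

A ConcurrentHashMap is used so that the daily trigger and the MQ listener can flip task states while the polling pass is iterating.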
205. Generating a configuration file for testing, as the configuration file to be executed, for the test task to be executed whose task state is set as the execution identifier.
In the embodiment of the application, when a test task starts to be executed, the test platform generates a configuration file for testing, as the configuration file to be executed, for the test task whose task state is set as the execution identifier, so that the interface test cases corresponding to the task are executed based on that file. In an alternative embodiment, the process of generating the configuration file to be executed is as follows:
firstly, when the test platform inquires that the task state of a to-be-executed test task in at least one to-be-executed test task is an execution identifier, determining an interface test case corresponding to the to-be-executed test task in at least one interface test case, and reading at least one to-be-tested project indicated by the interface test case. And then, the test platform traverses the test requirement of each project to be tested in at least one project to be tested, assembles the test codes stored in the distributed code storage bin according to the test requirement of each project to be tested, and generates a configuration file for testing for each project to be tested as a to-be-executed configuration file of the to-be-executed test task.
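The per-project assembly step can be sketched as building a TestNG-style suite document from a project's case class list; the class below is an illustrative assumption, and the surrounding repository traversal and file writing are omitted:

```java
import java.util.List;

/**
 * Sketch: assemble a per-project TestNG suite file from the project's
 * interface test case classes pulled from the code repository.
 */
public class SuiteXmlBuilder {
    public static String build(String suiteName, String testName, List<String> caseClasses) {
        StringBuilder xml = new StringBuilder();
        xml.append("<suite name=\"").append(suiteName).append("\">\n");
        xml.append("  <test name=\"").append(testName).append("\">\n");
        xml.append("    <classes>\n");
        for (String cls : caseClasses) {
            // One <class> entry per interface test case class of this project.
            xml.append("      <class name=\"").append(cls).append("\"/>\n");
        }
        xml.append("    </classes>\n  </test>\n</suite>\n");
        return xml.toString();
    }
}
```

Calling this once per project to be tested yields one suite file per project, which is then handed to the task execution thread.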
In the practical application process, if the distributed code storage bin is a GIT warehouse, the test platform may traverse the projects of the GIT warehouse according to the interface test cases corresponding to the test task to be executed, and assemble the suiteXml of TestNG using the test code in the GIT warehouse according to the interface test cases of each project. Since a suiteXml file is generated for each project, after the suiteXml files are generated, as shown in fig. 2B, the correspondence among the name of the GIT warehouse, the name of the application to be tested, the name of the project, the name of the directory where the suiteXml file is located, and the number of cases can be established on the visual billboard with the project as the dimension, and the name of the person in charge can be added to the correspondence, so that the specific case condition of each project is visually displayed and can be conveniently viewed by the user.
206. Executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed.
In the embodiment of the application, after the configuration file to be executed is generated, the test platform executes the interface test case corresponding to the test task to be executed based on the configuration file to be executed. In an optional embodiment, the test platform may call a task execution thread, execute the configuration file to be executed, acquire log information generated when the configuration file to be executed is executed, and record the log information into the log file to realize recording of an execution process.
Specifically, in the process of practical application, the task execution thread "mvn test -U -Dsurefire.suiteXmlFiles" may be used to execute the configuration file to be executed, so as to execute the related interface test cases.
It should be noted that, in the embodiment of the present application, a visual billboard is provided for the user. In the process of practical application, after executing the interface test cases corresponding to the test task to be executed based on the configuration file to be executed, the test platform may use the visual billboard to display the at least one test task to be executed, the at least one interface test case, and the corresponding execution state. Specifically, a refresh button may be set for each test task to be executed, so that the user can obtain the latest execution state of the related interface test cases by triggering the corresponding refresh button. Furthermore, the test platform can also set run and cancel-run buttons for each test task to be executed on the visual billboard, and the user can control the running and stopping of a specified test task by triggering the corresponding button. The test platform thus suspends the execution of the interface test cases corresponding to a test task when detecting that the user requests, based on the visual billboard, to stop executing it, and restarts the execution of the interface test cases when detecting that the user requests, based on the visual billboard, to continue executing it.
In addition, since run and suspend buttons can be set for each test task to be executed, in this application the user can in fact also select which test tasks to execute, and it is not necessary to execute all test tasks to be executed uniformly. If it is detected that the user requests, based on the visual billboard, the execution of a test task for which a configuration file to be executed has been successfully generated, the test platform executes the interface test cases related to that test task based on its configuration file to be executed, and the other test tasks not selected by the user can be temporarily suspended.
207. When the execution ending event is detected, storing a log to be output indicating the execution condition of the interface test case corresponding to the test task to be executed, and displaying the execution result of the interface test case by adopting a visual billboard of the user terminal.
In the embodiment of the application, after the test starts to be executed, the test platform can continuously monitor the execution condition of each interface test case, forming and storing a log file. Specifically, a monitoring thread may be invoked to monitor the log file, and a preset monitoring interval such as 5 seconds or 6 seconds may be set, so that every preset monitoring interval the test platform stores the updated content of the log file to the database, thereby recording the execution condition.
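The monitoring thread's work can be sketched as reading only the lines appended to the log file since the previous pass, which the platform would then persist to the database; the class and its names are assumptions:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

/**
 * Sketch: every preset monitoring interval, return the log lines appended
 * since the previous pass so only the updated content is stored.
 */
public class LogTailer {
    private final Path logFile;
    private int linesSeen = 0;

    public LogTailer(Path logFile) {
        this.logFile = logFile;
    }

    /** Returns the lines appended since the previous call. */
    public List<String> readNew() throws IOException {
        List<String> all = Files.readAllLines(logFile);
        // Guard against the file having been truncated between passes.
        List<String> fresh = all.subList(Math.min(linesSeen, all.size()), all.size());
        linesSeen = all.size();
        return List.copyOf(fresh);
    }
}
```

Only the delta is stored per pass, so repeated 5-second polls do not re-write the whole log into the database.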
In addition, the test platform monitors whether the test needs to be stopped, and determines that execution has ended when an execution-end event is detected. The execution-end event may cover various conditions: for example, the test files of every project to be tested in the configuration file to be executed have finished executing; or the running time of the file has timed out; or an execution-end instruction is received, i.e., manual termination. Correspondingly, when an execution-end event is detected, the test platform extracts all the update content related to the log file from the database as the log to be output, and uploads the log to be output to a distributed storage system for storage, where the distributed storage system may be TFS (Team Foundation Server, a software management and development tool), and the stored log to be output is indexed by a TFS key. The test platform also parses the log to be output, determines the execution progress of the interface test cases corresponding to the test task to be executed, and updates this execution progress, as the execution result of the test task, into the execution state corresponding to the task. Because a visual billboard is provided for the user in the embodiment of the application, in the process of practical application the test platform can display the test task with the updated execution state on the visual billboard, so that the user can conveniently learn the execution condition of the task. Specifically, referring to fig. 2C, a correspondence among the task ID, the name of the application to be tested, the total number of cases, the number of successes, the success rate, the test result, the test time, the test report, and the execution log is established, each piece of data after test execution is displayed in columns, and viewing buttons are provided in the test report and execution log columns, so that the user can view the detailed test report and the test log file by triggering the buttons.
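The end-of-execution conditions named above can be sketched as one simple check; the enum and every name below are assumptions made for illustration:

```java
/**
 * Sketch of the three execution-end conditions: all projects finished,
 * running time exceeded, or a manual execution-end instruction received.
 */
public class EndEventDetector {
    public enum EndEvent { ALL_PROJECTS_DONE, TIMEOUT, MANUAL_STOP, NONE }

    public static EndEvent detect(int projectsDone, int projectsTotal,
                                  long elapsedMillis, long timeoutMillis,
                                  boolean stopRequested) {
        if (stopRequested) return EndEvent.MANUAL_STOP;        // manual termination
        if (elapsedMillis >= timeoutMillis) return EndEvent.TIMEOUT;
        if (projectsDone >= projectsTotal) return EndEvent.ALL_PROJECTS_DONE;
        return EndEvent.NONE;                                  // keep executing
    }
}
```

When any value other than NONE is returned, the platform would upload the collected log and update the task's execution state; otherwise it keeps executing and collecting the log.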
Further, in the process of practical application, after the test for a certain project to be tested is completed, the test platform may automatically determine whether the application to be tested has other projects to be tested. If so, the tests for the other projects continue automatically and the process described in step 206 is repeated; if not, the test is determined to be complete and the execution result is updated.
In the actual application process, if the user has set a preset result notifying party in the test platform, then after executing the interface test cases corresponding to the test task to be executed based on the configuration file to be executed, the test platform may obtain the preset result notifying party and send it the execution results of the interface test cases, for example by mail or by in-application message; the notification mode is not specifically limited in the present application.
Further, after the test platform outputs the execution results, some of them may show a poor running effect, and the user may then update and optimize the test code or the application. If the user uploads the test code again, the test process of steps 202 to 207 above is executed again.
Therefore, with the technical scheme in the embodiment of the application, after a tester writes automated test code, the test code is conveniently executed, and a set of standard automated test flows is formed based on the test platform. Based on the test platform, the automated cases, applications and interfaces written by testers are visualized, and the creation, scheduling and execution of tasks are supported across teams and across applications, improving test efficiency with high coverage. Moreover, the test platform interfaces with the application deployment system: when an application is released, the test platform actively associates the execution and iteration of the automated cases and serves as a quality gate, which reduces complex test flows and human-resource investment and allows system defects to be found faster and earlier.
In summary, the flow of the visual test execution method in the actual application process provided by the present application can be summarized as follows:
referring to FIG. 2D, the tester writes test code and uploads the test code into the GIT repository. And the test platform scans the test codes in the GIT warehouse at regular time, and analyzes the codes meeting the standard interface in the GIT warehouse to obtain an interface test case. Then, the testing platform creates a testing task to be executed according to the application dimension. The test tasks to be executed can comprise daily scheduling tasks and iterative scheduling tasks, and the test platform executes the daily scheduling tasks regularly; for the iterative scheduling task, the testing platform is in butt joint with the application deployment system, and when the application deployment system deploys the application and requests to test the application, the testing platform executes the task to be tested, so that a series of scene tests such as research and development tests, smoking tests, function tests, regression tests and the like of the application are realized. When testing is carried out, the test platform executes the script, generates a to-be-executed configuration file according to the interface test case, calls the task thread to execute the to-be-executed configuration file, and carries out the execution of the interface test case based on the to-be-executed configuration file. In the execution process, the test platform monitors the execution log of the interface test case, continuously records the log and forms a log file. When the execution is finished or overtime or the user manually stops executing, the currently recorded log file is uploaded to the TFS system for storage, and if the request for stopping executing is not detected, the interface test case is continuously executed and the execution log is continuously collected for recording. 
When all the interface test cases related to a certain project have been executed, the test platform checks whether other projects need to be tested. If not, the execution conditions of the project's interface test cases are counted, for example what the execution pass rate is, and the execution results are output. If other projects need to be tested, the configuration file to be executed is generated again for that project, the task thread is called to execute it, and the interface test cases are executed based on the configuration file to be executed.
Furthermore, the applications, their corresponding interface test cases, daily scheduling tasks, iterative scheduling tasks and execution results can be visualized on the test platform, and the user can manually run or stop tasks. Referring specifically to fig. 2E, the test platform may display the table shown in fig. 2C on the user-facing visual billboard, set a task ID for each test task to be executed, and associate the test task with the name of the application to be tested based on the task ID. The name of the test principal can be displayed on the visual billboard, so that the user can conveniently find the tasks he or she is responsible for. Where multiple test environments exist, a test-environment column can be set, with the name of the test environment displayed in it. The number of cases, the execution results and the execution pass rate can be displayed on the visual billboard, together with buttons for viewing reports and logs, so that the user can trigger the corresponding button to view a report or log. Further, the visual billboard also provides operation buttons such as run, cancel run and refresh, as shown in fig. 2E, so that the user can conveniently run or stop a specified test task manually and learn the execution condition of a task in time by refreshing its state. In addition, to make it convenient for the user to selectively query all the test tasks related to a certain application, test principal, execution result, group or state, as shown in fig. 2E, the visual billboard may further provide input boxes for the application name, test principal, execution result, group and enabled state, along with a query button, so that the user can enter a specific characteristic in the input boxes to query the state of the related tasks.
In addition, because the test tasks created in the application actually involve iterative scheduling tasks of different types, such as function tests and regression tests, the different types correspond to different test scenarios: a function test, for example, is used for testing a functional scenario and is the responsibility of the function test team. Therefore, so that different teams can find, by scenario, the test conditions of the multiple applications in a fixed scenario for which they are responsible, in the embodiment of the application the test platform provides, based on the visual billboard, a function for viewing test execution by scenario. As shown in fig. 2F, taking a test involving a functional scenario and a regression scenario as an example, the test platform may provide on the visual billboard an area displaying the test condition of the functional scenario and an area displaying the test condition of the regression scenario. In the area for the functional scenario, detailed information such as the scenario name, the name of the application to be tested, the environment, the number of cases, whether it is enabled, the success rate and the test result is displayed, together with buttons for viewing the report and the log; similarly, the same detailed information and buttons are displayed in the area for the regression scenario, so that the user can learn in time, based on the currently displayed content, which applications are participating in the test in each scenario and the details of each application's test, realizing the visualization of the test.
According to the method provided by the embodiment of the present application, test tasks are automatically created and executed at the granularity of an application, testing can proceed according to the application's interface conditions, and the visual billboard lets the user clearly see the test status. This simplifies the test process and improves test coverage while also improving test efficiency.
Further, as a specific implementation of the method shown in fig. 1, an embodiment of the present application provides a visual test execution apparatus, as shown in fig. 3, the apparatus includes: a parsing module 301, a creating module 302, a generating module 303 and an executing module 304.
The parsing module 301 is configured to parse, every preset test duration, the test code uploaded by a user to obtain at least one interface test case and application information to be tested;
the creating module 302 is configured to create at least one test task to be executed according to the to-be-tested application information;
the generating module 303 is configured to, when it is found that a task state of a to-be-executed test task in the at least one to-be-executed test task is an execution identifier, generate a to-be-executed configuration file for the to-be-executed test task according to an interface test case corresponding to the to-be-executed test task in the at least one interface test case;
the execution module 304 is configured to execute the interface test case corresponding to the test task to be executed based on the configuration file to be executed, and display an execution result of the interface test case by using a visual bulletin board of the user terminal.
In a specific application scenario, the parsing module 301 is configured to obtain, every preset test duration, a warehouse address configured in a database and determine the distributed code storage bin indicated by the warehouse address, where the distributed code storage bin is used to store the test code uploaded by the user; read the distributed code storage bin and parse the test code stored in it; obtain a preset directory name and preset characters, and extract from the test code the code that is located under the preset directory name and whose file names end with the preset characters, to form the at least one interface test case; parse a tool configuration file in the test code, obtain the name of the application to be tested from the tool configuration file, obtain the test annotation of a preset unit test framework, and read at least one piece of application information marked with the test annotation in the tool configuration file; and take the name of the application to be tested and the at least one piece of application information as the application information to be tested and store it in the database.
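As an illustration of the case-extraction step above, the following is a minimal sketch in Python, assuming the test code has already been pulled from the distributed code storage bin to a local directory. The directory name `tests` and the suffix `_test.py` are hypothetical stand-ins for the preset directory name and preset characters; they are not the platform's actual configuration.

```python
import os

def extract_interface_cases(code_root, preset_dir="tests", preset_suffix="_test.py"):
    """Walk the checked-out test code and collect files that sit under the
    preset directory name and whose file names end with the preset characters."""
    cases = []
    for dirpath, _dirnames, filenames in os.walk(code_root):
        # Only consider files located somewhere below a directory named preset_dir.
        parts = os.path.normpath(dirpath).split(os.sep)
        if preset_dir not in parts:
            continue
        for name in filenames:
            if name.endswith(preset_suffix):
                cases.append(os.path.join(dirpath, name))
    return sorted(cases)
```

Each returned path would then be treated as one interface test case; in practice the matched code, not just the path, would be recorded.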
In a specific application scenario, the creating module 302 is configured to obtain the task creation time in the application information to be tested, obtain the daily task information in the application information to be tested when detecting that the current time point reaches the task creation time, and create a daily scheduling task as the test task to be executed according to the daily task information; and/or to continuously acquire content issued by an application deployment system, and, when acquiring application deployment feedback issued by the application deployment system, read at least one iteration task attribute indicated by the feedback, acquire iteration task information related to the at least one iteration task attribute from the application information to be tested, and create, using the iteration task information, at least one iteration scheduling task matching the at least one iteration task attribute as the at least one test task to be executed.
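The two creation paths above can be sketched as follows. The dictionary keys (`task_create_time`, `iteration_attrs`, and so on) are illustrative assumptions, not the platform's actual data model.

```python
from datetime import datetime

def maybe_create_daily_task(app_info, now=None):
    """Create a daily scheduling task once the current time point reaches the
    task creation time recorded in the application information under test."""
    now = now or datetime.now()
    create_at = datetime.strptime(app_info["task_create_time"], "%H:%M")
    if (now.hour, now.minute) < (create_at.hour, create_at.minute):
        return None  # creation time not reached yet
    return {"type": "daily", "app": app_info["app_name"],
            "info": app_info["daily_task_info"], "state": "created"}

def create_iteration_tasks(app_info, deploy_feedback):
    """Create one iterative scheduling task per iteration-task attribute
    carried in the deployment system's feedback."""
    tasks = []
    for attr in deploy_feedback["iteration_attrs"]:
        info = app_info["iteration_task_info"].get(attr)
        if info is not None:
            tasks.append({"type": "iteration", "attr": attr,
                          "info": info, "state": "created"})
    return tasks
```

In the apparatus, the first function would run on a timer while the second would be driven by messages from the deployment system.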
In a specific application scenario, the apparatus further includes:
The updating module is configured to, for a daily scheduling task in the at least one test task to be executed, read the task execution time in the task information of the daily scheduling task, and, when detecting that the current time point reaches the task execution time, acquire the execution identifier for indicating that execution of the task should start and update the task state of the daily scheduling task to the execution identifier; and/or,
the updating module is further configured to continuously obtain, for an iterative scheduling task in the at least one test task to be executed, content issued by an application deployment system, and update a task state of the iterative scheduling task to the execution identifier when receiving a task execution instruction issued by the application deployment system.
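A minimal sketch of the two state-update paths handled by the updating module; the `EXEC_FLAG` value, the `exec_time` field and the message layout are hypothetical illustrations.

```python
from datetime import datetime

EXEC_FLAG = "EXECUTING"  # the execution identifier indicating the task should start

def update_daily_state(task, now=None):
    """Flip a daily scheduling task to the execution identifier once the
    current time point reaches the task execution time in its task information."""
    now = now or datetime.now()
    run_at = datetime.strptime(task["exec_time"], "%H:%M").time()
    if now.time() >= run_at:
        task["state"] = EXEC_FLAG
    return task

def update_iteration_state(task, message):
    """Flip an iterative scheduling task to the execution identifier when a
    task execution instruction arrives from the application deployment system."""
    if message.get("instruction") == "execute" and message.get("task_id") == task["id"]:
        task["state"] = EXEC_FLAG
    return task
```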
In a specific application scenario, the generating module 303 is configured to read the task state of each test task to be executed every preset scheduling time; when the task state of a test task to be executed in the at least one test task to be executed is found to be the execution identifier, determine the interface test case corresponding to that test task among the at least one interface test case, and read at least one project to be tested indicated by the interface test case; and traverse the test requirement of each project to be tested in the at least one project to be tested, assemble the test code stored in the distributed code storage bin according to the test requirement of each project to be tested, and generate a configuration file for testing for each project to be tested as the configuration file to be executed of the test task to be executed.
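The polling-and-assembly logic can be illustrated as below. Representing a configuration file as a plain dict (rather than an actual assembled file) is a simplifying assumption for the sketch.

```python
def generate_pending_configs(tasks, cases_by_task, exec_flag="EXECUTING"):
    """Scan all test tasks; for each task whose state equals the execution
    identifier, assemble one configuration (here a plain dict) per project
    to be tested indicated by the task's interface test case."""
    configs = {}
    for task in tasks:
        if task["state"] != exec_flag:
            continue  # only tasks flagged for execution get configuration files
        case = cases_by_task[task["id"]]
        configs[task["id"]] = [
            {"project": project, "case": case["name"],
             "requirement": case["requirements"][project]}
            for project in case["projects"]
        ]
    return configs
```

In the apparatus this scan would repeat at every preset scheduling interval, and the assembled configurations would be written out as files to be executed.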
In a specific application scenario, the execution module 304 is configured to invoke a task execution thread and execute the configuration file to be executed; acquire log information generated while the configuration file to be executed is executed and record the log information into a log file; invoke a monitoring thread, monitor the log file based on the monitoring thread, and store the updated content of the log file to a database at every preset monitoring interval; when an end-of-execution event is detected, extract all updated content related to the log file from the database as the log to be output and upload the log to be output to a distributed storage system, where the end-of-execution event is one of: execution of the configuration file to be executed being completed, the file running time exceeding a timeout, and an end-of-execution instruction being received; parse the log to be output, determine the execution progress of the interface test case corresponding to the test task to be executed, and update the execution progress, as the execution result of the test task to be executed, into the execution state corresponding to the test task to be executed; and display the test task to be executed with the updated execution state on the visual billboard.
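The task-thread-plus-monitoring-thread pattern described above can be sketched as follows. An in-memory list stands in for the database, the end-of-execution event is simplified to normal completion, and the step strings are hypothetical log content.

```python
import os
import tempfile
import threading
import time

def run_with_log_monitor(steps, poll_interval=0.05):
    """Run a task thread that records log information to a log file while a
    monitoring thread tails the file, persisting each batch of new content."""
    fd, log_path = tempfile.mkstemp(suffix=".log")
    os.close(fd)
    db_updates = []           # stand-in for database rows of log updates
    done = threading.Event()  # signals the end-of-execution event

    def task():
        with open(log_path, "a") as log:
            for step in steps:          # "executing" the configuration file
                log.write(step + "\n")  # log information generated by execution
                log.flush()
        done.set()

    def monitor():
        offset = 0
        # Keep tailing until execution has ended and all content is consumed.
        while not done.is_set() or offset < os.path.getsize(log_path):
            with open(log_path, "rb") as log:
                log.seek(offset)
                raw = log.read()
            if raw:
                db_updates.append(raw.decode())  # store updated content
                offset += len(raw)
            time.sleep(poll_interval)

    workers = [threading.Thread(target=task), threading.Thread(target=monitor)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    os.remove(log_path)
    # All persisted updates joined together form the log to be output.
    return "".join(db_updates)
```

A real implementation would also cover the timeout and end-instruction events, upload the joined log to distributed storage, and parse it into an execution result.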
In a specific application scenario, the apparatus further includes:
The configuration module is configured to receive the test code uploaded by the user, store the test code into a distributed code storage bin, acquire the warehouse address of the distributed code storage bin, and configure the warehouse address in a database, where the test code is written by the user and marked with test annotations of a preset unit test framework; and/or,
The execution module 304 is further configured to, after executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed, display the at least one test task to be executed, the at least one interface test case and the corresponding execution state on the visual billboard, suspend execution of the interface test case corresponding to the test task to be executed when detecting that the user requests, via the visual billboard, to stop execution of the test task to be executed, and resume execution of the interface test case when detecting that the user requests, via the visual billboard, to continue execution of the test task to be executed; and/or,
The execution module 304 is further configured to, after the configuration file to be executed is generated, display the at least one test task to be executed on the visual billboard, mark each test task to be executed for which the configuration file to be executed was successfully generated, and, if detecting via the visual billboard that the user requests execution of a test task to be executed for which the configuration file to be executed was successfully generated, execute the interface test case related to that test task based on its corresponding configuration file to be executed; and/or,
and the notification module is used for acquiring a preset result notification party after executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed, and sending the execution result of the interface test case to the preset result notification party.
The apparatus provided by the embodiment of the present application automatically creates and executes test tasks at the granularity of an application, can test according to the application's interface conditions, and adopts a visual billboard so that the user can clearly see the test status, simplifying the test process and improving test efficiency while also improving test coverage.
It should be noted that other corresponding descriptions of the functional units related to the visual test execution apparatus provided in the embodiment of the present application may refer to the corresponding descriptions in fig. 1 and fig. 2A to fig. 2F, and are not repeated herein.
In an exemplary embodiment, referring to fig. 4, a computer device is further provided. The computer device includes a bus, a processor, a memory, a communication interface, an input/output interface and a display device, and these functional units can communicate with each other through the bus. The memory stores a computer program, and the processor executes the program stored in the memory to perform the visual test execution method in the above embodiments.
A computer-readable storage medium is also provided, on which a computer program is stored which, when executed by a processor, implements the steps of the visual test execution method.
Through the above description of the embodiments, those skilled in the art will clearly understand that the present application can be implemented by hardware, or by software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the implementation scenarios of the present application.
Those skilled in the art will appreciate that the figures are merely schematic representations of one preferred implementation scenario and that the blocks or flow diagrams in the figures are not necessarily required to practice the present application.
Those skilled in the art will appreciate that the modules in the devices in the implementation scenario may be distributed in the devices in the implementation scenario according to the description of the implementation scenario, or may be located in one or more devices different from the present implementation scenario with corresponding changes. The modules of the implementation scenario may be combined into one module, or may be further split into multiple sub-modules.
The above application serial numbers are for description purposes only and do not represent the superiority or inferiority of the implementation scenarios.
The above disclosure is only a few specific implementation scenarios of the present application, but the present application is not limited thereto, and any variations that can be made by those skilled in the art are intended to fall within the scope of the present application.

Claims (10)

1. A visual test execution method, comprising:
analyzing the test codes uploaded by the user every preset test duration to obtain at least one interface test case and to-be-tested application information;
creating at least one test task to be executed according to the application information to be tested;
when the task state of a to-be-executed test task in the at least one to-be-executed test task is inquired to be an execution identifier, generating a to-be-executed configuration file for the to-be-executed test task according to an interface test case corresponding to the to-be-executed test task in the at least one interface test case;
and executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed, and displaying the execution result of the interface test case by adopting a visual billboard of the user terminal.
2. The method of claim 1, wherein analyzing the test code uploaded by the user every preset test duration to obtain at least one interface test case and application information to be tested comprises:
acquiring a warehouse address configured in a database every preset test duration, and determining a distributed code storage bin indicated by the warehouse address, wherein the distributed code storage bin is used for storing the test code uploaded by the user;
reading the distributed code storage bin, and analyzing the test codes stored in the distributed code storage bin;
acquiring a preset directory name and preset characters, and extracting from the test code the code that is located under the preset directory name and whose file name ends with the preset characters, to form the at least one interface test case;
analyzing a tool configuration file in the test code, acquiring an application name to be tested in the tool configuration file, acquiring a test annotation of a test framework of a preset unit, and reading at least one piece of application information marked with the test annotation in the tool configuration file;
and taking the name of the application to be tested and the at least one piece of application information as the information of the application to be tested, and storing the information of the application to be tested to the database.
3. The method of claim 1, wherein the creating at least one test task to be executed according to the application information to be tested comprises:
acquiring task creation time in the to-be-tested application information, acquiring daily task information in the to-be-tested application information when detecting that the current time point reaches the task creation time, and creating a daily scheduling task as the to-be-executed test task according to the daily task information; and/or,
continuously acquiring content issued by an application deployment system, reading at least one iteration task attribute indicated by the application deployment feedback when acquiring the application deployment feedback issued by the application deployment system, acquiring iteration task information related to the at least one iteration task attribute from the to-be-tested application information, and creating at least one iteration scheduling task matched with the at least one iteration task attribute by using the iteration task information as the at least one to-be-executed test task.
4. The method of claim 1, wherein after creating at least one test task to be performed according to the application information to be tested, the method further comprises:
for a daily scheduling task in the at least one to-be-executed test task, reading task execution time in task information of the daily scheduling task, and when detecting that the current time point reaches the task execution time, acquiring the execution identifier for indicating to start executing the task, and updating the task state of the daily scheduling task to the execution identifier; and/or,
continuously acquiring the content issued by the application deployment system for the iterative scheduling task in the at least one test task to be executed, and updating the task state of the iterative scheduling task to the execution identifier when receiving a task execution instruction issued by the application deployment system.
5. The method according to claim 1, wherein when it is found that a task state of a to-be-executed test task in the at least one to-be-executed test task is an execution identifier, generating a to-be-executed configuration file for the to-be-executed test task according to an interface test case corresponding to the to-be-executed test task in the at least one interface test case, includes:
reading the task state of each test task to be executed every other preset scheduling time;
when the task state of a test task to be executed in the at least one test task to be executed is found to be an execution identifier, determining an interface test case corresponding to the test task to be executed in the at least one interface test case, and reading at least one project to be tested indicated by the interface test case;
traversing the test requirement of each project to be tested in the at least one project to be tested, assembling the test codes stored in the distributed code storage bin according to the test requirement of each project to be tested, and generating a configuration file for testing for each project to be tested as a to-be-executed configuration file of the to-be-executed test task.
6. The method according to claim 1, wherein the executing of the interface test case corresponding to the test task to be executed is performed based on the configuration file to be executed, and a visual bulletin board of a user terminal is adopted to display an execution result of the interface test case, and the method includes:
calling a task execution thread and executing the configuration file to be executed;
acquiring log information generated when the configuration file to be executed is executed, and recording the log information into a log file;
calling a monitoring thread, monitoring the log file based on the monitoring thread, and storing the updated content of the log file to a database at intervals of preset monitoring intervals;
when an execution ending event is detected, extracting all updated content related to the log file from the database as a log to be output, and uploading the log to be output to a distributed storage system, wherein the execution ending event comprises one of: completion of the execution of the configuration file to be executed, the file running time exceeding a timeout, and receipt of an execution ending instruction;
analyzing the log to be output, determining the execution progress of the interface test case corresponding to the test task to be executed, and updating the execution progress to the execution state corresponding to the test task to be executed by taking the execution progress as the execution result of the test task to be executed;
and displaying the test task to be executed after the execution state is updated by adopting the visual billboard.
7. The method of claim 1, further comprising:
receiving the test code uploaded by the user, storing the test code into a distributed code storage bin, acquiring a warehouse address of the distributed code storage bin, and configuring the warehouse address in a database, wherein the test code is written by the user and marked with a test annotation of a preset unit test framework; and/or,
after executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed, displaying the at least one test task to be executed, the at least one interface test case and the corresponding execution state by using the visual billboard, suspending the execution of the interface test case corresponding to the test task to be executed when detecting that a user requests, via the visual billboard, to stop the execution of the test task to be executed, and resuming the execution of the interface test case when detecting that the user requests, via the visual billboard, to continue executing the test task to be executed; and/or,
after the configuration file to be executed is generated, displaying the at least one test task to be executed by using the visual billboard, marking each test task to be executed for which the configuration file to be executed was successfully generated, and, if detecting via the visual billboard that a user requests execution of a test task to be executed for which the configuration file to be executed was successfully generated, executing an interface test case related to that test task based on its corresponding configuration file to be executed; and/or,
after executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed, acquiring a preset result notification party, and sending the execution result of the interface test case to the preset result notification party.
8. A visual test execution apparatus, comprising:
the analysis module is used for analyzing the test codes uploaded by the user every preset test duration to acquire at least one interface test case and to-be-tested application information;
the creating module is used for creating at least one testing task to be executed according to the information of the application to be tested;
the generating module is used for generating a to-be-executed configuration file for the to-be-executed test task according to an interface test case corresponding to the to-be-executed test task in the at least one interface test case when the task state of the to-be-executed test task in the at least one to-be-executed test task is inquired to be an execution identifier;
and the execution module is used for executing the interface test case corresponding to the test task to be executed based on the configuration file to be executed and displaying the execution result of the interface test case by adopting a visual billboard of the user terminal.
9. An electronic device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 7.
CN202210674814.8A 2022-06-15 2022-06-15 Visual test execution method and device, electronic equipment and readable storage medium Pending CN114911712A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210674814.8A CN114911712A (en) 2022-06-15 2022-06-15 Visual test execution method and device, electronic equipment and readable storage medium


Publications (1)

Publication Number Publication Date
CN114911712A true CN114911712A (en) 2022-08-16

Family

ID=82769898

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210674814.8A Pending CN114911712A (en) 2022-06-15 2022-06-15 Visual test execution method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN114911712A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115952100A (en) * 2023-01-10 2023-04-11 北京百度网讯科技有限公司 Interface test method, device, system, electronic equipment and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115952100A (en) * 2023-01-10 2023-04-11 北京百度网讯科技有限公司 Interface test method, device, system, electronic equipment and storage medium


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination