WO2019095580A1 - Test method and apparatus, computer device and readable storage medium - Google Patents

Test method and apparatus, computer device and readable storage medium

Info

Publication number
WO2019095580A1
WO2019095580A1 (PCT/CN2018/077273)
Authority
WO
WIPO (PCT)
Prior art keywords
test
manual
test case
automated
task
Prior art date
Application number
PCT/CN2018/077273
Other languages
English (en)
Chinese (zh)
Inventor
余巍
Original Assignee
平安科技(深圳)有限公司
Priority date
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2019095580A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3676 Test management for coverage analysis
    • G06F 11/3684 Test management for test design, e.g. generating new test cases
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites

Definitions

  • the present application relates to the field of computer technology, and in particular, to a test method, apparatus, computer device, and readable storage medium.
  • the test evaluates various performance indicators of the system under normal, peak, and abnormal load conditions.
  • conventionally, such systems are tested manually: the test cases are entered into the system under test one by one, the test results are judged from the output of the system under test, and the number of successful test cases is counted to obtain a test success rate. Because the cases must be executed one at a time, test efficiency is low.
  • in accordance with various embodiments disclosed herein, a test method, apparatus, computer device, and readable storage medium are provided.
  • a test method comprising: acquiring a constructed test task; selecting automated test cases corresponding to the constructed test task; executing the automated test cases according to the constructed test task; when all of the automated test cases have been executed, selecting manual test cases corresponding to the constructed test task; and selecting unexecuted manual test cases from the manual test cases according to the selected automated test cases, and outputting the unexecuted manual test cases for manual testing.
  • a test device comprising:
  • a test task acquisition module configured to acquire a constructed test task;
  • an automated test case selection module configured to select automated test cases corresponding to the constructed test task;
  • an execution module configured to execute the automated test cases according to the constructed test task;
  • a manual test case selection module configured to select manual test cases corresponding to the constructed test task when all of the automated test cases have been executed; and
  • an output module configured to select unexecuted manual test cases from the manual test cases according to the selected automated test cases, and to output the unexecuted manual test cases for manual testing.
  • a computer device comprising a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to implement the following steps: acquiring a constructed test task; selecting automated test cases corresponding to the constructed test task; executing the automated test cases according to the constructed test task; when all of the automated test cases have been executed, selecting manual test cases corresponding to the constructed test task; and selecting unexecuted manual test cases from the manual test cases according to the selected automated test cases, and outputting the unexecuted manual test cases for manual testing.
  • One or more non-volatile computer readable storage media storing computer readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of: acquiring a constructed test task; selecting automated test cases corresponding to the constructed test task; executing the automated test cases according to the constructed test task; when all of the automated test cases have been executed, selecting manual test cases corresponding to the constructed test task; and selecting unexecuted manual test cases from the manual test cases according to the selected automated test cases, and outputting the unexecuted manual test cases for manual testing.
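  • By way of illustration only, the following minimal Python sketch outlines the flow of the steps above; every function name and data field in it is a hypothetical assumption made for this sketch and is not part of the disclosure.

    def execute_automated_case(case):
        # Placeholder for invoking the real test script on an execution node.
        print("executing automated case", case["id"])

    def run_test_task(task, all_automated_cases, all_manual_cases):
        # Select the automated test cases that correspond to the built test task.
        automated = [c for c in all_automated_cases
                     if c["version"] == task["version"] and c["type"] == task["type"]]
        # Execute every selected automated test case.
        executed_ids = set()
        for case in automated:
            execute_automated_case(case)
            executed_ids.add(case["id"])
        # Select the manual test cases for the task and keep only those that
        # were not already covered by an executed automated case.
        manual = [c for c in all_manual_cases
                  if c["version"] == task["version"] and c["type"] == task["type"]]
        unexecuted = [c for c in manual if c["id"] not in executed_ids]
        return unexecuted  # output these for manual testing
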
  • FIG. 1 is an application environment diagram of a test method in an embodiment;
  • FIG. 2 is a flowchart of a test method in an embodiment;
  • FIG. 3 is a flowchart of step S206 in the embodiment shown in FIG. 2;
  • FIG. 5 is an interface diagram of all test tasks in an embodiment;
  • FIG. 7 is a schematic structural diagram of a test device in an embodiment;
  • FIG. 8 is a schematic structural diagram of a computer device in an embodiment.
  • FIG. 1 is an application environment diagram of a test method in an embodiment, comprising a computer device, a database, and several execution nodes. The computer device receives the constructed test task and, according to the test task, sends the test cases to the execution nodes; the execution nodes perform the test tasks; and the computer device collects the test results from the execution nodes and stores them in the database. The results of manual tests are stored in a manual test result storage system; the computer device can obtain the test results stored in the manual test result storage system and store them in the database for the user to query or to generate reports.
  • the manual test result storage system may be, for example, a DPM system, a TESTLINK system, a WIZARD system, or a JENKINS system.
  • a test method is provided. This embodiment is exemplified by applying the method to the computer device in FIG. 1 above.
  • computer readable instructions for testing run on the computer device, and the test method is implemented by executing those instructions.
  • the method specifically includes the following steps:
  • the user can build a test task on the computer device. An operation interface for automated testing can be provided on the computer device, and the task list, the node list, and statistical reports related to automated testing can be displayed on this interface. When the user opens the task list, new test tasks can be added, and built test tasks can be deleted or modified, for example by changing the execution nodes of a built test task.
  • the node list lists the servers and the like that can perform test tasks.
  • a statistical report shows the test results and test parameters of a test task, and can be displayed as a table or in another graphical manner.
  • the test cases corresponding to the test task are obtained. For example, the automated test cases may first be selected according to the version information of the test version of the built test task, and then, according to the test type, for example an API interface test or a user interface test, the corresponding type of automated test case is selected from the automated test cases of that version.
  • this step may be performed using a timed task: after the built test task is obtained, a timed task is set, and the automated test cases are selected at the start time of the timed task, so that the automated test is then performed. When there are many test tasks, a higher-priority test task may prevent the currently built test task from being executed immediately; the start time of the timed task can be preset according to the expected duration of the higher-priority test task.
  • the timed task ensures that the current test task is executed promptly after the higher-priority test task completes, without a person or the machine monitoring in real time whether the higher-priority task has finished. This saves human resources, and the computer device does not need to spend a large amount of resources on real-time monitoring of higher-priority test tasks, reducing waste of resources.
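  • a minimal sketch of such a timed task, using the threading.Timer class from Python's standard library; the task fields and the callback are assumptions made only for this illustration.

    import threading
    from datetime import datetime

    def schedule_automated_run(task, select_and_execute):
        # Wait until the preset start time of the timed task, then fire once.
        delay = max((task["start_time"] - datetime.now()).total_seconds(), 0)
        # The timer triggers case selection and execution at the start time,
        # so neither a person nor the machine has to poll whether the
        # higher-priority test task has finished.
        timer = threading.Timer(delay, select_and_execute, args=(task,))
        timer.start()
        return timer
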
  • the automated test case is executed to complete the built test task.
  • the automated test of the test task is completed.
  • each test involves not only automated testing but also manual testing, so the computer device obtains the manual test cases corresponding to the constructed test task.
  • the manual test cases can also be stored in a manual test result storage system, such as a DPM system, a TESTLINK system, a WIZARD system, or a JENKINS system, and the computer device can access these systems to obtain all of the manual test cases.
  • S210: Select unexecuted manual test cases from the manual test cases according to the selected automated test cases, and output the unexecuted manual test cases for manual testing.
  • the set of automated test cases is a subset of the set of manual test cases; every automated test case is also a manual test case. Once an automated test case has been executed, it does not need to be executed again, while the remaining manual test cases have not yet been executed. The computer device therefore outputs the unexecuted manual test cases, which the user then executes manually.
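  • because every automated test case is also a manual test case, the unexecuted manual test cases can be computed as a simple set difference, as in this illustrative sketch (the case identifiers are an assumption):

    def select_unexecuted_manual_cases(manual_cases, executed_automated_cases):
        # Identifiers of the automated cases that have already been executed.
        executed_ids = {case["id"] for case in executed_automated_cases}
        # A manual case still needs a manual run only if no executed
        # automated case already covers it.
        return [case for case in manual_cases if case["id"] not in executed_ids]
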
  • the above test method first selects the automated test cases corresponding to the constructed test task and executes them, then selects the manual test cases corresponding to the constructed test task and, according to the automated test cases that have already been executed, selects the unexecuted manual test cases from the manual test cases. This greatly reduces the number of manual tests and thereby improves test efficiency. Because the reliability of an automated test depends heavily on the quality of the test scripts, and poor-quality scripts produce unreliable results, the technical solution also retains manual testing; the test process does not depend entirely on automated testing, which guarantees the reliability of the test results.
  • FIG. 3 is a flowchart of step S206 in the embodiment shown in FIG. 2; executing the automated test cases according to the constructed test task may include the following steps:
  • S302 Select an online idle node from the execution node according to the constructed test task.
  • FIG. 4 is an interface diagram of a new test task in an embodiment
  • FIG. 5 is an interface diagram of all test tasks in an embodiment.
  • as shown in FIG. 4, when a test task is added, an execution node is selected. The selected execution node must, first, match the constructed test task and, second, be online.
  • matching the execution node with the built test task means that the execution node must have a test system or test framework corresponding to the test task installed.
  • a table may be maintained that records the node identifier of each execution node and the test system or test framework installed on that node.
  • the node identifier of an execution node may be, for example, its MAC address or another attribute that uniquely identifies the execution node.
  • the test system or test framework installed on an execution node may be identified by keywords or labels to facilitate querying.
  • a plurality of test tasks that have already been executed are listed, and the status of each execution node, such as online or offline, is still displayed in the execution node column, so that the user can run a test task again.
  • when there is no online idle node, an execution node whose maximum connection number is greater than or equal to 2 and whose current connection number is less than its maximum connection number can be selected, so that a limited number of execution nodes can perform a large number of test tasks, thereby improving the utilization rate of the execution nodes.
  • for example, suppose an execution node must be chosen from nodes A, B, and C. If node C is not online, the node can only be selected from node A and node B. If node A is online and idle, node A is directly selected as the execution node. If node A is online but not idle, a node must be selected from the remaining nodes; in this example the maximum connection number of node B is 3, which is greater than the number of test tasks it is currently connected to, so node B is selected as the execution node of the current test task. If two or more candidate execution nodes are found for the current test task, they are output to the interface as a list and the user selects one of them as the execution node.
  • S306 Send the automated test case to the selected execution node and execute the automated test case.
  • the automated test case is sent to the selected execution node, and the execution node invokes the corresponding script to execute the automated test case for automated testing.
  • the current state of the execution node and the maximum number of connections of the execution node are fully considered, and the resources of the execution node can be fully utilized to improve the efficiency of the test.
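  • the node-selection rule described above (prefer an online idle node, otherwise an online node whose maximum connection number is at least 2 and whose current connection number is below that maximum) could look like the following sketch; the node fields are illustrative assumptions, and matching by installed test system or framework is assumed to have been done beforehand.

    def select_execution_nodes(nodes):
        online = [n for n in nodes if n["online"]]
        # Prefer an idle node, i.e. one with no test task currently connected.
        idle = [n for n in online if n["current_connections"] == 0]
        if idle:
            return [idle[0]]
        # Otherwise allow busy nodes that can still accept another connection.
        candidates = [n for n in online
                      if n["max_connections"] >= 2
                      and n["current_connections"] < n["max_connections"]]
        # If several candidates remain, the caller lists them for the user to pick one.
        return candidates

    # The A/B/C example above: A is online but busy, B can take one more task, C is offline.
    nodes = [
        {"name": "A", "online": True, "current_connections": 2, "max_connections": 2},
        {"name": "B", "online": True, "current_connections": 1, "max_connections": 3},
        {"name": "C", "online": False, "current_connections": 0, "max_connections": 2},
    ]
    print([n["name"] for n in select_execution_nodes(nodes)])  # prints ['B']
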
  • the testing method may further include: obtaining a first quantity of automated test cases whose execution result is successful; obtaining, from the manual test result storage system, a second quantity of manual test cases whose execution result is successful; calculating a test success rate of the constructed test task according to the first quantity, the second quantity, and the number of manual use cases corresponding to the constructed test task; and, when the test success rate is lower than a preset value, outputting a result indicating that the test failed.
  • FIG. 6 is a schematic diagram of a test report in an embodiment, which includes the system name, task type, build time, duration, number of manual use cases, number of automated executions, automation coverage, number of successes, and number of failures. The system name is the name of the system corresponding to the test task being built, and the task type includes, for example, API interface tests and user interface tests.
  • the build time is the time at which the test task was established.
  • the duration is the time used to execute the automated test cases; the number of manual use cases is the total number of test cases.
  • the number of automated executions is the number of automated test cases.
  • the automated test coverage is the ratio of the number of automated test cases to the number of manual test cases.
  • the number of successes is the sum of the first quantity of successfully executed automated test cases and the second quantity of successfully executed manual test cases.
  • the number of failures is the number of manual use cases minus the first quantity and the second quantity; the test success rate is the ratio of the sum of the first quantity and the second quantity to the number of manual use cases.
  • when the test success rate is lower than the preset value, the test is considered to have failed, so a test-failure result may be output to prompt the user, for example by sending a short message, a WeChat message, or an email, making a phone call, or outputting the result directly to the interface.
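  • the calculation and threshold check could be sketched as follows; first_quantity and second_quantity are the numbers of successful automated and manual cases, manual_total is the number of manual use cases, and notify is a hypothetical placeholder for the SMS/WeChat/email/interface prompt.

    def check_test_success(first_quantity, second_quantity, manual_total,
                           preset_threshold, notify=print):
        # Test success rate = (successful automated + successful manual) / manual use cases.
        success_rate = (first_quantity + second_quantity) / manual_total
        failures = manual_total - first_quantity - second_quantity
        if success_rate < preset_threshold:
            # Output a test-failure result to prompt the user.
            notify("test failed: success rate %.1f%% is below the preset value "
                   "(%d case(s) failed or not executed)" % (success_rate * 100, failures))
        return success_rate
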
  • the first quantity is the result of the automated test, which can be obtained directly from the execution node.
  • the second quantity may reside in different manual test result storage systems and therefore needs to be obtained from them. In this embodiment, different interfaces are configured to connect to the different manual test result storage systems, and different templates are configured to adapt to them, so that test results can be obtained directly from each system and the manual test cases whose execution result is successful can be counted as the second quantity, to facilitate the subsequent calculation of the test success rate.
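  • one possible shape for those per-system interfaces is a small adapter layer, sketched below; the class and method names are entirely hypothetical and do not describe the real DPM, TESTLINK, WIZARD, or JENKINS APIs.

    from abc import ABC, abstractmethod

    class ManualResultAdapter(ABC):
        # One adapter (interface + template) per manual test result storage system.
        @abstractmethod
        def count_successful_cases(self, task_id):
            """Return the number of manual test cases whose execution result is successful."""

    class ExampleStorageAdapter(ManualResultAdapter):
        def __init__(self, client):
            self.client = client  # assumed pre-configured client for one storage system

        def count_successful_cases(self, task_id):
            # Placeholder query through that system's own interface and template.
            results = self.client.fetch_results(task_id)
            return sum(1 for r in results if r["status"] == "passed")

    def second_quantity(adapters, task_id):
        # Sum the successful manual cases reported by every configured system.
        return sum(adapter.count_successful_cases(task_id) for adapter in adapters)
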
  • in this way, the first quantity of automated test cases whose execution result is successful is obtained, the second quantity of manual test cases whose execution result is successful is obtained from the manual test result storage system, and the test success rate of the constructed test task is calculated according to the first quantity, the second quantity, and the number of manual use cases corresponding to the test task; the operation is simple and reliable.
  • the testing method may further include: when the test success rate is not lower than the preset value, acquiring a historical test task associated with the constructed test task; and, when the test success rate is lower than the test success rate of the historical test task, indicating a result of test failure.
  • the computer device can also obtain the automatic test success rate of the historical version, and calculate the automatic test success rate of the current version.
  • when the automated test success rate of the current version is lower than that of the historical version, a failure of the current automated test is indicated.
  • as a system iterates from version to version, the test success rate should become higher and higher; if the current test success rate is lower than the historical test success rate, the version upgrade of the task is considered to have failed and the test fails.
  • likewise, for successive tests of the same version, the success rate of the later test should not be lower than that of the earlier test; if it is lower, a test-failure result is output.
  • by combining the test success rate of the historical version when determining whether the test of the current version is successful, the relationship between versions is strengthened and the accuracy of the test is improved.
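  • a sketch of the version-to-version check (all names are assumptions): the current test passes only if its success rate meets the preset value and does not fall below the success rate of the associated historical test task.

    def evaluate_against_history(current_rate, historical_rate, preset_threshold):
        # First gate: the absolute preset threshold from the previous step.
        if current_rate < preset_threshold:
            return "failed: below the preset value"
        # Second gate: the success rate should not regress between versions.
        if current_rate < historical_rate:
            return "failed: lower than the historical test task's success rate"
        return "passed"
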
  • the testing method may further include: counting the number of test cases of each type among the automated test cases; calculating the coverage of each type of test case according to the number of automated test cases of each type and the number of manual test cases; and, when the coverage of a certain type of test case is lower than a coverage threshold, prompting the user to add test cases of that type.
  • FIG. 6 is a statistical diagram of test results in an embodiment, which may include a version build graph, a version success rate graph, a use case coverage graph, and a use case type map.
  • in the version build graph, the abscissa is the version number of the built version and the ordinate is the build success rate; different lines in the graph can identify different projects.
  • the computer device automatically counts and displays the success rate of each version of each project, without manual intervention.
  • the version success rate chart can be a pie chart in which the total number of automated test cases is the denominator and the numbers of successfully executed, failed, and unexecuted automated test cases are the numerators.
  • in the use case coverage chart, the number of manual use cases is the denominator and the number of test cases of each type, such as the number of API interface test cases and the number of user interface test cases, is the numerator.
  • when the coverage of a certain type of test case is lower than the coverage threshold, the user is prompted to add test cases of that type, which ensures the sufficiency of the test cases and improves the accuracy of the test.
  • the use case type map can be a pie chart.
  • in this way, the number of test cases of each type among the automated test cases is counted, and the coverage of each type is calculated from the number of automated test cases of each type and the number of manual test cases; when the coverage of a certain type of test case is lower than the coverage threshold, the user is prompted to add test cases of that type, which ensures the sufficiency of the test cases and improves the accuracy of the test.
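  • a minimal sketch of the per-type coverage check; the field names and the example threshold are assumptions, and, as in the use case coverage chart, the total number of manual use cases is used as the denominator.

    from collections import Counter

    def coverage_prompts(automated_cases, manual_cases, coverage_threshold=0.2):
        manual_total = len(manual_cases)
        # Number of automated test cases of each type (e.g. API interface vs. user interface).
        automated_counts = Counter(case["type"] for case in automated_cases)
        types = set(automated_counts) | {case["type"] for case in manual_cases}
        prompts = []
        for case_type in sorted(types):
            coverage = automated_counts.get(case_type, 0) / manual_total
            if coverage < coverage_threshold:
                # Prompt the user to add automated test cases of this type.
                prompts.append("add more '%s' test cases (coverage %.0f%%)"
                               % (case_type, coverage * 100))
        return prompts
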
  • FIGS. 2 and 3 are schematic flowcharts of a method according to an embodiment of the present application. Although the steps in the flowcharts of FIGS. 2 and 3 are displayed sequentially as indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated herein, the execution order of these steps is not strictly limited, and they may be performed in other sequences. Moreover, at least some of the steps in FIGS. 2 and 3 may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be executed at different times, and which are not necessarily performed sequentially but may be performed in turn or alternately with at least a portion of other steps or of the sub-steps or stages of other steps.
  • FIG. 7 is a schematic structural diagram of a testing device in an embodiment, where the testing device includes:
  • the test task acquisition module 100 is configured to acquire the constructed test task.
  • the automated test case selection module 200 is configured to select an automated test case corresponding to the constructed test task.
  • the execution module 300 is configured to execute an automated test case according to the built test task.
  • the manual test case selection module 400 is configured to select a manual test case corresponding to the constructed test task when the automated test cases are all executed.
  • the output module 500 is configured to select an unexecuted manual test case from the manual test case according to the selected automated test case, and output an unexecuted manual test case for manual testing.
  • the execution module 300 can include:
  • the first node selection unit is configured to select an online idle node from the execution node according to the constructed test task.
  • the second node selection unit is configured to: when there is no online idle node in the execution node, select, from the execution node, a node whose maximum connection number is greater than or equal to two and the current connection number is less than the maximum connection number.
  • An execution unit that sends automated test cases to selected nodes and executes automated test cases.
  • the testing device may further include:
  • the first test result obtaining unit is configured to obtain the first quantity of the automated test case whose execution result is successful.
  • the second test result obtaining unit is configured to obtain, from the manual test result storage system, a second quantity of the manual test case whose execution result is successful.
  • the calculating unit is configured to calculate a test success rate of the constructed test task according to the first quantity, the second quantity, and the number of manual use cases corresponding to the constructed test task.
  • the first output unit is configured to output a result of the test failure when the test success rate is lower than a preset value.
  • the testing device may further include:
  • the historical test task acquisition module is configured to acquire a historical test task associated with the constructed test task when the test success rate is not lower than a preset value.
  • the output module 500 can also be used to indicate the result of the test failure when the test success rate is lower than the test success rate of the historical test task.
  • the testing device may further include:
  • a statistical module that counts the number of all types of test cases in an automated test case.
  • Coverage calculation module for calculating the coverage of each type of test case based on the number of all types of test cases in the automated test case and the number of manual test cases.
  • the output module 500 can also be used to prompt to add a certain type of test case when there is a certain type of test case whose coverage is lower than the coverage threshold.
  • for the specific definition of the test device, reference may be made to the above definition of the test method; details are not described herein again.
  • Each of the above test devices may be implemented in whole or in part by software, hardware, and combinations thereof.
  • Each of the above modules may be embedded in or independent of the processor in the computer device, or may be stored in a memory in the computer device in a software form, so that the processor invokes the operations corresponding to the above modules.
  • the processor can be a central processing unit (CPU), a microprocessor, a microcontroller, or the like.
  • the above test apparatus can be implemented in the form of computer readable instructions that can be executed on a computer device as shown in FIG. 8.
  • a computer device is provided, which may be a server; its internal structure diagram may be as shown in FIG. 8.
  • the computer device includes a processor, memory, network interface, and database connected by a system bus.
  • the processor of the computer device is used to provide computing and control capabilities.
  • the memory of the computer device includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system, a computer program, and a database.
  • the internal memory provides an environment for operation of an operating system and computer programs in a non-volatile storage medium.
  • the database of the computer device is used to store test results.
  • the network interface of the computer device is used to communicate with an external terminal via a network connection.
  • the computer program is executed by the processor to implement a test method.
  • FIG. 8 is only a block diagram of part of the structure related to the solution of the present application and does not constitute a limitation on the computer device to which the solution of the present application is applied.
  • a specific computer device may include more or fewer components than shown in the figure, may combine certain components, or may have a different arrangement of components.
  • a computer device is provided, comprising a memory and a processor, the memory storing computer readable instructions which, when executed by the processor, cause the processor to perform the following steps: acquiring the constructed test task; selecting the automated test cases corresponding to the constructed test task; executing the automated test cases according to the constructed test task; when all of the automated test cases have been executed, selecting the manual test cases corresponding to the constructed test task; and selecting unexecuted manual test cases from the manual test cases according to the selected automated test cases and outputting the unexecuted manual test cases for manual testing.
  • a computer readable storage medium is provided, having computer readable instructions stored thereon, such as the non-volatile storage medium shown in FIG. 8; when the readable instructions are executed by a processor, the following steps are implemented: acquiring the built test task; selecting the automated test cases corresponding to the built test task; executing the automated test cases according to the built test task; when all of the automated test cases have been executed, selecting the manual test cases corresponding to the built test task; and selecting unexecuted manual test cases from the manual test cases according to the selected automated test cases and outputting the unexecuted manual test cases for manual testing.
  • Non-volatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A test method and apparatus, a computer device, and a readable storage medium are disclosed. The method comprises: acquiring a constructed test task; selecting automated test cases corresponding to the constructed test task; executing the automated test cases according to the constructed test task; when all of the automated test cases have been executed, selecting manual test cases corresponding to the constructed test task; and selecting an unexecuted manual test case from the manual test cases according to the selected automated test cases, and outputting the unexecuted manual test case for manual testing.
PCT/CN2018/077273 2017-11-16 2018-02-26 Procédé et appareil de test, dispositif informatique et support de stockage lisible WO2019095580A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711136509.9A CN107861876A (zh) 2017-11-16 2017-11-16 测试方法、装置、计算机设备及可读存储介质
CN201711136509.9 2017-11-16

Publications (1)

Publication Number Publication Date
WO2019095580A1 true WO2019095580A1 (fr) 2019-05-23

Family

ID=61701857

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/077273 WO2019095580A1 (fr) 2017-11-16 2018-02-26 Procédé et appareil de test, dispositif informatique et support de stockage lisible

Country Status (2)

Country Link
CN (1) CN107861876A (fr)
WO (1) WO2019095580A1 (fr)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110442508B (zh) * 2018-05-03 2023-05-23 阿里巴巴集团控股有限公司 测试任务处理方法、装置、设备和介质
CN108845951A (zh) * 2018-08-14 2018-11-20 郑州云海信息技术有限公司 一种自动化测试覆盖率的计算方法及系统
CN109165165A (zh) * 2018-09-04 2019-01-08 中国平安人寿保险股份有限公司 接口测试方法、装置、计算机设备和存储介质
CN109491898B (zh) * 2018-10-30 2021-11-12 武汉思普崚技术有限公司 基于自动化测试与用例管理的测试效率提升方法及设备
CN109828905B (zh) * 2018-12-15 2023-08-22 中国平安人寿保险股份有限公司 自动化测试方法、装置、计算机装置及存储介质
CN109885391A (zh) * 2018-12-28 2019-06-14 北京城市网邻信息技术有限公司 一种资源打包方法、装置、电子设备及介质
CN109726134B (zh) * 2019-01-16 2023-02-07 中国平安财产保险股份有限公司 接口测试方法和系统
CN110674044B (zh) * 2019-09-24 2023-09-01 携程旅游网络技术(上海)有限公司 功能自动化测试的覆盖率获取方法、系统、设备及介质
CN110928798A (zh) * 2019-11-30 2020-03-27 苏州浪潮智能科技有限公司 一种代码测试方法、装置及设备
CN113110993A (zh) * 2021-04-12 2021-07-13 深圳市吉祥腾达科技有限公司 自动化测试平台集成对接testlink系统的方法
CN113643128B (zh) * 2021-08-31 2024-02-27 中国银行股份有限公司 银行产品的自动化测试方法及装置

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080222454A1 (en) * 2007-03-08 2008-09-11 Tim Kelso Program test system
CN101706751A (zh) * 2009-11-23 2010-05-12 中兴通讯股份有限公司 软件业务功能覆盖率的统计方法及系统
CN102662833A (zh) * 2012-03-21 2012-09-12 天津书生软件技术有限公司 一种管理测试用例的方法
CN104572449A (zh) * 2014-12-23 2015-04-29 中国移动通信集团广东有限公司 一种基于用例库的自动化测试方法
CN106445764A (zh) * 2016-09-29 2017-02-22 福州大学 一种实现安卓设备稳定性自动化测试的方法

Also Published As

Publication number Publication date
CN107861876A (zh) 2018-03-30

Similar Documents

Publication Publication Date Title
WO2019095580A1 (fr) Procédé et appareil de test, dispositif informatique et support de stockage lisible
CN107341098B (zh) 软件性能测试方法、平台、设备及存储介质
WO2019161619A1 (fr) Procédé et appareil de test automatique d'interface, dispositif, et support d'informations lisible par ordinateur
CN108804215B (zh) 一种任务处理方法、装置以及电子设备
CN105302722B (zh) Cts自动测试方法及装置
CN110502366B (zh) 案例执行方法、装置、设备及计算机可读存储介质
CN111258591B (zh) 程序部署任务执行方法、装置、计算机设备和存储介质
CN112631919A (zh) 一种对比测试方法、装置、计算机设备及存储介质
WO2019075845A1 (fr) Procédé et dispositif de construction pour relation d'appel de liaison, dispositif informatique et support de stockage
US11169910B2 (en) Probabilistic software testing via dynamic graphs
CN110750443A (zh) 网页测试的方法、装置、计算机设备及存储介质
CN112650676A (zh) 软件测试方法、装置、设备及存储介质
CN115422048A (zh) 链路稳定性测试方法、装置、计算机设备和存储介质
CN113268409B (zh) 自动化测试时跟踪逻辑节点的方法、装置、设备和介质
US20090083747A1 (en) Method for managing application programs by utilizing redundancy and load balance
CN115604086A (zh) 监控报警故障自愈方法、装置、设备、介质和程序产品
CN115017047A (zh) 基于b/s架构的测试方法、系统、设备及介质
CN111679924B (zh) 构件化软件系统可靠性仿真方法、装置及电子设备
CN109240906B (zh) 数据库配置信息适配方法、装置、计算机设备和存储介质
CN113360389A (zh) 一种性能测试方法、装置、设备及存储介质
CN112069027A (zh) 一种接口数据处理方法、装置、电子设备及存储介质
CN112486838A (zh) 接口测试方法、装置、计算机设备和存储介质
CN110334905B (zh) 项目故障显示方法、装置、计算机设备和存储介质
CN113238930B (zh) 软件系统的测试方法、装置、终端设备和存储介质
CN110347409B (zh) 自动控制方法、客户端和服务器

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18877399

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 14.08.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18877399

Country of ref document: EP

Kind code of ref document: A1