WO2020207016A1 - Software development and testing method for exception tracking and related apparatus - Google Patents

Software development and testing method for exception tracking and related apparatus

Info

Publication number
WO2020207016A1
WO2020207016A1 (PCT/CN2019/118976, CN2019118976W)
Authority
WO
WIPO (PCT)
Prior art keywords
test
test case
tester
case
software
Prior art date
Application number
PCT/CN2019/118976
Other languages
English (en)
French (fr)
Inventor
翟彬彬
梁辉
Original Assignee
平安科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 平安科技(深圳)有限公司
Publication of WO2020207016A1 publication Critical patent/WO2020207016A1/zh

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3692 Test management for test results analysis

Definitions

  • This application relates to the field of computer technology, and in particular to a software development and testing method, a software development and testing device, a computer device, and a non-volatile readable storage medium for exception tracking.
  • Existing automated testing methods usually involve setting scheduled tasks, compiling and building versions, running software system upgrade or installation programs, running pre-designed test cases, and checking and analyzing test results.
  • When the software system is large or the number of test cases is high, the entire test process becomes very complex; as a result, when a test case fails, the specific tester responsible cannot be tracked down quickly, which slows feedback.
  • A first aspect of the present application provides a software development and testing method for exception tracking, the method including:
  • obtaining multiple test cases corresponding to the current iterative version of the software to be developed and tested, where each test case is associated with one of the testers;
  • creating a task list, the task list including the test cases;
  • when a test instruction is received, running the test cases in the task list to test the software to be developed and tested;
  • when all the test cases have finished running, generating and displaying a test report, the test report including the running result of each test case and the tester associated with each test case; and
  • screening out the test cases that failed to run from the test report, marking them as abnormal test cases, and then highlighting the abnormal test cases and the testers associated with them in the test report.
  • A second aspect of the present application provides a software development and testing device for exception tracking, the device including:
  • an acquisition module, configured to obtain multiple test cases corresponding to the current iterative version of the software to be developed and tested, where each test case is associated with one of the testers;
  • a creation module, configured to create a task list, the task list including the test cases;
  • a running module, configured to run the test cases in the task list to test the software to be developed and tested when a test instruction is received;
  • a report generation module, configured to generate and display a test report when all the test cases have finished running, the test report including the running result of each test case and the tester associated with each test case; and
  • a screening module, configured to screen out the test cases that failed to run from the test report, mark them as abnormal test cases, and then highlight the abnormal test cases and the testers associated with them in the test report.
  • A third aspect of the present application provides a computer device, the computer device including a processor, where the processor is configured to execute computer-readable instructions stored in a memory to implement the aforementioned software development and testing method for exception tracking.
  • A fourth aspect of the present application provides a non-volatile readable storage medium having computer-readable instructions stored thereon; when the computer-readable instructions are executed by a processor, the aforementioned software development and testing method for exception tracking is implemented.
  • This application assigns different test cases to the names of their associated testers.
  • The test report generated afterwards includes the running result of each test case and the tester associated with each test case, which makes it easy to track, throughout the test process, the test status and the person responsible for executing each test case, and which facilitates cross-testing by multiple testers within the same iteration cycle.
  • FIG. 1 is a flowchart of a software development and testing method for exception tracking provided in Embodiment 1 of the present application.
  • FIG. 2 is a schematic structural diagram of a software development and testing device for exception tracking provided in Embodiment 2 of the present application.
  • FIG. 3 is a schematic diagram of a computer device provided in Embodiment 3 of the present application.
  • FIG. 1 is a flowchart of the software development and testing method for exception tracking provided by the first embodiment of the present application.
  • The software development and testing method is applied to a computer device. Depending on requirements, the order of the steps in the flowchart can be changed, and some steps can be omitted.
  • Step S11: obtain multiple test cases corresponding to the current iterative version of the software to be developed and tested, where each test case is associated with one of the testers.
  • The current iterative version refers to a software version produced during development of the software to be developed and tested as it is continuously updated toward the final software product goal; it may be the first developed version, a new version in which vulnerabilities of the software are fixed, or a new version obtained by developing new function points of the software.
  • The computer device provides an input interface through which multiple testers upload test cases written for the current iterative version to a test library of the computer device, so that the computer device can obtain the test cases from the test library.
  • The test cases are used to test whether the iterative version meets specific requirements.
  • Preferably, assuming the current iterative version is an externally released version for which multiple sub-versions are further developed, the testers can write test cases for the different sub-versions under the current iterative version; in that case, the computer device obtains the test cases corresponding to each sub-version.
  • The basic information of each test case includes a name, an identifier, input parameters, output parameters, functions, a use case identifier, test steps, expected results, a test date, an update date, and so on.
  • The computer device adds the verification information to the name of the test case; since the verification information contains the tester's user name, each test case is thereby associated with the tester who uploaded it.
  • In another embodiment, the tester may also enter data to be evaluated for the current iterative version through the input interface.
  • The computer device automatically generates the test cases according to the data to be evaluated; that is, in this embodiment the tester does not need to manually write and import the test cases. The computer device then adds the verification information of the tester who entered the data to be evaluated to the names of the test cases, so that each test case is associated with that tester.
  • The data to be evaluated is software-related data that arises during development of the software to be developed and tested and needs to be evaluated; it may include vulnerabilities fixed while the version is iterated, newly developed or fixed function points, and so on.
  • The test cases can be generated with the PICT (Pairwise Independent Combinatorial Testing) tool.
  • Step S12: create a task list, the task list including the test cases.
  • Step S13: when a test instruction is received, run the test cases in the task list to test the software to be developed and tested.
  • Running the test cases in the task list to test the software to be developed and tested includes:
  • step (a): determine whether the test case runs successfully; if so, proceed to step (b); otherwise, proceed to step (c);
  • step (b): run the next test case;
  • step (c): rerun the test case until it runs successfully or the number of runs exceeds a preset threshold.
  • Specifically, the computer device determines whether the number of runs of the test case has reached the preset threshold (for example, 4 times); when it has not, the computer device reruns the test case until it runs successfully. When the number of runs reaches the preset threshold and the test case still has not run successfully, the test case is determined to have failed and the next test case is run.
  • Step S14: when all the test cases have finished running, generate and display a test report, the test report including the running result of each test case and the tester associated with each test case.
  • The test cases do not exist independently of each other; their running results may affect one another, and the final test result of all test cases must be judged on the basis of the entire current iterative version.
  • The running result of a single test case cannot guarantee the test effect of the current iterative version, so the judgment and evaluation of the test effect must be based on the running results of all test cases. Therefore, when all the test cases have finished running, the computer device obtains the running result of each test case and consolidates the running results of all the test cases to generate the test report for testers or R&D personnel to reference.
  • Because the test report includes the tester associated with each test case, it is easy to track the person responsible for executing each test case throughout the test process.
  • Because multiple iterations can take place within the iteration cycle of the current iterative version, each iteration can adjust the testers assigned to different sub-versions and perform cross-testing.
  • Preferably, when a test case fails to run (that is, a vulnerability appears), the running result of the test case includes the basic information of the test case, a description of the vulnerability, and the function of the iterative version tested by the test case.
  • For example, when a vulnerability occurs while test case A is executed, the running result of test case A includes the name of test case A, the input parameters used to execute test case A, the description of the vulnerability, function a of the iterative version, and so on.
  • In this embodiment, the computer device includes a display screen, and the test report is displayed on the display screen.
  • The display screen may be a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display.
  • Step S15: screen out the test cases that failed to run from the test report, mark them as abnormal test cases, and then highlight the abnormal test cases and the testers associated with them in the test report.
  • In this embodiment, the highlighting means that the abnormal test cases and the testers associated with them are shown in the test report in a different display style; for example, they are shown in different colors, so that the corresponding person responsible for execution can be traced quickly.
  • FIG. 1 above describes the software development and testing method of the present application in detail.
  • The functional modules of the software device that implements the software development and testing method, and the hardware architecture that implements the method, are introduced below with reference to FIG. 2 and FIG. 3.
  • FIG. 2 is a structural diagram of a preferred embodiment of the software development and testing device for exception tracking of the present application.
  • In some embodiments, the software development and testing device 10 runs in a computer device.
  • The software development and testing device 10 may include multiple functional modules made up of program code segments, where the program is a series of computer-readable instructions.
  • The program code of each program segment in the software development and testing device 10 can be stored in the memory of the computer device and executed by the at least one processor to realize the software development and testing functions.
  • In this embodiment, the software development and testing device 10 can be divided into multiple functional modules according to the functions it performs.
  • Referring to FIG. 2, the functional modules may include an acquisition module 101, a creation module 102, a running module 103, a report generation module 104, and a screening module 106.
  • A module referred to in this application is a series of computer-readable instruction segments that can be executed by at least one processor to complete a fixed function, and that is stored in a memory.
  • The function of each module is described in detail in the following embodiments.
  • The acquisition module 101 is configured to obtain test cases corresponding to the current iterative version of the software to be developed and tested, where each test case is associated with one of the testers.
  • The current iterative version refers to a software version produced during development of the software to be developed and tested as it is continuously updated toward the final software product goal; it may be the first developed version, a new version in which vulnerabilities of the software are fixed, or a new version obtained by developing new function points of the software.
  • In this embodiment, the functional modules may further include an input module (not shown).
  • The input module provides an input interface through which multiple testers upload test cases written for the current iterative version to the test library of the computer device, so that the acquisition module 101 can obtain the test cases from the test library.
  • The test cases are used to test whether the iterative version meets specific requirements.
  • Preferably, assuming the current iterative version is an externally released version for which multiple sub-versions are further developed, the testers can write test cases for the different sub-versions under the current iterative version; in that case, the computer device obtains the test cases corresponding to each sub-version.
  • More specifically, the functional modules may further include a verification module 105.
  • The verification module 105 is configured to provide, before the input module provides the input interface, a login interface for the tester to enter verification information (such as a user name and password), then obtain the verification information entered by the tester and compare it with preset registration information. When the verification information is determined to be consistent with the registration information of one of the testers, the verification module 105 allows that tester to log in to the computer device, provides the input interface for the tester to upload test cases, and then associates the uploaded test cases with the verification information.
  • The basic information of each test case includes a name, an identifier, input parameters, output parameters, functions, a use case identifier, test steps, expected results, a test date, an update date, and so on.
  • The computer device adds the verification information to the name of the test case; since the verification information contains the tester's user name, each test case is thereby associated with the tester who uploaded it.
  • In another embodiment, the tester may also enter data to be evaluated for the current iterative version through the input interface.
  • The functional modules may further include a use case generation module (not shown).
  • The use case generation module is configured to automatically generate the test cases according to the data to be evaluated; that is, in this embodiment the tester does not need to manually write and import the test cases. The use case generation module then adds the verification information of the tester who entered the data to be evaluated to the names of the test cases, so that each test case is associated with that tester.
  • The data to be evaluated is software-related data that arises during development of the software to be developed and tested and needs to be evaluated; it may include vulnerabilities fixed while the version is iterated, newly developed or fixed function points, and so on.
  • The test cases can be generated with the PICT (Pairwise Independent Combinatorial Testing) tool.
  • The creation module 102 is configured to create a task list, the task list including the test cases.
  • The running module 103 is configured to run the test cases in the task list to test the software to be developed and tested when a test instruction is received.
  • When running a test case in the task list, the running module 103 is also configured to determine whether the test case runs successfully; if so, it runs the next test case; otherwise, it reruns the test case until the test case runs successfully or the number of runs exceeds a preset threshold.
  • Specifically, the running module 103 determines whether the number of runs of the test case has reached the preset threshold (for example, 4 times); when it has not, the test case is rerun until it runs successfully. When the number of runs reaches the preset threshold and the test case still has not run successfully, the test case is determined to have failed and the next test case is run.
  • The report generation module 104 is configured to generate and display a test report when all the test cases have finished running.
  • The test report includes the running result of each test case and the tester associated with each test case.
  • The test cases do not exist independently of each other; their running results may affect one another, and the final test result of all test cases must be judged on the basis of the entire current iterative version.
  • The running result of a single test case cannot guarantee the test effect of the current iterative version, so the judgment and evaluation of the test effect must be based on the running results of all test cases. Therefore, when all the test cases have finished running, the report generation module 104 obtains the running result of each test case and consolidates the running results of all the test cases to generate the test report for testers or R&D personnel to reference.
  • Because the test report includes the tester associated with each test case, it is easy to track the person responsible for executing each test case throughout the test process.
  • Because multiple iterations can take place within the iteration cycle of the current iterative version, each iteration can adjust the testers assigned to different sub-versions and perform cross-testing.
  • Preferably, when a test case fails to run (that is, a vulnerability appears), the running result of the test case includes the basic information of the test case, a description of the vulnerability, and the function of the iterative version tested by the test case.
  • For example, when a vulnerability occurs while test case A is executed, the running result of test case A includes the name of test case A, the input parameters used to execute test case A, the description of the vulnerability, function a of the iterative version, and so on.
  • The screening module 106 is configured to screen out the test cases that failed to run from the test report, mark them as abnormal test cases, and then highlight the abnormal test cases and the testers associated with them in the test report.
  • In this embodiment, the highlighting means that the abnormal test cases and the testers associated with them are shown in the test report in a different display style; for example, they are shown in different colors, so that the corresponding person responsible for execution can be traced quickly.
  • As described above, the software development and testing device in this embodiment of the application assigns different test cases to the names of their associated testers; the test report generated afterwards includes the running result of each test case and the tester associated with each test case, which makes it easy to track, throughout the test process, the test status and the person responsible for executing each test case, and which facilitates cross-testing by multiple testers within the same iteration cycle.
  • FIG. 3 is a schematic diagram of a preferred embodiment of the computer device of this application.
  • The computer device 1 includes a memory 20, a processor 30, and computer-readable instructions 40 stored in the memory 20 and executable on the processor 30, for example a software development testing program.
  • When the processor 30 executes the computer-readable instructions 40, the steps in the above embodiment of the software development and testing method for exception tracking are implemented:
  • Step S11: obtain multiple test cases corresponding to the current iterative version of the software to be developed and tested, where each test case is associated with one of the testers.
  • The current iterative version refers to a software version produced during development of the software to be developed and tested as it is continuously updated toward the final software product goal; it may be the first developed version, a new version in which vulnerabilities of the software are fixed, or a new version obtained by developing new function points of the software.
  • The computer device provides an input interface through which multiple testers upload test cases written for the current iterative version to a test library of the computer device, so that the computer device can obtain the test cases from the test library.
  • The test cases are used to test whether the iterative version meets specific requirements.
  • Preferably, assuming the current iterative version is an externally released version for which multiple sub-versions are further developed, the testers can write test cases for the different sub-versions under the current iterative version; in that case, the computer device obtains the test cases corresponding to each sub-version.
  • The basic information of each test case includes a name, an identifier, input parameters, output parameters, functions, a use case identifier, test steps, expected results, a test date, an update date, and so on.
  • The computer device adds the verification information to the name of the test case; since the verification information contains the tester's user name, each test case is thereby associated with the tester who uploaded it.
  • In another embodiment, the tester may also enter data to be evaluated for the current iterative version through the input interface.
  • The computer device automatically generates the test cases according to the data to be evaluated; that is, in this embodiment the tester does not need to manually write and import the test cases. The computer device then adds the verification information of the tester who entered the data to be evaluated to the names of the test cases, so that each test case is associated with that tester.
  • The data to be evaluated is software-related data that arises during development of the software to be developed and tested and needs to be evaluated; it may include vulnerabilities fixed while the version is iterated, newly developed or fixed function points, and so on.
  • The test cases can be generated with the PICT (Pairwise Independent Combinatorial Testing) tool.
  • Step S12: create a task list, the task list including the test cases.
  • Step S13: when a test instruction is received, run the test cases in the task list to test the software to be developed and tested.
  • Running the test cases in the task list to test the software to be developed and tested includes:
  • step (a): determine whether the test case runs successfully; if so, proceed to step (b); otherwise, proceed to step (c);
  • step (b): run the next test case;
  • step (c): rerun the test case until it runs successfully or the number of runs exceeds a preset threshold.
  • Specifically, the computer device determines whether the number of runs of the test case has reached the preset threshold (for example, 4 times); when it has not, the computer device reruns the test case until it runs successfully. When the number of runs reaches the preset threshold and the test case still has not run successfully, the test case is determined to have failed and the next test case is run.
  • Step S14: when all the test cases have finished running, generate and display a test report, the test report including the running result of each test case and the tester associated with each test case.
  • The test cases do not exist independently of each other; their running results may affect one another, and the final test result of all test cases must be judged on the basis of the entire current iterative version.
  • The running result of a single test case cannot guarantee the test effect of the current iterative version, so the judgment and evaluation of the test effect must be based on the running results of all test cases. Therefore, when all the test cases have finished running, the computer device obtains the running result of each test case and consolidates the running results of all the test cases to generate the test report for testers or R&D personnel to reference.
  • Because the test report includes the tester associated with each test case, it is easy to track the person responsible for executing each test case throughout the test process.
  • Because multiple iterations can take place within the iteration cycle of the current iterative version, each iteration can adjust the testers assigned to different sub-versions and perform cross-testing.
  • Preferably, when a test case fails to run (that is, a vulnerability appears), the running result of the test case includes the basic information of the test case, a description of the vulnerability, and the function of the iterative version tested by the test case.
  • For example, when a vulnerability occurs while test case A is executed, the running result of test case A includes the name of test case A, the input parameters used to execute test case A, the description of the vulnerability, function a of the iterative version, and so on.
  • Step S15: screen out the test cases that failed to run from the test report, mark them as abnormal test cases, and then highlight the abnormal test cases and the testers associated with them in the test report.
  • In this embodiment, the highlighting means that the abnormal test cases and the testers associated with them are shown in the test report in a different display style; for example, they are shown in different colors, so that the corresponding person responsible for execution can be traced quickly.
  • Alternatively, when the processor 30 executes the computer-readable instructions 40, the functions of the modules/units in the above embodiment of the software development and testing device for exception tracking are implemented, for example modules 101-106 in FIG. 2.
  • Exemplarily, the computer-readable instructions 40 may be divided into one or more modules/units, which are stored in the memory 20 and executed by the processor 30 to complete the present application.
  • The one or more modules/units may be a series of computer-readable instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 40 in the computer device 1.
  • For example, the computer-readable instructions 40 can be divided into the acquisition module 101, the creation module 102, the running module 103, the report generation module 104, the verification module 105, and the screening module 106 in FIG. 2; refer to Embodiment 2 for the specific functions of each module.
  • The computer device 1 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server.
  • Those skilled in the art will understand that the schematic diagram is only an example of the computer device 1 and does not constitute a limitation on the computer device 1; it may include more or fewer components than shown, combine certain components, or have different components. For example, the computer device 1 may also include input and output devices, network access devices, buses, and so on.
  • The processor 30 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • The general-purpose processor may be a microprocessor, or the processor 30 may be any conventional processor.
  • The processor 30 is the control center of the computer device 1 and uses various interfaces and lines to connect the various parts of the entire computer device 1.
  • The memory 20 may be used to store the computer-readable instructions 40 and/or the modules/units; the processor 30 implements the various functions of the computer device 1 by running or executing the computer-readable instructions and/or modules/units stored in the memory 20 and by invoking data stored in the memory 20.
  • The memory 20 may mainly include a program storage area and a data storage area.
  • The program storage area may store an operating system and the application programs required by at least one function (such as a sound playback function or an image playback function); the data storage area may store data created according to the use of the computer device 1 (such as audio data) and the like.
  • In addition, the memory 20 may include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • If the integrated modules/units of the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a non-volatile readable storage medium.
  • Based on this understanding, the present application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through computer-readable instructions.
  • The computer-readable instructions may be stored in a non-volatile readable storage medium, and when the computer-readable instructions are executed by a processor, the steps of the foregoing method embodiments can be implemented.
  • The computer-readable instruction code may be in the form of source code, object code, an executable file, or some intermediate form.
  • The non-volatile readable medium may include any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, or a read-only memory (ROM).
  • The functional units in the various embodiments of the present application may be integrated in the same processing unit, or each unit may exist alone physically, or two or more units may be integrated in the same unit.
  • The above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A software development and testing method for exception tracking includes: obtaining test cases corresponding to the current iterative version of the software to be developed and tested, each test case being associated with one of the testers (S11); creating a task list, the task list including the test cases (S12); when a test instruction is received, running the test cases in the task list to test the software to be developed and tested (S13); when all the test cases have finished running, generating a test report, the test report including the running result of each test case and the tester associated with each test case (S14); and screening out the test cases that failed to run from the test report, marking them as abnormal test cases, and then highlighting the abnormal test cases in the test report and the testers associated with the abnormal test cases (S15).

Description

Software development and testing method for exception tracking and related apparatus
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on April 9, 2019, with application number 201910279854.0 and invention title "Software development testing method, apparatus, computer device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of computer technology, and in particular to a software development and testing method, a software development and testing device, a computer device, and a non-volatile readable storage medium for exception tracking.
Background
In modern software development processes, automated testing has become an indispensable part: pre-designed test cases are run automatically, the actual results are obtained and compared with the expected results, and a test report is generated. This process saves considerable labor cost, time cost, and hardware resources, improves test efficiency, and allows defects in software design and implementation to be discovered as early as possible.
Existing automated testing methods usually involve setting scheduled tasks, compiling and building versions, running software system upgrade or installation programs, running pre-designed test cases, and checking and analyzing test results. In this process, if the software system is large or the number of test cases is high, the entire test process becomes very complex; as a result, when a test case fails, the specific tester responsible cannot be tracked down quickly, which slows feedback.
Summary
In view of the above, it is necessary to provide a software development and testing method and device, a computer device, and a non-volatile readable storage medium to solve the above problems.
A first aspect of the present application provides a software development and testing method for exception tracking, the method including:
obtaining multiple test cases corresponding to the current iterative version of the software to be developed and tested, where each test case is associated with one of the testers;
creating a task list, the task list including the test cases;
when a test instruction is received, running the test cases in the task list to test the software to be developed and tested;
when all the test cases have finished running, generating a test report and displaying the test report, the test report including the running result of each test case and the tester associated with each test case; and
screening out the test cases that failed to run from the test report, marking them as abnormal test cases, and then highlighting the abnormal test cases in the test report and the testers associated with the abnormal test cases.
A second aspect of the present application provides a software development and testing device for exception tracking, the device including:
an acquisition module, configured to obtain multiple test cases corresponding to the current iterative version of the software to be developed and tested, where each test case is associated with one of the testers;
a creation module, configured to create a task list, the task list including the test cases;
a running module, configured to run the test cases in the task list to test the software to be developed and tested when a test instruction is received;
a report generation module, configured to generate a test report and display the test report when all the test cases have finished running, the test report including the running result of each test case and the tester associated with each test case; and
a screening module, configured to screen out the test cases that failed to run from the test report, mark them as abnormal test cases, and then highlight the abnormal test cases in the test report and the testers associated with the abnormal test cases.
A third aspect of the present application provides a computer device, the computer device including a processor, where the processor is configured to execute computer-readable instructions stored in a memory to implement the software development and testing method for exception tracking described above.
A fourth aspect of the present application provides a non-volatile readable storage medium having computer-readable instructions stored thereon; when the computer-readable instructions are executed by a processor, the software development and testing method for exception tracking described above is implemented.
This application assigns different test cases to the names of their associated testers. The test report generated afterwards includes the running result of each test case and the tester associated with each test case, which makes it easy to track, throughout the test process, the test status and the person responsible for executing each test case, and which facilitates cross-testing by multiple testers within the same iteration cycle.
Brief Description of the Drawings
FIG. 1 is a flowchart of a software development and testing method for exception tracking provided in Embodiment 1 of the present application.
FIG. 2 is a schematic structural diagram of a software development and testing device for exception tracking provided in Embodiment 2 of the present application.
FIG. 3 is a schematic diagram of a computer device provided in Embodiment 3 of the present application.
Detailed Description
Embodiment 1
Referring to FIG. 1, which is a flowchart of the software development and testing method for exception tracking provided by the first embodiment of the present application, the method is applied to a computer device. Depending on requirements, the order of the steps in the flowchart can be changed, and some steps can be omitted.
Step S11: obtain multiple test cases corresponding to the current iterative version of the software to be developed and tested, where each test case is associated with one of the testers.
The current iterative version refers to a software version produced during development of the software to be developed and tested as it is continuously updated toward the final software product goal; it may be the first developed version, a new version in which vulnerabilities of the software are fixed, or a new version obtained by developing new function points of the software.
In this embodiment, the computer device provides an input interface through which multiple testers upload test cases written for the current iterative version to a test library of the computer device, so that the computer device can obtain the test cases from the test library. The test cases are used to test whether the iterative version meets specific requirements. Preferably, assuming the current iterative version is an externally released version for which multiple sub-versions are further developed, the testers can write test cases for the different sub-versions under the current iterative version; in that case, the computer device obtains the test cases corresponding to each sub-version.
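Where sub-versions are involved, one straightforward way to organize the test library is to key the uploaded cases by sub-version so that the cases for each sub-version can be fetched separately. The following Python sketch is purely illustrative; the class and method names are assumptions and do not come from the application.

```python
# Illustrative sketch: a test library that stores uploaded test cases per
# sub-version of the current iterative version.
from collections import defaultdict


class TestLibrary:
    def __init__(self, current_version: str):
        self.current_version = current_version   # e.g. "2.3.0" (hypothetical)
        self._cases = defaultdict(list)           # sub-version -> list of cases

    def upload(self, sub_version: str, test_case: dict) -> None:
        """Store a test case written for one sub-version of the current version."""
        self._cases[sub_version].append(test_case)

    def cases_for(self, sub_version: str) -> list:
        """Obtain the test cases corresponding to one sub-version."""
        return list(self._cases[sub_version])

    def all_cases(self) -> list:
        """Obtain every test case of the current iterative version."""
        return [case for cases in self._cases.values() for case in cases]
```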
More specifically, before the input interface is provided, the method further includes the following steps (a minimal sketch follows the list below):
(a) provide a login interface for the tester to enter verification information (such as a user name and password);
(b) obtain the verification information entered by the tester, and compare the verification information with preset registration information;
(c) when the verification information is determined to be consistent with the registration information of one of the testers, allow the tester to log in to the computer device and provide the input interface for the tester to upload test cases;
(d) associate the uploaded test cases with the verification information.
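Steps (a) through (d) map onto a small amount of code. The Python sketch below is a non-authoritative illustration, assuming an in-memory store of preset registration information; the names TestServer, register, login, and upload_test_case are invented for this example.

```python
# Minimal sketch of the login/verification flow of steps (a)-(d).
class TestServer:
    def __init__(self):
        self.registered = {}    # user name -> password (preset registration info)
        self.test_library = []  # uploaded test cases

    def register(self, username: str, password: str) -> None:
        self.registered[username] = password

    def login(self, username: str, password: str) -> bool:
        """Steps (b)-(c): compare the verification info with the registration info."""
        return self.registered.get(username) == password

    def upload_test_case(self, username: str, password: str, case: dict) -> dict:
        """Steps (c)-(d): only a verified tester may upload; the uploaded case
        is then associated with the verification information (the user name)."""
        if not self.login(username, password):
            raise PermissionError("verification information does not match")
        case["tester"] = username   # association of step (d)
        self.test_library.append(case)
        return case
```

For example, a call such as server.upload_test_case("zhang", "s3cret", {"name": "pay_flow_01"}) would only succeed after "zhang" has been registered with that password.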
The basic information of each test case includes a name, an identifier, input parameters, output parameters, functions, a use case identifier, test steps, expected results, a test date, an update date, and so on. The computer device adds the verification information to the name of the test case; since the verification information contains the tester's user name, each test case is thereby associated with the tester who uploaded it.
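Embedding the user name in the case name is one concrete way to realize this association. A hedged sketch follows; the field names and the "__" separator are assumptions, not something specified by the application.

```python
# Illustrative sketch: associate a test case with its tester by adding the
# tester's user name (taken from the verification information) to the case name.
def associate_case_with_tester(test_case: dict, username: str) -> dict:
    named = dict(test_case)                              # keep the original intact
    named["name"] = f"{test_case['name']}__{username}"   # e.g. "pay_flow_01__zhang"
    named["tester"] = username
    return named


case = {
    "name": "pay_flow_01",
    "input_parameters": {"amount": 100},
    "expected_result": "payment accepted",
}
print(associate_case_with_tester(case, "zhang")["name"])  # -> pay_flow_01__zhang
```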
In another embodiment, the tester may also enter data to be evaluated for the current iterative version through the input interface. The computer device automatically generates the test cases according to the data to be evaluated; that is, in this embodiment the tester does not need to manually write and import the test cases. The computer device then adds the verification information of the tester who entered the data to be evaluated to the names of the test cases, so that each test case is associated with the tester who entered the data to be evaluated. The data to be evaluated is software-related data that arises during development of the software to be developed and tested and needs to be evaluated; it may include vulnerabilities fixed while the version is iterated, newly developed or fixed function points, and so on. The test cases can be generated with the PICT (Pairwise Independent Combinatorial Testing) tool.
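Generation with PICT is typically scripted by writing a model file of parameters and values, invoking the pict command-line tool, and parsing its tab-separated output (a header line of parameter names followed by one combination per row). The sketch below assumes the pict binary is installed and on PATH, and that the parameter/value structure has already been extracted from the data to be evaluated; everything else is illustrative.

```python
# Hedged sketch: generate pairwise test cases from evaluation parameters with PICT.
import subprocess
import tempfile


def generate_cases_with_pict(parameters: dict, tester: str) -> list:
    """parameters example: {"browser": ["chrome", "firefox"], "os": ["win", "linux"]}"""
    model = "\n".join(f"{name}: {', '.join(values)}"
                      for name, values in parameters.items())
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(model)
        model_path = f.name

    output = subprocess.run(["pict", model_path], capture_output=True,
                            text=True, check=True).stdout.splitlines()
    header = output[0].split("\t")
    cases = []
    for i, row in enumerate(output[1:], start=1):
        values = dict(zip(header, row.split("\t")))
        cases.append({
            "name": f"auto_case_{i:03d}__{tester}",  # case associated with the tester
            "input_parameters": values,
        })
    return cases
```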
Step S12: create a task list, the task list including the test cases.
Step S13: when a test instruction is received, run the test cases in the task list to test the software to be developed and tested.
In this embodiment, running the test cases in the task list to test the software to be developed and tested includes:
(a) determine whether the test case runs successfully; if so, proceed to step (b); otherwise, proceed to step (c);
(b) run the next test case;
(c) rerun the test case until the test case runs successfully or the number of runs exceeds a preset threshold.
Specifically, the computer device determines whether the number of runs of the test case has reached the preset threshold (for example, 4 times); when the number of runs has not reached the preset threshold, the computer device reruns the test case until it runs successfully. When the number of runs reaches the preset threshold and the test case still has not run successfully, the test case is determined to have failed and the next test case is run.
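The retry logic of steps (a) through (c) amounts to a bounded rerun loop per test case. A minimal sketch, assuming each entry in the task list carries a run() callable that returns True on success, and using the threshold of 4 runs mentioned above as the default:

```python
# Minimal sketch of step S13: run each case in the task list, rerunning a
# failing case until it passes or the number of runs reaches the preset threshold.
def run_task_list(task_list: list, preset_threshold: int = 4) -> list:
    results = []
    for case in task_list:
        passed = False
        runs = 0
        while runs < preset_threshold and not passed:
            runs += 1
            passed = case["run"]()   # assumed: returns True when the case succeeds
        results.append({
            "name": case["name"],
            "tester": case["tester"],
            "runs": runs,
            "passed": passed,        # False => later marked as an abnormal test case
        })
    return results
```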
Step S14: when all the test cases have finished running, generate a test report and display the test report, the test report including the running result of each test case and the tester associated with each test case.
The test cases do not exist independently of each other; their running results may affect one another, and the final test result of all test cases must be judged on the basis of the entire current iterative version, so the running result of a single test case cannot guarantee the test effect of the current iterative version. The judgment and evaluation of the test effect must therefore be based on the running results of all test cases. Accordingly, when all the test cases have finished running, the computer device obtains the running result of each test case and consolidates the running results of all the test cases to generate the test report for testers or R&D personnel to reference. Furthermore, because the test report includes the tester associated with each test case, it is easy to track the person responsible for executing each test case throughout the test process. Finally, because multiple iterations can take place within the iteration cycle of the current iterative version, each iteration can adjust the testers assigned to different sub-versions and perform cross-testing.
Preferably, when a test case fails to run (that is, a vulnerability appears), the running result of the test case includes the basic information of the test case, a description of the vulnerability, and the corresponding function of the iterative version tested with the test case. For example, when a vulnerability occurs while test case A is executed, the running result of test case A includes the name of test case A, the input parameters used to execute test case A, the description of the vulnerability, function a of the iterative version, and so on.
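Consolidating the per-case results into a report keyed by case and tester could look like the following sketch. The field names mirror those mentioned above, but the structure itself is an assumption rather than the application's format.

```python
# Hedged sketch of step S14: consolidate all running results into a test report.
def build_test_report(results: list) -> dict:
    report = {"entries": [], "total": len(results), "failed": 0}
    for result in results:
        entry = {
            "case_name": result["name"],
            "tester": result["tester"],   # person responsible for this case
            "passed": result["passed"],
            "runs": result["runs"],
        }
        if not result["passed"]:
            report["failed"] += 1
            # A failed case may also carry the vulnerability description and the
            # function of the iterative version that the case was testing.
            entry["vulnerability"] = result.get("vulnerability", "")
            entry["function_under_test"] = result.get("function", "")
        report["entries"].append(entry)
    return report
```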
In this embodiment, the computer device includes a display screen, and the test report is displayed on the display screen. The display screen may be a liquid crystal display (LCD) or an organic light-emitting diode (OLED) display.
Step S15: screen out the test cases that failed to run from the test report, mark them as abnormal test cases, and then highlight the abnormal test cases in the test report and the testers associated with the abnormal test cases.
In this embodiment, the highlighting means that the abnormal test cases and the testers associated with the abnormal test cases are shown in the test report in a different display style; for example, they are shown in different colors, so that the corresponding person responsible for execution can be traced quickly.
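How the "different colors" are rendered depends entirely on where the report is displayed; the sketch below uses ANSI terminal colors purely as a stand-in and marks failed entries as abnormal along the way.

```python
# Illustrative sketch of step S15: mark failed cases as abnormal and highlight
# them, together with their testers, in a different color.
RED = "\033[91m"
RESET = "\033[0m"


def render_report(report: dict) -> str:
    lines = []
    for entry in report["entries"]:
        if entry["passed"]:
            lines.append(f"PASS  {entry['case_name']}  (tester: {entry['tester']})")
        else:
            entry["abnormal"] = True   # mark as an abnormal test case
            lines.append(f"{RED}FAIL  {entry['case_name']}  "
                         f"tester: {entry['tester']}{RESET}")
    return "\n".join(lines)
```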
FIG. 1 above describes the software development and testing method of the present application in detail. The functional modules of the software device that implements the method and the hardware architecture that implements the method are introduced below with reference to FIG. 2 and FIG. 3.
It should be understood that the described embodiments are for illustration only and do not limit the scope of the patent application.
Embodiment 2
FIG. 2 is a structural diagram of a preferred embodiment of the software development and testing device for exception tracking of the present application.
In some embodiments, the software development and testing device 10 runs in a computer device. The software development and testing device 10 may include multiple functional modules made up of program code segments, where the program is a series of computer-readable instructions. The program code of each program segment in the software development and testing device 10 can be stored in the memory of the computer device and executed by the at least one processor to realize the software development and testing functions.
In this embodiment, the software development and testing device 10 can be divided into multiple functional modules according to the functions it performs. Referring to FIG. 2, the functional modules may include an acquisition module 101, a creation module 102, a running module 103, a report generation module 104, and a screening module 106. A module referred to in this application is a series of computer-readable instruction segments that can be executed by at least one processor to complete a fixed function, and that is stored in a memory. In this embodiment, the function of each module is described in detail below.
The acquisition module 101 is configured to obtain test cases corresponding to the current iterative version of the software to be developed and tested, where each test case is associated with one of the testers.
The current iterative version refers to a software version produced during development of the software to be developed and tested as it is continuously updated toward the final software product goal; it may be the first developed version, a new version in which vulnerabilities of the software are fixed, or a new version obtained by developing new function points of the software.
In this embodiment, the functional modules may further include an input module (not shown). The input module provides an input interface through which multiple testers upload test cases written for the current iterative version to the test library of the computer device, so that the acquisition module 101 can obtain the test cases from the test library. The test cases are used to test whether the iterative version meets specific requirements. Preferably, assuming the current iterative version is an externally released version for which multiple sub-versions are further developed, the testers can write test cases for the different sub-versions under the current iterative version; in that case, the computer device obtains the test cases corresponding to each sub-version.
More specifically, the functional modules may further include a verification module 105. The verification module 105 is configured to provide, before the input module provides the input interface, a login interface for the tester to enter verification information (such as a user name and password), then obtain the verification information entered by the tester and compare it with preset registration information. When the verification information is determined to be consistent with the registration information of one of the testers, the verification module 105 allows that tester to log in to the computer device, provides the input interface for the tester to upload test cases, and then associates the uploaded test cases with the verification information.
The basic information of each test case includes a name, an identifier, input parameters, output parameters, functions, a use case identifier, test steps, expected results, a test date, an update date, and so on. The computer device adds the verification information to the name of the test case; since the verification information contains the tester's user name, each test case is thereby associated with the tester who uploaded it.
In another embodiment, the tester may also enter data to be evaluated for the current iterative version through the input interface. The functional modules may further include a use case generation module (not shown). The use case generation module is configured to automatically generate the test cases according to the data to be evaluated; that is, in this embodiment the tester does not need to manually write and import the test cases. The use case generation module then adds the verification information of the tester who entered the data to be evaluated to the names of the test cases, so that each test case is associated with the tester who entered the data to be evaluated. The data to be evaluated is software-related data that arises during development of the software to be developed and tested and needs to be evaluated; it may include vulnerabilities fixed while the version is iterated, newly developed or fixed function points, and so on. The test cases can be generated with the PICT (Pairwise Independent Combinatorial Testing) tool.
The creation module 102 is configured to create a task list, the task list including the test cases.
The running module 103 is configured to run the test cases in the task list to test the software to be developed and tested when a test instruction is received.
In this embodiment, when running a test case in the task list, the running module 103 is also configured to determine whether the test case runs successfully; if so, it runs the next test case; otherwise, it reruns the test case until the test case runs successfully or the number of runs exceeds a preset threshold.
Specifically, the running module 103 determines whether the number of runs of the test case has reached the preset threshold (for example, 4 times); when the number of runs has not reached the preset threshold, the test case is rerun until it runs successfully. When the number of runs reaches the preset threshold and the test case still has not run successfully, the test case is determined to have failed and the next test case is run.
The report generation module 104 is configured to generate a test report and display the test report when all the test cases have finished running, the test report including the running result of each test case and the tester associated with each test case.
The test cases do not exist independently of each other; their running results may affect one another, and the final test result of all test cases must be judged on the basis of the entire current iterative version, so the running result of a single test case cannot guarantee the test effect of the current iterative version. The judgment and evaluation of the test effect must therefore be based on the running results of all test cases. Accordingly, when all the test cases have finished running, the report generation module 104 obtains the running result of each test case and consolidates the running results of all the test cases to generate the test report for testers or R&D personnel to reference. Furthermore, because the test report includes the tester associated with each test case, it is easy to track the person responsible for executing each test case throughout the test process. Finally, because multiple iterations can take place within the iteration cycle of the current iterative version, each iteration can adjust the testers assigned to different sub-versions and perform cross-testing.
Preferably, when a test case fails to run (that is, a vulnerability appears), the running result of the test case includes the basic information of the test case, a description of the vulnerability, and the corresponding function of the iterative version tested with the test case. For example, when a vulnerability occurs while test case A is executed, the running result of test case A includes the name of test case A, the input parameters used to execute test case A, the description of the vulnerability, function a of the iterative version, and so on.
The screening module 106 is configured to screen out the test cases that failed to run from the test report, mark them as abnormal test cases, and then highlight the abnormal test cases in the test report and the testers associated with the abnormal test cases.
In this embodiment, the highlighting means that the abnormal test cases and the testers associated with the abnormal test cases are shown in the test report in a different display style; for example, they are shown in different colors, so that the corresponding person responsible for execution can be traced quickly.
As described above, the software development and testing device in this embodiment of the application assigns different test cases to the names of their associated testers; the test report generated afterwards includes the running result of each test case and the tester associated with each test case, which makes it easy to track, throughout the test process, the test status and the person responsible for executing each test case, and which facilitates cross-testing by multiple testers within the same iteration cycle.
Embodiment 3
FIG. 3 is a schematic diagram of a preferred embodiment of the computer device of the present application.
The computer device 1 includes a memory 20, a processor 30, and computer-readable instructions 40 stored in the memory 20 and executable on the processor 30, for example a software development testing program. When the processor 30 executes the computer-readable instructions 40, the steps in the above embodiment of the software development and testing method for exception tracking are implemented:
Step S11: obtain multiple test cases corresponding to the current iterative version of the software to be developed and tested, where each test case is associated with one of the testers.
The current iterative version refers to a software version produced during development of the software to be developed and tested as it is continuously updated toward the final software product goal; it may be the first developed version, a new version in which vulnerabilities of the software are fixed, or a new version obtained by developing new function points of the software.
In this embodiment, the computer device provides an input interface through which multiple testers upload test cases written for the current iterative version to a test library of the computer device, so that the computer device can obtain the test cases from the test library. The test cases are used to test whether the iterative version meets specific requirements. Preferably, assuming the current iterative version is an externally released version for which multiple sub-versions are further developed, the testers can write test cases for the different sub-versions under the current iterative version; in that case, the computer device obtains the test cases corresponding to each sub-version.
More specifically, before the input interface is provided, the method further includes the following steps:
(a) provide a login interface for the tester to enter verification information (such as a user name and password);
(b) obtain the verification information entered by the tester, and compare the verification information with preset registration information;
(c) when the verification information is determined to be consistent with the registration information of one of the testers, allow the tester to log in to the computer device and provide the input interface for the tester to upload test cases;
(d) associate the uploaded test cases with the verification information.
The basic information of each test case includes a name, an identifier, input parameters, output parameters, functions, a use case identifier, test steps, expected results, a test date, an update date, and so on. The computer device adds the verification information to the name of the test case; since the verification information contains the tester's user name, each test case is thereby associated with the tester who uploaded it.
In another embodiment, the tester may also enter data to be evaluated for the current iterative version through the input interface. The computer device automatically generates the test cases according to the data to be evaluated; that is, in this embodiment the tester does not need to manually write and import the test cases. The computer device then adds the verification information of the tester who entered the data to be evaluated to the names of the test cases, so that each test case is associated with the tester who entered the data to be evaluated. The data to be evaluated is software-related data that arises during development of the software to be developed and tested and needs to be evaluated; it may include vulnerabilities fixed while the version is iterated, newly developed or fixed function points, and so on. The test cases can be generated with the PICT (Pairwise Independent Combinatorial Testing) tool.
Step S12: create a task list, the task list including the test cases.
Step S13: when a test instruction is received, run the test cases in the task list to test the software to be developed and tested.
In this embodiment, running the test cases in the task list to test the software to be developed and tested includes:
(a) determine whether the test case runs successfully; if so, proceed to step (b); otherwise, proceed to step (c);
(b) run the next test case;
(c) rerun the test case until the test case runs successfully or the number of runs exceeds a preset threshold.
Specifically, the computer device determines whether the number of runs of the test case has reached the preset threshold (for example, 4 times); when the number of runs has not reached the preset threshold, the computer device reruns the test case until it runs successfully. When the number of runs reaches the preset threshold and the test case still has not run successfully, the test case is determined to have failed and the next test case is run.
Step S14: when all the test cases have finished running, generate a test report and display the test report, the test report including the running result of each test case and the tester associated with each test case.
The test cases do not exist independently of each other; their running results may affect one another, and the final test result of all test cases must be judged on the basis of the entire current iterative version, so the running result of a single test case cannot guarantee the test effect of the current iterative version. The judgment and evaluation of the test effect must therefore be based on the running results of all test cases. Accordingly, when all the test cases have finished running, the computer device obtains the running result of each test case and consolidates the running results of all the test cases to generate the test report for testers or R&D personnel to reference. Furthermore, because the test report includes the tester associated with each test case, it is easy to track the person responsible for executing each test case throughout the test process. Finally, because multiple iterations can take place within the iteration cycle of the current iterative version, each iteration can adjust the testers assigned to different sub-versions and perform cross-testing.
Preferably, when a test case fails to run (that is, a vulnerability appears), the running result of the test case includes the basic information of the test case, a description of the vulnerability, and the corresponding function of the iterative version tested with the test case. For example, when a vulnerability occurs while test case A is executed, the running result of test case A includes the name of test case A, the input parameters used to execute test case A, the description of the vulnerability, function a of the iterative version, and so on.
Step S15: screen out the test cases that failed to run from the test report, mark them as abnormal test cases, and then highlight the abnormal test cases in the test report and the testers associated with the abnormal test cases.
In this embodiment, the highlighting means that the abnormal test cases and the testers associated with the abnormal test cases are shown in the test report in a different display style; for example, they are shown in different colors, so that the corresponding person responsible for execution can be traced quickly.
Alternatively, when the processor 30 executes the computer-readable instructions 40, the functions of the modules/units in the above embodiment of the software development and testing device for exception tracking are implemented, for example modules 101-106 in FIG. 2.
Exemplarily, the computer-readable instructions 40 may be divided into one or more modules/units, which are stored in the memory 20 and executed by the processor 30 to complete the present application. The one or more modules/units may be a series of computer-readable instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer-readable instructions 40 in the computer device 1. For example, the computer-readable instructions 40 can be divided into the acquisition module 101, the creation module 102, the running module 103, the report generation module 104, the verification module 105, and the screening module 106 in FIG. 2; refer to Embodiment 2 for the specific functions of each module.
The computer device 1 may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. Those skilled in the art will understand that the schematic diagram is only an example of the computer device 1 and does not constitute a limitation on the computer device 1; it may include more or fewer components than shown, combine certain components, or have different components. For example, the computer device 1 may also include input and output devices, network access devices, buses, and so on.
The processor 30 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor 30 may be any conventional processor; the processor 30 is the control center of the computer device 1 and uses various interfaces and lines to connect the various parts of the entire computer device 1.
The memory 20 may be used to store the computer-readable instructions 40 and/or the modules/units; the processor 30 implements the various functions of the computer device 1 by running or executing the computer-readable instructions and/or modules/units stored in the memory 20 and by invoking data stored in the memory 20. The memory 20 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the computer device 1 (such as audio data) and the like. In addition, the memory 20 may include non-volatile memory, such as a hard disk, an internal memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
If the integrated modules/units of the electronic device 1 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a non-volatile readable storage medium. Based on this understanding, the present application may implement all or part of the processes in the methods of the above embodiments by instructing the relevant hardware through computer-readable instructions, which may be stored in a non-volatile readable storage medium; when the computer-readable instructions are executed by a processor, the steps of the foregoing method embodiments can be implemented. The computer-readable instruction code may be in the form of source code, object code, an executable file, or some intermediate form. The non-volatile readable medium may include any entity or device capable of carrying the computer-readable instruction code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, or a read-only memory (ROM).
In the several embodiments provided in this application, it should be understood that the disclosed computer device and method may be implemented in other ways. For example, the computer device embodiments described above are merely illustrative; for example, the division of the units is only a division by logical function, and there may be other ways of dividing them in actual implementation.
In addition, the functional units in the various embodiments of the present application may be integrated in the same processing unit, or each unit may exist alone physically, or two or more units may be integrated in the same unit. The above integrated unit can be implemented in the form of hardware or in the form of hardware plus software functional modules.
It is obvious to those skilled in the art that the present application is not limited to the details of the above exemplary embodiments, and that it can be implemented in other specific forms without departing from the spirit or essential characteristics of the present application. The embodiments should therefore be regarded in all respects as exemplary and non-limiting, and the scope of the present application is defined by the appended claims rather than by the above description; all changes falling within the meaning and scope of equivalents of the claims are therefore intended to be embraced by the present application. No reference sign in the claims should be construed as limiting the claim concerned. Furthermore, the word "comprising" obviously does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or computer devices recited in the computer device claims may also be implemented by the same unit or computer device through software or hardware. Words such as "first" and "second" are used to denote names and do not denote any particular order.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application and not to limit them. Although the present application has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of the present application can be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of the present application.

Claims (20)

  1. A software development and testing method for exception tracking, applied to a computer device, wherein the software development and testing method comprises:
    obtaining multiple test cases corresponding to the current iterative version of the software to be developed and tested, wherein each test case is associated with one of the testers;
    creating a task list, the task list comprising the test cases;
    when a test instruction is received, running the test cases in the task list to test the software to be developed and tested;
    when all the test cases have finished running, generating a test report and displaying the test report, the test report comprising the running result of each test case and the tester associated with each test case; and
    screening out the test cases that failed to run from the test report, marking them as abnormal test cases, and then highlighting the abnormal test cases in the test report and the testers associated with the abnormal test cases.
  2. The software development and testing method for exception tracking according to claim 1, wherein before the obtaining of the test cases corresponding to the current iterative version of the software to be developed and tested, the method further comprises:
    providing an input interface for the testers to upload test cases written for the current iterative version to a test library of the computer device, so that the computer device obtains the test cases from the test library.
  3. The software development and testing method for exception tracking according to claim 2, wherein before the providing of an input interface, the method further comprises:
    providing a login interface for the tester to enter verification information;
    obtaining the verification information, and comparing the verification information with preset registration information;
    when the verification information is determined to be consistent with the registration information of one of the testers, allowing the tester to log in to the computer device and providing the input interface;
    associating the uploaded test cases with the verification information, so that each test case is associated with the tester.
  4. The software development and testing method for exception tracking according to claim 1, wherein the highlighting is that the abnormal test cases and the testers associated with the abnormal test cases are shown in the test report in a different display style.
  5. The software development and testing method for exception tracking according to claim 1, wherein before the obtaining of the test cases corresponding to the current iterative version of the software to be developed and tested, the method further comprises:
    providing an input interface for the tester to enter, through the input interface, data to be evaluated for the current iterative version;
    automatically generating the test cases according to the data to be evaluated;
    associating the verification information of the tester who entered the data to be evaluated with the test cases, so that each test case is associated with the tester who entered the data to be evaluated.
  6. The software development and testing method for exception tracking according to claim 1, wherein the running of the test cases in the task list to test the software to be developed and tested comprises:
    determining whether the test case runs successfully;
    when the test case runs successfully, running the next test case;
    when the test case fails to run, rerunning the test case until the test case runs successfully or the number of runs exceeds a preset threshold.
  7. A software development and testing device for exception tracking, wherein the device comprises:
    an acquisition module, configured to obtain multiple test cases corresponding to the current iterative version of the software to be developed and tested, wherein each test case is associated with one of the testers;
    a creation module, configured to create a task list, the task list comprising the test cases;
    a running module, configured to run the test cases in the task list to test the software to be developed and tested when a test instruction is received;
    a report generation module, configured to generate a test report and display the test report when all the test cases have finished running, the test report comprising the running result of each test case and the tester associated with each test case; and
    a screening module, configured to screen out the test cases that failed to run from the test report, mark them as abnormal test cases, and then highlight the abnormal test cases in the test report and the testers associated with the abnormal test cases.
  8. The software development and testing device for exception tracking according to claim 7, wherein the highlighting is that the abnormal test cases and the testers associated with the abnormal test cases are shown in the test report in a different display style.
  9. A computer device, wherein the computer device comprises a processor, the processor being configured to execute computer-readable instructions stored in a memory to implement the following steps:
    obtaining multiple test cases corresponding to the current iterative version of the software to be developed and tested, wherein each test case is associated with one of the testers;
    creating a task list, the task list comprising the test cases;
    when a test instruction is received, running the test cases in the task list to test the software to be developed and tested;
    when all the test cases have finished running, generating a test report and displaying the test report, the test report comprising the running result of each test case and the tester associated with each test case; and
    screening out the test cases that failed to run from the test report, marking them as abnormal test cases, and then highlighting the abnormal test cases in the test report and the testers associated with the abnormal test cases.
  10. The computer device according to claim 9, wherein before the obtaining of the test cases corresponding to the current iterative version of the software to be developed and tested, the processor further executes the computer-readable instructions to implement the following step:
    providing an input interface for the testers to upload test cases written for the current iterative version to a test library of the computer device, so that the computer device obtains the test cases from the test library.
  11. The computer device according to claim 10, wherein before the providing of an input interface, the processor further executes the computer-readable instructions to implement the following steps:
    providing a login interface for the tester to enter verification information;
    obtaining the verification information, and comparing the verification information with preset registration information;
    when the verification information is determined to be consistent with the registration information of one of the testers, allowing the tester to log in to the computer device and providing the input interface;
    associating the uploaded test cases with the verification information, so that each test case is associated with the tester.
  12. The computer device according to claim 9, wherein the highlighting is that the abnormal test cases and the testers associated with the abnormal test cases are shown in the test report in a different display style.
  13. The computer device according to claim 9, wherein before the obtaining of the test cases corresponding to the current iterative version of the software to be developed and tested, the processor further executes the computer-readable instructions to implement the following steps:
    providing an input interface for the tester to enter, through the input interface, data to be evaluated for the current iterative version;
    automatically generating the test cases according to the data to be evaluated;
    associating the verification information of the tester who entered the data to be evaluated with the test cases, so that each test case is associated with the tester who entered the data to be evaluated.
  14. The computer device according to claim 9, wherein when the processor executes the computer-readable instructions to run the test cases in the task list to test the software to be developed and tested, the following steps are included:
    determining whether the test case runs successfully;
    when the test case runs successfully, running the next test case;
    when the test case fails to run, rerunning the test case until the test case runs successfully or the number of runs exceeds a preset threshold.
  15. A non-volatile readable storage medium having computer-readable instructions stored thereon, wherein when the computer-readable instructions are executed by a processor, the following steps are implemented:
    obtaining multiple test cases corresponding to the current iterative version of the software to be developed and tested, wherein each test case is associated with one of the testers;
    creating a task list, the task list comprising the test cases;
    when a test instruction is received, running the test cases in the task list to test the software to be developed and tested;
    when all the test cases have finished running, generating a test report and displaying the test report, the test report comprising the running result of each test case and the tester associated with each test case; and
    screening out the test cases that failed to run from the test report, marking them as abnormal test cases, and then highlighting the abnormal test cases in the test report and the testers associated with the abnormal test cases.
  16. The storage medium according to claim 15, wherein before the obtaining of the test cases corresponding to the current iterative version of the software to be developed and tested, the computer-readable instructions are further executed by the processor to implement the following step:
    providing an input interface for the testers to upload test cases written for the current iterative version to a test library of the computer device, so that the computer device obtains the test cases from the test library.
  17. The storage medium according to claim 16, wherein before the providing of an input interface, the computer-readable instructions are further executed by the processor to implement the following steps:
    providing a login interface for the tester to enter verification information;
    obtaining the verification information, and comparing the verification information with preset registration information;
    when the verification information is determined to be consistent with the registration information of one of the testers, allowing the tester to log in to the computer device and providing the input interface;
    associating the uploaded test cases with the verification information, so that each test case is associated with the tester.
  18. The storage medium according to claim 15, wherein the highlighting is that the abnormal test cases and the testers associated with the abnormal test cases are shown in the test report in a different display style.
  19. The storage medium according to claim 15, wherein before the obtaining of the test cases corresponding to the current iterative version of the software to be developed and tested, the computer-readable instructions are further executed by the processor to implement the following steps:
    providing an input interface for the tester to enter, through the input interface, data to be evaluated for the current iterative version;
    automatically generating the test cases according to the data to be evaluated;
    associating the verification information of the tester who entered the data to be evaluated with the test cases, so that each test case is associated with the tester who entered the data to be evaluated.
  20. The storage medium according to claim 15, wherein when the computer-readable instructions are executed by the processor to run the test cases in the task list to test the software to be developed and tested, the following steps are included:
    determining whether the test case runs successfully;
    when the test case runs successfully, running the next test case;
    when the test case fails to run, rerunning the test case until the test case runs successfully or the number of runs exceeds a preset threshold.
PCT/CN2019/118976 2019-04-09 2019-11-15 Software development and testing method for exception tracking and related apparatus WO2020207016A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910279854.0 2019-04-09
CN201910279854.0A CN110147312A (zh) 2019-04-09 Software development testing method, apparatus, computer device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020207016A1 true WO2020207016A1 (zh) 2020-10-15

Family

ID=67588280

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/118976 WO2020207016A1 (zh) 2019-04-09 2019-11-15 Software development and testing method for exception tracking and related apparatus

Country Status (2)

Country Link
CN (1) CN110147312A (zh)
WO (1) WO2020207016A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110147312A (zh) * 2019-04-09 2019-08-20 平安科技(深圳)有限公司 Software development testing method, apparatus, computer device, and storage medium
CN111209206B (zh) * 2020-01-13 2024-05-24 卡斯柯信号(北京)有限公司 Automatic testing method and system for a software product
CN111352839B (zh) * 2020-02-28 2023-09-12 中国工商银行股份有限公司 Problem troubleshooting method and device for a software system
CN112286792A (zh) * 2020-09-27 2021-01-29 长沙市到家悠享网络科技有限公司 Interface testing method, apparatus, device, and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7827339B2 (en) * 2005-01-25 2010-11-02 American Megatrends, Inc. System management interrupt interface wrapper
CN106055466A (zh) * 2015-04-13 2016-10-26 中兴通讯股份有限公司 Method for implementing server testing, test server, and server under test
CN107797919A (zh) * 2017-07-24 2018-03-13 平安普惠企业管理有限公司 Automated testing method and computing device
CN110147312A (zh) * 2019-04-09 2019-08-20 平安科技(深圳)有限公司 Software development testing method, apparatus, computer device, and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9268672B1 (en) * 2014-05-27 2016-02-23 Amazon Technologies, Inc. Automated test case generation for applications
CN108984418B (zh) * 2018-08-22 2023-04-11 中国平安人寿保险股份有限公司 Software testing management method, apparatus, electronic device, and storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7827339B2 (en) * 2005-01-25 2010-11-02 American Megatrends, Inc. System management interrupt interface wrapper
CN106055466A (zh) * 2015-04-13 2016-10-26 中兴通讯股份有限公司 Method for implementing server testing, test server, and server under test
CN107797919A (zh) * 2017-07-24 2018-03-13 平安普惠企业管理有限公司 Automated testing method and computing device
CN110147312A (zh) * 2019-04-09 2019-08-20 平安科技(深圳)有限公司 Software development testing method, apparatus, computer device, and storage medium

Also Published As

Publication number Publication date
CN110147312A (zh) 2019-08-20

Similar Documents

Publication Publication Date Title
WO2020207016A1 (zh) Software development and testing method for exception tracking and related apparatus
WO2020151344A1 (zh) Software development and testing method for iterative sub-versions and related apparatus
US10552301B2 (en) Completing functional testing
WO2018010552A1 (zh) Testing method and apparatus
JP6635963B2 (ja) Method and system for automated user interface (UI) testing using a model-driven approach
US7926038B2 (en) Method, system and computer program for testing a command line interface of a software product
US20200133658A1 (en) Change governance using blockchain
US20170357809A1 (en) Systems and methods for flaw attribution and correlation
US10049031B2 (en) Correlation of violating change sets in regression testing of computer software
GB2506122A (en) Integrating data transform test with data transform tool
US9513889B2 (en) System and method of automating installation of applications
US9256509B1 (en) Computing environment analyzer
JP7155626B2 (ja) Field device commissioning system and field device commissioning method
US9842044B2 (en) Commit sensitive tests
US10789563B2 (en) Building confidence of system administrator in productivity tools and incremental expansion of adoption
US8589734B2 (en) Verifying correctness of processor transactions
US11086768B1 (en) Identifying false positives in test case failures using combinatorics
US10481969B2 (en) Configurable system wide tests
US20210271592A1 (en) Executing tests in deterministic order
US11522898B1 (en) Autonomous configuration modeling and management
US10785108B1 (en) Intelligent learning and management of a networked architecture
US10204319B2 (en) Enterprise system and method for facilitating common platform for multiple-users working parallelly in enterprise environment
JP2014071775A (ja) System development support apparatus and system development support method
US10261925B2 (en) Enhanced techniques for detecting programming errors in device drivers
US20090276444A1 (en) Adaptive Methodology for Updating Solution Building Block Architectures and Associated Tooling

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19924458

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19924458

Country of ref document: EP

Kind code of ref document: A1