CN114328250A - Automatic self-checking method, medium and device for software system - Google Patents

Info

Publication number
CN114328250A
Authority
CN
China
Prior art keywords: test, operations, software system, self, system under
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111657088.0A
Other languages
Chinese (zh)
Inventor
周高峰
黄循榜
吴晓燕
谢宇刚
王可盛
胡行伟
陈美花
Current Assignee
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202111657088.0A priority Critical patent/CN114328250A/en
Publication of CN114328250A publication Critical patent/CN114328250A/en
Pending legal-status Critical Current

Landscapes

  • Debugging And Monitoring (AREA)

Abstract

The disclosure relates to an automatic self-checking method for a software system under test. The method includes receiving, by a robotic self-test system, a selection of a test scenario for a software system under test. The method also includes determining, by the robotic self-test system, test data corresponding to the selected test scenario. The test data includes a description of one or more test cases. Each test case includes one or more operations on the software system under test under the selected test scenario. A common operation of the one or more operations is encapsulated as a common keyword that can be shared by multiple test cases. At least one test case includes a reference to a common keyword. The method also includes parsing the determined test data to generate the one or more test cases. The method also includes executing the one or more test cases. The method also includes generating a test report based on the response of the software system under test to the one or more operations.

Description

Automatic self-checking method, medium and device for software system
Technical Field
The present disclosure relates generally to automatic self-testing of software systems, and more particularly, to a method, medium, and apparatus for automatic self-testing of software systems under test using a robotic self-testing system.
Background
At present, the software systems of large, medium, and small enterprises are becoming increasingly complex, intelligent, and diversified. For such complex software systems, the capabilities for assuring high availability and high performance are often inadequate. A software system needs to be tested continuously, both when it goes on-line and during stable use. However, conventional manual testing cannot check the stability of a software system quickly, comprehensively, and repeatedly, and usually requires a great deal of manpower. If a dedicated test flow is developed using a test tool with poor compatibility, development efficiency is low, and the tool is too narrowly targeted to extend its tests to other software or programs. Once the original software is on-line, the test tool developed for it is no longer used, which causes a great deal of waste.
Therefore, an automatic and efficient software self-checking method is needed, which can meet the self-checking requirements of various software systems and reduce the personnel investment.
Disclosure of Invention
The following presents a simplified summary of the disclosure in order to provide a basic understanding of some aspects of the disclosure. However, it should be understood that this summary is not an exhaustive overview of the disclosure. It is not intended to identify key or critical elements of the disclosure or to delineate the scope of the disclosure. Its sole purpose is to present some concepts of the disclosure in a simplified form as a prelude to the more detailed description that is presented later.
The aim of the present invention is to provide a method for automatic self-checking of a software system, which solves at least one or some of the following problems: manual testing cannot check a system quickly, comprehensively, and repeatedly, and dedicated testing tools have poor compatibility and low reusability.
According to one aspect of the present disclosure, an automatic self-test method for a software system under test is provided. The method includes receiving, by a robotic self-test system, a selection of a test scenario for a software system under test. The method also includes determining, by the robotic self-test system, test data corresponding to the selected test scenario. The test data includes a description of one or more test cases. Each test case includes one or more operations on the software system under test under the selected test scenario. A common operation of the one or more operations is encapsulated as a common keyword that can be shared by multiple test cases. At least one test case includes a reference to a common keyword. The method also includes parsing the determined test data to generate the one or more test cases. The method also includes executing the one or more test cases. The method also includes generating a test report based on the response of the software system under test to the one or more operations.
According to another aspect of the present disclosure, an automatic self-test device for a software system under test is provided. The apparatus includes a memory having instructions stored thereon and a processor configured to execute the instructions stored on the memory to perform the method as described above.
According to yet another aspect of the present disclosure, there is provided a computer-readable storage medium comprising computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform a method according to the above-described aspect of the present disclosure.
One of the advantages of the embodiment according to the present disclosure is that since the robot self-inspection system can automatically perform the test, manual testing is replaced, manual intervention is reduced, and labor cost is saved.
Another advantage according to an embodiment of the present disclosure is that a test case represents general-purpose operations by referencing a common keyword, which is highly reusable and extensible, improves development efficiency, and can be conveniently re-customized when a change occurs.
Yet another advantage according to embodiments of the present disclosure is that the robot self-inspection system may be arranged to perform a self-inspection test during idle time of the software system server, saving server resources.
Yet another advantage of embodiments according to the present disclosure is that software systems can be operated and tested in parallel through a variety of interfaces, reducing the time required for testing.
It should be appreciated that the above advantages need not all be achieved in one or some particular embodiments, but may be partially dispersed among different embodiments according to the present disclosure. Embodiments in accordance with the present disclosure may have one or more of the above advantages, as well as other advantages alternatively or additionally.
Other features of the present invention and advantages thereof will become more apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The present disclosure may be more clearly understood from the following detailed description with reference to the accompanying drawings, in which:
FIG. 1 shows a system diagram for automated self-testing of a software system under test, in accordance with an embodiment of the present disclosure;
FIG. 2 illustrates a flow diagram of a method for automatic self-testing of a software system under test, in accordance with an embodiment of the present disclosure; and
FIG. 3 illustrates an exemplary configuration of a computing device in which embodiments in accordance with the present disclosure may be implemented.
Detailed Description
The following detailed description is made with reference to the accompanying drawings and is provided to assist in a comprehensive understanding of various exemplary embodiments of the disclosure. The following description includes various details to aid understanding, but these details are to be regarded as examples only and are not intended to limit the disclosure, which is defined by the appended claims and their equivalents. The words and phrases used in the following description are used only to provide a clear and consistent understanding of the disclosure. In addition, descriptions of well-known structures, functions, and configurations may be omitted for clarity and conciseness. Those of ordinary skill in the art will recognize that various changes and modifications of the examples described herein can be made without departing from the spirit and scope of the disclosure.
As mentioned above, the manual testing method requires a lot of manpower input and cannot meet the testing requirements of the current complicated and numerous software systems, and the special testing tool customized for a certain software system is inefficient in development and has poor reusability because it cannot be migrated to other software systems.
Embodiments of the present disclosure provide methods, systems, devices, and media for automatic self-testing of software systems under test. The method may be performed primarily by a robotic self-test system. The robotic self-test system completes an automated test process by mapping the selected test scenario to test data, parsing the test data to generate test cases, executing the test cases, and generating a test report. The whole process reduces manual participation. Furthermore, some general operations on the system under test are packaged into common keywords that can be shared among different test cases; by referencing the common keywords in the test cases, test code for repeated operations needs to be written only once, which saves unnecessary repeated human effort and improves the development efficiency of the test system.
Fig. 1 shows a system diagram for automatic self-testing of a software system under test according to an embodiment of the present disclosure. As shown in fig. 1, the automatic self-inspection of the software system under test 120 is mainly implemented by the robotic self-inspection system 100. The robotic self-testing system 100 may include a test scenario-data mapping module 102, an auto-resolution module 104, an execution module 108, and a reporting module 110.
The software system under test 120 may be any collection of one or more programs. For example, the software system under test 120 may be a combination of one or more business systems in a telecommunications operator's office automation (OA) system, customer relationship management (CRM) system, resource system, billing system, portal system, and the like. For example, the software system 120 may operate in response to access requests from various terminals (web, mobile terminals, personal computer (PC) terminals, etc.), in response to requests from remote clients to connect to and operate on remote servers, in response to requests to access databases, or in response to user interface input/output (I/O), including graphical user interface (GUI) operations, etc. The software system 120 may run on a variety of different operating systems (iOS, Android, Windows, Linux) or may be accessible through different operating system interfaces. Accordingly, testing the software system 120 means providing various inputs (i.e., performing various operations) to the software system 120 to traverse its various states, stages, or functions to verify whether it is working properly or to check how well it is working.
The test scenario-data mapping module 102 of the robotic self-test system 100 receives a selection of test scenarios for the software system under test 120. The test scenarios may be divided according to the business systems of the software system under test 120, with each test scenario testing one or more pieces of business logic of the corresponding business system. For example, test scenarios may be divided into "OA system", "CRM system", "resource system", "portal system", etc., where the "OA system" test scenario needs to test the business logic of "early data preparation", "OA system login", and "system logout". A test scenario may also be formed by combining functions from multiple business systems. For example, in the "automated tour inspection" test scenario, a round of traversal tests can be performed on the login/access operations of each business system among the "OA system", "CRM system", "resource system", and "portal system". Through the design of customized test scenarios, software systems can be conveniently managed in a unified way. For example, a dedicated unified portal may be constructed according to the portal management requirements of different industries, and all portals may be registered in a single test scenario.
In some embodiments, the selection of the test scenario may be fixedly set in the robot self-inspection system by a developer in advance. Alternatively or additionally, the selection of the test scenario may be received by the robotic self-test system 100 from an external visualization interface 101. The visualization interface 101 is, for example, a Graphical User Interface (GUI), wherein a tester can customize a test scenario by manipulating controls in the GUI.
Based on receiving the selected test scenario, test scenario-to-data mapping module 102 may determine test data corresponding to the selected test scenario. The robot self-inspection system 100 may be preset with a plurality of test data files, each test data file corresponding to at least one test scenario. The mapping relationship between the test scenario and the test data (or the file of the test data) may be preset in the mapping module 102. When a test scenario is selected, the mapping module 102 may read a corresponding file of test data based on the mapping relationship to determine the test data.
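The mapping step described above can be sketched as a simple lookup from scenario name to test-data file. The scenario names and file paths below are illustrative placeholders, not values from the patent:

```python
# Hypothetical scenario-to-test-data mapping, preset in the mapping module.
SCENARIO_DATA_FILES = {
    "OA system": "testdata/oa_system.tsv",
    "CRM system": "testdata/crm_system.tsv",
    "automated tour inspection": "testdata/tour_inspection.tsv",
}

def resolve_test_data_file(scenario):
    """Return the test-data file registered for the selected test scenario."""
    if scenario not in SCENARIO_DATA_FILES:
        raise KeyError(f"no test data registered for scenario {scenario!r}")
    return SCENARIO_DATA_FILES[scenario]
```

When a scenario is selected, the mapping module would read the resolved file to obtain the test data.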
After determining the test data, the auto-parse module 104 parses the determined test data to generate one or more test cases (e.g., test case 1 and test case 2 in fig. 1, collectively referred to as test case 105).
The test data includes a description of one or more test cases, and each test case includes one or more operations on the software system under test 120 under the selected test scenario. For example, in FIG. 1, the test data determined by mapping module 102 includes descriptions of test case 1 and test case 2. In some embodiments, the test data is defined in a tabular format. For example, the test data may be tabular data in any one of the hypertext markup language (HTML), tab-separated values (TSV), plain text (TXT), or reStructuredText (reST) formats. In this case, the test data includes a table of test cases. The description of each test case includes the name of the test case, one or more actions, and the parameters corresponding to the actions. The combination of an action and its parameters constitutes an operation. In light of the above description of the software system under test 120, the one or more operations on the software system under test 120 in a test case may accordingly include, but are not limited to: driving a specific browser and accessing a specific web address pointing to a server of the software system under test 120, driving a specific application on the operating system of a mobile terminal to start the software system under test 120, connecting to and performing a specific operation on a specific remote server associated with the software system under test 120, sending and parsing a request according to the HTTP protocol, or issuing a specific request from an I/O interface of the software system under test 120.
In some embodiments, the test data may further include one or more of a settings table, a variables table, and a keywords table. By parsing the test data, the setting information, referenced variables, and keyword information of the corresponding test case can be obtained for use when the execution module 108 executes the test case.
In some embodiments, the test cases may be one or more of the following types: (1) interface-based testing; (2) a protocol-based test; and (3) code-based testing. The interface-based test is mainly responsible for automatic simulation operation of the user interface, including interface operation of various client/server (C/S) and browser/server (B/S) architectures. The protocol-based test is mainly responsible for interface scheduling, mobile application simulation self-checking and database operation, and realizes interaction among different servers. The code-based test is mainly responsible for performance self-test and reliability self-test, such as self-test of server performance for frequent and high-speed access operations of a client. As will be described later, the test cases can save test development time by sharing common keywords, so that testers can develop more test cases at the same time cost, and a more comprehensive test scene is covered. Moreover, the common keywords can be developed based on library keywords of the test library, and the test library can comprise a third-party extension library aiming at various functions, so that the functions covered by the test cases are enriched.
Common operations among the one or more operations on the software system under test 120 may be packaged as a common keyword to be shared by multiple test cases. In some embodiments, a general operation may be an operation whose frequency of use reaches a certain threshold. For example, if certain GUI controls are found to be operated frequently (above a certain threshold), the operations on these GUI controls may be packaged as a common keyword. In some embodiments, the common operations may be operations having the same or similar operating logic. For example, if the operating logic "open a Firefox browser" - "open a certain website" - "enter account and password to log in" appears in multiple test cases, a series of operations with such operating logic may be packaged into a common keyword. Generally, an operation is composed of a combination of actions, which embody the logic, and parameters, which represent the objects operated on. When an operation is extracted as a common keyword, only the logic of the operation may be extracted, without fixing the parameters (i.e., the operation objects); the operation objects may then be supplied as variables used together with the common keyword.
For a packaged common keyword, the corresponding general operation can be expressed by referencing the keyword in a test case. Common keywords have good reusability, so that testers can build a test system more conveniently. For example, in FIG. 1, test case 1 references a common keyword 106. This avoids the human overhead of repeatedly writing test code for common operations. A test case may consist of a set of one or more common keywords. Such test cases may be considered keyword-driven. Because the underlying implementation logic of the general operation represented by the keyword can be hidden in the test case, and only the name of the keyword needs to be referenced (and, if necessary, an operation object specified), the test case has better readability and maintainability and is object-oriented. When the common keyword encapsulates only the logic of the operation and not the operands, such test cases may further be considered data-driven. Once the business logic or the operation objects change, no overall change is needed; only the underlying logic implementation of the keyword or the operation-object data needs to be modified.
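The keyword mechanism described above can be illustrated with a hedged sketch: the function below stands in for a common keyword that fixes only the login logic, while the operands (browser, URL, credentials) remain variables supplied by each test case that references it. All names are hypothetical:

```python
def login_keyword(browser, url, user, password):
    """Hypothetical common keyword: encapsulates only the operation logic;
    the operation objects are passed in as variables."""
    return [
        ("open_browser", browser),
        ("go_to", url),
        ("input_text", "username", user),
        ("input_text", "password", password),
        ("click", "login_button"),
    ]

# Two test cases reference the same keyword with different operands.
oa_case = login_keyword("firefox", "http://oa.example.com", "alice", "pw1")
crm_case = login_keyword("chrome", "http://crm.example.com", "bob", "pw2")
```

Changing the login flow means editing `login_keyword` once; every referencing test case picks up the change, which is the data-driven property the text describes.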
Execution module 108 executes the generated one or more test cases (e.g., test case 105). For common keywords in test cases (e.g., common keyword 106), execution module 108 may decapsulate the common keywords and execute the general operations represented by the common keywords. By executing the test case, the execution module 108 performs one or more operations in the selected test scenario on the software system under test 120, and receives a corresponding response from the software system under test 120. The execution module 108 passes the response to the reporting module 110.
In some embodiments, execution module 108 includes or calls test library 109. The test library 109 provides an interface for interacting with the software system under test. The test library 109 may include a built-in library of the robotic self-test system 100, an external extension library imported from a third party, or another test library designed by a developer. External extension libraries from third parties may be written in the same language as the robotic self-test system 100 or in a different one. For example, the robotic self-test system 100 may be written in Java, Python, or C, while the external extension library may be written in the same or a different language. The test library 109 can be referenced by different servers, making it compatible with various execution environments, which greatly reduces the deployment cost of the self-checking system and simplifies the deployment process.
In some cases, the test library 109 includes library keywords, and the common keywords 106 include user keywords written based on the library keywords. For example, a library keyword may be associated with one or more operations selected from the group consisting of: user interface (UI) automation testing for mobile terminal operating systems, remote server connection and operation testing, HTTP request sending and parsing testing, and input/output (I/O) interface testing. As can be seen, library keywords serve as the underlying interface methods, while user keywords are encapsulated at a higher level. Through this layered keyword design, keyword-based test cases have better compatibility and flexibility. For example, support for various interfaces may be implemented at the library-keyword level, while at the user-keyword level the underlying interface implementation need not be of particular concern; the focus is instead on implementing the business logic. Thus, once the underlying interface changes, only the implementation of the library keyword is modified. Moreover, the layered design makes it easier to locate the fault point when a test error occurs.
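A rough illustration of this layered design, using stand-in functions rather than actual Robot Framework library APIs (all names are invented for the sketch):

```python
def lib_http_get(url):
    """Library keyword (stand-in): underlying interface call.
    A real library would issue the HTTP request here."""
    return {"status": 200, "url": url}

def lib_assert_status(response, expected):
    """Library keyword (stand-in): low-level check on a response."""
    return response["status"] == expected

def user_check_portal_alive(url):
    """User keyword: business logic composed only from library keywords,
    with no knowledge of the underlying interface implementation."""
    response = lib_http_get(url)
    return lib_assert_status(response, 200)
```

If the transport changes (say, a new HTTP client), only `lib_http_get` is rewritten; `user_check_portal_alive` and every test case referencing it are untouched.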
In some embodiments, execution module 108 may operate the software system under test 120 in parallel over multiple interfaces. As previously described, the software system under test 120 may be accessed through different operating system interfaces. Accordingly, execution module 108 may test the software system 120 through the plurality of different operating system interfaces simultaneously, thereby reducing the time overhead of testing. The different operating system interfaces may include web-page-based interfaces, interfaces based on mobile terminal operating systems (e.g., iOS, Android, etc.), and interfaces based on PC desktop operating systems (e.g., Windows, Linux, etc.). In embodiments where the interface to the software system under test 120 is provided by the test library 109, the test library 109 may include a corresponding plurality of external extension libraries from third parties, such as Selenium for web user interfaces, Appium for mobile terminals, Requests for HTTP protocol interfaces, and the like, in order to provide the plurality of different operating system interfaces.
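One plausible sketch of driving several interfaces in parallel, using Python's thread pool; the interface names are placeholders, and the Selenium/Appium/Requests drivers are only alluded to, not invoked:

```python
from concurrent.futures import ThreadPoolExecutor

INTERFACES = ["web", "mobile", "http"]  # e.g. Selenium, Appium, Requests

def run_via_interface(interface):
    """Stand-in for exercising the system under test through one interface."""
    return f"{interface}: ok"

def run_parallel(interfaces):
    """Run all interface checks concurrently; results keep input order."""
    with ThreadPoolExecutor(max_workers=len(interfaces)) as pool:
        return list(pool.map(run_via_interface, interfaces))
```

With real drivers, the wall-clock time approaches that of the slowest interface instead of the sum of all three.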
Reporting module 110 analyzes the responses from the software system under test 120 received via execution module 108 and generates one or more test reports based on the responses. In one or more embodiments, the goal of the test is to determine whether the software system under test 120 has high availability. In this case, the response of the software system under test 120 to a specific test case may indicate whether the business process is working normally (i.e., not down or unavailable), for example, whether a login succeeds and a welcome interface is displayed normally, or whether a customer's package order can be placed normally. In response to an indication that the business process is working properly, the reporting module 110 indicates in the generated test report that the test of the corresponding specific test case succeeded. In other alternative or additional embodiments, the goal of testing may also be to determine whether the software system under test 120 has high performance. In this case, the reporting module 110 may determine a response time of the software system under test 120 based on its response to one or more operations of a specific test case, and indicate in the generated test report that the test of the corresponding specific test case succeeded if the response time is less than a threshold time.
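The high-performance judgment described here reduces to a threshold comparison. A hedged sketch, with an illustrative 2-second threshold that the patent does not specify:

```python
def judge_performance(case_name, response_time_ms, threshold_ms=2000.0):
    """Mark a test case successful when its measured response time is
    below the threshold (threshold value is an assumed example)."""
    return {
        "case": case_name,
        "response_ms": response_time_ms,
        "passed": response_time_ms < threshold_ms,
    }
```

The reporting module would emit one such record per test case into the generated report.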
In some embodiments, to facilitate rapid location of the failure point that caused the test to fail, auxiliary information may be provided in the test report. The test report may include the name of the common keyword referred to in the test case, the execution status, and error information at the time of execution failure, thereby helping the tester locate the location where the failure occurred.
In some embodiments, the test report may be generated in a number of different file formats, such as XML or HTML formats. In addition to the output report on each keyword information, the test report may include a detailed log file and a summary report file. By providing test reports in different forms, the tester can comprehensively know all aspects of the self-checking operation process conveniently.
In some embodiments, the automated self-test method for the software system under test 120 also involves the build tool 130. The build tool 130 may build and manage the robotic self-test system 100. In particular embodiments, build tool 130 may provide configuration information to the robotic self-test system 100, including configuring when and/or how often the robotic self-test system performs the self-test method. Thus, the robotic self-test system 100 may be configured to perform the self-test method at a particular time or at a particular frequency. By way of example, the server idle time of the software system under test 120 may be monitored by the robotic self-test system 100 or the build tool 130. The build tool 130 may configure the robotic self-test system 100 to execute the self-test method during the idle time of the server, thereby avoiding interference with the daily work of the software system under test 120 and improving the utilization of system resources. For example, if the software system under test 120 is frequently used/accessed during the daytime and has less traffic in the early morning or at night, the robotic self-test system 100 may be configured by the build tool 130 to perform the self-test procedure in the early morning or at night. As another example, the build tool 130 may configure the robotic self-test system 100 to perform the self-test method at a relatively high frequency during the early period after the software system under test 120 goes on-line, and at a relatively low frequency during steady operation of the software system under test 120. The build tool 130 can record the operation status of the software system under test 120 to determine whether it is in the initial on-line period or the steady operation period.
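The scheduling policy described above might be sketched as follows; the concrete intervals and the idle window are assumed example values, not figures given by the patent:

```python
def self_test_interval_hours(days_online, early_period_days=30):
    """Higher self-test frequency early after go-live, lower once the
    system is in steady operation (all values are illustrative)."""
    return 4 if days_online < early_period_days else 24

def in_idle_window(hour, idle_start=1, idle_end=5):
    """True when the hour falls in the server's assumed low-traffic
    early-morning window."""
    return idle_start <= hour < idle_end
```

A build tool such as Jenkins could realize the same policy declaratively, e.g. with a cron-style trigger restricted to the idle window.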
In some embodiments, the build tool 130 may also post-process the test report generated by the reporting module 110, including adjusting the report format and further pushing the report. In some examples, the build tool 130 may push the generated test report via a variety of network communication tools to ensure that the test report is successfully received and noticed by the target recipient. Examples of network communication tools include, but are not limited to: WeChat, e-mail, SMS, telephone, etc. The test report may be pushed to the tester by the build tool 130 through these network communication tools, so that the tester only needs to perform further work when a test fails, thereby helping the enterprise save labor costs.
In some embodiments, the robotic self-test system 100 may be developed on the Robot Framework (RF) platform. The robotic self-test system 100 may invoke the Pybot or Jybot interface to execute self-test scripts in a variety of common formats (e.g., .py or .txt), initiating operation via the RF driver. In some embodiments, the build tool 130 may be developed based on the Jenkins continuous integration tool.
According to the self-checking method of the present disclosure, automatic self-checking is performed by a robot in place of manual self-checking, automatic execution of the system's daily high-availability self-checks and fixed scenarios can be supported, and personnel investment is reduced. In addition, common keywords, once written, can be referenced in the self-checking scenarios of different software systems, giving high reusability. Moreover, keyword-driven test cases have strong extensibility and high agility, and can flexibly undergo secondary customization according to subsequent service changes (changes in general logic or data objects).
FIG. 2 shows a flowchart of a method for automatic self-testing of a software system under test according to an embodiment of the present disclosure. The automated self-testing method 200 may be performed by various modules of a robotic self-test system (e.g., the robotic self-test system 100 of FIG. 1). Only the main steps of the automatic self-checking method 200 are briefly described below; specific descriptions of the elements involved in the steps can be found in the description of FIG. 1 and are not repeated here.
In step S202, the robotic self-test system receives a selection of a test scenario for the software system under test. The selection of the test scenario may be implemented in a manner of being fixedly set in the robot self-inspection system in advance, or may be received by the robot self-inspection system from an external interface.
In step S204, the robotic self-test system determines test data corresponding to the test scenario selected in step S202. The robotic self-test system may determine the corresponding test data based on a preset mapping relationship between test scenarios and test data. The test data includes a description of one or more test cases. Each test case includes one or more operations on the software system under test under the selected test scenario. A common operation of the one or more operations is encapsulated as a common keyword that can be shared by multiple test cases. At least one of the test cases includes a reference to a common keyword. The description of a test case may include the name of the test case, one or more actions, and the parameters corresponding to the actions. The combination of an action and its corresponding parameters constitutes an operation.
In step S206, the robotic self-test system parses the test data determined in step S204 to generate the one or more test cases. The robotic self-test system may generate the one or more test cases based on a description of the one or more test cases included in the test data.
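A minimal sketch of the parsing in step S206 might pair each action with its corresponding parameters to form the case's operations (the `name`/`actions`/`parameters` field names here are assumptions for illustration, not defined in the patent):

```python
# Hypothetical sketch of step S206: each test-case description carries a
# case name, a list of actions, and a parallel list of parameters; parsing
# pairs each action with its parameters to form the case's operations.
def parse_test_data(descriptions):
    cases = []
    for desc in descriptions:
        if len(desc["actions"]) != len(desc["parameters"]):
            raise ValueError(f"malformed description: {desc['name']}")
        operations = list(zip(desc["actions"], desc["parameters"]))
        cases.append({"name": desc["name"], "operations": operations})
    return cases

descriptions = [
    {"name": "check_login",
     "actions": ["keyword", "click"],
     "parameters": ["Login And Open Home", {"element": "order_menu"}]},
]
cases = parse_test_data(descriptions)
```

Note that a reference to a common keyword is kept as a reference at this stage; it is unpacked into its underlying operations at execution time.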
In step S208, the robotic self-test system executes the one or more test cases generated in step S206. For a common keyword referenced in a test case, the robotic self-test system may unpack the keyword and then execute the general operations that it represents.
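The unpack-then-execute behavior might be sketched as follows (a simplified illustration under assumed data shapes, not the patent's implementation; the handler functions stand in for real operations on the system under test):

```python
# Hypothetical sketch of step S208: execute a test case's operations,
# unpacking any common-keyword reference into the general operations it
# represents before dispatching each action to a handler.
def execute_case(case, common_keywords, handlers):
    responses = []
    for action, params in case["operations"]:
        if action == "keyword":  # reference to a common keyword: unpack it
            for sub_action, sub_params in common_keywords[params]:
                responses.append(handlers[sub_action](sub_params))
        else:
            responses.append(handlers[action](params))
    return responses

common = {"login": [("open_page", {"url": "/login"}),
                    ("click", {"element": "submit"})]}
handlers = {"open_page": lambda p: f"opened {p['url']}",
            "click": lambda p: f"clicked {p['element']}"}
case = {"name": "demo", "operations": [("keyword", "login"),
                                       ("click", {"element": "menu"})]}
responses = execute_case(case, common, handlers)
```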
In step S210, the robotic self-test system generates a test report based on the response of the software system under test to the one or more operations.
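A minimal sketch of report generation, assuming (hypothetically) that each executed case yields either a response value or an exception, and that a report entry records a name, an execution status, and error information on failure:

```python
# Hypothetical sketch of step S210: build a test report from the responses
# of the system under test, recording per-entry name, execution status,
# and error information when execution failed.
def build_report(results):
    report = []
    for name, response in results:
        entry = {"name": name}
        if isinstance(response, Exception):
            entry["status"] = "FAIL"
            entry["error"] = str(response)
        else:
            entry["status"] = "PASS"
            entry["error"] = None
        report.append(entry)
    return report

results = [("check_login", "HTTP 200"),
           ("check_orders", TimeoutError("no response in 30 s"))]
report = build_report(results)
```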
In some embodiments, steps S202 to S210 may be set by a build tool (e.g., build tool 130 of fig. 1) to be performed at a particular time or at a particular frequency. For example, steps S202 to S210 of the method 200 may be performed during a monitored idle time of the server of the software system under test.
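The scheduling idea might be sketched as below (an assumed, simplified model: the real build tool, idle-time monitor, and interval are not specified at this level of detail; `sleep` is injectable so the loop can be exercised without waiting):

```python
# Hypothetical sketch of the build tool's scheduling: run the self-test at
# a fixed frequency, but only when the monitored server is idle.
import time

def run_on_schedule(self_test, is_server_idle, interval_s, rounds,
                    sleep=time.sleep):
    reports = []
    for _ in range(rounds):
        if is_server_idle():       # skip the round while the server is busy
            reports.append(self_test())
        sleep(interval_s)
    return reports

# Simulate three scheduled rounds where the server is idle twice.
idle_flags = iter([True, False, True])
reports = run_on_schedule(self_test=lambda: "report",
                          is_server_idle=lambda: next(idle_flags),
                          interval_s=3600, rounds=3, sleep=lambda s: None)
```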
In some embodiments, the method 200 further comprises pushing (e.g., by a build tool) the test report generated at step S210 via a variety of network communication tools.
Although operations are depicted in fig. 2 in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous.
Fig. 3 illustrates an exemplary configuration of a computing device 300 capable of implementing embodiments in accordance with the present disclosure.
Computing device 300 is an example of a hardware device to which the above-described aspects of the disclosure can be applied. Computing device 300 may be any machine configured to perform processing and/or computing. Computing device 300 may be, but is not limited to, a workstation, a server, a desktop computer, a laptop computer, a tablet computer, a Personal Data Assistant (PDA), a smart phone, an in-vehicle computer, or a combination thereof.
As shown in fig. 3, computing device 300 may include one or more elements that may be connected to or in communication with bus 302 via one or more interfaces. Bus 302 can include, but is not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus, among others. Computing device 300 may include, for example, one or more processors 304, one or more input devices 306, and one or more output devices 308. The one or more processors 304 may be any kind of processor and may include, but are not limited to, one or more general purpose processors or special purpose processors (such as special purpose processing chips). The processors 304 are, for example, configured to implement the robotic self-test system 100 of fig. 1 or, further, the build tool 130. Input device 306 may be any type of input device capable of inputting information to a computing device and may include, but is not limited to, a mouse, a keyboard, a touch screen, a microphone, and/or a remote control. Output device 308 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, a vibrator, and/or a printer.
The computing device 300 may also include or be connected to a non-transitory storage device 314, which may be any storage device that is non-transitory and usable for data storage, and may include, but is not limited to, a disk drive, an optical storage device, a solid state memory, a floppy disk, a flexible disk, a hard disk, a magnetic tape, or any other magnetic medium; a compact disc or any other optical medium; a cache memory and/or any other memory chip or module; and/or any other medium from which a computer can read data, instructions, and/or code. The computing device 300 may also include a Random Access Memory (RAM) 310 and a Read Only Memory (ROM) 312. The ROM 312 may store programs, utilities, or processes to be executed in a nonvolatile manner. The RAM 310 may provide volatile data storage and store instructions related to the operation of the computing device 300. Computing device 300 may also include a network/bus interface 316 coupled to a data link 318. The network/bus interface 316 may be any kind of device or system capable of enabling communication with external devices and/or networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication device, and/or a chipset (such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a cellular communications facility, etc.).
The present disclosure may be implemented as any combination of apparatus, systems, integrated circuits, and computer programs on non-transitory computer readable media. One or more processors may be implemented as an Integrated Circuit (IC), an Application Specific Integrated Circuit (ASIC), or a large scale integrated circuit (LSI), a system LSI, or a super LSI, or as an ultra LSI package that performs some or all of the functions described in this disclosure.
The present disclosure includes the use of software, applications, computer programs, or algorithms. Software, applications, computer programs, or algorithms may be stored on a non-transitory computer readable medium to cause a computer, such as one or more processors, to perform the steps described above and depicted in the figures. For example, one or more memories may store software or algorithms as executable instructions, and one or more processors may execute that set of instructions to provide various functionality in accordance with the embodiments described in this disclosure.
Software and computer programs (which may also be referred to as programs, software applications, components, or code) include machine instructions for a programmable processor, and may be implemented in a high-level procedural, object-oriented, functional, logical, or assembly or machine language. The term "computer-readable medium" refers to any computer program product, apparatus or device, such as magnetic disks, optical disks, solid state storage devices, memories, and Programmable Logic Devices (PLDs), used to provide machine instructions or data to a programmable data processor, including a computer-readable medium that receives machine instructions as a computer-readable signal.
By way of example, computer-readable media can comprise Dynamic Random Access Memory (DRAM), Random Access Memory (RAM), Read Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to carry or store desired computer-readable program code in the form of instructions or data structures and which can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Disk and disc, as used herein, include Compact Disc (CD), laser disc, optical disc, Digital Versatile Disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above are also included within the scope of computer-readable media.
In summary, the present disclosure provides an automatic self-checking method for a software system under test, including the following steps performed by a robotic self-test system: receiving a selection of a test scenario for the software system under test; determining test data corresponding to the selected test scenario, wherein the test data comprises a description of one or more test cases, each test case comprises one or more operations on the software system under test under the selected test scenario, and a common operation of the one or more operations is encapsulated as a common keyword that can be shared by multiple test cases, wherein at least one test case comprises a reference to the common keyword; parsing the determined test data to generate the one or more test cases; executing the one or more test cases; and generating a test report based on the response of the software system under test to the one or more operations.
In some embodiments, the robotic self-test system is configured to perform the self-test method at a particular time or at a particular frequency.
In some embodiments, the method further comprises monitoring a server idle time of the software system under test, the robotic self-test system being configured to perform the self-test method during the server idle time.
In some embodiments, the common operations include operations whose frequency of use reaches a certain threshold, or operations whose operation logic is the same or similar.
In some embodiments, the operation logic of a general operation of the one or more operations is encapsulated as a common keyword that can be shared by multiple test cases.
In some embodiments, the one or more test cases include test cases of one or more types selected from: interface-based testing; protocol-based testing; and code-based testing.
In some embodiments, the common keywords comprise user keywords written based on library keywords from the test library.
In some embodiments, the library keyword is associated with one or more operations selected from the group consisting of: automatic user interface (UI) testing for a mobile terminal operating system; connecting to a remote server and performing operation tests; HTTP request sending and parsing tests; and input/output (I/O) interface tests.
In some embodiments, executing the one or more test cases includes operating on the software system under test in parallel through more than one of: an interface based on a Web page; an interface based on a mobile terminal operating system; and an interface based on a PC desktop operating system.
In some embodiments, generating a test report based on the response of the software system under test to the one or more operations comprises: determining the response time of the software system under test to the one or more operations of a particular test case; and, in response to the response time being less than a threshold time, indicating in the test report that the test for the particular test case succeeded.
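The response-time criterion might be sketched as follows (the threshold value is an assumption for illustration; the patent does not specify one):

```python
# Hypothetical sketch of the response-time criterion: a test case is
# reported successful when the system under test responds to its
# operations within a threshold time (threshold value assumed).
def case_result(response_time_s, threshold_s=3.0):
    return "PASS" if response_time_s < threshold_s else "FAIL"
```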
In some embodiments, the method further comprises pushing the generated test report via a plurality of network communication tools.
In some embodiments, the test data is defined in a tabular format.
In some embodiments, the test report includes the name of the common keyword, the execution status, and error information upon execution failure.
The present disclosure also provides an automatic self-test apparatus for a software system under test, comprising a processor and a memory storing computer program instructions which, when executed by the processor, cause the processor to perform the automatic self-test method as described above.
The present disclosure also provides a computer-readable storage medium storing computer program instructions that, when executed by a computer system, cause the computer system to perform the automatic self-test method described above.
The subject matter of the present disclosure is provided as examples of apparatus, systems, methods, and programs for performing the features described in the present disclosure. However, other features or variations are contemplated in addition to the features described above. It is contemplated that the implementation of the components and functions of the present disclosure may be accomplished with any emerging technology that may replace the technology of any of the implementations described above.
Additionally, the above description provides examples, and does not limit the scope, applicability, or configuration set forth in the claims. Changes may be made in the function and arrangement of elements discussed without departing from the spirit and scope of the disclosure. Various embodiments may omit, substitute, or add various procedures or components as appropriate. For example, features described with respect to certain embodiments may be combined in other embodiments.

Claims (15)

1. An automatic self-checking method for a software system to be tested, comprising:
by the robot self-checking system:
receiving a selection of a test scenario for a software system under test;
determining test data corresponding to the selected test scenario, the test data including a description of one or more test cases, wherein each test case includes one or more operations on the software system under test under the selected test scenario, a common operation of the one or more operations being encapsulated as a common keyword that can be shared by a plurality of test cases, wherein at least one of the test cases includes a reference to the common keyword;
parsing the determined test data to generate the one or more test cases;
executing the one or more test cases; and
generating a test report based on the response of the software system under test to the one or more operations.
2. The method of claim 1, wherein the robotic self-test system is configured to perform the self-test method at a particular time or at a particular frequency.
3. The method of claim 2, further comprising:
monitoring server idle time of the software system under test,
wherein the robotic self-test system is configured to perform the self-test method at the server idle time.
4. The method of claim 1, wherein the common operations comprise operations whose frequency of use reaches a certain threshold, or operations whose operation logic is the same or similar.
5. The method of claim 1, wherein operation logic of a general operation of the one or more operations is encapsulated as a common keyword that can be shared by multiple test cases.
6. The method of claim 1, wherein the one or more test cases comprise test cases of one or more types selected from:
interface-based testing;
protocol-based testing; and
code-based testing.
7. The method of claim 1, wherein the common keywords comprise user keywords written based on library keywords from a test library.
8. The method of claim 7, wherein the library keyword is associated with one or more operations selected from the group consisting of:
automatic user interface (UI) testing for a mobile terminal operating system;
connecting to a remote server and performing operation tests;
HTTP request sending and parsing tests; and
input/output (I/O) interface tests.
9. The method of claim 1, wherein executing the one or more test cases comprises operating the software system under test in parallel through more than one of:
an interface based on a Web page;
an interface based on a mobile terminal operating system; and
an interface based on a PC desktop operating system.
10. The method of claim 1, wherein generating a test report based on the response of the software system under test to the one or more operations comprises:
determining a response time of the software system under test to the one or more operations of a particular test case;
in response to the response time being less than a threshold time, indicating in the test report that the test for the particular test case was successful.
11. The method of claim 1, further comprising:
the generated test report is pushed through various network communication tools.
12. The method of claim 1, wherein the test data is defined in a tabular format.
13. The method of claim 1, wherein the test report includes a name of a common keyword, an execution status, and error information upon execution failure.
14. An automatic self-test device for a software system under test, comprising:
a memory having instructions stored thereon; and
a processor configured to execute instructions stored on the memory to perform the method of any of claims 1 to 13.
15. A computer-readable storage medium comprising computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the method of any one of claims 1-13.
CN202111657088.0A 2021-12-30 2021-12-30 Automatic self-checking method, medium and device for software system Pending CN114328250A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111657088.0A CN114328250A (en) 2021-12-30 2021-12-30 Automatic self-checking method, medium and device for software system

Publications (1)

Publication Number Publication Date
CN114328250A true CN114328250A (en) 2022-04-12

Family

ID=81019036

Country Status (1)

Country Link
CN (1) CN114328250A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115022220A * 2022-05-25 2022-09-06 平安资产管理有限责任公司 Inspection abnormity identification method and device, computer equipment and readable storage medium
CN114970486A * 2022-07-13 2022-08-30 港胜技术服务(深圳)有限公司 Method, apparatus and medium for generating PDF reports for software test results
CN114970486B * 2022-07-13 2022-10-25 港胜技术服务(深圳)有限公司 Method, apparatus and medium for generating PDF reports for software test results

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination