CN116909936A - Big data automatic test method, equipment and readable storage medium - Google Patents


Info

Publication number
CN116909936A
CN116909936A
Authority
CN
China
Prior art keywords
test
data
big data
interface
automated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311178992.2A
Other languages
Chinese (zh)
Other versions
CN116909936B (en)
Inventor
华子仪
方朋朋
谢炎东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Zhicheng Software Technology Service Co ltd
Shenzhen Smart City Technology Development Group Co ltd
Original Assignee
Shenzhen Zhicheng Software Technology Service Co ltd
Shenzhen Smart City Technology Development Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Zhicheng Software Technology Service Co ltd and Shenzhen Smart City Technology Development Group Co ltd
Priority to CN202311178992.2A
Publication of CN116909936A
Application granted
Publication of CN116909936B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 — Error detection; error correction; monitoring
    • G06F11/36 — Preventing errors by testing or debugging software
    • G06F11/3668 — Software testing
    • G06F11/3672 — Test management
    • G06F11/3684 — Test management for test design, e.g. generating new test cases
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00 — Error detection; error correction; monitoring
    • G06F11/36 — Preventing errors by testing or debugging software
    • G06F11/3668 — Software testing
    • G06F11/3672 — Test management
    • G06F11/3676 — Test management for coverage analysis
    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application discloses a big data automated testing method, a device, and a readable storage medium, belonging to the technical field of data processing. In the application, an SQL query statement is detected and configured into a big data test index interface; a data verification Beanshell script is generated according to the interface information of the big data test index interface; and the data verification Beanshell script is configured into the assertion of an initial data test case to generate an automated data test case. Different query operations and customized data verification logic can thus be executed flexibly for different test requirements, covering a more comprehensive range of test scenarios. The script automatically performs data verification and assertion operations, which reduces manual intervention and improves testing efficiency. By configuring the big data test index interface and the data verification script, test data and test logic can be reused, improving the consistency and efficiency of testing.

Description

Big data automatic test method, equipment and readable storage medium
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a method and apparatus for automated testing of big data, and a readable storage medium.
Background
An automated test platform is a software tool developed to improve testing efficiency and quality. It can automatically execute test cases, collect test results, generate test reports, and so on, reducing the workload of manual testing and providing more comprehensive and accurate test coverage. However, current automated test platforms can only provide specific test types to users and cannot cover multiple test fields. For example, large foreign system-software companies such as IBM and Microsoft have begun to design universal automated test systems that use common hardware and software platforms to maximize resource utilization; EvoSuite (automatic test suite generation for Java) currently only generates unit tests for Java programs and cannot perform other kinds of tests.
Therefore, there is a technical problem that automated test services cannot be provided for big data covering diverse scenarios and use cases.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present application and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The application mainly aims to provide a big data automated testing method, a device, and a readable storage medium, which aim to solve the technical problem that automated test services cannot currently be provided for big data covering diverse scenarios and use cases.
In order to achieve the above object, the present application provides a big data automation test method, which includes the following steps:
detecting an SQL query statement, and configuring the SQL query statement into a big data test index interface;
generating a data verification Beanshell script according to the interface information of the big data test index interface;
and configuring the data verification Beanshell script to the assertion of the initial data test case to generate an automatic data test case.
Optionally, before the step of detecting the SQL query statement and configuring the SQL query statement into the big data test index interface, the method includes:
acquiring a data index document, and extracting verification information based on the data index document, wherein the verification information comprises a calculation formula, a data unit, a reference field, a scheduling frequency, a data format, a space dimension, a time dimension and a tested interface parameter;
writing corresponding calculation logic according to the calculation formula; and
returning a field to be verified according to the reference field; and
determining a range condition of the query according to the space dimension and the time dimension; and
revising the numerical format returned by the SQL query statement according to the data format; and
determining a timing scheduling frequency of an automatic test plan according to the scheduling frequency; and
determining the query condition of the SQL query statement according to the tested interface parameter;
and generating the SQL query statement based on the calculation logic, the field to be verified, the range condition of the query, the returned numerical format, the timing scheduling frequency and the query condition.
Optionally, after the step of configuring the data verification Beanshell script to the assertion of the initial data test case to generate the automated data test case, the method includes:
receiving an arrangement instruction of a tester, and arranging the automatic data test case into a data case scene corresponding to the arrangement instruction;
and receiving an automatic test plan creation instruction of the tester, and generating an automatic test plan corresponding to the automatic test plan creation instruction based on the automatic data test case and/or the data case scene.
Optionally, the step of receiving an automated test plan creation instruction of the tester, and generating an automated test plan corresponding to the automated test plan creation instruction based on the automated data test case and/or the data case scenario further includes:
setting a timing scheduling task for the automated test plan;
and after the timing scheduling task is executed, storing the generated test report to a designated position.
Optionally, after the step of storing the generated test report to the designated location once the timed scheduling task has executed, the method further includes:
if the test result of the test report is not passed, a JsonDiff comparison tool is called to compare the returned result and the response value of the big data test index interface;
and outputting the comparison result of the returned result and the response value so that the tester can check the specific position of the difference, the change of the key value pair and the newly added or missing field.
Optionally, before the step of detecting the SQL query statement and configuring the SQL query statement into the big data test index interface, the method includes:
receiving the setting of personnel of each role of the organization architecture;
distributing corresponding rights to the personnel in each role according to the setting;
if a working space creation instruction of a project manager is received, creating a corresponding working space and adding personnel belonging to the working space;
if a project space creation instruction of a project leader to whom the working space belongs is received, creating a corresponding project space, and adding the personnel belonging to the project space;
and if an interface grouping instruction of the tester belonging to the project space is received, creating a corresponding interface grouping.
Optionally, if an interface grouping instruction of a tester to which the project space belongs is received, the step of creating the corresponding interface grouping includes:
creating interfaces for the interface groups and debugging the interfaces;
and if the debugging is passed, receiving the initial data test case written by the tester.
In addition, to achieve the above object, the present application also provides a big data automation test device, the device including: the system comprises a memory, a processor and a big data automatic test program which is stored in the memory and can run on the processor, wherein the big data automatic test program is configured to realize the steps of the big data automatic test method.
In addition, in order to achieve the above object, the present application further provides a readable storage medium, on which a big data automation test program is stored, which when executed by a processor, implements the steps of the big data automation test method described above.
In order to solve the technical problem that automated test services cannot currently be provided for big data covering diverse scenarios and use cases, the application detects an SQL query statement and configures it into a big data test index interface; generates a data verification Beanshell script according to the interface information of the big data test index interface; and configures the data verification Beanshell script into the assertion of an initial data test case to generate an automated data test case. Different query operations and customized data verification logic can thus be executed flexibly for different test requirements, covering a more comprehensive range of test scenarios. The script automatically performs data verification and assertion operations, which reduces manual intervention and improves testing efficiency. By configuring the big data test index interface and the data verification script, test data and test logic can be reused, improving the consistency and efficiency of testing.
Drawings
FIG. 1 is a flow chart of a first embodiment of the automated big data testing method of the present application;
FIG. 2 is a schematic diagram illustrating internal interactions of the automated big data testing platform in a first embodiment of the automated big data testing method of the present application;
FIG. 3 is a flow chart of a second embodiment of the automated big data testing method of the present application;
FIG. 4 is a schematic diagram of a full flow of a big data based automated test platform in a second embodiment of the big data automated test method of the present application;
fig. 5 is a schematic structural diagram of a big data automation test device of a hardware running environment according to an embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
An automated test platform is a software tool developed to improve testing efficiency and quality. It can automatically execute test cases, collect test results, generate test reports, and so on, reducing the workload of manual testing and providing more comprehensive and accurate test coverage. However, current automated test platforms can only provide specific test types to users and cannot cover multiple test fields. For example, large foreign system-software companies such as IBM and Microsoft have begun to design universal automated test systems that use common hardware and software platforms to maximize resource utilization; EvoSuite (automatic test suite generation for Java) currently only generates unit tests for Java programs and cannot perform other kinds of tests. Therefore, there is a technical problem that automated test services cannot be provided for big data covering diverse scenarios and use cases.
In order to solve this technical problem, the application detects an SQL query statement and configures it into a big data test index interface; generates a data verification Beanshell script according to the interface information of the big data test index interface; and configures the data verification Beanshell script into the assertion of an initial data test case to generate an automated data test case. Different query operations and customized data verification logic can thus be executed flexibly for different test requirements, covering a more comprehensive range of test scenarios. The script automatically performs data verification and assertion operations, which reduces manual intervention and improves testing efficiency. By configuring the big data test index interface and the data verification script, test data and test logic can be reused, improving the consistency and efficiency of testing.
In order that the above-described aspects may be better understood, exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
The embodiment of the application provides a big data automatic testing method, referring to fig. 1, fig. 1 is a flow diagram of a first embodiment of the big data automatic testing method of the application.
In this embodiment, the automated big data testing method includes:
step S10: and detecting the SQL query statement, and configuring the SQL query statement into a big data test index interface.
Optionally, the application is applied to a big data automated test platform. In one implementation, the big data automated test platform comprises a big data test platform, a tool platform, and an automated test platform, and big data automated testing is realized by using the tool platform to combine the big data test platform with the automated test platform. In another implementation, the big data automated test platform integrates the functions of the big data test platform and the automated test platform, and big data automated testing is realized by calling the tools in series.
In this embodiment, the big data automated test platform includes a big data test platform, a tool platform, and an automated test platform. Referring to fig. 2, fig. 2 is an internal interaction schematic diagram of the big data automated test platform in the first embodiment of the big data automated testing method of the present application. Testers use the big data test platform to complete the specialized part of big data testing: they verify the integrity, correctness, accuracy, consistency, and timeliness of the data by writing SQL query statements, and after verification is complete, they configure the SQL query statements into big data test index interfaces that are provided to the automated test platform for invocation.
Optionally, the SQL query statement can be written flexibly for different test requirements by referring to the data index document. The data index document records information such as the definition of each data indicator, its calculation formula, data unit, data source, and data format, and describes the specific meaning and calculation method of the data indicators used in a system or application. Each data indicator may be described in detail, including its name, definition, calculation formula, data unit, data source, and so on. This information helps testers understand and verify the accuracy and reliability of the data indicators, so that they can write test cases precisely, verify the correctness of the data, and design a suitable automated test flow.
Optionally, acquiring a data index document, and extracting verification information based on the data index document, wherein the verification information comprises a calculation formula, a data unit, a reference field, a scheduling frequency, a data format, a space dimension, a time dimension and a tested interface parameter; writing corresponding calculation logic according to the calculation formula; returning a field to be verified according to the reference field; determining the range condition of the query according to the space dimension and the time dimension; revising the numerical format returned by the SQL query statement according to the data format; determining a timing scheduling frequency of an automatic test plan according to the scheduling frequency; determining the query condition of the SQL query statement according to the tested interface parameter; and generating the SQL query statement based on the calculation logic, the field to be verified, the range condition of the query, the returned numerical format, the timing scheduling frequency and the query condition.
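The SQL generation described above can be sketched as follows. This is an illustrative Python sketch, not the application's implementation; every table, column, and field name in it is a hypothetical example.

```python
# Illustrative sketch: assemble an SQL query statement from the verification
# information extracted from a data index document. All table, column, and
# field names below are hypothetical examples.
def build_sql(info):
    select = f"{info['calculation']} AS {info['reference_field']}"
    where = (f"{info['space_column']} = '{info['space_value']}' "
             f"AND {info['time_column']} BETWEEN "
             f"'{info['time_from']}' AND '{info['time_to']}'")
    return f"SELECT {select} FROM {info['table']} WHERE {where}"

query = build_sql({
    "calculation": "ROUND(SUM(amount) / COUNT(*), 2)",  # calculation formula; ROUND enforces the data format
    "reference_field": "avg_amount",                    # field to be verified
    "table": "orders",                                  # assumed data source table
    "space_column": "region", "space_value": "south",   # space dimension -> range condition
    "time_column": "dt",
    "time_from": "2023-09-01", "time_to": "2023-09-30", # time dimension -> range condition
})
```

The timed scheduling frequency and the tested interface parameters would similarly feed the test plan schedule and the query condition, as the list above describes.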
Optionally, field information such as the name, API group, path, description, message type, data source, interface type, interface parameter, authority and the like of the big data test index interface is filled in and saved. This information describes the attributes and uses of the big data test index interface.
Further, the completed big data test index interface is brought online for the automated test platform to call. A tester can then use the automated test platform to call the big data test index interface for data verification.
Step S20: generating a data verification Beanshell script according to the interface information of the big data test index interface;
Optionally, the interface information provided by the big data test index interface is input into the data verification Beanshell script generator of the tool platform to generate a corresponding data verification Beanshell script. Beanshell is a Java-based scripting language that can be written using Java APIs and syntax; the generated script verifies interface response data, judging the correctness and accuracy of the data by parsing and checking the interface response. Depending on the specific test requirements, the script may perform various data verification operations, such as comparing values, determining whether strings match, or validating data structures.
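Beanshell scripts are written in Java syntax, but the kind of verification logic such a script performs can be illustrated with a short Python sketch. The response envelope and field names below are assumptions for illustration only, not the application's actual format.

```python
import json

def verify_response(response_body, field, expected_value, tolerance=0.0):
    """Check one field of an interface response against the value computed
    by the SQL query. Python stand-in for a generated Beanshell assertion
    script; the {"code": ..., "data": {...}} envelope is an assumption."""
    data = json.loads(response_body)
    actual = data["data"][field]
    if isinstance(actual, (int, float)) and isinstance(expected_value, (int, float)):
        # numeric comparison with an optional tolerance
        return abs(actual - expected_value) <= tolerance
    # non-numeric values (strings, structures) must match exactly
    return actual == expected_value

ok = verify_response('{"code": 0, "data": {"avg_amount": 128.5}}',
                     "avg_amount", 128.5)
```

In the real platform this logic would live inside the Beanshell script attached to the test case's assertion, with the expected value supplied by the SQL query result.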
Step S30: and configuring the data verification Beanshell script to the assertion of the initial data test case to generate an automatic data test case.
Optionally, the initial data test case is written purposefully in the interface test module of the automated test platform according to the data verification Beanshell script template, and the data verification Beanshell script is configured into the assertion of the initial data test case. When the test case is executed, the test framework automatically runs the data verification script and judges whether the test case passes according to the script's result. In this way, automated data verification and assertion operations are realized, manual intervention is reduced, and testing efficiency and quality are improved.
Further, the step S30 includes:
step S40: receiving an arrangement instruction of a tester, and arranging the automatic data test case into a data case scene corresponding to the arrangement instruction;
optionally, the tester needs to provide orchestration instructions, including descriptions of test scenarios, requirements and desired test results of test data, etc. As one implementation, the automated test platform selects appropriate test cases to cover each test scenario by analyzing existing automated data test cases according to the orchestration instructions. As another implementation mode, a tester selects related test cases according to the requirements of a test scene and combines or modifies the test cases to meet the requirements of arranging instructions.
Optionally, the orchestrated data use-case scenarios are verified to ensure that the data and operational flow of each scenario meet expectations.
Step S50: and receiving an automatic test plan creation instruction of the tester, and generating an automatic test plan corresponding to the automatic test plan creation instruction based on the automatic data test case and/or the data case scene.
Optionally, the tester needs to provide automated test plan creation instructions including information such as description of the test plan, test scope, test targets, test environment, test data, etc. Further, the automated test platform or the tester selects related test cases and scenes according to the requirements of the test plan, and combines or modifies the test cases and scenes to generate an automated test plan so as to meet the requirements of an automated test plan creation instruction, wherein the automated test plan comprises information such as test targets, test ranges, test environments, test data, test steps, expected results and the like.
Further, the step S50 includes:
step S51: setting a timing scheduling task for the automated test plan;
Optionally, the tester may set timed scheduling tasks to execute the test plan as required by the automated test plan. The timed scheduling tasks may be implemented using functionality provided by a test tool or framework; for example, Jenkins can be used to set up and execute them.
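As a sketch of how timed scheduling might be configured, the mapping below pairs a scheduling frequency (the kind extracted from the data index document) with a cron expression of the sort Jenkins accepts. The concrete frequencies and run times are assumptions, not values from this application.

```python
# Hypothetical mapping from a scheduling frequency to a cron expression;
# the run times (e.g. 02:00) are illustrative assumptions.
FREQUENCY_TO_CRON = {
    "hourly": "0 * * * *",   # on the hour, every hour
    "daily":  "0 2 * * *",   # 02:00 every day
    "weekly": "0 2 * * 1",   # 02:00 every Monday
}

def cron_for(frequency):
    """Return the cron expression that schedules the automated test plan."""
    return FREQUENCY_TO_CRON[frequency]
```

A scheduler such as Jenkins would then trigger the test plan whenever the cron expression fires.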
Step S52: and after the timing scheduling task is executed, storing the generated test report to a designated position.
Optionally, the automated test plan may be executed automatically at the specified time, based on the setting of the timed scheduling task. Test reports are continuously generated from the test execution results so that data and interface anomalies in the responsible project can be monitored and error information reported in time. A test report should include information such as test results, test coverage, the error log, and execution time. The tester may use the functionality provided by the test tool or framework to generate test reports as files in a specified format, such as HTML, PDF, or Excel.
Optionally, the generated test report may be stored at a designated location, such as a local file system, a shared folder, or a cloud storage service. The tester can select a suitable storage method as needed and ensure the accessibility and security of the test report.
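A minimal sketch of report storage, assuming JSON files in a local directory stand in for the designated location (a shared folder or cloud storage bucket would substitute); the report fields and file-naming scheme are illustrative assumptions.

```python
import json
import pathlib
import tempfile

def store_report(report, directory):
    """Serialize a test report and store it at the designated location.
    A local directory stands in for shared-folder or cloud storage."""
    path = pathlib.Path(directory) / f"report_{report['plan']}.json"
    path.write_text(json.dumps(report, indent=2))
    return path

# Usage with a temporary directory as the designated location:
report = {"plan": "daily_metrics", "result": "passed", "coverage": 0.93}
saved = store_report(report, tempfile.mkdtemp())
```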
Further, the step S52 includes:
step S53: if the test result of the test report is not passed, a JsonDiff comparison tool is called to compare the returned result and the response value of the big data test index interface;
step S54: and outputting the comparison result of the returned result and the response value so that the tester can check the specific position of the difference, the change of the key value pair and the newly added or missing field.
Optionally, JsonDiff is a tool for comparing the differences between two JSON objects. It compares the key-value pairs of the two objects to find the specific positions of the differences, changes in key-value pairs, and newly added or missing fields. During automated test execution, when a test result does not pass, the returned result and the response value can be used as input to the JsonDiff comparison tool of the tool platform. The comparison result shows the differences between the two JSON objects, including their specific locations, changes in key-value pairs, and newly added or missing fields. The tester can examine the comparison result to understand the difference between the returned result and the response value and determine where the problem lies, for example, whether a field is missing or a field value has changed. This helps testers locate problems more accurately and carry out the corresponding debugging and repair.
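A minimal stand-in for such a comparison tool can be sketched in a few lines of Python. Real JsonDiff implementations also handle arrays and richer output formats; this only illustrates the three kinds of differences described above (changed values, added fields, missing fields, each with its position).

```python
def json_diff(expected, actual, path=""):
    """Minimal stand-in for a JsonDiff tool: report changed values and
    newly added or missing keys, with the specific position of each."""
    diffs = []
    if isinstance(expected, dict) and isinstance(actual, dict):
        for key in sorted(expected.keys() | actual.keys()):
            p = f"{path}.{key}" if path else key
            if key not in actual:
                diffs.append(f"missing: {p}")
            elif key not in expected:
                diffs.append(f"added: {p}")
            else:
                diffs.extend(json_diff(expected[key], actual[key], p))
    elif expected != actual:
        diffs.append(f"changed: {path} {expected!r} -> {actual!r}")
    return diffs

diffs = json_diff({"code": 0, "data": {"total": 10}},
                  {"code": 0, "data": {"total": 12, "extra": 1}})
```

Here the output would flag `data.total` as changed and `data.extra` as a newly added field, which is exactly the information a tester needs to locate the problem.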
In this embodiment, an SQL query statement is detected and configured into a big data test index interface; a data verification Beanshell script is generated according to the interface information of the big data test index interface; and the data verification Beanshell script is configured into the assertion of an initial data test case to generate an automated data test case. Different query operations and customized data verification logic can thus be executed flexibly for different test requirements, covering a more comprehensive range of test scenarios. The script automatically performs data verification and assertion operations, which reduces manual intervention and improves testing efficiency. By configuring the big data test index interface and the data verification script, test data and test logic can be reused, improving the consistency and efficiency of testing. According to the orchestration instructions of the testers, the automated data test cases are orchestrated into corresponding data use-case scenarios, and an automated test plan is generated based on those scenarios. By executing test plans through timed scheduling tasks and storing the generated test reports at designated locations, automated test execution and result management are achieved; this improves testing efficiency and quality and makes it convenient for testers to analyze and evaluate test results. The JsonDiff comparison tool can compare the returned result and the response value of the big data test index interface and output the comparison result, so that testers can examine the specific positions of the differences, changes in key-value pairs, and newly added or missing fields. This helps testers better understand test results and locate and repair problems.
Further, referring to fig. 3, fig. 3 is a flow chart of a second embodiment of the automated big data testing method of the present application, and a second embodiment of the present application is provided, and before step S10, the method includes:
step S01: receiving the setting of personnel of each role of the organization architecture;
step S02: distributing corresponding rights to the personnel in each role according to the setting;
optionally, referring to fig. 4, fig. 4 is a schematic flowchart illustrating a full flow of the automated big data based test platform in the second embodiment of the automated big data testing method of the present application. As an embodiment, the application is applied to a big data automation test platform. As another embodiment, the present application is applied to a big data test platform and an automated test platform inside the big data automated test platform.
Optionally, the scope of authority of each role is determined according to the setting of the organization architecture. For example, a project manager may have the right to create and manage a workspace, a project leader may have the right to create and manage a project space, and a tester may have the right to create and manage an interface grouping.
Optionally, according to the scope of authority of each role, a corresponding rights management tool or framework may be used to assign the corresponding rights to the personnel in each role. For example, an authentication and authorization system may be used to manage user rights, assigning rights to users according to their roles. This ensures that the personnel in each role can only access and operate resources within their scope of authority.
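A role-to-rights check of the kind described can be sketched as a simple mapping. The role and permission names below are illustrative assumptions, not taken from this application.

```python
# Hypothetical role-to-rights mapping mirroring the roles described above;
# the permission names are illustrative assumptions.
ROLE_RIGHTS = {
    "project_manager": {"create_workspace", "manage_workspace"},
    "project_leader":  {"create_project_space", "manage_project_space"},
    "tester":          {"create_interface_group", "write_test_case"},
}

def is_allowed(role, action):
    """A role may only perform actions inside its scope of authority."""
    return action in ROLE_RIGHTS.get(role, set())
```

A real authentication and authorization system would layer sessions and user-to-role assignment on top of this check.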
Step S03: if a working space creation instruction of a project manager is received, creating a corresponding working space and adding personnel belonging to the working space;
step S04: if a project space creation instruction of a patent responsible person to which the working space belongs is received, creating a corresponding project space, and adding the person to which the project space belongs;
step S05: and if an interface grouping instruction of the tester belonging to the project space is received, creating a corresponding interface grouping.
Optionally, when a workspace creation instruction of the project manager is received, a corresponding workspace can be created according to the instruction and the associated personnel added. A workspace management tool or framework may be used to create the workspace and add the personnel belonging to it according to the instruction. This ensures that only designated personnel can access and manage the workspace.
Similarly, when a project space creation instruction of the patent responsible person to whom the workspace belongs is received, a corresponding project space may be created according to the instruction, and its personnel added. A project space management tool or framework may be used to create the project space and add the personnel belonging to it according to the instruction.
Finally, when the interface grouping instruction of the tester to whom the project space belongs is received, a corresponding interface grouping may be created. An interface management tool or framework may be used to create the interface grouping and configure it according to the instruction. In this way, interfaces can be managed in groups within the project space, which facilitates the tester's management and execution of the interfaces.
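The workspace, project space, and interface grouping hierarchy described in steps S03 to S05 can be sketched as nested resources. The class and field names here are assumptions for illustration only.

```python
# Hypothetical sketch of the workspace -> project space -> interface
# grouping hierarchy; all class and field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class InterfaceGrouping:
    name: str
    interfaces: list = field(default_factory=list)

@dataclass
class ProjectSpace:
    name: str
    members: list = field(default_factory=list)
    groupings: list = field(default_factory=list)

@dataclass
class Workspace:
    name: str
    members: list = field(default_factory=list)
    projects: list = field(default_factory=list)

def create_workspace(name, members):
    """Step S03: create a workspace and add its personnel."""
    return Workspace(name=name, members=list(members))

def create_project_space(workspace, name, members):
    """Step S04: create a project space inside a workspace."""
    project = ProjectSpace(name=name, members=list(members))
    workspace.projects.append(project)
    return project

def create_interface_grouping(project, name):
    """Step S05: create an interface grouping inside a project space."""
    grouping = InterfaceGrouping(name=name)
    project.groupings.append(grouping)
    return grouping
```

Each creation function corresponds to one of the instructions above, so membership and access checks can be scoped to the containing resource.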
Further, the step S05 includes:
step S06: creating interfaces for the interface groups and debugging the interfaces;
step S07: and if the debugging is passed, receiving the initial data test case written by the tester.
Optionally, by using an interface management tool or framework, a corresponding interface is created according to the requirements of the interface grouping. The interface may include information such as the request method, request path, request parameters, request headers, and response data.
Optionally, by using an interface test tool or framework, a request is sent according to the request method, request path, request parameters, and other information of the interface, and the response data is acquired. By checking the response data, it can be determined whether the interface works normally, and corresponding debugging and repair can be performed.
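The debugging step in S06 can be sketched as sending a request built from the interface definition and inspecting the response. This is a minimal sketch using the standard library; the endpoint and response shape are placeholders, not from the patent.

```python
# Hedged sketch of the interface debugging step: send a request built
# from the interface definition (method, path, parameters, headers)
# and check the response. The URL and expected fields are placeholders.
import json
import urllib.request

def debug_interface(method, url, params=None, headers=None):
    """Send a request per the interface definition; return (ok, body)."""
    data = json.dumps(params).encode() if params is not None else None
    req = urllib.request.Request(url, data=data, method=method,
                                 headers=headers or {})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            body = json.loads(resp.read().decode())
            return resp.status == 200, body
    except Exception as exc:  # debugging failed: record the error for repair
        return False, {"error": str(exc)}
```

If `debug_interface` returns `(True, body)` with the expected response data, the interface is considered to pass debugging, and the initial data test case written by the tester can then be received (step S07).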
Optionally, if the interface passes debugging, that is, the interface works normally and returns correct response data, the initial data test case written by the tester may be received.
In this embodiment, according to the setting of the organization architecture, corresponding rights may be allocated to the personnel in each role, and a workspace, a project space, and an interface grouping may be created according to the corresponding instructions. Through suitable rights management and resource management tools, the personnel in each role can only access and operate the resources within their scope of authority, which improves the security of resources and the efficiency of their management. Debugging and testing the interface ensures its function and correctness.
In addition, an embodiment of the present application also provides big data automated test equipment.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a big data automation test device of a hardware running environment according to an embodiment of the present application.
As shown in fig. 5, the big data automated test equipment may include: a processor 1001, such as a Central Processing Unit (CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a display and an input unit such as a keyboard, and the optional user interface 1003 may further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a Wireless Fidelity (WI-FI) interface). The memory 1005 may be a high-speed Random Access Memory (RAM) or a stable Non-Volatile Memory (NVM), such as a disk memory. The memory 1005 may optionally also be a storage device separate from the processor 1001 described above.
Those skilled in the art will appreciate that the structure shown in fig. 5 does not constitute a limitation of the big data automated test equipment, which may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
As shown in fig. 5, the memory 1005, as one type of readable storage medium, may include an operating system, a data storage module, a network communication module, a user interface module, and a big data automated test program.
In the big data automated test equipment shown in fig. 5, the network interface 1004 is mainly used for data communication with other equipment; the user interface 1003 is mainly used for data interaction with a user; and the processor 1001 calls the big data automated test program stored in the memory 1005 and executes the big data automated testing method provided by the embodiments of the present application.
In addition, the embodiment of the application also provides a readable storage medium.
The readable storage medium of the present application has stored thereon a big data automated test program which, when executed by a processor, implements the steps of the big data automated testing method described above.
The specific embodiments of the big data automated test program stored in the readable storage medium, when executed by the processor, are substantially the same as the embodiments of the big data automated testing method described above, and will not be repeated here.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a big data automated test" does not exclude the presence of additional identical elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is preferred. Based on such an understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a readable storage medium (such as a ROM/RAM, magnetic disk, or optical disk) as described above, comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the method according to the embodiments of the present application.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the application; any equivalent structural or process transformation made using the description and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present application.

Claims (9)

1. A big data automated testing method, characterized by comprising the following steps:
detecting an SQL query statement, and configuring the SQL query statement into a big data test index interface;
generating a data verification Beanshell script according to the interface information of the big data test index interface;
and configuring the data verification Beanshell script to the assertion of the initial data test case to generate an automatic data test case.
2. The automated big data testing method of claim 1, wherein the step of detecting an SQL query statement and configuring the SQL query statement into a big data test index interface is preceded by the step of:
acquiring a data index document, and extracting verification information based on the data index document, wherein the verification information comprises a calculation formula, a data unit, a reference field, a scheduling frequency, a data format, a space dimension, a time dimension and a tested interface parameter;
writing corresponding calculation logic according to the calculation formula; and
returning a field to be verified according to the reference field; and
determining a range condition of the query according to the space dimension and the time dimension; and
revising the numerical format returned by the SQL query statement according to the data format; and
determining a timing scheduling frequency of an automatic test plan according to the scheduling frequency; and
determining the query condition of the SQL query statement according to the tested interface parameter;
and generating the SQL query statement based on the calculation logic, the field to be verified, the range condition of the query, the returned numerical format, the timing scheduling frequency and the query condition.
3. The automated big data testing method of claim 1, wherein the step of configuring the data verification Beanshell script to the assertion of the initial data test case, after generating the automated data test case, comprises:
receiving an arrangement instruction of a tester, and arranging the automatic data test case into a data case scene corresponding to the arrangement instruction;
and receiving an automatic test plan creation instruction of the tester, and generating an automatic test plan corresponding to the automatic test plan creation instruction based on the automatic data test case and/or the data case scene.
4. The automated big data testing method of claim 3, wherein the step of receiving the automated test plan creation instruction of the tester, and generating the automated test plan corresponding to the automated test plan creation instruction based on the automated data test case and/or the data case scenario further comprises:
setting a timing scheduling task for the automated test plan;
and after the timing scheduling task is executed, storing the generated test report to a designated position.
5. The automated big data testing method of claim 4, wherein after the step of storing the generated test report to the specified location after the execution of the timed scheduled task, further comprising:
if the test result of the test report is not passed, a JsonDiff comparison tool is called to compare the returned result and the response value of the big data test index interface;
and outputting the comparison result of the returned result and the response value so that the tester can check the specific position of the difference, the change of the key value pair and the newly added or missing field.
6. The automated big data testing method of claim 1, wherein the step of detecting an SQL query statement and configuring the SQL query statement into a big data test index interface is preceded by the step of:
receiving the setting of personnel of each role of the organization architecture;
distributing corresponding rights to the personnel in each role according to the setting;
if a working space creation instruction of a project manager is received, creating a corresponding working space and adding personnel belonging to the working space;
if a project space creation instruction of a patent responsible person to which the working space belongs is received, creating a corresponding project space, and adding the person to which the project space belongs;
and if an interface grouping instruction of the tester belonging to the project space is received, creating a corresponding interface grouping.
7. The automated big data testing method of claim 6, wherein the step of creating the corresponding interface group if the interface group instruction of the tester to which the project space belongs is received comprises:
creating interfaces for the interface groups and debugging the interfaces;
and if the debugging is passed, receiving the initial data test case written by the tester.
8. A big data automated test equipment, the equipment comprising: a memory, a processor and a big data automation test program stored on the memory and executable on the processor, the big data automation test program being configured to implement the steps of the big data automation test method of any of claims 1 to 7.
9. A readable storage medium, characterized in that the readable storage medium has stored thereon a big data automation test program, which when executed by a processor, implements the steps of the big data automation test method according to any of claims 1 to 7.
CN202311178992.2A 2023-09-13 2023-09-13 Big data automatic test method, equipment and readable storage medium Active CN116909936B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311178992.2A CN116909936B (en) 2023-09-13 2023-09-13 Big data automatic test method, equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN116909936A true CN116909936A (en) 2023-10-20
CN116909936B CN116909936B (en) 2024-05-14

Family

ID=88356986

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311178992.2A Active CN116909936B (en) 2023-09-13 2023-09-13 Big data automatic test method, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116909936B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109101415A (en) * 2018-06-25 2018-12-28 平安科技(深圳)有限公司 Interface test method, system, equipment and the storage medium compared based on database
CN109542765A (en) * 2018-10-18 2019-03-29 深圳壹账通智能科技有限公司 Database script verification method, device, computer equipment and storage medium
CN111046073A (en) * 2019-12-11 2020-04-21 广州品唯软件有限公司 Test case query method and device and readable storage medium
CN111400356A (en) * 2020-06-04 2020-07-10 浙江口碑网络技术有限公司 Data query method, device and equipment
CN112948233A (en) * 2020-07-30 2021-06-11 深圳市明源云链互联网科技有限公司 Interface testing method, device, terminal equipment and medium
CN115080389A (en) * 2022-06-09 2022-09-20 青岛民航凯亚系统集成有限公司 Test system, method, equipment and storage medium for improving index statistical efficiency


Also Published As

Publication number Publication date
CN116909936B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
US11281570B2 (en) Software testing method, system, apparatus, device medium, and computer program product
CN105094783B (en) method and device for testing stability of android application
US10127141B2 (en) Electronic technology resource evaluation system
US8914679B2 (en) Software testing automation framework
US7512933B1 (en) Method and system for associating logs and traces to test cases
KR101410099B1 (en) Function Test Apparatus based on Unit Test Cases Reusing and Function Test Method thereof
CN109977012B (en) Joint debugging test method, device, equipment and computer readable storage medium of system
CN112286806A (en) Automatic testing method and device, storage medium and electronic equipment
CN106628250B (en) JL9 aircraft function modularization comprehensive automatic detection system
CN111897721A (en) Automatic test method of API interface and storage medium
CN112231206A (en) Script editing method for application program test, computer readable storage medium and test platform
CN115080389A (en) Test system, method, equipment and storage medium for improving index statistical efficiency
Sumalatha et al. Uml based automated test case generation technique using activity-sequence diagram
KR20150030297A (en) Verification apparatus, terminal device, system, method and computer-readable medium for verifying application
US20050203717A1 (en) Automated testing system, method and program product using testing map
CN117493188A (en) Interface testing method and device, electronic equipment and storage medium
CN116909936B (en) Big data automatic test method, equipment and readable storage medium
CN112612706A (en) Automated testing method, computer device and storage medium
CN117194259A (en) Interface testing method, system, electronic equipment and storage medium
Maâlej et al. Wsclt: a tool for ws-bpel compositions load testing
CN115934559A (en) Testing method of intelligent form testing system
CN114124769B (en) Base station testing method and device, electronic equipment and storage medium
CN113986753A (en) Interface test method, device, equipment and storage medium
CN113220586A (en) Automatic interface pressure test execution method, device and system
CN113282496A (en) Automatic interface test method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant