CN116204410A - Database performance test method and device, electronic equipment and storage medium


Info

Publication number
CN116204410A
Authority
CN
China
Prior art keywords
test
data
performance
database
testing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211644475.5A
Other languages
Chinese (zh)
Inventor
黄方蕾
尚璇
魏晓彤
邱炜伟
胡麦芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Qulian Technology Co Ltd
Original Assignee
Hangzhou Qulian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Qulian Technology Co Ltd filed Critical Hangzhou Qulian Technology Co Ltd
Priority to CN202211644475.5A priority Critical patent/CN116204410A/en
Publication of CN116204410A publication Critical patent/CN116204410A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00 Error detection; Error correction; Monitoring
    • G06F 11/36 Preventing errors by testing or debugging software
    • G06F 11/3668 Software testing
    • G06F 11/3672 Test management
    • G06F 11/3684 Test management for test design, e.g. generating new test cases
    • G06F 11/3688 Test management for test execution, e.g. scheduling of test suites
    • G06F 11/3696 Methods or tools to render software testable
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The application relates to a database performance test method and device, electronic equipment, and a storage medium, applied to the field of computer technology. The method includes: acquiring test data of a database to be tested; determining a test tool corresponding to the test data; generating a target test case based on the test data; testing the target test case by using the testing tool to obtain an execution result; and generating a performance test result of the database to be tested based on the execution result. The method solves the problems in the prior art that staff must configure corresponding threads for each test, making test development difficult, and that performance data can be obtained only after manual analysis of log data, making acquisition of test results inefficient.

Description

Database performance test method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a database performance testing method, a database performance testing device, an electronic device, and a storage medium.
Background
Multiple types of databases (dbs) are self-developed in a blockchain system. Each feature iteration or defect repair in a db repository can change performance, so targeted performance tests and corresponding result analysis must be performed on the dbs.
In the related art, db performance tests are executed manually; after a test completes, performance data is obtained by manually analyzing log data, and comparison reports are written by hand.
With such a database performance test method, staff must configure the corresponding threads each time, making test development difficult; moreover, performance data can be obtained only after manual analysis of log data, making acquisition of test results inefficient.
Disclosure of Invention
The application provides a database performance test method and device, an electronic device, and a storage medium, to solve the problems in the prior art that staff must configure corresponding threads for each test, making test development difficult, and that performance data can be obtained only after manual analysis of log data, making acquisition of test results inefficient.
In a first aspect, an embodiment of the present application provides a database performance testing method, including:
acquiring test data of a database to be tested;
determining a test tool corresponding to the test data;
generating a target test case based on the test data;
testing the target test case by using the testing tool to obtain an execution result;
and generating a performance test result of the database to be tested based on the execution result.
Optionally, the determining the test tool corresponding to the test data includes:
extracting version information of the database to be tested in the test data;
the test tool is determined based on the version information.
Optionally, the generating the target test case based on the test data includes:
identifying altered target test data in the test data;
determining target performance affecting the database to be tested based on the target test data;
and determining the test case corresponding to the target performance as the target test case.
Optionally, the generating the target test case based on the test data includes:
determining a target type of the database to be tested based on the test data;
and determining the test case corresponding to the target type as the target test case.
Optionally, the execution result includes: equipment resource data, wherein the equipment resource data is the resource data of equipment for executing the test process;
the generating the performance test result of the database to be tested based on the execution result comprises the following steps:
comparing the equipment resource data with a first preset threshold value;
when the equipment resource data is larger than the first preset threshold value, determining the sub-functional module in the database to be tested that consumes the most equipment resources;
and generating the performance test result based on the sub-functional module and the equipment resource data.
Optionally, the execution result includes performance data, where the performance data is generated in a testing process;
after the target test case is tested by using the testing tool to obtain the execution result, the method further includes:
calculating deviation data between the performance data and the previous performance data, wherein the previous performance data is the performance data obtained by the last test of the database of the same type as the database to be tested;
calculating a difference value between the performance data and performance reference data, wherein the performance reference data is calculated based on historical performance data under the condition that the deviation data is not in a preset deviation range;
if the difference value is larger than a second preset threshold value, updating the performance reference data; discarding the test data if the difference value is smaller than a third preset threshold value, wherein the third preset threshold value is smaller than the second preset threshold value;
and updating the performance reference data under the condition that the deviation data is within a preset deviation range.
Optionally, the method further comprises:
monitoring the testing process of the database to be tested;
and reporting and summarizing the performance data and/or the equipment resource data in the execution result when the occurrence of the abnormal performance data and/or the abnormal equipment resource data is detected.
In a second aspect, an embodiment of the present application provides a database performance testing apparatus, including:
the acquisition module is used for acquiring the test data of the database to be tested;
the determining module is used for determining a testing tool corresponding to the testing data;
the first generation module is used for generating a target test case based on the test data;
the test module is used for testing the target test case by using the test tool to obtain an execution result;
and the second generation module is used for generating a performance test result of the database to be tested based on the execution result.
In a third aspect, an embodiment of the present application provides an electronic device, including: the device comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to execute the program stored in the memory, and implement the database performance testing method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium storing a computer program, where the computer program when executed by a processor implements the database performance testing method according to the first aspect.
Compared with the prior art, the technical solutions provided by the embodiments of the application have the following advantages: according to the method provided by the embodiments, test data of the database to be tested are acquired; a test tool corresponding to the test data is determined; a target test case is generated based on the test data; the target test case is tested by using the test tool to obtain an execution result; and a performance test result of the database to be tested is generated based on the execution result. In this way, the corresponding test tool and target test case can be generated automatically from the test data of the database to be tested to complete the performance test, and the performance test result can be generated from the obtained execution result, without staff writing corresponding threads each time. This reduces development cost, automates the database testing process, and improves test efficiency.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is an application scenario diagram of a database performance test method according to an embodiment of the present application;
FIG. 2 is a flowchart of a database performance testing method according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of a database performance testing method according to another embodiment of the present application;
FIG. 4 is a block diagram of a database performance testing apparatus according to an embodiment of the present application;
FIG. 5 is a block diagram of a database performance testing apparatus according to another embodiment of the present application;
fig. 6 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions of the embodiments are described below clearly and completely with reference to the drawings in the embodiments of the present application. It is apparent that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort fall within the scope of the present application.
Before describing embodiments of the present invention in further detail, the terms and terminology involved in the embodiments are explained; these terms and terminology apply to the following description.
GitLab is an open-source repository management project that uses Git as its code management tool and builds a Web service on that basis.
Jenkins is an open-source software project: a continuous integration tool based on Java, used to monitor continuous, repetitive work, aiming to provide an open, easy-to-use software platform that enables continuous integration of software projects.
According to an embodiment of the application, a database performance testing method is provided. Optionally, in the embodiment of the present application, the database performance test method described above may be applied to a hardware environment formed by the terminal 101 and the server 102 shown in fig. 1. As shown in fig. 1, the server 102 is connected to the terminal 101 through a network and may be used to provide services (such as video services or application services) to the terminal or to clients installed on the terminal. A database may be provided on the server, or independently of the server, to provide data storage services for the server 102. The terminal 101 includes, but is not limited to, a PC, a mobile phone, a tablet computer, or the like.
The database performance test method in the embodiment of the present application may be executed by the server 102, may be executed by the terminal 101, or may be executed by both the server 102 and the terminal 101. The terminal 101 may execute the database performance test method according to the embodiment of the present application, or may be executed by a client installed thereon.
Taking a server executing the database performance test method according to the embodiment of the present application as an example, fig. 2 is a schematic flow chart of an alternative database performance test method according to the embodiment of the present application, as shown in fig. 2, the flow of the method may include the following steps:
step 201, test data of a database to be tested is obtained.
In some embodiments, the database to be tested may be a database version updated by a developer, or a version obtained after a defect of that database version has been repaired.
The test data may be code data of a database to be tested.
Furthermore, when the method is applied to an electronic device such as a server or a terminal, general services including, but not limited to, Jenkins, MongoDB, InfluxDB, Grafana, and collectd are generally deployed on the device. With these general services deployed, the testing process can be monitored, and pprof data can further be generated and stored in files.
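By way of illustration, the periodic pprof capture mentioned above could look like the following minimal Go sketch; the package name, file layout, and sampling interval are assumptions, not taken from this application:

```go
package profiling

import (
	"fmt"
	"os"
	"runtime/pprof"
	"time"
)

// captureProfiles periodically dumps heap profiles to files so they can be
// analyzed after the run (e.g. with `go tool pprof`).
func captureProfiles(dir string, interval time.Duration, stop <-chan struct{}) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for i := 0; ; i++ {
		select {
		case <-stop:
			return
		case <-ticker.C:
			f, err := os.Create(fmt.Sprintf("%s/heap-%03d.pprof", dir, i))
			if err != nil {
				continue // hypothetical policy: skip this round on error
			}
			_ = pprof.WriteHeapProfile(f) // snapshot of live heap allocations
			f.Close()
		}
	}
}
```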
Step 202, determining a testing tool corresponding to the testing data.
In some embodiments, various test tools may be used and selected according to the actual situation, for example, but not limited to, a pressure test tool such as goycsb.
To improve the suitability of the test tool, in this embodiment the tool is correspondingly modified so that it can support performance-test adaptation of the interfaces of the various self-developed db types, including Insert, BatchInsert, Delete, BatchDelete, Update, and the like. In this way, when test data of any type is input into the test tool, the tool can be matched to the type of the test data through the improved interfaces.
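The adaptation can be pictured as a single driver interface that every self-developed db implements. The following Go sketch is an assumption about the shape of such an adaptation layer and does not reproduce goycsb's actual API; all identifiers are hypothetical:

```go
package adapter

// KVStore abstracts the interfaces listed above (Insert, BatchInsert,
// Delete, BatchDelete, Update) so any self-developed db plugs into the
// same benchmark loop.
type KVStore interface {
	Insert(key, value []byte) error
	BatchInsert(keys, values [][]byte) error
	Update(key, value []byte) error
	Delete(key []byte) error
	BatchDelete(keys [][]byte) error
}

// registry maps a db type name found in the test data to a driver
// constructor, which is how the tool "matches" the type of the test data.
var registry = map[string]func(dsn string) (KVStore, error){}

// Register is called by each db driver, typically from its init().
func Register(dbType string, ctor func(dsn string) (KVStore, error)) {
	registry[dbType] = ctor
}
```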
In an alternative embodiment, the determining the test tool corresponding to the test data includes:
extracting version information of the database to be tested in the test data;
the test tool is determined based on the version information.
In some embodiments, a Jenkins task for the performance test is established, and a GitLab webhook may be configured on the server. Key information such as the performance deviation and the repetition count may be filled in when code is submitted to GitLab, or in the description information, so that the task is automatically triggered; after Jenkins obtains the webhook information corresponding to the test data, the key information is parsed, identified, and distributed to subordinate tasks. The key information also includes the name, version information, description information, and the like of the database to be tested.
Because the version of the db to be tested is strongly coupled into the test tool, the version information of the db that triggered the automated test must be extracted, and the test tool must be configured with the configuration data for that version information, so as to obtain a test tool corresponding to the version information.
Furthermore, after the webhook information is obtained, it can first be validated; if it does not conform to the rules, a friendly prompt is generated and pushed to the person who triggered the task.
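A minimal sketch of the key-information parsing and validation follows, assuming a JSON payload; the field names and JSON shape are hypothetical, and only the fields themselves (db name, version, performance deviation, repetition count) come from the text above:

```go
package hook

import (
	"encoding/json"
	"errors"
)

// KeyInfo carries the key information named above: db name, version,
// expected performance deviation, and repetition count.
type KeyInfo struct {
	DBName       string  `json:"db_name"`
	Version      string  `json:"version"`
	DeviationPct float64 `json:"deviation_pct"`
	Repetitions  int     `json:"repetitions"`
}

// Parse decodes the webhook description and applies defaults; a validation
// failure is what triggers the "friendly prompt" to the submitter.
func Parse(payload []byte) (*KeyInfo, error) {
	var info KeyInfo
	if err := json.Unmarshal(payload, &info); err != nil {
		return nil, err
	}
	if info.DBName == "" || info.Version == "" {
		return nil, errors.New("webhook missing db name or version")
	}
	if info.DeviationPct == 0 {
		info.DeviationPct = 5 // the ±5% default from the text
	}
	if info.Repetitions == 0 {
		info.Repetitions = 1
	}
	return &info, nil
}
```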
Step 203, generating a target test case based on the test data.
In some embodiments, a test case library is stored in the server. To shorten the execution time of a new test, target test cases suitable for the test data are screened from the test case library. There are various screening methods; two of them are described in detail below.
In an alternative embodiment, the generating the target test case based on the test data includes:
identifying altered target test data in the test data;
determining target performance affecting the database to be tested based on the target test data;
and determining the test case corresponding to the target performance as the target test case.
In some embodiments, a correspondence table between db code modules and test cases is stored in the server, where the db code modules are the different functional sub-modules in the test data. Based on this, the target test case corresponding to the test data can be determined using the correspondence table.
Furthermore, the currently acquired test data can be compared with test data of the same type by a comparison tool (such as a diff tool) to determine the changed target test data, identify the possible influence range of the code change, and accurately generate test cases for the changed code; for example, if the change affects only the insertion flow, only the cases related to the insertion flow are generated.
Test cases to be executed are then screened out using the affected target performance and other information. Considering that the test scenarios of the various dbs are roughly the same, only one set of test cases is maintained for all dbs to avoid excessive case maintenance work; and to shorten the execution time of the performance test, screening can be performed by target performance. For example, if the target performance is insertion, the performance test cases related to the insertion operation are screened out.
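The module-to-case correspondence table can be sketched as a simple lookup, assuming the diff tool has already produced a list of changed module names; module and case names here are illustrative:

```go
package screen

// moduleCases is the correspondence table between db code modules (the
// functional sub-modules of the test data) and the performance test cases
// that exercise them.
var moduleCases = map[string][]string{
	"insert_flow": {"insert_seq", "insert_random", "batch_insert"},
	"delete_flow": {"delete_seq", "batch_delete"},
}

// CasesForChange returns the deduplicated cases covering the changed
// modules; a change touching only the insertion flow yields only
// insertion-related cases.
func CasesForChange(changedModules []string) []string {
	seen := map[string]bool{}
	var cases []string
	for _, m := range changedModules {
		for _, c := range moduleCases[m] {
			if !seen[c] {
				seen[c] = true
				cases = append(cases, c)
			}
		}
	}
	return cases
}
```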
In an alternative embodiment, the generating the target test case based on the test data includes:
determining a target type of the database to be tested based on the test data;
and determining the test case corresponding to the target type as the target test case.
In some embodiments, a set of correspondence tables between db types and test cases may be preconfigured, so that a target test case corresponding to a target type is determined based on the correspondence tables. For example, a certain db type only supports sequential insertion, and no test of random key patterns is required.
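Extending the hypothetical screen package above, this second screening dimension is just another preconfigured table; the type names are illustrative, and the append-only entry mirrors the sequential-insertion-only db in the example:

```go
package screen

// typeCases is a preconfigured correspondence table between db types and
// the test cases they support.
var typeCases = map[string][]string{
	// a db type that only supports sequential insertion gets no random-key cases
	"appendonly_db": {"insert_seq", "batch_insert"},
	"generic_kv":    {"insert_seq", "insert_random", "update", "delete_seq"},
}

// CasesForType returns the cases applicable to the given db type.
func CasesForType(dbType string) []string { return typeCases[dbType] }
```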
Step 204, testing the target test case by using the testing tool to obtain an execution result.
In some embodiments, after the test tool and the target test case are obtained, the test tool may be used to execute the target test case, so as to obtain an execution result.
In some embodiments, the execution results include, but are not limited to, device resource data and performance data, wherein the performance data is generated during a test procedure, and the device resource data is resource data of a device executing the test procedure.
The device resource data may be, but is not limited to, resource data such as the CPU and memory of the server; the performance data may be, but is not limited to, the TPS (transactions per second) and latency information produced by the server.
To improve test efficiency, all test tasks can run in parallel: tasks of different dbs can be executed in parallel, and, to eliminate performance variability caused by server performance fluctuations or network fluctuations that may affect a single db test, the tasks of a single db can also be distributed to multiple servers to run in parallel.
Because jobs are executed in parallel, state management of the servers is also required. During testing, a server whose state is allocatable is selected; once selected, its state is changed to allocated so that other tasks can no longer use it, and it is changed back to allocatable after the job finishes.
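A minimal sketch of that server state management, assuming two states (allocatable/allocated) guarded by a mutex so that parallel tasks never pick the same server; all names are assumptions:

```go
package pool

import (
	"errors"
	"sync"
)

// ServerPool tracks which test servers are allocatable.
type ServerPool struct {
	mu   sync.Mutex
	free map[string]bool // server address -> currently allocatable
}

func NewServerPool(addrs []string) *ServerPool {
	p := &ServerPool{free: map[string]bool{}}
	for _, a := range addrs {
		p.free[a] = true
	}
	return p
}

// Acquire picks any allocatable server and marks it allocated so no other
// task can use it.
func (p *ServerPool) Acquire() (string, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for addr, ok := range p.free {
		if ok {
			p.free[addr] = false
			return addr, nil
		}
	}
	return "", errors.New("no allocatable server")
}

// Release returns the server to the allocatable state after the job ends.
func (p *ServerPool) Release(addr string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.free[addr] = true
}
```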
In an optional embodiment, after the testing the target test case by using the testing tool to obtain an execution result, the method further includes:
calculating deviation data between the performance data and the previous performance data, wherein the previous performance data is the performance data obtained by the last test of the database of the same type as the database to be tested;
calculating a difference value between the performance data and performance reference data, wherein the performance reference data is calculated based on historical performance data under the condition that the deviation data is not in a preset deviation range;
if the difference value is larger than a second preset threshold value, updating the performance reference data; discarding the test data if the difference value is smaller than a third preset threshold value, wherein the third preset threshold value is smaller than the second preset threshold value;
and updating the performance reference data under the condition that the deviation data is within a preset deviation range.
In some embodiments, the TPS data of the current version is compared with the test result of the same db type on the same disk from the previous run to obtain the influence of the code change in the current test data on performance, and this is compared against the expected performance deviation submitted by the developer (±5% by default if none is submitted). If the influence is within ±5%, the result is judged normal; if the deviation is greater than the expected +5%, performance is judged improved; in both cases the performance reference data needs to be updated and the code can be merged. If the deviation is less than the expected -5%, performance is judged degraded, this is fed back to GitLab, and code merging is not allowed.
After developers receive the degradation feedback and confirm that it is expected, the task can be rebuilt, with the allowable range of the performance deviation marked in the merge request, so that during the rerun the automated task can judge the TPS result normal according to the deviation data and the code can be merged normally.
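The evaluation rule above condenses into one comparison function; a minimal sketch, assuming TPS is the compared metric and using the ±5% default from the text (function and type names are assumptions):

```go
package eval

// Verdict classifies a run against the expected performance deviation.
type Verdict int

const (
	Normal    Verdict = iota // within ±expected: baseline updated, merge allowed
	Improved                 // above +expected: baseline updated, merge allowed
	Regressed                // below -expected: fed back to GitLab, merge blocked
)

// Judge compares the current TPS with the previous run of the same db type
// on the same disk; expectedPct defaults to ±5%.
func Judge(currentTPS, previousTPS, expectedPct float64) Verdict {
	if expectedPct <= 0 {
		expectedPct = 5
	}
	changePct := (currentTPS - previousTPS) / previousTPS * 100
	switch {
	case changePct > expectedPct:
		return Improved
	case changePct < -expectedPct:
		return Regressed
	default:
		return Normal
	}
}
```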
Step 205, generating a performance test result of the database to be tested based on the execution result.
In an optional embodiment, the generating, based on the execution result, a performance test result of the database to be tested includes:
comparing the equipment resource data with a first preset threshold value;
when the equipment resource data is larger than a first preset threshold value, determining a sub-functional module with the most consumption of the equipment resource data in the database to be tested;
and generating the performance test result based on the sub-functional module and the equipment resource data.
In some embodiments, the TPS and latency information from multiple servers with the same type of disk are compared in the test; the TPS and latency deviation is required not to exceed the value specified in the merge request (5% by default if not filled in). Otherwise, the execution error is considered too large, a re-execution notification is pushed, and the test is re-executed. In addition, the re-execution process is performed at most once, so that repeated execution does not continuously occupy server resources.
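A minimal sketch of this cross-server consistency check and the at-most-one re-execution policy; defining the spread relative to the smallest result is an assumption:

```go
package retry

// consistent reports whether the parallel TPS results agree within
// maxDeviationPct; the spread is taken relative to the smallest result.
func consistent(tps []float64, maxDeviationPct float64) bool {
	if len(tps) < 2 {
		return true
	}
	lo, hi := tps[0], tps[0]
	for _, v := range tps[1:] {
		if v < lo {
			lo = v
		}
		if v > hi {
			hi = v
		}
	}
	return (hi-lo)/lo*100 <= maxDeviationPct
}

// RunWithRetry re-executes at most once when results disagree, so repeated
// execution does not keep occupying server resources.
func RunWithRetry(run func() []float64, maxDeviationPct float64) []float64 {
	tps := run()
	if !consistent(tps, maxDeviationPct) {
		tps = run() // single re-execution, per the policy above
	}
	return tps
}
```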
In an alternative embodiment, the database performance testing method of the present application further includes:
monitoring the testing process of the database to be tested;
and reporting and summarizing the performance data and/or the equipment resource data in the execution result when the occurrence of the abnormal performance data and/or the abnormal equipment resource data is detected.
In some embodiments, monitoring of the test process primarily includes process monitoring, resource monitoring, and performance data monitoring.
The process monitoring starts a timer, periodically polls the current execution state, and if the execution state is found to be abnormal, calls an abnormal pushing module to report the abnormality.
Resource monitoring starts a timer to periodically query the current server's resource usage; for example, if CPU usage is found to exceed the 90% threshold, an abnormality is reported (a minimal sketch follows below).
The performance data monitoring is used for providing a visual page, so that the performance change condition can be observed at any time in the testing process.
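The resource-monitoring timer could be sketched as follows; the third-party gopsutil library is used here to sample CPU usage, which is an assumption, since the application does not name a sampling mechanism:

```go
package monitor

import (
	"time"

	"github.com/shirou/gopsutil/v3/cpu"
)

// WatchCPU polls overall CPU utilization once per period and invokes report
// (e.g. the exception pushing module) when usage crosses thresholdPct,
// mirroring the 90% example above.
func WatchCPU(period time.Duration, thresholdPct float64, report func(string), stop <-chan struct{}) {
	ticker := time.NewTicker(period)
	defer ticker.Stop()
	for {
		select {
		case <-stop:
			return
		case <-ticker.C:
			pct, err := cpu.Percent(0, false) // usage since the previous call
			if err != nil || len(pct) == 0 {
				continue
			}
			if pct[0] > thresholdPct {
				report("cpu usage abnormal")
			}
		}
	}
}
```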
Furthermore, after an abnormality is detected, abnormality pushing and result pushing can be performed.
Abnormality pushing can be combined with the monitoring system, so that abnormalities captured in the monitoring system, or results after a round of testing whose deviation is too large and which require re-execution, are pushed to the relevant developers and testers through related programs, facilitating timely attention.
Result pushing can send the final test report to the mailboxes of the relevant personnel.
In a specific embodiment, referring to fig. 3, the database performance test method of the present application includes:
first, performance testing tool adaptation and modification: selecting a goycsb as a tool for the performance test of the self-grinding db, and performing corresponding adaptation and transformation, wherein the main transformation is that a, a self-grinding database is adapted according to requirements except for the self-grinding db according to the use instruction of the tool so that the goycsb can support the performance test of the self-grinding db, b, a performance test data real-time pushing function is added, and the grafana is combined for visualization; c. the performance image process data of the process is stored for subsequent automatic performance bottleneck analysis.
Second, test task analysis and parallel testing: and (3) establishing a jenkins task for performance test, configuring a corresponding gitlab webhook, enabling the task to be automatically triggered after development and submission codes, analyzing and identifying key information after jenkins acquire webhook related information, and distributing the key information to a subordinate task.
Third, the db version of the performance testing tool is replaced: because the db version to be tested is strongly coupled in the test tool, db version information of the current trigger automatic test identified in the second step is needed, the db version in the original code is replaced by a text analysis technology, and then the test tool is constructed.
Fourth, accurate analysis of test scenes: because the time consumption of the performance test is often longer, in order to shorten the execution time, the development and modification codes are considered to be subjected to accurate analysis, the possible influence range of the code modification is identified, test cases aiming at the modified codes are accurately generated, for example, the modification only affects the insertion flow, and only the use cases related to the insertion flow are generated;
fifth, multitasking parallel execution and server state management: in order to improve the test efficiency, consider that all jobs can run in parallel, including a. Tasks of different dbs can be executed in parallel; b. to eliminate performance variability due to server performance fluctuations or network fluctuations that may exist for a single db test, the tasks of a single db may also be distributed to multiple servers to run in parallel. State management of the server is also required due to support of parallel execution of jobs.
Sixth, test execution and process monitoring: the method comprises the steps of starting test case execution, and simultaneously starting a monitoring system, wherein the process monitoring system comprises process monitoring, resource monitoring and performance data monitoring, and is mainly used for timely reporting the abnormality occurring in the execution process so as to facilitate manual timely intervention processing, and the main function of the process monitoring system is used for monitoring abnormal conditions such as sudden rising of resources such as CPU/memory and the like or error of execution of a certain process and the like, and pushing the abnormal conditions to testers triggering development of the automatic test and related tasks. The resource monitoring and the performance data monitoring are convenient for observing performance test execution conditions at any time and automatically analyzing the performance data obtained later.
Seventh, test result deviation confirmation: and after the test is finished, acquiring the most critical TPS data and delay data in the test result, and performing comparison analysis, wherein the comparison is mainly performed on the running results of a plurality of servers in the test, the test result deviation is acquired, and if the execution error is considered to be larger, the re-execution notification is pushed and the test is re-executed.
Eighth, automatic analysis of performance bottlenecks: after all test results are obtained, the possible performance bottlenecks are automatically analyzed, and the analysis is mainly performed from the two aspects of machine resource utilization rate and process portrait.
And ninth, evaluating whether code change is allowed, comparing the current version TPS and other data with the db performance reference data (TPS data) to obtain the influence degree of the current code change on the performance, if the current code change is judged to be reduced, linking to the gitlab, not allowing code combination, and if the current code change is not reduced, updating the performance reference data.
The database performance test method above realizes automated performance-test regression and performance bottleneck analysis for the blockchain underlying dbs. The regression process is highly efficient and avoids the risk that the test flow and test results fail to meet expectations due to error-prone manual execution and uneven technical skill among personnel; it improves test efficiency, reduces test execution cost, and facilitates continuous tracking of db performance.
The advantages include, but are not limited to, the following:
The performance testing tool is improved to display performance test data in real time, providing a well-visualized performance data monitoring page.
Changed code is precisely analyzed, so that performance test cases targeting the changes are generated, shortening execution time.
For a single code submission, multiple servers are tested in parallel, shortening execution time.
The results of multiple test runs on different servers are compared to guarantee the reliability of the test results; if the results are unreliable, the test is automatically retried.
An execution-process monitoring system is configured, ensuring that execution exceptions and resource exceptions are discovered and resolved at the first moment.
The performance bottleneck analysis process is automated and an analysis report is provided, making test results clearer and more definite.
Based on the same conception, the embodiment of the present application provides a database performance testing device, and the specific implementation of the device may be referred to the description of the embodiment of the method, and the repetition is omitted, as shown in fig. 4, where the device mainly includes:
an obtaining module 401, configured to obtain test data of a database to be tested;
a determining module 402, configured to determine a test tool corresponding to the test data;
a first generating module 403, configured to generate a target test case based on the test data;
the test module 404 is configured to test the target test case by using the test tool to obtain an execution result;
and a second generating module 405, configured to generate a performance test result of the database to be tested based on the execution result.
In one embodiment, referring to fig. 5, the database performance testing apparatus of the present application includes: the device comprises a preprocessing module, an executing module, a pushing module, a persistence module and a monitoring module.
Before implementation, some general services, including but not limited to Jenkins, MongoDB, InfluxDB, Grafana, and collectd, are pre-built on all servers that execute tests; after these services are deployed, the implementation work can proceed.
In addition, in the preliminary preparation work, the selected performance test tool must be modified to support the various self-developed db types, including performance-test adaptation of interfaces such as Insert, BatchInsert, Delete, BatchDelete, and Update. A real-time performance test data pushing function is added, and the data are stored in a time-series database, including data that vary over time such as BPS (blocks per second), TPS (transactions per second), transaction latency, block latency, the total amount of kv data, and the total amount of block data; pprof data are also generated at fixed time intervals and stored in files.
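The real-time pushing function can be sketched against InfluxDB's HTTP write endpoint. This assumes an InfluxDB 1.x-style /write API, and the measurement name is illustrative; the line-protocol format itself ("measurement fields timestamp") is InfluxDB's documented wire format:

```go
package push

import (
	"fmt"
	"net/http"
	"strings"
	"time"
)

// WritePoint sends one sample to InfluxDB's /write endpoint using line
// protocol, e.g. WritePoint("http://127.0.0.1:8086", "perf", "tps", 1234).
func WritePoint(baseURL, db, measurement string, value float64) error {
	line := fmt.Sprintf("%s value=%f %d", measurement, value, time.Now().UnixNano())
	url := fmt.Sprintf("%s/write?db=%s", baseURL, db)
	resp, err := http.Post(url, "text/plain", strings.NewReader(line))
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("influxdb write failed: %s", resp.Status)
	}
	return nil
}
```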
Specifically, the preprocessing module mainly comprises a gitlab hook information acquisition module, a db basic information analysis module, a change range analysis module, a performance test tool adaptation module and a test server state control module.
The gitlab hook information acquisition module: by development convention, when code is submitted or a merge request (mr) is created, key information such as the performance deviation and the repetition count is filled in the description information, with default values used when it is not filled in. After the hook information is obtained, it is validated; if it does not conform to the rules, a friendly prompt is pushed to the person who triggered it.
The db basic information analysis module is used for parsing and identifying the name, version information, description information, and the like of the db to be tested from the validated hook information, and for extracting from the description information the key information, such as the performance deviation and the repetition count, that affects the performance test execution process.
The change range analysis module is used for first establishing a correspondence table between db code modules and use cases, then identifying the modules involved in the change through a diff tool, and further identifying the affected performance range through the module-to-use-case table; for example, a certain change may affect only deletion performance, or may not affect performance at all.
The test case generation module is used for screening out the test cases to be executed based on the affected performance range and other information. Considering that the test scenarios of the various dbs are roughly the same, only one set of test cases is maintained for all dbs to avoid excessive case maintenance work, and to shorten performance-test execution time, screening is performed in two dimensions: first, by the affected performance range, e.g. screening out the performance test cases related to the insertion operation; second, through a maintained correspondence table between db types and use cases, e.g. if a db type only supports sequential insertion, no random-key-pattern test is required.
The server state control module is used for controlling the state of the test servers. During testing, a server whose state is allocatable is selected; once selected, its state is changed to allocated so that other tasks can no longer use it, and it is changed back to allocatable after the job finishes.
The execution module comprises a use case execution module, a result confirmation module, a bottleneck analysis module, a report generation module and a performance change evaluation module.
And the use case execution module is used for starting the test tool according to the corresponding test configuration.
The test result confirmation module is used for comparing the TPS and latency information from the multiple servers with the same type of disk in the test; the TPS and latency deviation must not exceed the value specified in the merge request (5% by default if not filled in). Otherwise, the execution error is considered too large, a re-execution notification is pushed, and the test is re-executed; in addition, the re-execution process runs at most once, so that repeated execution does not continuously occupy server resources.
The bottleneck analysis module is used for automatically performing bottleneck analysis on server resources and process profile data, including: first, acquiring server resource data such as CPU, memory, and disk read/write conditions during the test and comparing them with set thresholds; if a threshold is exceeded, a bottleneck is assumed, e.g. if CPU utilization exceeds 80%, the CPU resource may be the bottleneck of the test. Second, using the pprof tool to analyze the process profiles stored during the test, obtaining data such as the 10 code modules with the highest CPU consumption, the 10 with the highest memory consumption, and the 10 with the highest goroutine growth at different stages.
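The profile side of this analysis can be automated by shelling out to the standard `go tool pprof -top` command over the stored profile files; a minimal sketch, with output parsing left illustrative:

```go
package bottleneck

import (
	"fmt"
	"os/exec"
	"strings"
)

// TopModules returns the lines of pprof's textual -top report for one
// stored profile file (works for CPU, heap, or goroutine profiles alike).
func TopModules(profilePath string, n int) ([]string, error) {
	out, err := exec.Command("go", "tool", "pprof", "-top",
		fmt.Sprintf("-nodecount=%d", n), profilePath).CombinedOutput()
	if err != nil {
		return nil, fmt.Errorf("pprof failed: %v: %s", err, out)
	}
	return strings.Split(strings.TrimSpace(string(out)), "\n"), nil
}
```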
and the report generation module is used for generating test reports from the performance data, the resource monitoring data, the bottleneck analysis data and the like, and storing the report data through the mongolib so as to confirm test results in the next test.
The performance change evaluation module is used for obtaining the influence of the current code change on performance by comparing the current version's TPS and other data with the previous test result of the same db type on the same disk, against the expected performance deviation submitted by the developer (±5% by default if none is submitted). If the influence is within ±5%, the result is judged normal; if the deviation is greater than the expected +5%, performance is judged improved; in both cases the performance reference data needs to be updated and the code can be merged. If the deviation is less than the expected -5%, performance is judged degraded, this is fed back to GitLab, and code merging is not allowed. After developers receive the degradation feedback and confirm that it is expected, the task can be rebuilt, with the allowable range of the performance deviation marked in the merge request, so that during the rerun the automated task can judge the TPS result normal according to the deviation data and the code can be merged normally.
The monitoring module comprises a process monitoring module, a resource monitoring module and a performance data monitoring module.
And the process monitoring module is used for starting a timer, periodically polling the current execution state, and calling the abnormal pushing module to report the abnormality if the execution state is found to be abnormal.
The resource monitoring module is used for starting a timer and periodically querying the current server resource usage; for example, if CPU usage is found to exceed the 90% threshold, an abnormality is reported.
And the performance data monitoring module is used for providing a visual page, so that the performance change condition can be observed at any time in the testing process.
The pushing module comprises an abnormal pushing module and a result pushing module.
The abnormality pushing module is used, in combination with the monitoring system, for pushing abnormalities captured by the monitoring system, or results after a round of testing whose deviation is too large and which require re-execution, to the relevant developers and testers through DingTalk, facilitating timely attention.
And the result pushing module is used for sending the final test report to the mailbox of the relevant personnel.
The persistence module includes a time-series db module, a MongoDB module, a file system module, and the like.
The time-series db module is used for storing time-varying performance index data, such as the TPS, BPS, and latency at each moment; it is integrated into the performance test tool during the tool's modification, and data are stored once the performance test starts.
The MongoDB module is used for providing query and storage of test data and reports.
The file system module is used for storing the whole-process profile (pprof) data coupled to the performance test tool and the db.
Based on the same concept, the embodiment of the application also provides an electronic device, as shown in fig. 6, where the electronic device mainly includes: processor 601, memory 602, and communication bus 603, wherein processor 601 and memory 602 accomplish communication with each other through communication bus 603. The memory 602 stores a program executable by the processor 601, and the processor 601 executes the program stored in the memory 602 to implement the following steps:
acquiring test data of a database to be tested;
determining a test tool corresponding to the test data;
generating a target test case based on the test data;
testing the target test case by using the testing tool to obtain an execution result;
and generating a performance test result of the database to be tested based on the execution result.
The communication bus 603 mentioned for the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 603 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 6, but this does not mean there is only one bus or one type of bus.
The memory 602 may include random access memory (Random Access Memory, simply RAM) or may include non-volatile memory (non-volatile memory), such as at least one disk memory. Alternatively, the memory may be at least one memory device located remotely from the aforementioned processor 601.
The processor 601 may be a general-purpose processor including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a digital signal processor (Digital Signal Processing, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a Field programmable gate array (Field-Programmable Gate Array, FPGA), or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present application, there is also provided a computer-readable storage medium having stored therein a computer program which, when run on a computer, causes the computer to perform the database performance test method described in the above embodiments.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, by a wired (e.g., coaxial cable, optical fiber, digital Subscriber Line (DSL)), or wireless (e.g., infrared, microwave, etc.) means from one website, computer, server, or data center to another. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape, etc.), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A database performance testing method, comprising:
acquiring test data of a database to be tested;
determining a test tool corresponding to the test data;
generating a target test case based on the test data;
testing the target test case by using the testing tool to obtain an execution result;
and generating a performance test result of the database to be tested based on the execution result.
2. The method for testing the performance of the database according to claim 1, wherein the determining the test tool corresponding to the test data comprises:
extracting version information of the database to be tested in the test data;
the test tool is determined based on the version information.
3. The database performance testing method according to claim 1, wherein the generating the target test case based on the test data includes:
identifying altered target test data in the test data;
determining target performance affecting the database to be tested based on the target test data;
and determining the test case corresponding to the target performance as the target test case.
4. The database performance testing method according to claim 1, wherein the generating the target test case based on the test data includes:
determining a target type of the database to be tested based on the test data;
and determining the test case corresponding to the target type as the target test case.
5. The method for testing the performance of the database according to claim 1, wherein the execution result comprises: equipment resource data, wherein the equipment resource data is the resource data of equipment for executing the test process;
the generating the performance test result of the database to be tested based on the execution result comprises the following steps:
comparing the equipment resource data with a first preset threshold value;
when the equipment resource data is larger than a first preset threshold value, determining a sub-functional module with the most consumption of the equipment resource data in the database to be tested;
and generating the performance test result based on the sub-functional module and the equipment resource data.
6. The database performance testing method of claim 1, wherein the execution results include performance data generated during the testing process;
after the target test case is tested by using the testing tool to obtain the execution result, the method further comprises:
calculating deviation data between the performance data and the previous performance data, wherein the previous performance data is the performance data obtained by the last test of the database of the same type as the database to be tested;
calculating a difference value between the performance data and performance reference data, wherein the performance reference data is calculated based on historical performance data under the condition that the deviation data is not in a preset deviation range;
if the difference value is larger than a second preset threshold value, updating the performance reference data; discarding the test data if the difference value is smaller than a third preset threshold value, wherein the third preset threshold value is smaller than the second preset threshold value;
and updating the performance reference data under the condition that the deviation data is within a preset deviation range.
7. The database performance testing method of claim 1, further comprising:
monitoring the testing process of the database to be tested;
and reporting and summarizing the performance data and/or the equipment resource data in the execution result when the occurrence of the abnormal performance data and/or the abnormal equipment resource data is detected.
8. A database performance testing apparatus, comprising:
the acquisition module is used for acquiring the test data of the database to be tested;
the determining module is used for determining a testing tool corresponding to the testing data;
the first generation module is used for generating a target test case based on the test data;
the test module is used for testing the target test case by using the test tool to obtain an execution result;
and the second generation module is used for generating a performance test result of the database to be tested based on the execution result.
9. An electronic device, comprising: the device comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to execute a program stored in the memory, and implement the database performance testing method of any one of claims 1 to 7.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the database performance testing method of any of claims 1-7.
CN202211644475.5A 2022-12-20 2022-12-20 Database performance test method and device, electronic equipment and storage medium Pending CN116204410A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211644475.5A CN116204410A (en) 2022-12-20 2022-12-20 Database performance test method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211644475.5A CN116204410A (en) 2022-12-20 2022-12-20 Database performance test method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116204410A true CN116204410A (en) 2023-06-02

Family

ID=86518186

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211644475.5A Pending CN116204410A (en) 2022-12-20 2022-12-20 Database performance test method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116204410A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination