CN116974874A - Database testing method and device, electronic equipment and readable storage medium - Google Patents

Database testing method and device, electronic equipment and readable storage medium

Info

Publication number
CN116974874A
CN116974874A (application CN202311010236.9A)
Authority
CN
China
Prior art keywords
test
node
program
target database
database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311010236.9A
Other languages
Chinese (zh)
Inventor
叶安达
潘安群
雷海林
张文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202311010236.9A priority Critical patent/CN116974874A/en
Publication of CN116974874A publication Critical patent/CN116974874A/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3457Performance evaluation by simulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/008Reliability or availability analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/3006Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system is distributed, e.g. networked systems, clusters, multiprocessor systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3003Monitoring arrangements specially adapted to the computing system or computing system component being monitored
    • G06F11/302Monitoring arrangements specially adapted to the computing system or computing system component being monitored where the computing system component is a software system
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/3089Monitoring arrangements determined by the means or processing involved in sensing the monitored data, e.g. interfaces, connectors, sensors, probes, agents
    • G06F11/3093Configuration details thereof, e.g. installation, enabling, spatial arrangement of the probes
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F11/00Error detection; Error correction; Monitoring
    • G06F11/30Monitoring
    • G06F11/34Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G06F11/3409Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment
    • G06F11/3419Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment by assessing time
    • G06F11/3423Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation ; Recording or statistical evaluation of user activity, e.g. usability assessment for performance assessment by assessing time where the assessed time is active or idle time
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2201/00Indexing scheme relating to error detection, to error correction, and to monitoring
    • G06F2201/80Database-specific techniques

Abstract

The embodiment of the application discloses a database testing method and device, an electronic device, and a readable storage medium, wherein the method comprises the following steps: acquiring at least one test node corresponding to a target database, wherein the target database comprises a plurality of data nodes; loading a test program for the target database, and transmitting the test program to the test nodes based on a first communication connection so that each test node loads the test program; transmitting configuration information corresponding to at least one test task to the test nodes so that each test node tests the data nodes in the target database through the test program based on the configuration information; and receiving an initial test result returned by each test node, and fusing the initial test results to obtain a target test result of the target database. The application can improve the stability and accuracy of distributed database testing.

Description

Database testing method and device, electronic equipment and readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and apparatus for testing a database, an electronic device, and a readable storage medium.
Background
Compared with a traditional single-node database, a distributed database consists of multiple data nodes and can meet the demands of large-scale data processing and highly concurrent access. Distributed databases are typically tested to ensure that they operate stably and reliably. When testing a distributed database, the testing tool can initiate tests against multiple data nodes simultaneously to simulate a realistic test environment.
However, conventional test tools can only be deployed on a single test machine, and simulating large-scale data processing tests with only one machine drives that machine's CPU and network bandwidth resources to a bottleneck. Moreover, conventional testing tools were designed for single-node databases and cannot accurately simulate the real test environment of multi-node data processing.
Therefore, testing a distributed database with a conventional testing tool can produce inaccurate test results and impair accurate assessment of the distributed database's overall performance.
Disclosure of Invention
The embodiment of the application provides a method, a device, electronic equipment and a storage medium for testing a database, which can improve the stability and accuracy of testing the distributed database.
An embodiment of the present application provides a method for testing a database, where the method includes:
acquiring at least one test node corresponding to a target database, wherein the target database comprises a plurality of data nodes;
loading a test program for the target database, and sending the test program to the test node so that the test node loads the test program;
transmitting configuration information corresponding to at least one test task to the test node, so that the test node tests the data node in the target database through the test program based on the configuration information;
and receiving an initial test result returned by each test node, and fusing the initial test results to obtain a target test result of the target database.
Accordingly, a second aspect of the embodiment of the present application provides another method for testing a database, where the method includes:
receiving a test program, sent by a master node, for a target database, and loading the test program, wherein the target database comprises a plurality of data nodes;
acquiring configuration information corresponding to at least one test task, wherein the configuration information comprises pressure generation information of the test task and access information of the target database;
generating at least one processing request corresponding to the test task through the test program and the pressure generation information;
establishing a third communication connection with the target database according to the access information, sending the processing request to the data node, and collecting an initial test result of the data node for processing the test task;
and sending the initial test result to the master node so that the master node fuses the initial test result to obtain a target test result of the target database.
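The test-node side described above — load the program, generate processing requests under the configured pressure, collect an initial test result, and report it — can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, the shapes of `pressure_info` and `access_info`, and the result fields are all assumptions.

```python
import time

def run_test_node(pressure_info, access_info, send_request):
    """Sketch of one test node executing a test task.

    pressure_info: {"concurrency": int, "requests_per_worker": int}
    access_info:   {"address": str, "port": int}  # target database endpoint
    send_request:  callable simulating one request; returns its latency in seconds
    """
    latencies = []
    start = time.perf_counter()
    # Generate the processing requests dictated by the pressure generation information.
    total = pressure_info["concurrency"] * pressure_info["requests_per_worker"]
    for _ in range(total):
        latencies.append(send_request(access_info))
    elapsed = time.perf_counter() - start
    # The initial test result that would be sent back to the master node.
    return {
        "requests": total,
        "qps": total / elapsed if elapsed > 0 else 0.0,
        "avg_latency": sum(latencies) / len(latencies),
    }

# Usage: a stubbed request whose latency is a constant 1 ms.
result = run_test_node(
    {"concurrency": 4, "requests_per_worker": 10},
    {"address": "127.0.0.1", "port": 3306},
    lambda _access: 0.001,
)
```

A real node would issue requests concurrently (threads or async workers); the sequential loop here only shows the bookkeeping.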
Accordingly, a third aspect of the embodiments of the present application provides a test apparatus for a database, the apparatus including:
the test node acquisition unit is used for acquiring at least one test node corresponding to a target database, wherein the target database comprises a plurality of data nodes;
a test program loading unit, configured to load a test program for the target database, and send the test program to the test node, so that the test node loads the test program;
the configuration information sending unit is used for sending configuration information corresponding to at least one test task to the test node so that the test node tests the data node in the target database through the test program based on the configuration information;
and the test result receiving unit is used for receiving the initial test result returned by each test node and fusing the initial test results to obtain the target test result of the target database.
Optionally, the configuration information includes pressure generation information and access information;
the configuration information sending unit is specifically further configured to:
configuring initial concurrent pressure of each test node and target concurrent pressure corresponding to the initial concurrent pressure to obtain the pressure generation information;
configuring an address, a port and an access code of the target database to obtain the access information;
and writing configuration information containing the pressure generation information and the access information into a configuration file of each test node.
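A minimal sketch of writing the configuration information (pressure generation information plus access information) into a per-node configuration file. The JSON schema and field names are illustrative assumptions; the patent does not specify a file format.

```python
import json
import tempfile
from pathlib import Path

def write_node_config(node_name, initial_pressure, target_pressure,
                      address, port, access_code, out_dir="."):
    """Write one test node's configuration file (illustrative schema)."""
    config = {
        "pressure": {  # pressure generation information
            "initial_concurrency": initial_pressure,
            "target_concurrency": target_pressure,
        },
        "database": {  # access information of the target database
            "address": address,
            "port": port,
            "access_code": access_code,
        },
    }
    path = Path(out_dir) / f"{node_name}.json"
    path.write_text(json.dumps(config, indent=2))
    return config

# Usage: one config file per test node, written to a scratch directory.
cfg_dir = tempfile.mkdtemp()
cfg = write_node_config("test-node-1", 8, 64, "10.0.0.5", 3306, "secret",
                        out_dir=cfg_dir)
```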
Optionally, the test result receiving unit is further specifically configured to:
acquiring log data returned by the test node;
and analyzing the log data, and extracting the initial test result.
Optionally, the initial test result comprises a query rate per second and a task processing duration;
the test result receiving unit is further specifically configured to:
acquiring a weight corresponding to each initial test result;
according to the weight of each initial test result, carrying out a weighted summation operation on the query rate per second of each data node to obtain the query rate per second of the target database;
and carrying out weighted summation operation on the task processing time length of each data node according to the weight of each initial test result to obtain the task processing time length of the target database.
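The weighted fusion of per-node results described above can be sketched as follows. The field names (`weight`, `qps`, `task_duration`) are assumptions; a plain weighted sum is used, as the claim states, and in the example the weights sum to 1 so the result reads as a weighted average.

```python
def fuse_results(initial_results):
    """Fuse per-node initial test results into the target test result.

    initial_results: list of dicts with illustrative keys
    "weight", "qps" (queries per second), "task_duration" (seconds).
    """
    qps = sum(r["weight"] * r["qps"] for r in initial_results)
    duration = sum(r["weight"] * r["task_duration"] for r in initial_results)
    return {"qps": qps, "task_duration": duration}

# Usage: two nodes whose weights sum to 1.
fused = fuse_results([
    {"weight": 0.25, "qps": 1200.0, "task_duration": 0.8},
    {"weight": 0.75, "qps": 800.0, "task_duration": 1.2},
])
# fused["qps"] is 0.25*1200 + 0.75*800 = 900.0
```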
Optionally, the test result receiving unit includes:
the fusion subunit is used for loading an additional test program and fusing the initial test results through the additional test program to obtain a fused test result;
the characteristic test subunit is used for carrying out characteristic test on the target database through the additional test program to obtain a characteristic test result of the target database;
and the target test result determining subunit is used for taking the fused test result and the characteristic test result as target test results of the target database.
Optionally, the characteristic testing subunit is further specifically configured to:
establishing a second communication connection with each data node of the target database according to the access information of the target database;
transmitting a processing request of a characteristic test task to each data node through the additional test program based on the second communication connection;
and receiving an initial characteristic test result returned by each data node, and analyzing the initial characteristic test result to obtain a characteristic test result for detecting the characteristics of the target database.
Optionally, the apparatus further comprises:
an uninstall instruction generation unit configured to generate an uninstall instruction for the test program when a test stop condition is satisfied;
the test program unloading unit is used for unloading the local test program and the test program of each test node according to the unloading instruction;
and the additional test program unloading unit is used for unloading the local additional test program when the local test program and the test program of each test node are unloaded.
Accordingly, a fourth aspect of the embodiments of the present application provides another database testing apparatus, the apparatus including:
the test program receiving unit is used for receiving a test program, sent by a master node, for a target database, and loading the test program, wherein the target database comprises a plurality of data nodes;
the configuration information acquisition unit is used for acquiring configuration information corresponding to at least one test task, wherein the configuration information comprises pressure generation information of the test task and access information of the target database;
a processing request generating unit, configured to generate at least one processing request corresponding to the test task through the test program and the pressure generating information;
the processing request sending unit is used for establishing third communication connection with the target database according to the access information, sending the processing request to the data node, and collecting an initial test result of the data node for processing the test task;
and the test result sending unit is used for sending the initial test result to the master node so that the master node fuses the initial test result to obtain a target test result of the target database.
Optionally, the test program includes a task simulation program and a network service program;
the processing request generation unit is specifically configured to:
generating an initial processing request corresponding to an initial test task through the task simulation program and the pressure generation information;
and executing the initial test task after the network service program receives the initial processing request, so as to generate a processing request of the test task.
The fifth aspect of the embodiments of the present application further provides a computer readable storage medium storing a plurality of instructions adapted to be loaded by a processor to perform the steps in any of the methods for testing a database provided by the embodiments of the present application.
The sixth aspect of the embodiment of the present application further provides a computer program product, which includes a computer program or an instruction, where the computer program or the instruction implement any one of the database testing methods provided by the embodiments of the present application when executed by a processor.
Therefore, by applying the embodiment of the application, the testing tool for the distributed database can be deployed on multiple test nodes; the master node establishes a communication connection with each test node by acquiring its address information, and each test node can in turn establish communication connections with the multiple data nodes of the distributed database. The master node configures a test task for the distributed database on each test node, so that the test tasks are simulated and issued by multiple test nodes. This avoids CPU and network bandwidth anomalies caused by excessive load pressure on any single test node, thereby ensuring the stability and reliability of distributed database testing.
In addition, the embodiment of the application simulates and issues test tasks for the distributed database through multiple test nodes, while the master node collects the test data returned by each test node, so that the whole test process of the distributed database forms a closed loop. The real test environment of multi-node data processing in a distributed database can thus be effectively simulated, improving the efficiency and accuracy of distributed database testing.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an application scenario schematic diagram of a database testing method provided by an embodiment of the present application;
FIG. 2a is a flowchart illustrating a method for testing a database according to an embodiment of the present application;
FIG. 2b is a flowchart illustrating a method for testing a database according to an embodiment of the present application;
FIG. 3a is a schematic diagram of a test system for testing a target database according to an embodiment of the present application;
FIG. 3b is a schematic diagram of a deployment of test programs at a host node and a test node according to an embodiment of the present application;
FIG. 3c is a schematic diagram of a master node control test node simulation test task provided by an embodiment of the present application;
FIG. 3d is a schematic diagram of a master node performing a property test on a database according to an embodiment of the present application;
FIG. 3e is a schematic diagram of an offloading test procedure according to an embodiment of the present application;
FIG. 4a is a schematic structural diagram of a database testing device according to an embodiment of the present application;
FIG. 4b is a schematic structural diagram of a database testing device according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to fall within the scope of the application.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and in the above figures are used to distinguish between similar objects and are not necessarily intended to describe a particular sequence or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments of the application described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In order to facilitate understanding of the technical solution and the technical effects thereof described in the embodiments of the present application, the embodiments of the present application explain related terms:
Database: in short, a database can be regarded as an electronic filing cabinet, namely a place for storing electronic files, in which objects can add, query, update, and delete the data in the files. A "database" is a collection of data that is stored together in a manner that can be shared by multiple objects, with as little redundancy as possible, and that is independent of any particular application.
Distributed database: refers to a database whose data is stored across multiple data nodes (machines) that cooperate over a network and are presented to applications as a single logical database. By distributing storage and computation across nodes, a distributed database can meet the demands of large-scale data processing and highly concurrent access.
Database performance test: a process of evaluating the performance and efficiency of a database system under different loads. It mainly measures the database's response speed, throughput, and resource utilization during operations such as querying, inserting, updating, and deleting.
Database characteristic test: also known as an ACID test, a method of testing a database to verify whether the database system satisfies the ACID properties, namely Atomicity, Consistency, Isolation, and Durability. These properties are an important basis on which a relational database management system (RDBMS) ensures the reliability of the database during transaction processing.
Wherein:
Atomicity testing checks whether a database transaction is atomic, i.e., all operations in the transaction either all succeed or are all rolled back, ensuring that the transaction is indivisible.
Consistency testing checks whether a database transaction moves the database from one consistent state to another, i.e., after the transaction executes, the database should be in a valid state that does not violate its integrity constraints.
Isolation testing checks whether database transactions remain isolated under concurrency, i.e., concurrently executing transactions do not interfere with each other, minimizing interaction between transactions.
Durability testing checks whether a database transaction's changes are permanently saved after the transaction commits, i.e., changes to the database are not lost due to a system crash or failure.
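As a minimal illustration of an atomicity check like the one described above, the sketch below uses Python's built-in sqlite3 as a stand-in for the database under test (an assumption; the patent does not prescribe a client) and verifies that a transaction that fails midway leaves no partial writes behind:

```python
import sqlite3

def check_atomicity(conn):
    """Minimal atomicity probe: a failing transaction must leave no
    partial writes behind. (A full characteristic test would also cover
    consistency, isolation, and durability.)"""
    conn.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)")
    conn.commit()
    before = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            conn.execute("INSERT INTO t (id, v) VALUES (1, 'first')")
            conn.execute("INSERT INTO t (id, v) VALUES (1, 'dup')")  # PK violation
    except sqlite3.IntegrityError:
        pass  # expected: the second insert fails
    after = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    return before == after  # True => both inserts were rolled back together

atomic = check_atomicity(sqlite3.connect(":memory:"))
```

The `with conn:` context manager commits on normal exit and rolls back on exception, so the first (successful) insert must disappear along with the failing one if the transaction is atomic.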
It should be noted that conventional database test tools, such as the sysbench and BenchmarkSQL test tools, were not designed with distributed database systems in mind, so the following problems arise when testing a distributed database in a distributed scenario:
limited to single-node test machines: conventional database testing tools were initially deployed to run on a single node machine, whereas distributed database systems typically consisted of multiple machine nodes, i.e., multiple data nodes. Because the CPU and network bandwidth of the single machine test machine are limited, the performance of the distributed database cannot be simulated truly, and the real performance of the database cannot be tested accurately.
Limited test pressure configuration: taking the sysbench and BenchmarkSQL test tools as examples, test pressure is changed by configuring the concurrency count, which offers no flexibility to adjust the pressure dynamically. For example, a test cannot be set to generate only 30%, 50%, or 80% of the maximum pressure; it can only run at a fixed concurrency, limiting the realism and diversity of the test.
Inability to test database characteristics (ACID): common performance testing tools only provide performance testing capabilities and lack tests of a distributed database's ACID characteristics. This means the transaction atomicity, consistency, isolation, and durability of the distributed database cannot be verified at the same time, and critical characteristics of the database under test may be overlooked.
Lack of a graphical interface: conventional database test tools typically record test results as text and lack an intuitive graphical interface. Performance curves and test data therefore cannot be observed in real time during testing, hampering intuitive understanding and analysis of the results.
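The fractional-pressure limitation noted above (no way to run at only 30%, 50%, or 80% of the configured pressure) can be addressed by mapping a pressure percentage onto the number of active workers. The sketch below is one possible policy, not the patent's; the ceiling-based rounding is an assumption.

```python
import math

def active_workers(max_concurrency, pressure_pct):
    """Map a pressure percentage (e.g. 30, 50, 80) to the number of
    concurrent workers a test node should keep active."""
    if not 0 <= pressure_pct <= 100:
        raise ValueError("pressure_pct must be in [0, 100]")
    if pressure_pct == 0:
        return 0
    # Round up so that any nonzero pressure keeps at least one worker active.
    return max(1, math.ceil(max_concurrency * pressure_pct / 100))

# Usage: with 40 configured workers, the three pressure levels from the text.
levels = {pct: active_workers(40, pct) for pct in (30, 50, 80)}
```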
In order to solve the above problems and drawbacks of conventional database testing tools, embodiments of the present application provide a database testing method, which may be executed by a database testing device, and the database testing device may be integrated in a computer device. The computer device may include at least one of a terminal, a server, and the like. That is, the database testing method provided by the embodiment of the application may be executed by a terminal, by a server, or jointly by a terminal and a server that can communicate with each other.
The terminals may include, but are not limited to, smart phones, tablet computers, notebook computers, personal computers (Personal Computer, PCs), smart appliances, wearable electronic devices, VR/AR devices, vehicle terminals, smart voice interaction devices, and the like.
The server may be an interworking server or a background server among multiple heterogeneous systems; an independent physical server; a server cluster or distributed system composed of multiple physical servers; or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, big data, and artificial intelligence platforms.
It should be noted that the embodiments of the present application may be applied to various scenarios, including, but not limited to, cloud technology, artificial intelligence, intelligent transportation, driving assistance, and the like.
Cloud technology (Cloud technology): a hosting technology that unifies hardware, software, network, and other resources in a wide area network or local area network to realize the computation, storage, processing, and sharing of data. It can be understood as a general term for the network, information, integration, management platform, and application technologies applied under the cloud computing business model; resources can be pooled and used on demand, flexibly and conveniently.
Background services of technical network systems require large amounts of computing and storage resources — for example, video websites, picture websites, and portal sites. With the rapid development of the internet industry, each object may have its own identification mark that must be transmitted to a background system for logical processing; data of different levels are processed separately, and all kinds of industry data need strong back-end system support, so cloud technology must be underpinned by cloud computing.
Cloud computing is a computing model that distributes computing tasks over a resource pool formed by large numbers of computers, enabling application systems to obtain computing power, storage space, and information services as needed. The network that provides the resources is referred to as the "cloud". From the user's perspective, resources in the cloud are infinitely expandable and can be acquired at any time, used on demand, expanded at any time, and paid for by use. As a basic capability provider of cloud computing, a cloud computing resource pool platform, called a cloud platform for short and generally referred to as infrastructure as a service (IaaS, Infrastructure as a Service), deploys multiple types of virtual resources in the resource pool for external clients to select and use.
The cloud computing resource pool mainly comprises: computing devices (which may be virtualized machines, including operating systems), storage devices, and network devices.
In an embodiment, as shown in fig. 1, the database testing device may be integrated on a computer device such as a terminal or a server, so as to implement the database testing method according to the embodiment of the present application. Specifically, the server 11 or the terminal 10 may acquire at least one test node corresponding to a target database, where the target database includes a plurality of data nodes; loading a test program aiming at the target database, and sending the test program to the test node so as to enable the test node to load the test program; transmitting configuration information corresponding to at least one test task to the test node, so that the test node tests the data node in the target database through the test program based on the configuration information; and receiving an initial test result returned by each test node, and fusing the initial test results to obtain a target test result of the target database.
The following detailed description is given, respectively, of the embodiments, and the description sequence of the following embodiments is not to be taken as a limitation of the preferred sequence of the embodiments.
The following describes the database testing method provided by the application. FIG. 2a is a flowchart of a database testing method according to an embodiment of the present application. The method provides operational steps as described in the embodiments or flowcharts, but may include more or fewer steps based on routine or non-inventive labor. The order of steps recited in the embodiments is merely one possible execution order and does not represent the only one. When implemented in a real system or server product, the methods illustrated in the embodiments or figures may be executed sequentially or in parallel (e.g., in a parallel-processor or multithreaded environment). Referring to FIG. 2a, a database testing method according to an embodiment of the present application may include the following steps:
Step 101, obtaining at least one test node corresponding to the target database.
Wherein the target database may be a distributed database comprising a plurality of data nodes.
The test node may be a node outside the database to be tested. It may be a physical machine, a standalone computer, or a virtual machine, and is configured to simulate object behavior and perform stress testing. The test node is responsible for simulating the operations of a real object, generating requests to the database, and applying test pressure to the database so as to evaluate its performance under different load and concurrency conditions.
In distributed database performance testing, there are typically multiple test nodes operating simultaneously, and by concurrently executing test tasks, a scenario is simulated in which multiple objects access the database simultaneously. The test nodes can cooperate with each other, and the distributed performance test target is realized by establishing communication connection with the test control node, receiving the test task and the configuration information and reporting the test result.
The number and configuration of the test nodes can be flexibly adjusted according to requirements so as to meet test scenes with different scales and complexity. By the cooperative work of a plurality of test nodes, the real object load can be more accurately simulated, and the performance and reliability of the distributed database can be comprehensively evaluated.
The master node may obtain address information of at least one test node. The master node may be a test control node for remotely controlling program installation of the test node, issuing test tasks, starting and stopping test procedures, etc. In general, the test control node is a command center of the distributed database performance test system and is responsible for coordinating and controlling the whole test process, ensuring the accuracy and reliability of the test, so as to provide reliable data basis for performance evaluation of the distributed database.
It will be appreciated that the master node facilitates subsequent connection establishment with each test node by obtaining the IP address of each test node.
Before testing the target database, the test system in fig. 3a needs to be deployed, and the main purpose is to install the required test program and dependency on each test node, so as to prepare for the subsequent performance test stage.
At least one test control node and one or more test nodes are required during the test deployment phase. The test control node is the core of the whole test system and is responsible for controlling and coordinating the whole test process. Test nodes are computing nodes for performing performance tests, and may be multiple, typically distributed across different machines.
When the machine resources of the test environment are sufficient, the test control node and the test node can be deployed on different machines, so that the real distributed environment of the distributed database can be better simulated. However, when machine resources are limited, the test control node and test node may also be deployed on the same machine, although not as realistic as the environments deployed separately on different machines, some simple performance tests may still be performed.
The test deployment phase is accomplished by the test control node. First, the IP addresses of all test nodes need to be configured on the test control node in order to subsequently establish a first communication connection with each test node. Then, the test control node can remotely install the test program so as to realize that the installation operation is performed on all the test nodes in parallel.
It will be appreciated that after the test program and dependency installation are completed, the test node is ready to perform subsequent performance test tasks. The test control node can remotely control the configuration of each test node through communication connection with the test node, so as to perform preparation work of performance test. After the test deployment phase is completed, the entire test system is ready for performance testing phase tasks.
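The parallel remote installation described in the deployment phase can be sketched as follows. All names here (`deploy_to_nodes`, `fake_install`) are hypothetical, and the stub installer stands in for a real SSH-based installation step:

```python
from concurrent.futures import ThreadPoolExecutor

def deploy_to_nodes(node_ips, install_fn):
    """Run install_fn (e.g. an SSH-based installer) against every test
    node in parallel and collect the per-node result."""
    with ThreadPoolExecutor(max_workers=len(node_ips)) as pool:
        results = list(pool.map(install_fn, node_ips))
    return dict(zip(node_ips, results))

# Stub standing in for a real remote installation of the test program.
def fake_install(ip):
    return f"installed test program on {ip}"

statuses = deploy_to_nodes(["10.0.0.1", "10.0.0.2", "10.0.0.3"], fake_install)
```

Driving every node from the control node in parallel keeps deployment time flat as the number of test nodes grows.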
Step 102, loading a test program aiming at the target database, and sending the test program to the test node so that the test node loads the test program.
Referring to fig. 3b, fig. 3b is a schematic diagram of a test program deployed between a host node and a test node according to an embodiment of the present application. It will be appreciated that in order to ensure that the test work is performed smoothly, test programs required for testing need to be loaded successfully at each test node.
In some embodiments, the test control node may remotely control each test node to install the test program according to the IP of all the test nodes. As shown in fig. 3a and 3b, wherein the test program may include a task simulator program and a web service program.
In some embodiments, the task simulator may be a program for simulating the operation of a real object, and may be used to simulate various operations, such as ordering, paying, viewing orders, etc., to generate HTTP requests. The generated HTTP request simulates the behavior of the object in practical applications, such as purchasing goods, querying information, etc.
In some embodiments, the web service program is a service that accepts HTTP requests and performs the actual business operations. For example, upon receiving an HTTP request sent by the task simulator, the web service program may translate the request into an operation that queries, inserts, updates, or deletes records in the distributed database, for example by converting the HTTP request into an SQL request for the database to execute. Specifically, the network service program may send the corresponding database operation to the target database to be tested according to the received request, so as to execute the simulated business operation of the object.
And step 103, sending configuration information corresponding to at least one test task to the test node, so that the test node tests the data node in the target database through the test program based on the configuration information.
The configuration information may include pressure generation information and access information. In some embodiments, the test control node may configure pressure generation information and access information for each test node. Specifically, when configuring the pressure generation information, the total pressure to be applied to the target database may be configured on the test control node. One configuration mode is to set a total pressure level first and then distribute the total pressure equally among the test nodes. For example, if the total pressure is set to 1000 concurrent requests and there are 5 test nodes, each test node is assigned 200 concurrent requests. Another configuration mode is to configure a different pressure level for each test node separately, for example, 400 concurrent requests for test node 1, 300 for test node 2, and 150 for test node 3, so as to implement a personalized pressure configuration for each node.
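The two pressure-configuration modes can be sketched as a small helper. `distribute_pressure` is a hypothetical name and the node addresses are illustrative:

```python
def distribute_pressure(total_concurrency, node_ips, per_node=None):
    """Split the total concurrent pressure across test nodes.
    If per-node overrides are given, use them; otherwise split equally."""
    if per_node is not None:
        return dict(per_node)
    share, remainder = divmod(total_concurrency, len(node_ips))
    # Hand any remainder to the first nodes so the total is preserved.
    return {ip: share + (1 if i < remainder else 0)
            for i, ip in enumerate(node_ips)}

nodes = [f"10.0.0.{i}" for i in range(1, 6)]
plan = distribute_pressure(1000, nodes)  # equal split: 200 per node
custom = distribute_pressure(0, nodes[:3],
                             per_node={"10.0.0.1": 400,
                                       "10.0.0.2": 300,
                                       "10.0.0.3": 150})
```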
Optionally, step 103 may include:
configuring initial concurrent pressure of each test node and target concurrent pressure corresponding to the initial concurrent pressure to obtain the pressure generation information;
configuring an address, a port and an access code of the target database to obtain the access information;
and writing configuration information containing the pressure generation information and the access information into a configuration file of each test node.
The initial concurrent pressure is the initial pressure value configured by the test control node, and the target concurrent pressure is the portion of the initial concurrent pressure that is in a non-idle state. By way of example only, assume each test node is configured with a total of 1000 concurrent connections and the target database is to be tested within 10 minutes; the actual concurrent pressure is further configured to be 200, i.e., only 20% of the 1000 concurrent connections are active. For example, in an online shopping scenario, 1000 objects browse a commodity page within 10 minutes, but only 200 objects per minute actually perform a purchase operation; the other 800 objects do not purchase anything, yet they still access the commodity page and occupy server resources.
By configuring the pressure generation information in the above manner, the load conditions in the actual use scene, including peak load and off-peak load, can be more truly simulated, the performance of the distributed database under different load conditions can be more comprehensively evaluated, and potential performance bottlenecks and resource underutilization problems can be found. Meanwhile, the system configuration is also facilitated to be optimized, and the overall performance is improved, so that the requirements of the object under the condition of high load are met.
In some embodiments, the distributed database may be configured for the test nodes participating in the test task to provide access information such as IP, port, account number, and password for services to the outside, so that the test nodes may access and operate the database correctly.
In some embodiments, the test control node may make configuration modifications to each test node remotely. Specifically, the test control node may update the pressure generation information and the access information set previously to the configuration file of the test node, so that the test node can perform the performance test according to the new configuration in the test stage.
And 104, receiving an initial test result returned by each test node, and fusing the initial test results to obtain a target test result of the target database.
In some embodiments, after the test control node has configured the configuration information for the test task, the test may be initiated by initiating an instruction to begin the test. As shown in fig. 3c, the test control node may remotely issue instructions to all test nodes to start testing in parallel. Each test node starts to simulate the request of the real object through the task simulation program, and initiates a corresponding HTTP request to the network service module to generate test pressure.
In some embodiments, the test control node may collect the initial test results returned by each test node through the data collection modules in fig. 3a and 3b. Specifically, when a test node performs a test task, it records an initial test result, that is, some important test indexes of the target database, for example, the task processing duration of each transaction, the query rate per second, the number of transaction requests, the number of successful transactions, and the number of timed-out transactions; the initial test result is recorded in the log of the test node.
Optionally, step 104 may include:
acquiring log data returned by the test node;
and analyzing the log data, and extracting the initial test result.
In some embodiments, the data collection program on the test control node may actively collect data on the test node, and extract key test indexes by analyzing log data recorded on the test node, so as to obtain an initial test result. For example, the task processing time length in the log data may be parsed to understand the execution time of each transaction; the request amount information can be parsed to learn the number of transaction requests per second; analyzing the success number information to calculate the successful transaction proportion; and analyzing the timeout number information to obtain the condition of the timeout transaction.
By collecting and analyzing the log data of the test nodes, the data collection program can acquire initial test results of the test nodes in real time, and return the initial test results to the test control node for subsequent performance analysis and display of the target database. The initial test result and the target test result obtained later can be displayed in the data display module in real time in the form of a graph or a histogram and the like, so that an object can intuitively know the performance of the test node and the running condition of the target database under high concurrent pressure. Therefore, by means of a mechanism that the test control node performs data acquisition on each test node, effective data support can be provided for performance test of the distributed database, and test results are more objective and accurate.
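The log-analysis step of the data collection program might look like the following sketch. The log-line format, field names, and `parse_test_log` are assumptions for illustration, not the actual format used by the test nodes:

```python
import re

LOG_LINE = re.compile(
    r"txn=(?P<txn>\w+)\s+status=(?P<status>\w+)\s+latency_ms=(?P<latency>\d+)")

def parse_test_log(lines):
    """Extract per-transaction metrics from assumed test-node log lines
    and aggregate them into an initial test result."""
    latencies, success, timeout = [], 0, 0
    for line in lines:
        m = LOG_LINE.search(line)
        if not m:
            continue  # skip unrelated log lines
        latencies.append(int(m.group("latency")))
        if m.group("status") == "ok":
            success += 1
        elif m.group("status") == "timeout":
            timeout += 1
    return {
        "requests": len(latencies),
        "success": success,
        "timeouts": timeout,
        "avg_latency_ms": sum(latencies) / len(latencies) if latencies else 0.0,
    }

log = [
    "2023-08-10 12:00:01 txn=t1 status=ok latency_ms=10",
    "2023-08-10 12:00:02 txn=t2 status=timeout latency_ms=500",
    "2023-08-10 12:00:03 txn=t3 status=ok latency_ms=14",
]
result = parse_test_log(log)
```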
The initial test result comprises a query rate per second and a task processing time length;
optionally, step 104 may include:
acquiring a weight corresponding to each initial test result;
according to the weight of each initial test result, carrying out weighted summation operation on the query rate per second of each data node to obtain the query rate per second of the target database;
and carrying out weighted summation operation on the task processing time length of each data node according to the weight of each initial test result to obtain the task processing time length of the target database.
It will be appreciated that since the distributed database stores data across a plurality of data nodes, an operation request may be processed by multiple data nodes at the same time. Each data node has a different workload and processing capacity, so each data node and its initial test result may correspond to a different weight.
For example only, assume that the weight of test node 1 is 0.4, the weight of test node 2 is 0.3, the weight of test node 3 is 0.2, and the weight of test node 4 is 0.1, and the weight of each test node is the weight corresponding to the initial test result. And assume that the initial test results of each test node are as follows: the query rate per second of the test node 1 is 800 times, the task processing duration is 10 milliseconds, the query rate per second of the test node 2 is 600 times, the task processing duration is 12 milliseconds, the query rate per second of the test node 3 is 1200 times, the task processing duration is 8 milliseconds, the query rate per second of the test node 4 is 1000 times, and the task processing duration is 9 milliseconds.
Further, a weighted summation operation may be performed in combination with the weights of the initial test results:
the query rate per second of the target database is (800×0.4)+(600×0.3)+(1200×0.2)+(1000×0.1)=840 queries per second, and the task processing duration of the target database is (10×0.4)+(12×0.3)+(8×0.2)+(9×0.1)=10.1 milliseconds.
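The weighted summation above can be reproduced with a short sketch; `fuse_results` is a hypothetical name and the per-node figures are the ones from the example:

```python
def fuse_results(results):
    """Weighted fusion of per-node initial test results.
    Each entry is (weight, queries_per_second, latency_ms)."""
    qps = sum(w * q for w, q, _ in results)
    latency = sum(w * l for w, _, l in results)
    return qps, latency

# (weight, QPS, task processing duration in ms) for test nodes 1-4
nodes = [(0.4, 800, 10), (0.3, 600, 12), (0.2, 1200, 8), (0.1, 1000, 9)]
qps, latency = fuse_results(nodes)
```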
In some embodiments, the weights of the test nodes can be set in the configuration information, and by configuring different weights for the test nodes and the corresponding initial test results, the importance degree of the test nodes in the performance index can be flexibly adjusted, so that the test results are closer to the actual situation of the distributed scene, and the accuracy and reliability of testing the distributed database can be improved.
Optionally, step 104 may include:
loading an additional test program, and fusing the initial test result through the additional test program to obtain a fused test result;
performing characteristic test on the target database through the additional test program to obtain a characteristic test result of the target database;
and taking the fused test result and the characteristic test result as target test results of the target database.
Wherein the additional test program is a test program installed on the test control node, but not necessarily on the test nodes. As shown in fig. 3b, the additional test programs may include a characteristic test program, a data acquisition program, and a data presentation program. It will be appreciated that a characteristic test of the target database tests the transaction-processing characteristics of the database and verifies its transaction processing capability and data integrity; since no large amount of pressure needs to be simulated, the characteristic test program only needs to be installed on the test control node.
In some embodiments, similar to performance testing, characteristic testing results returned by each testing node may be collected and then summarized and analyzed to obtain characteristic testing results of the target database.
Optionally, the step of performing a characteristic test on the target database by the additional test program to obtain a characteristic test result of the target database includes:
establishing a second communication connection with each data node of the target database according to the access information of the target database;
transmitting a processing request of a characteristic test task to each data node through the additional test program based on the second communication connection;
And receiving an initial characteristic test result returned by each data node, and analyzing the initial characteristic test result to obtain a characteristic test result for detecting the characteristics of the target database.
In some embodiments, the test control node may establish a second communication connection with each data node of the target database. As shown in FIG. 3d, the target database may be subjected to an atomicity test, an isolation test, a consistency test, and a durability test, respectively. For example, when an atomicity test is performed, the test control node may send each data node a processing request for a characteristic test task that queries a transfer operation, where the transfer operation transfers funds from object A to object B. If the initial characteristic test result returned by each data node of the target database is "the query result of the transfer operation is successful", the target database is verified to conform to atomicity. The consistency and isolation tests of the target database follow the same approach.
It should be noted that, the persistence test is a test for verifying that the distributed database can ensure that data is not lost when facing a fault scenario, so analysis is required by means of log data of the test node. During performance testing, the test node records the unique identification of each transaction processed by the target database and records in the log whether the transaction was successful or failed.
In some embodiments, in persistence testing, the test pressure may be set to remain unchanged and a system restart operation may be performed on data nodes of the target database, such as a compute node and a storage node, to simulate a failure scenario, in order to verify whether the target database can be correctly restored after a failure and ensure the integrity of the data.
Correspondingly, after the performance test is finished, the test control node can check the transactions recorded by all the test nodes. If a transaction is recorded as successful in a test node's log, that transaction can be looked up in the distributed database according to its unique identifier; if it is found, the target database is verified to have durability. Conversely, if a transaction recorded as successful in a test node's log cannot be found in the target database, this indicates that the target database cannot correctly protect data in a fault scenario, data may be lost or damaged, and the target database is determined to lack durability.
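The durability check — comparing the transactions logged as successful against what is still visible in the database after a simulated restart — reduces to a set difference. All names and identifiers here are hypothetical:

```python
def check_durability(logged_success_ids, db_visible_ids):
    """Return the set of transactions that the test nodes logged as
    successful but that are no longer queryable in the target database
    after the simulated failure; an empty set means durability holds."""
    return set(logged_success_ids) - set(db_visible_ids)

logged = {"t1", "t2", "t3"}          # unique IDs logged as successful
visible_after_restart = {"t1", "t2", "t3"}  # IDs queryable after restart
lost = check_durability(logged, visible_after_restart)
durable = not lost
```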
Optionally, after step 104, the method of the present application further comprises:
when the test stopping condition is met, generating an unloading instruction for the test program;
According to the unloading instruction, unloading a local test program and the test program of each test node;
and unloading the local additional test program when the local test program and the test program of each test node are completely unloaded.
In some embodiments, the test stop condition may be a timed stop, for example, the duration of the stress test may be set at the test control node, for example, 7 x 24 hours, i.e., one week. Once the duration is set, the stress test continues to run for a set period of time until it is automatically stopped after a specified time. This approach is applicable to scenarios where system performance needs to be evaluated over a range of times, such as long-term testing of system stability and durability.
In some embodiments, the stop condition may be an active stop, and an instruction to stop the stress test may be entered at the test control node, sent remotely to all the test nodes, commanding them to stop the stress test. This allows the active stopping of the pressure test at any point in time without waiting for the predetermined duration to end. The test can be stopped at any time to collect and analyze test results according to the test requirement, or stopped immediately when an abnormal condition occurs in the test to prevent data loss or system breakdown.
As shown in fig. 3e, in the test cleaning stage after the test is stopped, the test control node may remotely uninstall the test program on each test node, and uninstall the local test program and the local additional test program. Through these cleaning operations, the test control node and each test node return to their pre-test state, no longer running any test program or service, ensuring a clean and stable test environment. The purpose is to ensure the accuracy and reliability of the next round of testing and to avoid interference from the previous round's data and programs. Meanwhile, the cleaning stage releases the system resources of the test nodes, so that all nodes are idle and stable when the next round of testing starts.
Therefore, by applying the embodiment of the application, the testing tool of the distributed database can be deployed in a plurality of testing nodes, and the master node can establish communication connection with each testing node by acquiring the address information of each testing node, and each testing node can also establish communication connection with a plurality of data nodes of the distributed database. The main node can configure a test task aiming at the distributed database for each test node, so that the test task of the distributed database can be simulated and issued by a plurality of test nodes, and the occurrence of abnormity of CPU and network bandwidth resources caused by overlarge load pressure of each test node is avoided, so that the stability and reliability of the test of the distributed database are ensured.
In addition, the embodiment of the application simulates and issues the test tasks aiming at the distributed database through a plurality of test nodes, and simultaneously acquires the test data of the database test returned by each test node through the main node, so that a closed loop is formed in the whole test process of the distributed database, and the real test environment of the distributed database multi-node data processing can be effectively simulated, thereby improving the test efficiency and accuracy of the distributed database.
Referring to fig. 2b, a method for testing a database according to an embodiment of the present application may include the following steps:
step 201, receiving a test program for a target database sent by a master node, and loading the test program, wherein the target database comprises a plurality of data nodes.
The main node is a test control node. The test control node can distribute the test program aiming at the target database to each data node, so that the comprehensive test of the target database in the whole distributed environment is realized. The test control node can remotely control the test process, collect test results from each test node and finally obtain comprehensive test results about the performance and characteristics of the distributed database.
By the method, the overall performance of the target database can be evaluated and verified, potential problems are found, and corresponding optimization and improvement are made.
Step 202, obtaining configuration information corresponding to at least one test task, wherein the configuration information comprises pressure generation information of the test task and access information of the target database.
It will be appreciated that by obtaining the configuration information, the test node is able to connect to the target database in a predetermined pressure generating manner and simulate the operation of the real object, generating the requested pressure against the target database. The test control node can be configured with different test tasks according to the requirements, and tests are conducted aiming at different pressures and access information of the target database, so that the performance and the characteristics of the distributed database are comprehensively evaluated.
And 203, generating at least one processing request corresponding to the test task through the test program and the pressure generation information.
In some embodiments, the test program may simulate the operation behavior of the real object according to the pressure generation information and generate the corresponding request pressure. For example, if the pressure generation information specifies that 100 concurrent pressure threads are generated per second, the test program may simulate 100 object initiation requests per second. Each processing request may correspond to a database transaction operation, such as querying, inserting, updating, or deleting records.
Correspondingly, the processing request can be sent to the target database through a network service program, and the operation of the real object on the database is simulated. The target database may perform the corresponding operation according to the received request and return the result to the test node. The test node may record the execution time, results, and other critical information of each processing request for subsequent performance evaluation and result analysis.
By generating these processing requests and executing on the target database, the actual user operations can be simulated, thereby comprehensively testing and evaluating the performance of the distributed database.
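A minimal sketch of how a test program might fan out concurrent processing requests under the configured pressure; `generate_pressure` is a hypothetical name, and the lambda stands in for a real HTTP call to the network service program:

```python
import threading
import queue

def generate_pressure(concurrency, requests_per_worker, send_request):
    """Spawn `concurrency` worker threads, each issuing processing
    requests via send_request, and collect every response."""
    results = queue.Queue()

    def worker(worker_id):
        for n in range(requests_per_worker):
            results.put(send_request(f"w{worker_id}-req{n}"))

    threads = [threading.Thread(target=worker, args=(i,))
               for i in range(concurrency)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return [results.get() for _ in range(results.qsize())]

# Stub standing in for an HTTP request to the web service program.
responses = generate_pressure(4, 5, lambda req_id: ("ok", req_id))
```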
The test program may include a task simulation program and a web service program, among others.
Optionally, step 203 may include:
generating an initial processing request corresponding to an initial test task through the task simulation program and the pressure generation information;
and after receiving the initial processing request through the network service program, executing the initial test task to generate a processing request of the test task.
As can be seen from the description of step 102, the initial processing request may be an HTTP request, and when the HTTP request sent by the task simulator is received, the web service program may execute the corresponding initial test tasks, convert the requests into operations of querying, inserting, updating and deleting records from the distributed database, for example, convert the HTTP request into an SQL request for executing the above test tasks from the database.
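The HTTP-to-SQL conversion performed by the web service program might be sketched as a route-to-template lookup; the routes, table names, and SQL statements are illustrative assumptions:

```python
# Hypothetical mapping from simulated object operations (HTTP routes)
# to the SQL statements the web service program would issue.
SQL_TEMPLATES = {
    ("GET", "/orders"):    "SELECT * FROM orders WHERE user_id = %s",
    ("POST", "/orders"):   "INSERT INTO orders (user_id, item) VALUES (%s, %s)",
    ("PUT", "/orders"):    "UPDATE orders SET item = %s WHERE id = %s",
    ("DELETE", "/orders"): "DELETE FROM orders WHERE id = %s",
}

def http_to_sql(method, path):
    """Translate a simulated HTTP request into the SQL statement to run
    against the distributed database."""
    try:
        return SQL_TEMPLATES[(method, path)]
    except KeyError:
        raise ValueError(f"unsupported operation: {method} {path}")

sql = http_to_sql("POST", "/orders")
```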
And 204, establishing third communication connection with the target database according to the access information, sending the processing request to the data node, and collecting an initial test result of the data node for processing the test task.
It will be appreciated that, depending on the access information, the test control node may establish a third communication connection with each data node of the target database. The third communication connection is used for sending the processing request to each data node and collecting the initial test result of the data node on the test task.
The test control node may send the generated processing request to the respective data node via a third communication connection. Each data node can execute corresponding processing request, operate the target database, and return the processing result to the test control node.
Meanwhile, the data acquisition program on the test control node can acquire initial test results of processing the test tasks by each data node. The initial test result comprises key indexes such as execution time of the transaction, request amount of the transaction, success number of the transaction, overtime number of the transaction and the like.
By establishing a third communication connection and collecting initial test results, the test control node can collect performance of each data node in real time, so that performance evaluation and result analysis can be performed, and the collected initial test results can be used for subsequent performance test and ACID test verification of the target database.
Step 205, sending the initial test result to the master node, so that the master node fuses the initial test result to obtain a target test result of the target database.
It will be appreciated that the test control node may send the initial test results collected from the various data nodes to the master node. The master node is responsible for fusing the initial test results to obtain a target test result of the target database.
When the test control node collects the initial test results of all the data nodes, the results can be sent to the master node. The main node can perform weighted fusion on the initial test results, and the test results of different data nodes are combined according to preset weights and algorithms to obtain target test results of a target database.
This fusion process can integrate the performance of the different data nodes and their contribution throughout the distributed database. Through the fused test results, the performance and characteristics of the target database can be more comprehensively known, and corresponding optimization and improvement measures can be made.
Therefore, by applying the embodiment of the application, the testing tool of the distributed database can be deployed in a plurality of testing nodes, and the master node can establish communication connection with each testing node by acquiring the address information of each testing node, and each testing node can also establish communication connection with a plurality of data nodes of the distributed database. The main node can configure a test task aiming at the distributed database for each test node, so that the test task of the distributed database can be simulated and issued by a plurality of test nodes, and the occurrence of abnormity of CPU and network bandwidth resources caused by overlarge load pressure of each test node is avoided, so that the stability and reliability of the test of the distributed database are ensured.
In addition, the embodiment of the application simulates and issues the test tasks aiming at the distributed database through a plurality of test nodes, and simultaneously acquires the test data of the database test returned by each test node through the main node, so that a closed loop is formed in the whole test process of the distributed database, and the real test environment of the distributed database multi-node data processing can be effectively simulated, thereby improving the test efficiency and accuracy of the distributed database.
The method described in the above embodiments will be described in further detail below.
As shown in fig. 4a, a schematic structural diagram of a database testing device according to an embodiment of the present application is shown, where the device includes:
a test node obtaining unit 301, configured to obtain at least one test node corresponding to a target database, where the target database includes a plurality of data nodes;
a test program loading unit 302, configured to load a test program for the target database, and send the test program to the test node, so that the test node loads the test program;
a configuration information sending unit 303, configured to send configuration information corresponding to at least one test task to the test node, so that the test node tests, based on the configuration information, a data node in the target database through the test program;
And the test result receiving unit 304 is configured to receive an initial test result returned by each test node, and fuse the initial test results to obtain a target test result of the target database.
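As a non-authoritative sketch, the four units above can be pictured as a single master-node object. All names and the `send` transport callable are illustrative assumptions, since the embodiment does not specify an implementation:

```python
class MasterNode:
    """Illustrative master node mirroring units 301-304 (names assumed)."""

    def __init__(self, test_node_addresses):
        # Unit 301: acquire the test nodes corresponding to the target database.
        self.test_nodes = list(test_node_addresses)
        self.initial_results = []

    def deploy_test_program(self, program_blob, send):
        # Unit 302: send the test program to every test node for loading.
        for node in self.test_nodes:
            send(node, "program", program_blob)

    def send_config(self, config, send):
        # Unit 303: deliver the test-task configuration to every test node.
        for node in self.test_nodes:
            send(node, "config", config)

    def receive_result(self, result):
        # Unit 304: collect one test node's initial test result; report
        # whether all test nodes have now returned a result.
        self.initial_results.append(result)
        return len(self.initial_results) == len(self.test_nodes)
```

The weighted fusion performed by unit 304 once all results have arrived is sketched separately below the fusion description.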
Optionally, the configuration information includes pressure generation information and access information;
the configuration information sending unit 303 is specifically further configured to:
configuring initial concurrent pressure of each test node and target concurrent pressure corresponding to the initial concurrent pressure to obtain the pressure generation information;
configuring an address, a port and an access code of the target database to obtain the access information;
and writing configuration information containing the pressure generation information and the access information into a configuration file of each test node.
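A minimal sketch of writing such a configuration into each test node's config file, assuming JSON as the file format and illustrative field names (the embodiment specifies neither):

```python
import json

def build_config(initial_concurrency, target_concurrency,
                 db_address, db_port, access_code):
    """Assemble the pressure generation information and access information."""
    return {
        "pressure_generation": {
            "initial_concurrency": initial_concurrency,  # starting concurrent pressure
            "target_concurrency": target_concurrency,    # pressure to ramp up to
        },
        "access": {
            "address": db_address,    # target database address
            "port": db_port,          # target database port
            "access_code": access_code,
        },
    }

def write_config(path, config):
    """Write the configuration into a test node's configuration file."""
    with open(path, "w", encoding="utf-8") as f:
        json.dump(config, f, indent=2)
```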
Optionally, the test result receiving unit 304 is further specifically configured to:
acquiring log data returned by the test node;
and analyzing the log data, and extracting the initial test result.
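For illustration only, assuming a simple line-oriented log format (the actual log format is not specified by the embodiment), extraction of the initial test result could look like:

```python
import re

# Assumed log line, e.g.: "node=dn-1 qps=1250.5 duration_ms=3.4"
LOG_LINE = re.compile(r"node=(\S+)\s+qps=([\d.]+)\s+duration_ms=([\d.]+)")

def parse_log(log_text):
    """Analyse returned log data and extract per-node initial test results."""
    results = []
    for line in log_text.splitlines():
        match = LOG_LINE.search(line)
        if match:
            results.append({
                "node": match.group(1),
                "qps": float(match.group(2)),
                "duration_ms": float(match.group(3)),
            })
    return results
```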
Optionally, the initial test result comprises a query rate per second and a task processing duration;
the test result receiving unit 304 is specifically further configured to:
acquiring a weight corresponding to each initial test result;
according to the weight of each initial test result, carrying out weighted summation operation on the query rate per second of each data node to obtain the query rate per second of the target database;
and carrying out weighted summation operation on the task processing time length of each data node according to the weight of each initial test result to obtain the task processing time length of the target database.
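The weighted summation described above can be sketched as follows; the layouts of the per-node results and weights are assumptions for illustration:

```python
def fuse_results(initial_results, weights):
    """Weighted summation of per-node QPS and task-processing duration.

    initial_results: one dict per data node, e.g. {"qps": ..., "duration": ...}.
    weights: one preset weight per initial result.
    """
    qps = sum(w * r["qps"] for r, w in zip(initial_results, weights))
    duration = sum(w * r["duration"] for r, w in zip(initial_results, weights))
    return {"qps": qps, "duration": duration}
```

Whether the weights are normalised (so the fused duration is a weighted average rather than a sum) is an implementation choice the embodiment leaves open.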
Optionally, the test result receiving unit 304 includes:
a fusion subunit, configured to load an additional test program, and fuse the initial test results through the additional test program to obtain a fused test result;
the characteristic test subunit is used for carrying out characteristic test on the target database through the additional test program to obtain a characteristic test result of the target database;
and the target test result determining subunit is used for taking the fused test result and the characteristic test result as target test results of the target database.
Optionally, the characteristic testing subunit is further specifically configured to:
establishing a second communication connection with each data node of the target database according to the access information of the target database;
transmitting a processing request of a characteristic test task to each data node through the additional test program based on the second communication connection;
and receiving an initial characteristic test result returned by each data node, and analyzing the initial characteristic test result to obtain a characteristic test result for detecting the characteristics of the target database.
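Sketching the characteristic-test flow with the transport abstracted as a callable (the embodiment defines neither the wire protocol nor the reply format, so both are assumptions here):

```python
def run_characteristic_test(data_nodes, send_request):
    """Send a characteristic-test request to each data node and analyse replies.

    send_request(node) stands in for the second communication connection and
    must return that node's initial characteristic test result as a dict.
    """
    initial_results = {node: send_request(node) for node in data_nodes}
    # Placeholder analysis: partition nodes by whether the reply reports success.
    return {
        "passed": sorted(n for n, r in initial_results.items() if r.get("ok")),
        "failed": sorted(n for n, r in initial_results.items() if not r.get("ok")),
    }
```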
Optionally, the apparatus further comprises:
an unloading instruction generation unit, configured to generate an unloading instruction for the test program when a test stop condition is satisfied;
a test program unloading unit, configured to unload the local test program and the test program of each test node according to the unloading instruction;
and an additional test program unloading unit, configured to unload the local additional test program when the local test program and the test programs of all the test nodes have been unloaded.
As shown in fig. 4b, a schematic structural diagram of a database testing device according to an embodiment of the present application is shown, where the device includes:
a test program receiving unit 401, configured to receive a test program for a target database sent by a master node, and load the test program, where the target database includes a plurality of data nodes;
a configuration information obtaining unit 402, configured to obtain configuration information corresponding to at least one test task, where the configuration information includes pressure generation information of the test task and access information of the target database;
a processing request generating unit 403, configured to generate at least one processing request corresponding to the test task through the test program and the pressure generating information;
a processing request sending unit 404, configured to establish a third communication connection with the target database according to the access information, send the processing request to the data node, and collect an initial test result of the data node for processing the test task;
and the test result sending unit 405 is configured to send the initial test result to the master node, so that the master node fuses the initial test result to obtain a target test result of the target database.
Optionally, the test program includes a task simulation program and a web service program;
the processing request generation unit 403 is specifically further configured to:
generating an initial processing request corresponding to an initial test task through the task simulation program and the pressure generation information;
and after receiving the initial processing request through the network service program, executing the initial test task to generate a processing request of the test task.
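One way to picture the pressure generation — ramping from the initial to the target concurrent pressure while the task simulator emits initial requests that the service layer wraps into concrete test-task requests — is the following sketch. The linear ramp and all names are assumptions, not taken from the embodiment:

```python
def ramp_concurrency(initial, target, steps):
    """Yield a linearly increasing concurrency level for each ramp step."""
    if steps <= 1:
        yield target
        return
    for i in range(steps):
        yield initial + (target - initial) * i // (steps - 1)

def generate_requests(initial, target, steps):
    """Task simulator + service layer: turn each concurrency slot into a request."""
    requests = []
    for step, level in enumerate(ramp_concurrency(initial, target, steps)):
        for worker in range(level):
            initial_request = {"step": step, "worker": worker}      # from the simulator
            requests.append({"task": "query", **initial_request})   # wrapped by the service
    return requests
```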
Therefore, by applying the embodiment of the present application, the test tool of the distributed database can be deployed on a plurality of test nodes. The master node can establish a communication connection with each test node by acquiring the address information of each test node, and each test node can in turn establish communication connections with the plurality of data nodes of the distributed database. The master node can configure a test task for the distributed database on each test node, so that the test tasks of the distributed database can be simulated and issued by a plurality of test nodes. This avoids CPU and network-bandwidth anomalies caused by excessive load pressure on any single test node, thereby ensuring the stability and reliability of the distributed database test.
In addition, the embodiment of the present application simulates and issues the test tasks for the distributed database through a plurality of test nodes while the master node collects the test data returned by each test node, so that the whole test process of the distributed database forms a closed loop. This effectively simulates a real multi-node data-processing environment of the distributed database, thereby improving the test efficiency and accuracy of the distributed database.
The embodiment of the present application further provides an electronic device, which may be a terminal, a server, or another device. As shown in fig. 5, a schematic structural diagram of an electronic device according to an embodiment of the present application is shown. Specifically:
the electronic device may include a processor 501 having one or more processing cores, a memory 502 of one or more computer-readable storage media, a power supply 503, an input unit 504, and a communication unit 505, among other components. Those skilled in the art will appreciate that the electronic device structure shown in fig. 5 does not constitute a limitation on the electronic device, which may include more or fewer components than shown, combine certain components, or arrange the components differently. Wherein:
the processor 501 is the control center of the electronic device; it connects the various parts of the entire electronic device using various interfaces and lines, and performs the various functions of the electronic device and processes data by running or executing the software programs and/or modules stored in the memory 502 and invoking the data stored in the memory 502. In some embodiments, the processor 501 may include one or more processing cores; in some embodiments, the processor 501 may integrate an application processor, which primarily handles the operating system, user interface, applications, and the like, and a modem processor, which primarily handles wireless communication. It will be appreciated that the modem processor may alternatively not be integrated into the processor 501.
The memory 502 may be used to store software programs and modules, and the processor 501 executes various functional applications and performs data processing by running the software programs and modules stored in the memory 502. The memory 502 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area may store data created according to the use of the electronic device, etc. In addition, the memory 502 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. Accordingly, the memory 502 may further include a memory controller to provide the processor 501 with access to the memory 502.
The electronic device also includes a power supply 503 for powering the various components, and in some embodiments, the power supply 503 may be logically connected to the processor 501 via a power management system, such that functions such as charge, discharge, and power consumption management are performed by the power management system. The power supply 503 may also include one or more of any of a direct current or alternating current power supply, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
The electronic device may further comprise an input unit 504, which may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control.
The electronic device may also include a communication unit 505. In some embodiments, the communication unit 505 may include a wireless unit, through which the electronic device can transmit wirelessly over a short distance, thereby providing the user with wireless broadband internet access. For example, the communication unit 505 may be used to help the user send and receive e-mail, browse web pages, access streaming media, and the like.
Although not shown, the electronic device may further include a display unit or the like, which is not described herein. In particular, in this embodiment, the processor 501 in the electronic device loads executable files corresponding to the processes of one or more application programs into the memory 502 according to the following instructions, and the processor 501 executes the application programs stored in the memory 502, so as to implement various functions as follows:
acquiring at least one test node corresponding to a target database, wherein the target database comprises a plurality of data nodes;
loading a test program for the target database, and sending the test program to the test node so as to enable the test node to load the test program;
transmitting configuration information corresponding to at least one test task to the test node, so that the test node tests the data node in the target database through the test program based on the configuration information;
and receiving an initial test result returned by each test node, and fusing the initial test results to obtain a target test result of the target database.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
It can be seen from the above that, by applying the embodiment of the present application, the test tasks for the distributed database can be simulated and issued by a plurality of test nodes, which avoids CPU and network-bandwidth anomalies caused by excessive load pressure on any single test node and thereby ensures the stability and reliability of the database test. Meanwhile, the master node collects the test data returned by each test node, so that the whole test process forms a closed loop and a real multi-node data-processing environment of the distributed database is effectively simulated, thereby improving the test efficiency and accuracy of the distributed database.
To this end, an embodiment of the present application provides a computer-readable storage medium having stored therein a plurality of instructions that can be loaded by a processor to perform the steps of any database testing method provided by the embodiments of the present application. For example, the instructions may perform the following steps:
acquiring at least one test node corresponding to a target database, wherein the target database comprises a plurality of data nodes;
loading a test program for the target database, and sending the test program to the test node so as to enable the test node to load the test program;
sending configuration information corresponding to at least one test task to the test node, so that the test node tests the data nodes in the target database through the test program based on the configuration information;
and receiving an initial test result returned by each test node, and fusing the initial test results to obtain a target test result of the target database.
Wherein the storage medium may include: read-only memory (ROM), random access memory (RAM), magnetic disk, optical disc, and the like.
Since the instructions stored in the storage medium can execute the steps of any database testing method provided by the embodiments of the present application, they can achieve the beneficial effects of any database testing method provided by the embodiments of the present application; see the foregoing embodiments for details, which are not repeated here.
The foregoing describes in detail the database testing method, apparatus, electronic device, and storage medium provided by the embodiments of the present application, and specific examples are used herein to illustrate the principles and implementations of the present application; the description of the above embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art will make changes to the specific implementation and application scope in light of the ideas of the present application, and therefore this description should not be construed as limiting the present application.

Claims (14)

1. A method of testing a database, the method comprising:
acquiring at least one test node corresponding to a target database, wherein the target database comprises a plurality of data nodes;
loading a test program aiming at the target database, and sending the test program to the test node so as to enable the test node to load the test program;
transmitting configuration information corresponding to at least one test task to the test node, so that the test node tests the data node in the target database through the test program based on the configuration information;
and receiving an initial test result returned by each test node, and fusing the initial test results to obtain a target test result of the target database.
2. The method of claim 1, wherein the configuration information includes pressure generation information and access information;
the sending configuration information corresponding to at least one test task to the test node includes:
configuring initial concurrent pressure of each test node and target concurrent pressure corresponding to the initial concurrent pressure to obtain the pressure generation information;
configuring an address, a port and an access code of the target database to obtain the access information;
and writing configuration information containing the pressure generation information and the access information into a configuration file of each test node.
3. The method of claim 1, wherein said receiving initial test results returned by each of said test nodes comprises:
acquiring log data returned by the test node;
and analyzing the log data, and extracting the initial test result.
4. The method of claim 1, wherein the initial test results include a query rate per second and a task processing duration;
the fusing the initial test result to obtain a target test result of the target database includes:
acquiring a weight corresponding to each initial test result;
according to the weight of each initial test result, carrying out weighted summation operation on the query rate per second of each data node to obtain the query rate per second of the target database;
and carrying out weighted summation operation on the task processing time length of each data node according to the weight of each initial test result to obtain the task processing time length of the target database.
5. The method of claim 1, wherein fusing the initial test results to obtain target test results for the target database comprises:
loading an additional test program, and fusing the initial test result through the additional test program to obtain a fused test result;
performing characteristic test on the target database through the additional test program to obtain a characteristic test result of the target database;
and taking the fused test result and the characteristic test result as target test results of the target database.
6. The method according to claim 5, wherein the performing the characteristic test on the target database by the additional test program obtains a characteristic test result of the target database, including:
establishing a second communication connection with each data node of the target database according to the access information of the target database;
transmitting a processing request of a characteristic test task to each data node through the additional test program based on the second communication connection;
and receiving an initial characteristic test result returned by each data node, and analyzing the initial characteristic test result to obtain a characteristic test result for detecting the characteristics of the target database.
7. The method of claim 5, wherein after said integrating the post-test result and the property test result as the target test result of the target database, the method further comprises:
when the test stopping condition is met, generating an unloading instruction for the test program;
according to the unloading instruction, unloading a local test program and the test program of each test node;
and unloading the local additional test program when the local test program and the test program of each test node are completely unloaded.
8. A method of testing a database, the method comprising:
receiving a test program which is sent by a main node and aims at a target database, and loading the test program, wherein the target database comprises a plurality of data nodes;
acquiring configuration information corresponding to at least one test task, wherein the configuration information comprises pressure generation information of the test task and access information of the target database;
generating at least one processing request corresponding to the test task through the test program and the pressure generation information;
establishing a third communication connection with the target database according to the access information, sending the processing request to the data node, and collecting an initial test result of the data node for processing the test task;
and sending the initial test result to the master node so that the master node fuses the initial test result to obtain a target test result of the target database.
9. The method of claim 8, wherein the test program comprises a task simulator program and a web service program;
generating at least one processing request corresponding to the test task through the test program and the pressure generation information, wherein the processing request comprises:
generating an initial processing request corresponding to an initial test task through the task simulation program and the pressure generation information;
and after receiving the initial processing request through the network service program, executing the initial test task to generate a processing request of the test task.
10. A database testing apparatus, the apparatus comprising:
the test node acquisition unit is used for acquiring at least one test node corresponding to a target database, wherein the target database comprises a plurality of data nodes;
a test program loading unit, configured to load a test program for the target database, and send the test program to the test node, so that the test node loads the test program;
the configuration information sending unit is used for sending configuration information corresponding to at least one test task to the test node so that the test node tests the data node in the target database through the test program based on the configuration information;
And the test result receiving unit is used for receiving the initial test result returned by each test node and fusing the initial test results to obtain the target test result of the target database.
11. A database testing apparatus, the apparatus comprising:
the test program receiving unit is used for receiving a test program aiming at a target database and sent by a main node, and loading the test program, wherein the target database comprises a plurality of data nodes;
the configuration information acquisition unit is used for acquiring configuration information corresponding to at least one test task, wherein the configuration information comprises pressure generation information of the test task and access information of the target database;
a processing request generating unit, configured to generate at least one processing request corresponding to the test task through the test program and the pressure generating information;
the processing request sending unit is used for establishing third communication connection with the target database according to the access information, sending the processing request to the data node, and collecting an initial test result of the data node for processing the test task;
and the test result sending unit is used for sending the initial test result to the master node so that the master node fuses the initial test result to obtain a target test result of the target database.
12. An electronic device, comprising:
a processor and a storage medium;
the processor is configured to execute instructions;
the storage medium is configured to store a plurality of instructions, the instructions being adapted to be loaded and executed by the processor to perform the method of testing a database according to any one of claims 1 to 9.
13. A computer readable storage medium storing executable instructions which when executed by a processor implement the method of testing a database according to any one of claims 1 to 9.
14. A computer program product comprising a computer program or instructions which, when executed by a processor, implements the method of testing a database according to any one of claims 1 to 9.
CN202311010236.9A 2023-08-10 2023-08-10 Database testing method and device, electronic equipment and readable storage medium Pending CN116974874A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311010236.9A CN116974874A (en) 2023-08-10 2023-08-10 Database testing method and device, electronic equipment and readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311010236.9A CN116974874A (en) 2023-08-10 2023-08-10 Database testing method and device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN116974874A true CN116974874A (en) 2023-10-31

Family

ID=88481378

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311010236.9A Pending CN116974874A (en) 2023-08-10 2023-08-10 Database testing method and device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116974874A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117251351A (en) * 2023-11-10 2023-12-19 支付宝(杭州)信息技术有限公司 Database performance prediction method and related equipment
CN117251351B (en) * 2023-11-10 2024-04-05 支付宝(杭州)信息技术有限公司 Database performance prediction method and related equipment

Similar Documents

Publication Publication Date Title
CN106528224B (en) Content updating method, server and system for Docker container
US10489283B2 (en) Software defect reporting
US9413604B2 (en) Instance host configuration
CN109656538A (en) Generation method, device, system, equipment and the medium of application program
US8433554B2 (en) Predicting system performance and capacity using software module performance statistics
CN102222042B (en) Automatic software testing method based on cloud computing
Shi et al. Evaluating scalability bottlenecks by workload extrapolation
EP4053699A1 (en) Instance host configuration
US8966025B2 (en) Instance configuration on remote platforms
JP2017514218A (en) Running third-party applications
JP6972796B2 (en) Software service execution equipment, systems, and methods
CN106095483A (en) The Automation arranging method of service and device
CN109614227A (en) Task resource concocting method, device, electronic equipment and computer-readable medium
CN116974874A (en) Database testing method and device, electronic equipment and readable storage medium
CN110233904B (en) Equipment updating method, device, system, storage medium and computer equipment
CN114817022A (en) Railway electronic payment platform test method, system, equipment and storage medium
CN112559525B (en) Data checking system, method, device and server
CN106549827A (en) The detection method and device of network state
US20210224102A1 (en) Characterizing operation of software applications having large number of components
CN112202647A (en) Test method, device and test equipment in block chain network
US10554502B1 (en) Scalable web services execution
CN116225690A (en) Memory multidimensional database calculation load balancing method and system based on docker
CN110971478A (en) Pressure measurement method and device for cloud platform service performance and computing equipment
CN114153427A (en) Optimization method and system of continuous integration assembly line
CN109995617A (en) Automated testing method, device, equipment and the storage medium of Host Administration characteristic

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication