CN112925807A - Database-oriented request batch processing method, device, equipment and storage medium - Google Patents

Database-oriented request batch processing method, device, equipment and storage medium

Info

Publication number
CN112925807A
Authority
CN
China
Prior art keywords
request, database, batch processing, processing
Prior art date
Legal status
Pending
Application number
CN202110270917.3A
Other languages
Chinese (zh)
Inventor
段鑫
Current Assignee
Guangzhou Baiguoyuan Information Technology Co Ltd
Original Assignee
Guangzhou Baiguoyuan Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangzhou Baiguoyuan Information Technology Co Ltd filed Critical Guangzhou Baiguoyuan Information Technology Co Ltd
Priority to CN202110270917.3A priority Critical patent/CN112925807A/en
Publication of CN112925807A publication Critical patent/CN112925807A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2453 Query optimisation
    • G06F16/24534 Query rewriting; Transformation
    • G06F16/24549 Run-time optimisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a batch processing method for database-oriented requests, comprising the following steps: receiving one or more database-oriented requests; setting a trigger condition for the batch processing operation; and, when the trigger condition is met, merging the requests, wherein the merging is performed based on the operation type of the requests. With the database-oriented request batch processing method, apparatus, device and storage medium, requests can be merged in high-concurrency, multi-threaded database access scenarios and database execution time can be optimized, so that the delay of DB request processing is reduced or eliminated and the service throughput of the database system is effectively improved. By merging requests, the invention reduces the network overhead of multiple requests, combines multiple random operations into sequential reads and writes as far as possible, and reduces disk processing time, thereby optimizing hard disk usage and the throughput of the whole system.

Description

Database-oriented request batch processing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of computer and communication technologies, and in particular, to a batch processing method, apparatus, device, and storage medium for database-oriented requests.
Background
Structured Query Language (SQL) is a special-purpose programming language used to access data and to query, update and manage relational database systems.
SQL is a high-level, non-procedural language that lets users work with high-level data structures. It does not require the user to specify how data is stored or to understand the underlying storage layout, so database systems with completely different internal structures can all use the same structured query language as the interface for data entry and management. Moreover, SQL statements can be nested, which gives the language great flexibility and power.
MySQL is a relational database management system originally developed by the Swedish company MySQL AB and now an Oracle product. It is one of the most popular relational database management systems (RDBMS), particularly for web applications. The SQL language used by MySQL is the most widely used standardized language for accessing databases.
As a relational database, MySQL keeps data in separate tables rather than in one large repository, which improves speed and flexibility. Because of its rich functionality and the flexibility of the relational model, MySQL is widely used across the internet.
However, with the rapid growth of the internet, multiple database servers or a database server cluster are needed to keep up with rapidly growing services. These servers may be located in different machine rooms, so online services need the database to process requests across machine rooms, and processing a service request across machine rooms necessarily takes longer than processing it within the same room. Meanwhile, as the volume of service requests grows, database processing delay becomes noticeable; simply sending many user requests to the database over a single connection no longer meets requirements, and the growing stream of requests to the database needs a more reasonable processing scheme that reduces or eliminates request-processing delay.
Disclosure of Invention
The invention aims to provide a batch processing method, apparatus, device and storage medium for database-oriented requests which, in high-concurrency, multi-threaded database access scenarios, reduce the number of interactions between the requests and the database and optimize database execution time by merging requests, so that the delay of DB (database) request processing is reduced or eliminated and the service throughput of the database system is effectively improved. The operation type of a data request may include querying, updating or inserting data in the database, among others. By merging requests, the method, apparatus, device and storage medium reduce the network overhead of multiple requests, combine multiple random operations into sequential reads and writes as far as possible, and reduce hard disk processing time, thereby optimizing hard disk usage and the throughput of the whole system. The merging automatically matches a corresponding merging rule according to the incoming SQL statements, aggregating statements with the same action, such as query, update and insert statements, and then merging them. In addition, the batch processing method, apparatus, device and storage medium of the invention also periodically sample data such as the QPS (queries per second) of current requests, DB processing delay and network delay, and, after smoothing, periodically adjust the queue trigger count N. By dynamically adjusting N, the batch processing component of the invention is kept in its optimal state, achieving adaptive, time-efficient batching of requests.
The objects of the invention and the technical problems it solves are achieved by the following technical solutions.
According to one aspect of the invention, a batch processing method for database-oriented requests comprises the following steps: receiving one or more database-oriented requests; setting a trigger condition for the batch processing operation; and, when the trigger condition is met, merging the requests, wherein the merging is performed based on the operation type of the requests.
A batch processing apparatus for database-oriented requests according to another aspect of the invention includes: a receiving module for receiving one or more database-oriented requests; a setting module for setting the trigger condition of the batch processing operation; and a merge processing module for merging the requests when the trigger condition is met, the merging being performed based on the operation type of the requests.
According to yet another aspect of the invention, the invention also includes a computer-readable storage medium storing executable instructions that, when executed by a processor, cause the above batch processing method for database-oriented requests to be performed. The readable storage medium may be a non-volatile memory such as a hard disk or a magnetic disk and may be used in various terminals, such as computers and servers.
According to yet another aspect of the present invention, the invention also includes a database-oriented request batch processing device comprising a processor and a storage device. The storage device is configured to store executable instructions that, when executed by the processor, implement the batch processing method for database-oriented requests described above.
Compared with the prior art, the invention has clear advantages and beneficial effects. With the above technical solutions, the batch processing method, apparatus, device and storage medium for database-oriented requests provided by the invention achieve notable technical progress and practicality, have broad industrial value, and have at least the following advantages:
1. In high-concurrency, multi-threaded database access scenarios, database execution time can be optimized by merging requests, so that DB request-processing delay is reduced or eliminated, the service throughput of the database system is effectively improved, and less hardware resource is wasted.
2. Data such as the QPS of current requests, DB processing delay and network delay are sampled periodically and, after smoothing, the queue trigger count N and/or the request-collection duration T are adjusted periodically. By dynamically adjusting N and/or T, the batch processing component of the invention is kept in its optimal state, achieving adaptive, time-efficient batching of requests.
3. Merging requests reduces the network overhead of multiple requests, combines multiple random operations into sequential reads and writes as far as possible, and reduces disk processing time, thereby optimizing hard disk usage and the throughput of the whole system.
The foregoing is only an overview of the technical solutions of the invention. To make the technical means of the invention clearer so that it can be implemented according to this description, and to make the above and other objects, features and advantages of the invention easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Drawings
FIG. 1 is a schematic diagram of the steps of the batch processing method for database-oriented requests of the present invention;
FIG. 2 is a schematic diagram of the operational modules of the database-oriented request batch processing apparatus of the present invention;
FIG. 3 is a flow diagram of the batch processing of database-oriented requests of the present invention;
FIG. 4 is a schematic diagram of executing multiple requests without using a batch component;
FIG. 5 is a schematic diagram of the execution of multiple requests using the batch component of the present invention.
Detailed Description
To further illustrate the technical means and effects adopted by the present invention to achieve its intended objects, the batch processing method, apparatus, device and storage medium for database-oriented requests of the invention, together with their embodiments, methods, steps and effects, are described in detail below with reference to the accompanying drawings and preferred embodiments.
Although specific embodiments are described below so that the technical and functional aspects of the invention can be fully understood, the accompanying drawings are provided for reference and illustration only and are not intended to limit the invention.
One aspect of the present invention provides a batch processing method for database-oriented requests which, as shown in FIG. 1, includes the following steps:
Step S1: one or more database-oriented requests are received. After the batch processing program starts, a waiting queue is initialized to receive service requests, the queue's receipt of service requests is monitored, and the received requests are controlled and managed. Database-oriented requests include data query requests, data update requests and data insertion requests. Different requests arise from different service demands: when a user needs to query a service, a data query request for that service can be initiated; when a user needs to change a parameter of a service, a data update request for that service can be initiated; and when a user needs to add a new service record to the database, a data insertion request for that service can be initiated.
Step S2: the trigger condition of the batch processing operation is set. A trigger condition for batching is preset and includes a selected parameter reaching a preset value. The selected parameter may include: the number of received requests and/or the duration over which requests are collected, where the start of the duration is the time the first request is received. After the batch process is started in step S1, if the first service request enters the queue, a timer is started whose initial time is the time that request was received by the queue. If a service request entering the queue is not the first request, it is inserted into the queue and it is determined whether this request brings the queue up to the trigger count. When the number of service requests received by the queue reaches the queue trigger count, the batch processing component immediately triggers the batch operation and blocks the requests while they wait for their processing results; the timer stops at the same time. When the number of requests in the queue has not reached the trigger count but the timer reaches the preset duration T, the timer's timeout directly triggers batch processing of the service requests in the queue, and the timer likewise stops.
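To make the count-or-timeout trigger concrete, the following is a minimal Python sketch of such a waiting queue; it is an illustration rather than the patent's implementation, and the names BatchQueue, submit and process_batch, as well as the use of threading primitives, are assumptions. The queue flushes either when N requests have accumulated or when T seconds have passed since the first request, and each caller blocks until its own result is available.

import threading

class BatchQueue:
    # Minimal sketch: flush when the queue holds n requests, or when t seconds
    # have passed since the first request entered the queue.
    def __init__(self, n, t, process_batch):
        self.n = n                           # queue trigger count N
        self.t = t                           # collection duration T, in seconds
        self.process_batch = process_batch   # callable: list of requests -> list of results
        self.lock = threading.Lock()
        self.pending = []                    # (request, event, result holder) tuples
        self.timer = None

    def submit(self, request):
        # Called by a business thread; blocks until the batch result is ready.
        done, box = threading.Event(), {}
        with self.lock:
            self.pending.append((request, done, box))
            if len(self.pending) == 1:       # the first request starts the timer
                self.timer = threading.Timer(self.t, self._on_timeout)
                self.timer.start()
            if len(self.pending) >= self.n:  # count trigger reached
                self._flush()
        done.wait()                          # block, waiting for the processing result
        return box['result']

    def _on_timeout(self):
        with self.lock:                      # timeout trigger
            self._flush()

    def _flush(self):
        # Caller must hold self.lock.
        if not self.pending:
            return
        if self.timer is not None:
            self.timer.cancel()              # stop the timer
            self.timer = None
        batch, self.pending = self.pending, []
        results = self.process_batch([req for req, _, _ in batch])
        for (_, done, box), result in zip(batch, results):
            box['result'] = result           # hand each blocked request its result
            done.set()                       # and wake it up

In this sketch the thread that trips the count trigger executes the batch itself while still holding the lock; a production component would more likely hand the batch to a worker thread, but the count-or-timeout trigger logic is the same.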
Step S3: when the trigger condition is met, the requests are merged, the merging being performed based on the operation type of the requests. When the service requests received by the queue reach the queue trigger count, or the timer reaches the preset duration and times out, the batch processing component immediately triggers the batch operation and blocks the requests while they wait for their processing results; the timer stops at the same time. In the batch operation, the N requests currently in the queue, or all requests received within the duration T, are merged: the requests are aggregated, analyzed and classified, for example grouped by the same action, such as query, update or insert statements. Requests of the same operation type are then merged based on one or more parameters in their SQL statements, and the merged SQL statements are sent to the DB for processing.
Another aspect of the present invention provides a batch processing apparatus for database-oriented requests which, as shown in FIG. 2, includes:
a receiving module for receiving one or more database-oriented requests. After the batch processing program starts, a waiting queue is initialized to receive service requests, the queue's receipt of service requests is monitored, and the received requests are controlled and managed. Database-oriented requests include data query requests, data update requests and data insertion requests. The receiving module performs the monitoring, control and management of these service requests.
a setting module for setting the trigger condition of the batch processing operation. A trigger condition for batching is preset and includes a selected parameter reaching a preset value. The selected parameter may include: the number of received requests and/or the duration over which requests are collected, where the start of the duration is the time the first request is received. The setting module may maintain several queues, each configured with a request count N. After the batch process is started in step S1, if the first service request enters a queue, a timer is started whose initial time is the time that request was received. If a service request entering the queue is not the first request, it is inserted into the queue and it is determined whether this request brings the queue up to the trigger count. When the number of service requests received by the queue reaches the queue trigger count, the batch processing component immediately triggers the batch operation and blocks the requests while they wait for their results; the timer stops at the same time. When the number of requests in the queue has not reached the trigger count but the timer reaches the preset duration T, the timeout directly triggers batch processing of the service requests in the queue and the timer is stopped. The setting module thus sets the trigger condition of the batch processing operation.
and a merge processing module for merging the requests when the trigger condition is met, the merging being performed based on the operation type of the requests. When the service requests received by the queue reach the trigger count N, the batch processing module immediately triggers the batch operation and blocks the requests while they wait for their results; the timer stops at the same time. When the number of requests in the queue has not reached the trigger count but the timer reaches the preset duration T, the timeout directly triggers batch processing of the requests in the queue and the timer is stopped. In the batch operation, the N requests currently in the queue, or all requests received within the duration T, are merged: the requests are aggregated, analyzed and classified, for example grouped by the same action, such as query, update or insert statements. Requests of the same operation type are then merged based on one or more parameters in their SQL statements, and the merged SQL statements are sent to the DB for processing. The merge processing module thus merges the service requests in a queue that meets the trigger condition.
In one aspect of the present invention, the condition for triggering batch processing of service requests is preset. The trigger condition is related to the number N of requests in each group in the queue, also referred to as the "queue trigger count N"; each group of requests has its own request count. The queue trigger count N can be adjusted at a preset time interval after smoothing data such as the QPS of current requests, DB processing delay and network delay; by dynamically adjusting N, the batch processing component is kept in its optimal state. When the number of service requests in the queue reaches the preset count N, the batch operation is triggered. The trigger condition is also related to a preset duration T (set by a timer), where the start of the duration T is the time Ti at which the first service request is inserted into the queue. The time Ts = Ti + T after the timer starts is the time at which the timer triggers its timeout action. If the number of service requests in the queue does not reach the preset count N within T, the timer times out at Ts and triggers batch processing of the requests in the queue; the requests are blocked while waiting for their results, and the timer stops.
The batch processing method for database-oriented requests further includes initializing a waiting queue to receive service requests, where the initialization includes initializing a timer. The waiting queue groups requests by N, and each group starts its own timer: each group starts its timer when it receives its first service request, and the time that request is received is the timer's initial time. "Initialization" further includes initializing a sampler. The sampler samples data such as QPS, DB processing delay and network delay at a preset time interval so that the request count N can subsequently be adjusted.
Specifically, when a business service sends a request A to the database, the batch processing component inserts request A into the waiting queue and records the insertion time as the initial time Ti. The timer then starts, with Ti as its initial time and Ts = Ti + T as its timeout. Meanwhile, the service blocks, waiting for the processing result of request A.
When the business service sends another request B to the database, the batch processing component checks whether the queue is empty and, since the waiting queue is not empty, inserts B directly into it. Meanwhile, the service blocks, waiting for the processing result of request B.
When the business service sends another request C to the database, the batch processing component handles request C in the same way as request B: it checks whether the queue is empty and, since it is not, inserts C directly. Meanwhile, the service blocks, waiting for the processing result of request C. The waiting queue continues to receive database-oriented requests until the number of received requests reaches the queue trigger count N. If N is set to 3, the third request C inserted into the queue brings the queue to the trigger count of 3, which triggers batch processing of the 3 requests in the queue. That is, requests A, B and C form a group, and the batch processing component immediately executes the batch operation while the requests block waiting for their results. The group's timer stops at the same time.
In the case above, if N were instead set to 5 and the queue received only 3 requests within the preset duration T, i.e. the trigger count of 5 for batch execution was not reached, the timer would time out at Ts; at that point the group's timer directly triggers batch processing of the 3 requests in the queue, and the requests block waiting for their results. The group's timer stops at the same time.
Service requests inserted into the current queue are recorded in chronological order. When the queue trigger count N is reached, the batch operation is triggered, i.e. the first N requests of the current queue are merged and processed. The N requests are aggregated by the same action, e.g. query, update or insert statements; in an SQL database, for example, the SQL statements "select", "insert" and "update" correspond to the actions "query", "insert" and "update". Of course, the batch processing of the invention is not limited to the operation types mentioned above. The merging automatically matches a corresponding merging rule according to the incoming SQL statements. For example, multiple requests that each query the sequence number of a user ID can be aggregated and merged together: the SQL statements querying the sequence numbers of several user IDs are merged according to their common statement structure. In other words, the merged query statement covers what were, before merging, multiple statements each querying the sequence number of a single user.
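As an illustration of such a merging rule, the Python sketch below handles only the two statement shapes used in the embodiment later in this description (per-user sequence-number queries and single-row inserts into tbl_immsg_last_seq); the function name merge_batch and the string-matching approach are assumptions, not the patent's implementation. The merged query also selects touid, an addition beyond the patent's example, so that each returned row can later be matched back to the request that asked for it.

import re

SELECT_RE = re.compile(r"select toseqid from tbl_immsg_last_seq where touid=(\d+)", re.I)
INSERT_RE = re.compile(r"insert into tbl_immsg_last_seq\(touid,toseqid,acktime\)values\((.+)\)", re.I)

def merge_batch(statements):
    # Group the batched SQL statements by action and merge each group.
    select_ids, insert_rows, others = [], [], []
    for sql in statements:
        sql = sql.strip().rstrip(';')
        m = SELECT_RE.fullmatch(sql)
        if m:
            select_ids.append(m.group(1))
            continue
        m = INSERT_RE.fullmatch(sql)
        if m:
            insert_rows.append("(%s)" % m.group(1))
            continue
        others.append(sql)        # statements this sketch cannot merge pass through unchanged
    merged = []
    if select_ids:
        merged.append("select touid,toseqid from tbl_immsg_last_seq where touid in (%s)" % ",".join(select_ids))
    if insert_rows:
        merged.append("insert into tbl_immsg_last_seq(touid,toseqid,acktime)values%s" % ",".join(insert_rows))
    return merged + others

# example:
# merge_batch(["select toseqid from tbl_immsg_last_seq where touid=123;",
#              "select toseqid from tbl_immsg_last_seq where touid=456;"])
# -> ["select touid,toseqid from tbl_immsg_last_seq where touid in (123,456)"]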
After the merging is complete, the merged requests are sent over the connection to the DB and batch-processed on the DB side. After execution, the batch processing component obtains the results in a batch; the results correspond, in the order of each group's merged request numbers, to the N merged requests. At the same time, the blocked requests are woken up so that each obtains its own post-batch result.
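Continuing the sketch above, the distribution of results back to the blocked requests could look as follows; the helper name distribute_results is illustrative, and it assumes the merged query returns (touid, toseqid) pairs as in the previous sketch.

def distribute_results(requested_ids, rows):
    # requested_ids: user IDs of the original, pre-merge query requests, in queue order
    # rows: (touid, toseqid) pairs returned by the merged IN query
    by_uid = {touid: toseqid for touid, toseqid in rows}
    # one result per original request, in queue order; None marks an ID with no matching row
    return [by_uid.get(uid) for uid in requested_ids]

# example: the three queued queries for IDs 123, 456 and 789
# distribute_results([123, 456, 789], [(123, 111), (789, 333), (456, 222)])
# -> [111, 222, 333]

Each element would then be written into the corresponding request's result holder and that request's event set, as in the BatchQueue sketch.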
As described above, merging multiple requests reduces the network overhead of many individual requests; multiple random operations are combined into sequential reads and writes as far as possible, disk processing time is reduced, and the hard disk, and hence the throughput of the whole system, is optimized.
FIG. 3 is a flow diagram of the batch processing of database-oriented requests of the present invention. After the batch processing procedure starts, the queue receives service requests. Each received service request is examined to determine whether it is the first service request in the queue.
If it is the first service request in the queue, the timer is started, with the time the request was received as its initial time. The timer then keeps checking whether the preset duration T has been exceeded. If the timer has not expired, the queue may continue to receive service requests. If the timer has expired, its timeout directly triggers batch processing of the service requests in the queue.
If the service request entering the queue is not the first one, it is inserted into the queue and it is determined whether it brings the queue up to the trigger count. If the number of requests in the queue, including the newly inserted one, has not reached the trigger count, the queue may continue to receive new service requests. If the inserted request brings the queue exactly to the trigger count, the batch processing component immediately triggers the batch operation and blocks the requests while they wait for their results; the timer is stopped at the same time.
During batch processing, a corresponding merging rule is automatically matched according to the incoming SQL statements, and statements with the same action (query, update, insert) are aggregated and then merged. After aggregation and merging, the merged service requests are sent over the connection to the DB and processed as a batch. After processing, the results of the service requests are obtained in a batch.
After the batch results are obtained, they correspond, by request number, to the N merged requests in order. At the same time, the blocked requests are woken up and receive their results.
In a specific embodiment, the condition for triggering batch processing of service requests is preset. The trigger condition is related to the number of service requests in the queue, here 200: when the number of service requests in the queue reaches the preset count of 200, the batch operation is triggered. The trigger condition is also related to a preset duration of 3 ms (set by a timer), where the start of the 3 ms duration is the time Ti at which the first service request is inserted into the queue. The timer's timeout time is Ts = Ti + 3 ms. If the number of service requests in the queue does not reach the preset count of 200 within 3 ms, the timer times out at Ts and triggers batch processing of the service requests in the queue.
When the business service sends the database a query for the sequence number of the user with ID 123:
select toseqid from tbl_immsg_last_seq where touid=123;
the batch processing component inserts the request into the waiting queue, records the insertion time as the initial time Ti, and starts the timer with Ts = Ti + 3 ms as its timeout. Meanwhile, the business service blocks, waiting for the processing result of the request.
When the business service sends another request to the database, querying the sequence number of the user with ID 456:
select toseqid from tbl_immsg_last_seq where touid=456;
the batch processing component checks whether the current queue is empty and, since the waiting queue is not empty, inserts the request directly into it. Meanwhile, the business service blocks, waiting for the processing result of the request.
When the business service sends another request to the database, querying the sequence number of the user with ID 789:
select toseqid from tbl_immsg_last_seq where touid=789;
the batch processing component checks whether the current queue is empty and, since it is not, inserts the request directly into it. Meanwhile, the business service blocks, waiting for the processing result of the request.
The business service then sends the database another operation request: insert into the "last sequence message table" a record with user ID 1709289947, sequence number 6465450493870576025 and response time 1562083994858:
insert into tbl_immsg_last_seq(touid,toseqid,acktime)values(1709289947,6465450493870576025,1562083994858)
Next, the business service sends the database another operation request: insert into the "last sequence message table" a record with user ID 1709289948, sequence number 6465450493870576026 and response time 1562083994859:
insert into tbl_immsg_last_seq(touid,toseqid,acktime)values(1709289948,6465450493870576026,1562083994859)
And so on, until the number of received requests reaches the queue trigger count of 200. At that point batch execution of the 200 statements is triggered: the 200 requests form a group, the batch processing component immediately executes the batch operation, and the requests block waiting for their results. The group's timer stops at the same time.
The batch processing component automatically matches a corresponding merging rule according to the incoming SQL statements, aggregating the query statements and insert statements with the same action and then merging them. In this embodiment the 200 requests are aggregated and SQL statements with the same action are merged. Query requests with the same statement structure are merged; for example, the three query statements above are merged into one:
select toseqid from tbl_immsg_last_seq where touid in (123,456,789)
and the two insert statements are merged into one:
insert into tbl_immsg_last_seq(touid,toseqid,acktime)values(1709289947,6465450493870576025,1562083994858),(1709289948,6465450493870576026,1562083994859)
After aggregation and merging, the merged service requests are sent over the connection to the DB and processed as a batch. After processing, the results of the service requests are obtained in a batch.
If the queue does not reach 200 requests within 3 ms of the timer starting, for example only 150 requests have been received when the timer times out at Ti + 3 ms, then at that moment the group's timer directly triggers batch processing of the 150 requests in the queue, and the requests block waiting for their results. The group's timer stops at the same time.
The batch processing component matches the 150 requests against the corresponding merging rules according to the incoming SQL statements, and aggregates and then merges the query and insert statements with the same action.
It should be noted that the specific numerical values in the above embodiments are only examples and do not limit practical applications.
The batch processing method of the invention further includes the batch processing component sampling, at a preset time interval, data such as the QPS of current requests, DB processing delay and network delay, and, after smoothing, adjusting the queue trigger count N at that preset interval. Dynamically adjusting N keeps the batch processing component in its optimal state. This is described in detail below.
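One way such periodic adjustment could be done is sketched below; the exponential-moving-average smoothing, the constant alpha and the function name adjust_trigger_count are assumptions, and the optimum n = q*(rtt + et1) is the one derived in the following paragraphs.

def adjust_trigger_count(sample, smoothed, alpha=0.2):
    # sample, smoothed: dicts with keys 'qps' (requests/s), 'rtt' and 'et1' (seconds)
    # simple exponential moving average as the smoothing step
    smoothed = {k: alpha * sample[k] + (1 - alpha) * smoothed[k] for k in smoothed}
    # optimum batch size from the derivation below: n = q * (rtt + et1)
    n = max(1, round(smoothed['qps'] * (smoothed['rtt'] + smoothed['et1'])))
    return n, smoothed

# example: qps = 10000 req/s, round trip rtt = 1 ms, single-statement time et1 = 0.5 ms
# adjust_trigger_count({'qps': 10000, 'rtt': 0.001, 'et1': 0.0005},
#                      {'qps': 10000, 'rtt': 0.001, 'et1': 0.0005})[0]   # -> 15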
The request processing flow without the batch processing component is shown in FIG. 4: different requests A, B, C, ..., N are each sent to MySQL and processed separately.
A single network round trip between the server and MySQL takes rtt (round-trip time), and MySQL takes et1 (et: execution time) to process a single request. For n requests processed separately, the total time is:
T1 = rtt*n + et1*n
The optimized flow with the batch processing component is shown in FIG. 5: the n requests are merged by the batch processing and then sent to MySQL.
Suppose MySQL takes et2 to process the n merged SQL statements, the single network round trip rtt between the server and MySQL is unchanged, and the current request rate is q (QPS). The waiting time from the first request A until the batch is executed is n/q, the waiting time of the last request is 0, and the waiting times of the requests in the queue form an approximately arithmetic progression whose sum, by the arithmetic-series formula, is (n/q)*n/2 = n^2/(2*q). The total time to execute the n requests as a batch is therefore:
T2 = n^2/(2*q) + rtt + et2
The time saved is:
T = T1 - T2 = rtt*n + et1*n - n^2/(2*q) - (rtt + et2)
the expression takes n as a unary quadratic polynomial to solve the maximum value of T.
When n is q (rtt + et1),
t has the most optimal value of 0.5 q (rtt + et1)2-(rtt+et2)。
It follows from this formula that when the request concurrency (QPS) is high, the network delay and single-statement execution time are large, and the queue waiting time is small, the maximum of T is well above 0 and the optimization effect is significant. The batch execution count n can be adjusted in real time according to the QPS, the network delay and the DB execution time, keeping the system in its optimal state.
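A purely numerical illustration of these formulas, using hypothetical sample values rather than measurements from the patent:

# hypothetical values: 1 ms round trip, 0.5 ms per single statement,
# 2 ms for the merged batch, and a request rate of 10000 QPS
rtt, et1, et2, q = 0.001, 0.0005, 0.002, 10000.0
n = q * (rtt + et1)                  # optimal batch size: 15 requests
T1 = rtt * n + et1 * n               # n separate requests: 0.0225 s = 22.5 ms
T2 = n * n / (2 * q) + rtt + et2     # queue wait + one round trip + batched execution: 0.01425 s = 14.25 ms
saving = T1 - T2                     # 0.00825 s = 8.25 ms, matching 0.5*q*(rtt + et1)**2 - (rtt + et2)
print(n, T1, T2, saving)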
The batch processing method, apparatus, device and storage medium for database-oriented requests of the invention are suitable for non-transactional operations and have certain limitations for transactional operations. A database transaction is a series of operations performed as a single logical unit of work: either all of them are performed or none of them is. Transaction processing ensures that data-oriented resources are not permanently updated unless every operation in the transactional unit completes successfully. By combining a set of related operations into a unit that either entirely succeeds or entirely fails, error recovery is simplified and the application becomes more reliable. To be a logical unit of work, a transaction must satisfy the atomicity, consistency, isolation and durability (ACID) properties. A transaction is a logical unit of work in database operation, handled by the transaction management subsystem of the DBMS (database management system). The DB does not guarantee that non-transactional requests are processed strictly in their arrival order; in other words, non-transactional requests do not require the DB to process them in chronological order.
The invention is not limited to request processing for a single business service; the batch processing method can efficiently batch non-transactional requests in a variety of services, for example live streaming services, various ranking-list services, news, consultation and retrieval, and so on.
Another aspect of the invention provides a computer-readable storage medium storing executable instructions, software programs and modules which, when executed by a processor, cause a batch processing method for database-oriented requests to be performed. The readable storage medium may include high-speed random access memory and may further include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device, and may be used in various terminals such as computers and servers.
The storage medium also includes, but is not limited to, any type of disk (including floppy disks, hard disks, optical disks, CD-ROMs and magneto-optical disks), ROM (read-only memory), RAM (random access memory), EPROM (erasable programmable read-only memory), EEPROM (electrically erasable programmable read-only memory), flash memory, magnetic cards or optical cards. That is, a storage medium includes any medium that stores or transmits information in a form readable by a device (e.g. a computer). The storage medium may also be a read-only memory, a magnetic disk, an optical disk, or the like.
Embodiments of the present invention also provide a computer program product which, when run on a computer, causes the computer to execute the steps above and thereby implement the batch processing method for database-oriented requests of the above embodiments.
In addition, an embodiment of the invention further provides an apparatus, which may specifically be a chip, a component or a module, and which may include a processor and a memory connected to each other. The memory stores computer-executable instructions, and when the apparatus runs, the processor can execute the instructions stored in the memory, so that the chip performs the batch processing method for database-oriented requests of the above method embodiments.
The apparatus, computer storage medium, computer program product and chip provided by the invention are all configured to execute the corresponding methods provided above; their beneficial effects are therefore those of the corresponding methods and are not repeated here.
Although the present invention has been described with reference to a preferred embodiment, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A method for batch processing of database-oriented requests, the method comprising:
receiving one or more database-oriented requests;
setting a trigger condition of a batch processing operation; and
when the trigger condition is met, performing merging processing on the requests, wherein the merging processing is performed based on the operation type of the requests.
2. The method of claim 1, wherein the trigger condition comprises a selected parameter reaching a preset value.
3. The method of claim 2, wherein the selected parameters comprise: the number of received requests and/or the duration over which requests are received, wherein the start time of the duration is the time the first request is received.
4. The method of any preceding claim, wherein the type of operation comprises one or more of a query request, an update request and an insert request.
5. The method of any preceding claim, wherein the requests comprise SQL statements and the merging comprises merging a plurality of SQL statements into a composite SQL statement, wherein one or more values corresponding to one or more parameters of the plurality of SQL statements are merged and assigned to the corresponding one or more parameters of the composite SQL statement.
6. The method of claim 3, further comprising: sampling the number of requests per second (QPS), database processing delay data and network delay data at a preset time interval.
7. The method of claim 6, further comprising:
adjusting the number of received requests and/or the duration over which requests are received, at the preset time interval, based on the sampled data.
8. An apparatus for batch processing of database-oriented requests, the apparatus comprising:
a receiving module for receiving one or more database-oriented requests;
a setting module for setting a trigger condition of a batch processing operation; and
a merging processing module for merging the requests when the trigger condition is met, wherein the merging processing is performed based on the operation type of the requests.
9. A computer-readable storage medium, characterized in that the storage medium stores executable instructions which, when executed by a processor, cause a batch processing method for database-oriented requests according to any of claims 1-7 to be performed.
10. A database-oriented request batch processing apparatus, comprising:
a processor;
a storage device for storing executable instructions,
wherein the executable instructions, when executed by the processor, implement a batch processing method for database-oriented requests as claimed in any of claims 1-7.
CN202110270917.3A 2021-03-12 2021-03-12 Database-oriented request batch processing method, device, equipment and storage medium Pending CN112925807A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110270917.3A CN112925807A (en) 2021-03-12 2021-03-12 Database-oriented request batch processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112925807A true CN112925807A (en) 2021-06-08

Family

ID=76172905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110270917.3A Pending CN112925807A (en) 2021-03-12 2021-03-12 Database-oriented request batch processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112925807A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023165343A1 (en) * 2022-03-04 2023-09-07 北京字节跳动网络技术有限公司 Data operation method and apparatus, computer device, and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104169950A (en) * 2012-04-26 2014-11-26 艾玛迪斯简易股份公司 Database system using batch-oriented computation
CN111221634A (en) * 2019-11-21 2020-06-02 望海康信(北京)科技股份公司 Method, device and equipment for processing merging request and storage medium
CN112148711A (en) * 2020-09-21 2020-12-29 建信金融科技有限责任公司 Processing method and device for batch processing tasks


Similar Documents

Publication Publication Date Title
Ding et al. Improving optimistic concurrency control through transaction batching and operation reordering
US11048599B2 (en) Time-based checkpoint target for database media recovery
US11397709B2 (en) Automated configuration of log-coordinated storage groups
US10691722B2 (en) Consistent query execution for big data analytics in a hybrid database
Lin et al. Towards a non-2pc transaction management in distributed database systems
Luo et al. On performance stability in LSM-based storage systems (extended version)
CN111433764A (en) High-throughput distributed transaction management of global consistency sliced O L TP system and implementation method thereof
US11475006B2 (en) Query and change propagation scheduling for heterogeneous database systems
US20130297565A1 (en) Database Management System
WO2021036768A1 (en) Data reading method, apparatus, computer device, and storage medium
WO2019109854A1 (en) Data processing method and device for distributed database, storage medium, and electronic device
US9064013B1 (en) Application of resource limits to request processing
US20140279840A1 (en) Read Mostly Instances
CN110580258B (en) Big data free query method and device
WO2021238902A1 (en) Data import method and apparatus, service platform, and storage medium
US20130159287A1 (en) Database query optimizer that takes network choice into consideration
CN112084206A (en) Database transaction request processing method, related device and storage medium
CN116108057B (en) Distributed database access method, device, equipment and storage medium
US11449241B2 (en) Customizable lock management for distributed resources
US20100199058A1 (en) Data Set Size Tracking and Management
CN108090056B (en) Data query method, device and system
CN112925807A (en) Database-oriented request batch processing method, device, equipment and storage medium
US20210382863A1 (en) Use of time to live value during database compaction
CN110807046B (en) Novel distributed NEWSQL database intelligent transaction optimization method
WO2024098363A1 (en) Multicore-processor-based concurrent transaction processing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210608