CN111459981A - Query task processing method, device, server and system


Publication number
CN111459981A
Authority
CN
China
Prior art keywords
query
issuing
task
query task
window
Prior art date
Legal status
Granted
Application number
CN201910108362.5A
Other languages
Chinese (zh)
Other versions
CN111459981B (en)
Inventor
周祥
王烨
Current Assignee
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201910108362.5A
Publication of CN111459981A
Application granted
Publication of CN111459981B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2455 Query execution
    • G06F16/24568 Data stream processing; Continuous queries
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/2458 Special types of queries, e.g. statistical queries, fuzzy queries or distributed queries
    • G06F16/2471 Distributed queries
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention provides a query task processing method, a database query method, a database query device, a server, a system and a computer storage medium. The method comprises the following steps: acquiring a query task from a current query service queue; adding the query task to a query issuing window; judging whether other query tasks exist in the current query service queue; if the judgment result is yes, returning to the step of obtaining a query task from the current query service queue; if the judgment result is negative, issuing the query tasks in the query issuing window to computing nodes for execution. According to the invention, the processing efficiency of query tasks can be improved, and the utilization rate of system resources can be increased.

Description

Query task processing method, device, server and system
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method and an apparatus for processing a query task, a database query method, a database query apparatus, a server, a system, and a computer storage medium.
Background
Data Lake Analytics is a serverless, interactive query and analysis service on the cloud. Without ETL (extract-transform-load), it uses standard Structured Query Language (SQL) to query and analyze data in cloud storage such as NoSQL data sources and Object Storage Service (OSS), as well as data in cloud databases.
In the prior art, a database acquires one query task at a time from the current query service queue and distributes it to a computing node for execution. However, the processing capability of a computing node is far from being limited to a single query task, so this processing manner is not only inefficient but also causes a serious waste of system resources. Accordingly, the inventors consider that at least one of the above problems of the prior art needs to be improved.
Disclosure of Invention
An object of the embodiments of the present invention is to provide a new technical solution for processing a query task.
According to a first aspect of the embodiments of the present invention, a method for processing a query task is provided, where the method includes:
acquiring a query task from a current query service queue;
adding the query task to a query issuing window;
judging whether other query tasks exist in the current query service queue;
if the judgment result is yes, returning to the step of obtaining a query task from the current query service queue;
if the judgment result is negative, the query task in the query issuing window is issued to the computing node for execution.
Optionally, before the query task is obtained from the current query service queue, the method further includes:
acquiring a list of the number of currently executed user query tasks; the user query number list comprises the number of query tasks currently executed by different users; wherein the query task at least comprises a user ID;
correspondingly, before the step of adding the query task to the query issue window, the method further includes:
judging whether the number of the query tasks corresponding to the user ID exceeds the user preset maximum query concurrency number corresponding to the user ID after the query tasks are added to the query issuing window;
if the number does not exceed the user preset maximum query concurrency number, adding the query task to the query issuing window;
and if the number exceeds the user preset maximum query concurrency number, returning to the step of obtaining a query task from the current query service queue.
Optionally, after the step of adding the query task to the query issue window, the method further includes:
determining a first resource consumption estimation value of the query issuing window;
judging whether the sum of the first resource consumption estimated value and the current system resource utilization rate exceeds a preset capacity expansion water level threshold value or not;
if the sum does not exceed the preset capacity expansion water level threshold, executing the step of judging whether other query tasks exist in the current query service queue;
if the sum exceeds the preset capacity expansion water level threshold, performing capacity expansion according to the calculated number of capacity expansion nodes, and issuing the query tasks in the query issuing window to computing nodes for execution.
Optionally, after the step of determining whether there are other query tasks in the current query service queue, the method further includes:
judging whether the sum of the first resource consumption estimated value and the current system resource utilization rate is smaller than a preset capacity reduction water level threshold value or not;
if the sum is smaller than the preset capacity reduction water level threshold, performing capacity reduction according to the calculated number of capacity reduction nodes, and issuing the query tasks in the query issuing window to computing nodes for execution;
and if the sum is not less than the preset capacity reduction water level threshold, issuing the query tasks in the query issuing window to computing nodes for execution.
Optionally, before the step of issuing the query task in the query issue window to the computing node for execution, the method further includes:
and determining the sharing issuing grouping of the query task in the query issuing window.
Optionally, before the step of performing capacity expansion according to the calculated number of capacity expansion nodes, the method further includes:
determining a shared issuing group of the query task in the query issuing window;
determining a second resource consumption estimation value of the inquiry issuing window according to the determined sharing issuing group;
judging whether the sum of the second resource consumption estimated value and the current system resource utilization rate exceeds the preset expansion water level threshold value or not;
if yes, executing the step of capacity expansion according to the calculated quantity of the capacity expansion nodes;
and if not, executing the step of judging whether other query tasks exist in the current query service queue.
Optionally, the step of determining the shared delivery group of the query task in the query delivery window includes:
initializing a first shared issuing group in the query issuing window;
acquiring an execution plan tree of a query task in the query issuing window;
judging whether a sharing operator matched with each execution plan tree in the first sharing issuing group exists in the execution plan trees or not;
if yes, adding the query task to the matched first shared issuing group;
if not, establishing a second sharing issuing group for the query task, and adding the query task to the second sharing issuing group;
judging whether all the query tasks in the query issuing window complete sharing operator matching;
if not, returning to the step of acquiring an execution plan tree of one query task in the query issuing window;
and if so, determining the sharing issuing grouping of the query task in the query issuing window.
Optionally, the step of determining the first resource consumption estimation value of the query issuing window includes:
generating a query syntax tree of the query task;
generating an execution plan tree of the query task according to the query syntax tree;
performing resource consumption estimation on the query task according to the execution plan tree to obtain a resource consumption value for executing the query task;
and accumulating the resource consumption value for executing the query task and the current resource consumption value of the query issuing window to obtain a first resource consumption estimated value of the query issuing window.
According to a second aspect of the embodiments of the present invention, there is provided a processing apparatus for a query task, including: a memory for storing executable instructions and a processor; the processor is configured to execute the processing method of the query task according to any one of the first aspect of the embodiments of the present invention under the control of the instruction.
According to a third aspect of the embodiments of the present invention, there is provided a processing apparatus for a query task, including:
the acquisition module is used for acquiring a query task from the current query service queue;
the adding module is used for adding the query task to a query issuing window;
the judging module is used for judging whether other inquiry tasks exist in the current inquiry service queue;
if the judgment result of the judgment module is yes, triggering the operation in the acquisition module;
and the issuing module is used for issuing the query task in the query issuing window to a computing node for execution if the judgment result of the judging module is negative.
According to a fourth aspect of the embodiments of the present invention, there is provided a system for processing a query task, including a client device and a device for processing the query task according to the second or third aspect of the embodiments of the present invention.
According to a fifth aspect of embodiments of the present invention, there is provided a computer storage medium having stored thereon computer instructions which, when executed by a processor, implement operations in a processing method of a query task according to any one of the first aspect of embodiments of the present invention.
According to a sixth aspect of the embodiments of the present invention, there is provided a database query method, including:
acquiring a first database query request and a second database query request; the first database query request comprises a plurality of first query operators, and the second database query request comprises a plurality of second query operators; the plurality of first query operators and the plurality of second query operators comprise at least one same query operator;
executing the first database query request and the second database query request using the at least one same operator.
According to a seventh aspect of the embodiments of the present invention, there is provided a database query apparatus, including:
the acquisition module is used for acquiring a first database query request and a second database query request; the first database query request comprises a plurality of first query operators, and the second database query request comprises a plurality of second query operators; the plurality of first query operators and the plurality of second query operators comprise at least one same query operator;
and the execution module is used for executing the first database query request and the second database query request by using the at least one same operator.
According to an eighth aspect of the present invention, there is provided a database query apparatus, the apparatus comprising: a memory for storing executable instructions and a processor; the processor is configured to execute the database query method according to the sixth aspect of the embodiments of the present invention under the control of the instructions.
According to the embodiment of the invention, the processing efficiency of the query task can be improved, and the utilization rate of system resources can be improved.
Other features of the present invention and advantages thereof will become apparent from the following detailed description of exemplary embodiments thereof, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a block diagram of a hardware configuration of a processing system that may be used to implement query tasks of an embodiment of the invention;
FIG. 2 is a schematic flow chart diagram of a first embodiment of a method for processing a query task;
FIG. 3 is a schematic flow chart diagram of a second embodiment of a method for processing a query task according to the present invention;
FIG. 4 is a schematic flow chart diagram of a third embodiment of a method for processing a query task in accordance with the present invention;
FIG. 5 is a schematic flow chart diagram of a fourth embodiment of a method for processing a query task in accordance with the present invention;
FIG. 6 is a schematic flow chart diagram of a fifth embodiment of a method for processing a query task in accordance with the present invention;
FIG. 7 is a schematic flow chart diagram illustrating a sixth embodiment of a method for processing a query task in accordance with the present invention;
FIG. 8 is a schematic flow chart of the step of determining a shared issuing group in the method for processing a query task of the present invention;
FIG. 9 shows a schematic diagram according to one example of FIG. 8;
FIG. 10 is a diagram illustrating a structure of a processing device for a query task according to a first embodiment of the present invention;
FIG. 11 is a diagram illustrating a processing apparatus for a query task according to a second embodiment of the present invention;
FIG. 12 is a block diagram of a query task processing system according to an embodiment of the present invention;
FIG. 13 is a schematic flow chart diagram of a database query method of an embodiment of the present invention;
FIG. 14 is a schematic configuration diagram of a database query device according to a first embodiment of the present invention;
FIG. 15 is a schematic configuration diagram of a database query device according to a second embodiment of the present invention.
Detailed Description
Various exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, the numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless specifically stated otherwise.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
In all examples shown and discussed herein, any particular value should be construed as merely illustrative, and not limiting. Thus, other examples of the exemplary embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
Various embodiments and examples according to embodiments of the present invention are described below with reference to the accompanying drawings.
< hardware configuration >
Fig. 1 is a block diagram of a hardware configuration of a processing system that can be used to implement a query task of an embodiment of the present invention.
As shown in fig. 1, the processing system 100 of the query task includes a server 1000 and a client device 2000.
In the processing system 100 of the query task, the server 1000 and the client device 2000 are communicatively connected via the network 3000.
The server 1000 may be a unitary server or a distributed server across multiple computers or computer data centers. The server 1000 may be of various types, such as, but not limited to, a web server, a news server, a mail server, a message server, an advertisement server, a file server, an application server, an interaction server, a database server, or a proxy server.
In some embodiments, each server 1000 may include hardware, software, or embedded logic components or a combination of two or more such components for performing the appropriate functions supported or implemented by the server 1000. For example, the server 1000 may be, for example, a blade server, a cloud server, or the like, or may be a server group composed of a plurality of servers, and may include one or more of the above types of servers, and the like.
In one embodiment, the server 1000 may be as shown in fig. 1, including a processor 1100, a memory 1200, an interface device 1300, a communication device 1400, a display device 1500, an input device 1600.
In other embodiments, the server 1000 may further include a speaker, a microphone, and the like, which are not limited herein.
The processor 1100 may be a dedicated server processor, or may be a desktop processor, a mobile processor or the like that meets performance requirements, which is not limited herein. The memory 1200 may include, for example, a ROM (read only memory), a RAM (random access memory), and non-volatile memory such as a hard disk. The interface device 1300 may include, for example, various bus interfaces, such as a serial bus interface (including a USB interface) and a parallel bus interface. The communication device 1400 may, for example, enable wired or wireless communication, and may be capable of communicating based on at least a connection-oriented protocol with a retransmission mechanism. The display device 1500 may be, for example, a liquid crystal display, an LED display, a touch screen, or the like. The input device 1600 may include, for example, a touch screen, a keyboard, and the like.
Although a plurality of devices of the server 1000 are illustrated in fig. 1, the present invention may only relate to some of the devices, for example, the server 1000 only relates to the memory 1200, the processor 1100 and the communication device 1400.
In this embodiment, the memory 1200 of the server 1000 is used for storing instructions for controlling the processor 1100 to operate so as to execute the processing method of the query task according to any embodiment of the present invention. The skilled person can design the instructions according to the disclosed solution. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
The client device 2000 is, for example, a mobile phone, a laptop, a tablet, a palmtop, a wearable device, or the like.
As shown in fig. 1, the client device 2000 may include a processor 2100, a memory 2200, an interface device 2300, a communication device 2400, a display device 2500, an input device 2600, a speaker 2700, a microphone 2800, and the like.
The processor 2100 may be a mobile version processor. The memory 2200 includes, for example, a ROM (read only memory), a RAM (random access memory), a nonvolatile memory such as a hard disk, and the like. The interface device 2300 includes, for example, a USB interface, a headphone interface, and the like. Communication device 2400 is capable of wired or wireless communication, for example. The display device 2500 is, for example, a liquid crystal display panel, a touch panel, or the like. The input device 2600 may include, for example, a touch screen, a keyboard, and the like. A user can input/output voice information through the speaker 2700 and the microphone 2800.
Although a plurality of devices of the client device 2000 are shown in fig. 1, the present invention may only relate to some of the devices, for example, the client device 2000 only relates to the memory 2200 and the processor 2100 and the communication device 2400.
The network 3000 may be a wireless or wired communication network, and may be a local area network or a wide area network. In the processing system 100 of the query task shown in fig. 1, the server 1000 and the client device 2000 can communicate through the network 3000.
< method examples >
FIG. 2 is a schematic flow chart diagram of a first embodiment of a method for processing a query task.
The processing method of the query task in this embodiment may be specifically executed by the server 1000 shown in fig. 1.
As shown in FIG. 2, at step 2100, a query task is obtained from the current query service queue.
The query tasks in the current query service queue are query tasks submitted at the same time, one or more query tasks may be in the current query service queue, and the query tasks may be from the same or different users.
In this embodiment, the following steps 2200 to 2400 are sequentially performed on the query task in the current query service queue.
In step 2200, the query task is added to the query issue window.
Step 2300, judging whether other query tasks are left in the current query service queue.
If yes, the process returns to step 2100.
If the judgment result is negative, the step 2400 is entered, and the query task in the query issuing window is issued to the computing node for execution.
The query task processing method of the embodiment can simultaneously schedule the query tasks submitted at the same time and issue the query tasks to the computing nodes for execution, thereby improving the processing efficiency of the query tasks and the utilization rate of system resources.
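For illustration only, the loop of steps 2100 to 2400 can be sketched roughly as follows; the queue, window and dispatch interfaces are hypothetical placeholders rather than the actual implementation of the embodiment.
from collections import deque

def schedule_current_queue(query_queue: deque, dispatch_to_compute_nodes) -> None:
    """Hypothetical sketch of steps 2100-2400: drain the tasks that are
    currently queued into one query issuing window, then issue the whole
    window together."""
    issue_window = []                           # query issuing window
    while query_queue:                          # step 2300: other tasks left?
        task = query_queue.popleft()            # step 2100: obtain a query task
        issue_window.append(task)               # step 2200: add it to the window
    if issue_window:                            # step 2400: issue the whole window
        dispatch_to_compute_nodes(issue_window)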
In an embodiment, as shown in fig. 3, before the step 2100, the method for processing the query task of this embodiment may further include:
Step 3100, obtaining a list of the number of currently executed user query tasks; the user query number list contains the number of query tasks currently executed by different users.
Wherein the query task at least comprises a user ID. In this step, the currently executed query tasks are respectively subjected to quantity statistics according to different user IDs, so as to obtain a user query number list containing the quantity of the currently executed query tasks of different users.
Accordingly, after step 2100 and before step 2200, the method may further include:
step 3200, determining whether the number of query tasks corresponding to the user ID exceeds a user preset maximum query concurrency number corresponding to the user ID after the query task is added to the query issue window.
Considering fairness of user query response, in this step, for each query task, it is determined whether the number of query tasks corresponding to the user ID would exceed the user preset maximum query concurrency number if the query task of that user were added to the query issuing window. If not, step 2200 is executed, that is, the query task is added to the query issuing window. If yes, the query task of that user is not scheduled in the current scheduling round, and the process returns to step 2100 to obtain a query task from the current query service queue again.
In the embodiment, polling scheduling of query tasks from different users can be realized, and fairness of user query response is ensured.
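A minimal sketch of this per-user admission check, assuming hypothetical data structures for the running-count list and the per-user maximum concurrency (neither is specified by the embodiment), might look as follows.
def admit_task(task, issue_window, running_counts, max_concurrency) -> bool:
    """Hypothetical per-user fairness check: admit the task into the issuing
    window only if the user's concurrent query count stays within the user
    preset maximum; otherwise skip it in this scheduling round."""
    user_id = task["user_id"]
    current = running_counts.get(user_id, 0) + sum(
        1 for t in issue_window if t["user_id"] == user_id)
    if current + 1 > max_concurrency.get(user_id, 1):
        return False                    # exceeds the user's preset maximum
    issue_window.append(task)           # step 2200: admitted into the window
    return True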
In an embodiment, in consideration of the utilization rate of the system resources and the computing capability of the computing node, as shown in fig. 4, after the step 2200, the method of this embodiment may further include:
step 4100, determining a first resource consumption estimate for the query issue window.
Specifically, when determining the first resource consumption estimation value of the query issuing window, the following steps 4100-1 to 4100-4 may be included:
step 4100-1, generating a query syntax tree of the query task.
Specifically, a query syntax parser may be invoked to generate a query syntax tree for the query task.
Step 4100-2, generating an execution plan tree of the query task according to the query syntax tree.
Step 4100-3, estimating resource consumption of the query task according to the execution plan tree to obtain a resource consumption value for executing the query task.
In steps 4100-2 and 4100-3, a query optimizer may be invoked to generate an execution plan tree according to the query syntax tree of the query task and to perform resource consumption estimation on the execution plan tree, so as to obtain a resource consumption value for executing the query task. That is, the incremental amount of resources such as CPU and memory that would be consumed if the query task were executed is obtained by estimation.
Step 4100-4, accumulating the resource consumption value of executing the query task and the current resource consumption value of the query issuing window to obtain a first resource consumption estimation value of the query issuing window.
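As a non-limiting illustration, steps 4100-1 to 4100-4 amount to a cost roll-up over the issuing window. The sketch below assumes a parser/optimizer interface (parse, plan, estimate_cost) that the embodiment does not specify.
def first_resource_estimate(task_sql: str, window_cost: float,
                            parse, plan, estimate_cost) -> float:
    """Hypothetical cost accumulation for the query issuing window."""
    syntax_tree = parse(task_sql)           # step 4100-1: query syntax tree
    plan_tree = plan(syntax_tree)           # step 4100-2: execution plan tree
    task_cost = estimate_cost(plan_tree)    # step 4100-3: CPU/memory estimate
    return window_cost + task_cost          # step 4100-4: accumulate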
Step 4200, determining whether the sum of the first resource consumption estimation value and the current system resource utilization rate exceeds a preset capacity expansion water level threshold.
If the sum of the first resource consumption estimated value and the current system resource utilization rate exceeds the preset capacity expansion water level threshold, it indicates that the current system resources and the computing power of the computing nodes are insufficient to process the query tasks in the query issuing window and the system needs to be expanded; at this time, step 4300 is executed. If the sum does not exceed the preset capacity expansion water level threshold, it indicates that the current system resources and the computing power of the computing nodes can process the query tasks in the query issuing window and the system does not need to be expanded; at this time, the process may return to step 2300.
In step 4300, capacity expansion is performed according to the calculated number of capacity expansion nodes.
After this step, the above step 2400 is executed: and issuing the query task in the query issuing window to a computing node for execution.
In this embodiment, by determining the first resource consumption estimated value of the query issuing window, then determining whether the sum of the first resource consumption estimated value and the current system resource utilization rate exceeds a preset capacity expansion water level threshold, and when determining that the sum of the first resource consumption estimated value and the current system resource utilization rate exceeds the preset capacity expansion water level threshold, triggering the capacity expansion calculation of the calculation node, thereby implementing elastic capacity expansion and reasonably utilizing system resources.
In an embodiment, in view of the utilization rate of system resources and the computing capability of the computing node, as shown in fig. 5, after the step 2300, if it is determined that there is no other query task in the current query service queue, the method of this embodiment may further include:
in step 5100, it is determined whether the sum of the first resource consumption estimation value and the current system resource utilization rate is less than a preset shrinkage water level threshold.
If the sum of the first resource consumption estimated value and the current system resource utilization rate is smaller than the preset capacity reduction water level threshold, it indicates that the computing capacity of the current system resources and computing nodes exceeds the resources required to process the query tasks in the query issuing window, so the system needs to be reduced; this avoids the waste of system resources caused by the query tasks occupying too many resources after being issued to the computing nodes. At this time, step 5200 is executed. If the sum is not less than the preset capacity reduction water level threshold, it indicates that the current system resources and the computing capacity of the computing nodes can process the query tasks in the query issuing window without wasting system resources after the tasks are issued, so the system does not need to be reduced; at this time, step 2400 may be executed.
And step 5200, carrying out capacity reduction according to the calculated number of capacity reduction nodes.
Specifically, the capacity reduction amount can be calculated from the difference between the newly estimated resource consumption and the preset capacity reduction water level threshold.
After this step, the above step 2400 is executed: and issuing the query task in the query issuing window to a computing node for execution.
In this embodiment, after it is determined that there are no other query tasks in the current query service queue, it is determined whether the sum of the first resource consumption estimated value and the current system resource utilization rate is smaller than a preset shrinkage water level threshold, and when it is determined that the sum of the first resource consumption estimated value and the current system resource utilization rate is smaller than the preset shrinkage water level threshold, the shrinkage calculation of the calculation node is triggered, so that elastic shrinkage is achieved, and system resources are reasonably utilized.
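For illustration, the expansion check (steps 4200 to 4300) and the reduction check (steps 5100 to 5200) can be viewed as one watermark comparison; the thresholds and the conversion from resource amount to node count below are assumptions, not values defined by the embodiment.
import math

def elastic_scaling_decision(estimate: float, current_utilization: float,
                             expand_threshold: float, shrink_threshold: float,
                             node_capacity: float) -> int:
    """Hypothetical watermark check: a positive return value means add that
    many computing nodes, a negative value means remove nodes, and 0 means
    the window can be issued with the current cluster size."""
    projected = estimate + current_utilization
    if projected > expand_threshold:       # expansion watermark exceeded
        return math.ceil((projected - expand_threshold) / node_capacity)
    if projected < shrink_threshold:       # below the reduction watermark
        return -math.floor((shrink_threshold - projected) / node_capacity)
    return 0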
In one embodiment, in order to estimate the expansion more accurately, as shown in fig. 6, before the step 4300, steps 6100 to 6300 may be further included.
Step 6100, determine the shared delivery packet of the query task in the query delivery window.
This step is performed after the sum of the first resource consumption estimated value and the current system resource utilization rate is determined to exceed the preset capacity expansion water level threshold according to the step 4200, and before the capacity expansion is performed according to the calculated capacity expansion node number in the step 4300.
In this embodiment, the query tasks in the query issuing window are grouped for shared issuing before capacity expansion is performed, and the query tasks with shared operators are grouped into one group, so that system resources can be saved and the utilization rate of system resources is further improved.
Step 6200, determining a second resource consumption estimation value of the query issuing window according to the determined shared issuing group.
In this step, since the query task is shared and distributed, the resource consumption of the query distribution window needs to be estimated again. For the specific estimation steps, reference may be made to the descriptions of steps 4100-1 to 4100-4, which are not described herein again.
Step 6300, judging whether the sum of the second resource consumption estimation value and the current system resource utilization rate exceeds the preset capacity expansion water level threshold.
If yes, computing node capacity expansion calculation is triggered and step 4300 is executed to expand capacity according to the calculated number of capacity expansion nodes; specifically, the capacity expansion amount is obtained by converting the new resource consumption estimate into a number of computing nodes. This step may be performed asynchronously.
If not, execute the above step 2300: and judging whether other query tasks exist in the current query service queue or not until the judgment result is negative.
In this embodiment, after the sum of the first resource consumption estimated value and the current system resource utilization rate is judged to exceed the preset capacity expansion water level threshold, the query tasks in the query issuing window are grouped for shared issuing; after the query tasks with shared operators are grouped together, the capacity expansion is calculated for the query issuing window, so that the expansion is estimated more accurately, system resources are saved, and the system resource utilization rate is improved.
In one embodiment, in order to reduce the response delay of the query task, as shown in fig. 7, before step 2400 described above, step 7100 may also be included: and determining the sharing issuing grouping of the query task in the query issuing window.
In this embodiment, after the sum of the first resource consumption estimated value and the current system resource utilization rate is judged to be not less than the preset capacity reduction water level threshold, the query tasks in the query issuing window are grouped for shared issuing, the query tasks with shared operators are placed into one group, and the query tasks in the query issuing window are then issued to the computing nodes for execution, so that system resources are saved, the system resource utilization rate is improved, and the response delay of the query tasks is shortened.
In an embodiment, as shown in fig. 8, in step 6100 or step 7100, the step of determining the shared issuing group of the query task in the query issuing window may specifically include the following steps 8100 to 8700:
In step 8100, a first shared issuing group in the query issuing window is initialized.
Step 8200, obtaining an execution plan tree of one query task in the query issuing window.
Step 8300, judging whether the execution plan tree has a shared operator matching an execution plan tree in the first shared issuing group.
For example, after the execution plan tree of one query task in the query issuing window is acquired, starting from the Table Scan operator of that execution plan tree, the execution plan tree is matched against the operators of each execution plan tree in the first shared issuing group, so as to determine whether the execution plan tree contains a shared operator that matches an execution plan tree in the first shared issuing group.
If the shared operator exists, step 8400 is executed; if the judgment result shows that the shared operator does not exist, step 8500 is executed.
Step 8400, the query task is added to the matched first shared issuing group.
And if the sharing operator exists, adding the query task to the corresponding first sharing issuing group, and correspondingly adjusting the execution plan tree of the query task.
Step 8500, a second sharing issuing group is established for the query task, and the query task is added to the second sharing issuing group.
If the shared operator does not exist, namely the operator which can be shared is not matched, a new shared issuing group, namely the second shared issuing group, is established for the query task, and the query task is added into the second shared issuing group.
After the step 8400 or the step 8500 is executed, the step 8600 is entered, and whether all query tasks in the query issuing window complete the matching of the sharing operators is judged.
If the judgment result is no, the step 8200 is executed again: and obtaining an execution plan tree of one query task in the query issuing window until the query tasks in the query issuing window are judged to complete the matching operation of the shared operators. If so, go to step 8700.
Step 8700, determining the shared issuing group of the query task in the query issuing window.
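A compact sketch of the grouping loop of FIG. 8 (steps 8100 to 8700) is given below; representing each execution plan tree as a set of operator signatures is a simplifying assumption for illustration and is not the data structure used by the embodiment.
def group_for_shared_issuing(plan_trees: dict) -> list:
    """Hypothetical shared-issuing grouping: each plan tree is reduced to a
    set of operator signatures; a task joins the first group it shares an
    operator with, otherwise it starts a new group."""
    groups = []                                    # step 8100: initialize groups
    for task_id, operators in plan_trees.items():  # steps 8200/8600: iterate tasks
        for group in groups:
            if group["operators"] & operators:     # step 8300: shared operator?
                group["tasks"].append(task_id)     # step 8400: join the group
                group["operators"] |= operators
                break
        else:
            groups.append({"tasks": [task_id],     # step 8500: new group
                           "operators": set(operators)})
    return groups                                  # step 8700: final grouping
Applied to the FIG. 9 example that follows, a matcher of this kind would place the queries that share operators into one group and open a new group for the query that matches none.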
In one example, as shown in FIG. 9, there are five query tasks (queries).
Wherein, Query 1 is:
SELECT *
FROM A INNER JOIN D ON A.id = D.id
WHERE A.name = 'abc' AND D.key = '123'
ORDER BY A.id;
Query 2 is:
SELECT *
FROM A INNER JOIN B ON A.id = B.id
WHERE A.name = 'abc' AND B.sex = 'male'
ORDER BY A.id
LIMIT 100;
Query 3 is:
SELECT C.age, COUNT(A.id)
FROM A INNER JOIN B ON A.id = B.id
INNER JOIN C ON B.name = C.name
WHERE A.name = 'abc' AND B.sex = 'male'
GROUP BY C.age
ORDER BY A.id
LIMIT 100;
Query 4 is:
SELECT *
FROM C
ORDER BY C.age
LIMIT 10;
Query 5 is:
SELECT E.age, COUNT(*)
FROM E
GROUP BY E.age
ORDER BY E.age;
Wherein, queries 1, 2, 3 and 4 share join operators and are allocated to share group 1, while Query 5 has no sharing operator matching queries 1, 2, 3 and 4, so a new share group 2 is established for Query 5.
In this embodiment, the query tasks with shared operators are grouped by determining the shared issuing groups of the query tasks in the query issuing window, so that access to the same data set is shared at the same time, the consumption of computing resources is saved, and the utilization rate of system resources is further improved.
< apparatus embodiment >
Fig. 10 is a schematic structural diagram of a processing device for a query task according to a first embodiment of the present invention.
As shown in fig. 10, the processing device 100 for the query task of this embodiment may include: an obtaining module 101, an adding module 102, a judging module 103 and an issuing module 104.
The obtaining module 101 is configured to obtain a query task from a current query service queue.
And the adding module 102 is configured to add the query task to the query issuing window.
The judging module 103 is configured to judge whether there are other query tasks in the current query service queue.
If the judgment result of the judging module 103 is yes, the operation in the obtaining module 101 is triggered.
And the issuing module 104 is configured to issue the query task in the query issuing window to a computing node for execution if the judgment result of the judging module 103 is negative.
The obtaining module 101 may also be configured to obtain a list of currently executed user query task numbers; the user query number list comprises the number of query tasks currently executed by different users; wherein the query task at least comprises a user ID.
Correspondingly, the judging module 103 is further configured to judge whether the number of query tasks corresponding to the user ID exceeds the user preset maximum query concurrency number corresponding to the user ID after the query task is added to the query issuing window. If the number does not exceed the user preset maximum query concurrency number, the query task is added to the query issuing window. If the number exceeds the user preset maximum query concurrency number, the operation in the obtaining module 101 is triggered.
The processing device 100 for query task of this embodiment may further include a determining module (not shown in the figure) for determining the first resource consumption estimation value of the query issuing window. The judgment module 103 is further configured to judge whether a sum of the first resource consumption estimated value and the current system resource utilization rate exceeds a preset capacity expansion water level threshold. If the judgment result of the judgment module 103 does not exceed the preset capacity expansion water level threshold, triggering the judgment module 103 to judge whether there is any operation of other query tasks in the current query service queue; the processing apparatus 100 for query task may further include a capacity expansion module (not shown in the figure), configured to perform capacity expansion according to the calculated number of capacity expansion nodes if the determination result of the determining module 103 exceeds the preset capacity expansion water level threshold; and triggering the issuing module 104 to issue the query task in the query issuing window to a computing node for execution.
Further, the determining module 103 may be further configured to determine whether a sum of the first resource consumption estimated value and the current system resource utilization rate is smaller than a preset shrinkage water level threshold. The processing apparatus 100 for the query task may further include a capacity reduction module (not shown in the figure), configured to perform capacity reduction according to the calculated number of capacity reduction nodes if the determination result of the determining module 103 is smaller than the preset capacity reduction water level threshold; triggering the issuing module 104 to issue the query task in the query issuing window to a computing node for execution; if the judgment result of the judgment module 103 is not less than the preset shrinkage water level threshold, triggering the issuing module 104 to issue the query task in the query issuing window to a computing node for execution.
Further, in an example, the determining module may be further configured to trigger the issuing module 104 to issue the query task in the query issuing window to a computing node for execution after determining the shared issuing group of the query task in the query issuing window.
In another example, the determining module may be further configured to determine a shared delivery group of the query task in the query delivery window; and determining a second resource consumption estimated value of the inquiry issuing window according to the determined sharing issuing group. The determining module 103 may be further configured to determine whether a sum of the second resource consumption estimated value and the current system resource utilization rate exceeds the preset capacity expansion water level threshold. If the judgment result of the judgment module 103 is yes, triggering the capacity expansion module to perform capacity expansion operation according to the calculated capacity expansion node number; if the judgment result of the judgment module 103 is not exceeded, the judgment module 103 is triggered to judge whether there is any operation of other query tasks in the current query service queue.
Further, when determining the shared issuing group of the query task in the query issuing window, the determining module is specifically configured to: initialize a first shared issuing group in the query issuing window; acquire an execution plan tree of a query task in the query issuing window; judge whether the execution plan tree has a sharing operator matching an execution plan tree in the first shared issuing group; if yes, add the query task to the matched first shared issuing group; if not, establish a second shared issuing group for the query task and add the query task to the second shared issuing group; judge whether all the query tasks in the query issuing window have completed sharing operator matching; if not, return to the step of acquiring an execution plan tree of one query task in the query issuing window; and if so, determine the shared issuing grouping of the query tasks in the query issuing window.
Further, when determining the first resource consumption estimation value of the query issuing window, the determining module is specifically configured to: generating a query syntax tree of the query task; generating an execution plan tree of the query task according to the query syntax tree; performing resource consumption estimation on the query task according to the execution plan tree to obtain a resource consumption value for executing the query task; and accumulating the resource consumption value for executing the query task and the current resource consumption value of the query issuing window to obtain a first resource consumption estimated value of the query issuing window.
Fig. 11 is a schematic structural diagram of a processing device for a query task according to a second embodiment of the present invention. As shown in fig. 11, the processing device 110 of the query task of the present embodiment may include a memory 112 and a processor 111.
The memory 112 is used for storing instructions for controlling the processor 111 to operate to perform the processing method of the query task of any embodiment of the present invention. The skilled person can design the instructions according to the disclosed solution. How the instructions control the operation of the processor is well known in the art and will not be described in detail herein.
< System >
In this embodiment, the processing system of the query task may include the client device shown in fig. 1, and the processing apparatus 100 or 110 of the query task in the above embodiments.
In an example, the processing system of the query task in this embodiment may also include, as shown in fig. 12:
and the Query service queue (Query queue) is used for receiving Query task requests sent by different users to the server.
A Query parser (Query parser) for generating a Query syntax tree.
And the Query optimizer (Query optimizer) is used for generating an execution plan tree according to the Query syntax tree and the statistical information and can estimate the resource consumption of the execution plan tree.
A Meta Store, configured to store all relevant metadata information of the system and the query tasks. The processing device 9000 can externally call an elastic capacity expansion service to expand capacity, and call a query engine (Query Execution) to issue the query tasks. The Query Execution issues the query tasks, after shared issuing grouping, to the distributed computing node cluster for execution.
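Purely as an illustration of how the components in FIG. 12 fit together, the following sketch wires a hypothetical scheduler to the query parser, query optimizer, Meta Store, elastic scaling service and query engine; all component interfaces shown are assumptions, not the actual service APIs.
class QueryScheduler:
    """Hypothetical wiring of the FIG. 12 components."""
    def __init__(self, parser, optimizer, meta_store, scaler, engine):
        self.parser, self.optimizer = parser, optimizer
        self.meta_store, self.scaler, self.engine = meta_store, scaler, engine

    def issue_window(self, issue_window):
        # parse and plan every query in the issuing window, then sum the cost
        plans = [self.optimizer.plan(self.parser.parse(q)) for q in issue_window]
        cost = sum(self.optimizer.estimate_cost(p) for p in plans)
        if self.scaler.needs_expansion(cost, self.meta_store.utilization()):
            self.scaler.expand(cost)                 # elastic capacity expansion
        groups = self.engine.group_shared(plans)     # shared issuing grouping
        self.engine.execute(groups)                  # issue to the compute node cluster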
< computer storage Medium >
In this embodiment, a computer storage medium is further provided, on which computer instructions are stored, and when the computer instructions in the storage medium are executed by a processor, the method for processing a query task provided in any one of the above embodiments is implemented.
< database query method >
FIG. 13 is a schematic flow chart diagram of a database query method of an embodiment of the present invention.
As shown in fig. 13, the database query method of this embodiment may include:
step 130, acquiring a first database query request and a second database query request; the first database query request comprises a plurality of first query operators, and the second database query request comprises a plurality of second query operators; the plurality of first query operators and the plurality of second query operators comprise at least one same query operator.
Specifically, after a first database query request and a second database query request are obtained, a plurality of first query operators in the first database query request are matched with a plurality of second query operators in the second database query request to obtain at least one same query operator included in the plurality of first query operators and the plurality of second query operators, and the at least one same query operator is used as a sharing operator when the first database query request and the second database query request are executed.
Step 131, executing the first database query request and the second database query request by using the at least one same operator.
In the embodiment, the first database query request and the second database query request are executed by using at least one same query operator in the first database query request and the second database query request, so that access to the same data set is shared at the same time, the consumption of computing resources is saved, and the utilization rate of system resources is further improved.
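A minimal sketch of this shared execution, assuming hypothetical helpers for operator extraction and execution (extract_operators, run_operator and run_request are not part of the described method), is shown below.
def execute_with_shared_operators(request_a, request_b,
                                  extract_operators, run_operator, run_request):
    """Hypothetical shared execution of two database query requests:
    query operators common to both requests are executed once and the
    cached results are reused by both requests."""
    shared = extract_operators(request_a) & extract_operators(request_b)
    cache = {op: run_operator(op) for op in shared}   # run each shared operator once
    return (run_request(request_a, cache),            # both requests reuse the results
            run_request(request_b, cache))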
< database query device >
Fig. 14 is a schematic structural diagram of a database query device according to a first embodiment of the present invention.
As shown in fig. 14, the database query device 140 includes: an acquisition module 141 and an execution module 142.
An obtaining module 141, configured to obtain a first database query request and a second database query request; the first database query request comprises a plurality of first query operators, and the second database query request comprises a plurality of second query operators; the plurality of first query operators and the plurality of second query operators comprise at least one same query operator.
An executing module 142, configured to execute the first database query request and the second database query request by using the at least one same operator.
The database query device of this embodiment may be used to implement the technical solutions of the above method embodiments, and the implementation principles and technical effects thereof are similar, and are not described herein again.
Fig. 15 is a schematic configuration diagram of a database query device according to a second embodiment of the present invention.
As shown in fig. 15, the database query device 150 of the present embodiment includes: a memory 151 and a processor 152, the memory 151 being configured to store executable instructions; the processor 152 is configured to execute the database query method described above under the control of the instructions.
It is well known to those skilled in the art that with the development of electronic information technology such as large scale integrated circuit technology and the trend of software hardware, it has been difficult to clearly divide the software and hardware boundaries of a computer system. As any of the operations may be implemented in software or hardware. Execution of any of the instructions may be performed by hardware, as well as by software. Whether a hardware implementation or a software implementation is employed for a certain machine function depends on non-technical factors such as price, speed, reliability, storage capacity, change period, and the like. A software implementation and a hardware implementation are equivalent for the skilled person. The skilled person can choose software or hardware to implement the above described scheme as desired. Therefore, specific software or hardware is not limited herein.
The present invention may be an apparatus, method and/or computer program product. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied therewith for causing a processor to implement various aspects of the present invention.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
Computer program instructions for carrying out operations of the present invention may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state setting data, or source or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++, or the like, as well as conventional procedural programming languages, such as the "C" language or similar programming languages.
Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions. It is well known to those skilled in the art that implementation by hardware, by software, and by a combination of software and hardware are equivalent.
Having described embodiments of the present invention, it should be understood that the foregoing description is intended to be exemplary rather than exhaustive, and is not limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application, or technical improvements over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein. The scope of the invention is defined by the appended claims.

Claims (15)

1. A method for processing a query task, the method comprising:
acquiring a query task from a current query service queue;
adding the query task to a query issuing window;
judging whether other query tasks exist in the current query service queue;
if the judgment result is yes, returning to the step of acquiring a query task from the current query service queue;
if the judgment result is negative, issuing the query task in the query issuing window to a computing node for execution.
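For illustration only (not part of the claims), the batching loop recited in claim 1 can be sketched as follows in Python; the names query_service_queue, query_issuing_window and issue_to_compute_nodes are assumptions introduced here, not terms defined by the specification.

```python
from collections import deque

def process_query_service_queue(query_service_queue: deque, issue_to_compute_nodes) -> None:
    """Drain the current query service queue into one query issuing window, then issue it."""
    query_issuing_window = []
    while query_service_queue:
        # Acquire a query task from the current query service queue.
        query_task = query_service_queue.popleft()
        # Add the query task to the query issuing window.
        query_issuing_window.append(query_task)
        # The loop condition is the "other query tasks exist?" judgment:
        # if yes, acquire the next task; if no, fall through and issue.
    if query_issuing_window:
        # Issue the query tasks in the query issuing window to compute nodes for execution.
        issue_to_compute_nodes(query_issuing_window)

# e.g. process_query_service_queue(deque(["q1", "q2"]), print)
```

Batching the whole queue into a single issuing window is what later allows the window to be scaled and grouped as one unit (claims 3 to 7).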
2. The method of claim 1, wherein prior to obtaining the query task from the current query service queue, the method further comprises:
acquiring a user query number list, wherein the user query number list comprises the number of query tasks currently being executed by different users, and the query task comprises at least a user ID;
correspondingly, before the step of adding the query task to the query issuing window, the method further comprises:
judging whether, after the query task is added to the query issuing window, the number of query tasks corresponding to the user ID exceeds a preset maximum query concurrency number corresponding to the user ID;
if the preset maximum query concurrency number is not exceeded, adding the query task to the query issuing window;
and if the preset maximum query concurrency number is exceeded, returning to the step of acquiring a query task from the current query service queue.
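A minimal sketch of the per-user admission check added by claim 2, under the same assumptions as the previous sketch; the dictionaries running_count_per_user and max_concurrency_per_user are hypothetical stand-ins for the user query number list and the preset maximum query concurrency number.

```python
def try_add_to_window(query_task: dict, query_issuing_window: list,
                      running_count_per_user: dict, max_concurrency_per_user: dict,
                      default_cap: int = 1) -> bool:
    """Admit a task into the issuing window only if its user's concurrency cap allows it."""
    user_id = query_task["user_id"]                       # a query task carries at least a user ID
    running = running_count_per_user.get(user_id, 0)
    cap = max_concurrency_per_user.get(user_id, default_cap)
    if running + 1 > cap:
        return False                                      # cap exceeded: skip, acquire the next task instead
    query_issuing_window.append(query_task)
    running_count_per_user[user_id] = running + 1
    return True
```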
3. The method of claim 1, wherein after the step of adding the query task to a query issue window, the method further comprises:
determining a first resource consumption estimation value of the query issuing window;
judging whether the sum of the first resource consumption estimation value and the current system resource utilization rate exceeds a preset capacity expansion water level threshold;
if the sum does not exceed the preset capacity expansion water level threshold, executing the step of judging whether other query tasks exist in the current query service queue;
and if the sum exceeds the preset capacity expansion water level threshold, performing capacity expansion according to a calculated number of capacity expansion nodes, and issuing the query task in the query issuing window to a computing node for execution.
4. The method of claim 3, wherein after the step of determining whether there are more query tasks in the current query service queue, the method further comprises:
judging whether the sum of the first resource consumption estimation value and the current system resource utilization rate is smaller than a preset capacity reduction water level threshold;
if the sum is smaller than the preset capacity reduction water level threshold, performing capacity reduction according to a calculated number of capacity reduction nodes, and issuing the query task in the query issuing window to a computing node for execution;
and if the sum is not smaller than the preset capacity reduction water level threshold, issuing the query task in the query issuing window to a computing node for execution.
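The water-level logic of claims 3 and 4 can be illustrated as below; first_estimate, the two thresholds and the scaling callbacks are placeholders, and the sketch makes no claim about how the number of capacity expansion or reduction nodes is calculated.

```python
def after_adding_task(query_issuing_window: list, query_service_queue,
                      first_estimate: float, current_utilization: float,
                      expand_threshold: float, shrink_threshold: float,
                      expand_nodes, shrink_nodes, issue) -> str:
    """Apply the expansion/reduction water-level checks once a task has entered the window."""
    load = first_estimate + current_utilization
    if load > expand_threshold:
        expand_nodes()                     # above the expansion water level: scale out, then issue now
        issue(query_issuing_window)
        return "issued"
    if query_service_queue:
        return "continue"                  # more tasks queued: keep batching as in claim 1
    if load < shrink_threshold:
        shrink_nodes()                     # below the reduction water level: scale in first
    issue(query_issuing_window)            # issue the window (after scaling in, or unchanged)
    return "issued"
```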
5. The method of claim 1, wherein before the step of issuing the query task in the query issue window to the computing node for execution, the method further comprises:
and determining a shared issuing group for the query task in the query issuing window.
6. The method according to claim 3, wherein before the step of performing capacity expansion according to the calculated number of capacity expansion nodes, the method further comprises:
determining a shared issuing group of the query task in the query issuing window;
determining a second resource consumption estimation value of the query issuing window according to the determined shared issuing group;
judging whether the sum of the second resource consumption estimation value and the current system resource utilization rate exceeds the preset capacity expansion water level threshold;
if yes, executing the step of performing capacity expansion according to the calculated number of capacity expansion nodes;
and if not, executing the step of judging whether other query tasks exist in the current query service queue.
7. The method of claim 5 or 6, wherein the step of determining the shared issuing group for the query task in the query issuing window comprises:
initializing a first shared issuing group in the query issuing window;
acquiring an execution plan tree of a query task in the query issuing window;
judging whether the execution plan tree contains a shared operator that matches the execution plan trees in the first shared issuing group;
if yes, adding the query task to the matched first shared issuing group;
if not, establishing a second shared issuing group for the query task, and adding the query task to the second shared issuing group;
judging whether all the query tasks in the query issuing window have completed shared operator matching;
if not, returning to the step of acquiring an execution plan tree of a query task in the query issuing window;
and if so, obtaining the shared issuing groups of the query tasks in the query issuing window.
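A sketch of the grouping recited in claims 5 to 7, under the assumption that shared-operator matching can be approximated by overlap between the operator sets of two execution plan trees; plan_operators is a hypothetical helper that returns the operators of a task's plan tree.

```python
def group_by_shared_operators(window_tasks: list, plan_operators) -> list:
    """Partition the issuing window into shared issuing groups by plan-tree operator overlap."""
    groups = []                                    # each entry: [operator_set, task_list]
    for task in window_tasks:
        ops = set(plan_operators(task))            # operators of this task's execution plan tree
        for group in groups:
            if group[0] & ops:                     # a matching shared operator exists
                group[1].append(task)              # add the task to the matched shared issuing group
                group[0] |= ops
                break
        else:
            groups.append([ops, [task]])           # no match: establish a new shared issuing group
    return groups
```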
8. The method of claim 3, wherein the step of determining the first resource consumption estimate for the query issue window comprises:
generating a query syntax tree of the query task;
generating an execution plan tree of the query task according to the query syntax tree;
performing resource consumption estimation on the query task according to the execution plan tree to obtain a resource consumption value for executing the query task;
and accumulating the resource consumption value for executing the query task with the current resource consumption value of the query issuing window to obtain the first resource consumption estimation value of the query issuing window.
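By way of illustration of claim 8, the sketch below walks an execution plan tree represented as nested dictionaries and sums an assumed per-operator cost; the OPERATOR_COST table is hypothetical, since the claim does not fix a cost model.

```python
# Hypothetical per-operator cost weights; the claim does not specify a cost model.
OPERATOR_COST = {"scan": 4.0, "filter": 1.0, "join": 8.0, "aggregate": 3.0}

def estimate_plan_cost(plan_node: dict) -> float:
    """Sum an assumed cost over every operator node of the execution plan tree."""
    cost = OPERATOR_COST.get(plan_node["op"], 1.0)
    for child in plan_node.get("children", []):
        cost += estimate_plan_cost(child)
    return cost

def first_window_estimate(window_current_cost: float, plan_tree: dict) -> float:
    """Accumulate the task's estimated cost onto the window's current resource consumption value."""
    return window_current_cost + estimate_plan_cost(plan_tree)
```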
9. A query task processing apparatus, comprising: a memory for storing executable instructions and a processor; the processor is configured to perform a processing method of a query task according to any one of claims 1-8 under control of the instructions.
10. A query task processing apparatus, comprising:
the acquisition module is used for acquiring a query task from the current query service queue;
the adding module is used for adding the query task to a query issuing window;
the judging module is used for judging whether other query tasks exist in the current query service queue;
if the judgment result of the judging module is yes, the operation of the acquisition module is triggered;
and the issuing module is used for issuing the query task in the query issuing window to a computing node for execution if the judgment result of the judging module is negative.
11. A system for processing a query task, comprising a client device and a query task processing apparatus as claimed in claim 9 or 10.
12. A computer storage medium having stored thereon computer instructions which, when executed by a processor, carry out operations in a method of processing a query task according to any one of claims 1-8.
13. A method of database querying, the method comprising:
acquiring a first database query request and a second database query request; the first database query request comprises a plurality of first query operators, and the second database query request comprises a plurality of second query operators; the plurality of first query operators and the plurality of second query operators comprise at least one same query operator;
executing the first database query request and the second database query request by using the at least one same query operator.
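A toy illustration of claim 13: the shared query operator (here, a scan over in-memory rows) is executed once and its output is reused by both query requests; the row representation and predicates are assumptions.

```python
def execute_with_shared_scan(rows, predicate_a, predicate_b):
    """Execute the shared scan operator once and reuse its output for both query requests."""
    scanned = list(rows)                                   # the shared query operator runs a single time
    result_a = [r for r in scanned if predicate_a(r)]      # remaining operators of the first request
    result_b = [r for r in scanned if predicate_b(r)]      # remaining operators of the second request
    return result_a, result_b
```

For example, two requests that both scan the same table but filter on different columns would each trigger only one scan of the underlying data.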
14. An apparatus for querying a database, the apparatus comprising:
the acquisition module is used for acquiring a first database query request and a second database query request; the first database query request comprises a plurality of first query operators, and the second database query request comprises a plurality of second query operators; the plurality of first query operators and the plurality of second query operators comprise at least one same query operator;
and the execution module is used for executing the first database query request and the second database query request by using the at least one same query operator.
15. An apparatus for querying a database, the apparatus comprising: a memory for storing executable instructions and a processor; the processor is configured to perform the database query method as claimed in claim 13 under the control of the instructions.
CN201910108362.5A 2019-01-18 2019-01-18 Query task processing method, device, server and system Active CN111459981B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910108362.5A CN111459981B (en) 2019-01-18 2019-01-18 Query task processing method, device, server and system

Publications (2)

Publication Number Publication Date
CN111459981A 2020-07-28
CN111459981B CN111459981B (en) 2023-06-09

Family

ID=71685635

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910108362.5A Active CN111459981B (en) 2019-01-18 2019-01-18 Query task processing method, device, server and system

Country Status (1)

Country Link
CN (1) CN111459981B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113110326A (en) * 2021-04-12 2021-07-13 清华大学 Intelligent factory operating system based on industrial Internet architecture

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003122586A (en) * 2001-08-09 2003-04-25 Matsushita Electric Ind Co Ltd Task scheduling device
US20070022100A1 (en) * 2005-07-22 2007-01-25 Masaru Kitsuregawa Database management system and method
CN101217415A (en) * 2008-01-18 2008-07-09 深圳国人通信有限公司 A method of polling devices in the repeater
US20100036804A1 (en) * 2008-08-05 2010-02-11 International Business Machines Corporation Maintained and Reusable I/O Value Caches
CN103458527A (en) * 2012-06-01 2013-12-18 中兴通讯股份有限公司 Preamble detection task processing and dispatching method and device
CN104778074A (en) * 2014-01-14 2015-07-15 腾讯科技(深圳)有限公司 Calculation task processing method and device
US20150206260A1 (en) * 2014-01-21 2015-07-23 Steven W. Lundberg Systems and methods for analyzing prior art rejections
CN105159783A (en) * 2015-10-09 2015-12-16 上海瀚之友信息技术服务有限公司 System task distribution method
CN106294499A (en) * 2015-06-09 2017-01-04 阿里巴巴集团控股有限公司 A kind of database data querying method and equipment
US20170061364A1 (en) * 2015-08-28 2017-03-02 Exacttarget, Inc. Database systems and related queue management methods
CN107168779A (en) * 2017-03-31 2017-09-15 咪咕互动娱乐有限公司 A kind of task management method and system
US20180157710A1 (en) * 2016-12-02 2018-06-07 Oracle International Corporation Query and change propagation scheduling for heteogeneous database systems
CN108710535A (en) * 2018-05-22 2018-10-26 中国科学技术大学 A kind of task scheduling system based on intelligent processor

Also Published As

Publication number Publication date
CN111459981B (en) 2023-06-09

Similar Documents

Publication Publication Date Title
CN109074377B (en) Managed function execution for real-time processing of data streams
CN107590001B (en) Load balancing method and device, storage medium and electronic equipment
CN112753019A (en) Efficient state maintenance of execution environments in on-demand code execution systems
WO2020140614A1 (en) Offline message distribution method, server and storage medium
CN109614402B (en) Multidimensional data query method and device
CN111786895A (en) Method and apparatus for dynamic global current limiting
US20160085473A1 (en) Asynchronous Processing of Mapping Information
CN110858194A (en) Method and device for expanding database
CN111797091A (en) Method and device for querying data in database, electronic equipment and storage medium
CN107426336B (en) Method and device for adjusting push message opening rate
CN113190517B (en) Data integration method and device, electronic equipment and computer readable medium
CN111858586B (en) Data processing method and device
CN111459981B (en) Query task processing method, device, server and system
CN112667368A (en) Task data processing method and device
CN112948138A (en) Method and device for processing message
CN113779122B (en) Method and device for exporting data
US20230342369A1 (en) Data processing method and apparatus, and electronic device and storage medium
CN110019671B (en) Method and system for processing real-time message
CN113760861A (en) Data migration method and device
CN112749204A (en) Method and device for reading data
CN112711588A (en) Multi-table connection method and device
CN112799863A (en) Method and apparatus for outputting information
CN111338882A (en) Data monitoring method, device, medium and electronic equipment
CN116431523B (en) Test data management method, device, equipment and storage medium
CN114328558B (en) List updating method, apparatus, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant