CN111767305A - Self-adaptive database hybrid query method


Info

Publication number
CN111767305A
Authority
CN
China
Prior art keywords
query
database
gpu
request
hybrid
Prior art date
Legal status
Granted
Application number
CN202010581766.9A
Other languages
Chinese (zh)
Other versions
CN111767305B (en)
Inventor
文军
郑立源
李宇
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202010581766.9A
Publication of CN111767305A
Application granted
Publication of CN111767305B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval of structured data, e.g. relational data
    • G06F16/24 Querying
    • G06F16/245 Query processing
    • G06F16/242 Query formulation
    • G06F16/2433 Query languages
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a self-adaptive database hybrid query method. A service process listens for query requests, calls functions of the database management layer to perform the actual GPU query execution and resource scheduling tasks, and returns the results to the query task client. Meanwhile, it processes subsequent requests from the same process and passes the results back to the query task control process through an IPC mechanism; when the query control process completes its query, it initiates an exit request to the server, and the service process terminates the corresponding service thread. The query control process generates a query plan, and a dynamic link library intercepts the CUDA Runtime API calls initiated by the query task control process and converts them into IPC requests to the service process. Based on a cost model and the characteristics of the queried data, the method analyzes the query tasks on the hybrid platform, creates a query optimization scheme, and realizes database queries through the cooperation of the CPU and the GPU, which improves the overall query processing performance of the database, maximizes the utilization of computer hardware, and reduces extra time overhead.

Description

Self-adaptive database hybrid query method
Technical Field
The invention belongs to the technical field of GPU database query, and particularly relates to an adaptive database hybrid query method realized through the cooperation of a CPU and a GPU.
Background
A conventional GPU database stores data in columns and combines main memory with GPU memory, so that no separate optimization is needed and no index is required. In operation, the data are loaded into GPU memory and main memory and the disk does not need to be accessed, so indexes are not needed to reduce the overhead of accessing sectors. With the thousands of cores of each GPU, tens of thousands of threads scan a full table in parallel, which is particularly suitable for JOINs, fuzzy matching, GROUP BY, full table scans, and aggregations over tens of millions of rows. The GPU database uses main memory as a bridge to realize a three-level cache structure of disk, main memory, and GPU memory, which can provide roughly ten times the bandwidth of CPU DRAM with lower latency.
Current GPU database queries still suffer from some deficiencies:
1. each query task separately manages GPU resources, which can bring repeated overhead;
2. for queries over large amounts of data, memory-to-GPU-memory data exchange over PCIe is carried out frequently, different query tasks repeatedly transmit the same column-store data of the database, and the overall GPU utilization rate is still low;
3. although the hundreds or even thousands of stream processors in a GPU can provide powerful vector computing power, the efficiency of operations such as complex branch instructions, iterative processing, inter-thread data synchronization, and high-latency access to large data is weaker than that of a general-purpose processor. Unfortunately, the relational operation model is not an ideal query processing model suited to the GPU's vector computing characteristics. Adopting a strategy in which the CPU and the GPU cooperate to process multi-branch statement queries over massive data is therefore one direction of current GPU database research;
4. for the same algorithm, since the GPU is weaker than the CPU in logic control and complex data management capabilities, the GPU's overall performance advantage over a deeply optimized CPU algorithm is not obvious, and in some queries the GPU algorithm even performs worse than the CPU algorithm. Therefore, GPU acceleration of the database is not comprehensive, and the CPU and GPU query processing modules need to be combined according to the characteristics of the data and of the operations.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides a self-adaptive database hybrid query method, so as to improve the overall query processing performance of the database, make maximum use of the performance of the computer hardware, and reduce extra time overhead.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that:
an adaptive database hybrid query method comprises the following steps:
s1, starting a database system service process and monitoring a query request;
s2, submitting an sql query statement to the database;
s3, generating a query plan according to the monitored sql query statement, and starting a query task control process;
s4, using a dynamic link library to intercept the CUDA Runtime API calls initiated by the query task control process, and converting them into IPC requests to the service process;
s5, after the server receives the IPC request, creating a new service thread to process subsequent requests from that process, and passing the results back to the query task control process through the IPC mechanism;
and S6, when the query task control process completes the query, initiating an exit request to the server, and the service process terminating the corresponding service thread.
Further, the step S1 specifically includes:
starting a database system service process, creating a CUDA context, pre-loading the column-store data in the folder specified by the startup parameter datadir into system memory, and then monitoring query requests.
Further, the step S3 specifically includes:
and generating a query plan based on a cost learning model by utilizing a query plan generation module according to the monitored sql query statement, setting the LD_PRELOAD environment variable to the database dynamic link library path, executing the query plan, and starting a query task control process.
Further, the query plan generating module in step S3 specifically includes:
a self-adjusting execution time estimator for calculating estimated execution times of all available algorithms capable of performing the operation;
the algorithm selector is used for selecting an optimal algorithm according to the estimated execution times and an optimization heuristic;
and the hybrid query optimizer is used for constructing a physical query plan according to the logical query plan determined by the optimal algorithm and dispatching the database operations to the CPU and GPU devices.
Further, the step S3, according to the monitored sql query statement, generating a query plan based on the cost learning model by using a query plan generation module specifically includes the following sub-steps:
s31, for an operator OP (D, O) consisting of the data set D and the operation O, searching all available algorithms capable of executing the operation O by using an algorithm selector;
s32, calculating an estimated execution time ExeTime (D, A) for each algorithm A and each data set D by using a self-adjusting execution time estimator;
s33, selecting the algorithm with the shortest estimated execution time as the optimal algorithm by using the algorithm selector;
and S34, using the hybrid query optimizer to construct a physical query plan from the logical query plan determined by the optimal algorithm, and dispatching the database operations to the CPU and GPU devices.
Furthermore, the query task control process combines the query plan generated by the query plan generation module with the database schema, compiles and forms a CPU host program for controlling data reading, and calls the CUDA Runtime API to control the flow of the whole query task according to the query plan.
Further, the step S4 specifically includes:
intercepting the CUDA Runtime API calls initiated by the query task control process by using a dynamic link library, using a globally unique CUDA context in the service process, creating a separate stream in the CUDA context when the service process starts each query service thread so as to issue the GPU calls requested by that query process, and mapping the CUDA Runtime API calls contained in the CPU host program into IPC calls to the service process.
Further, the step S5 specifically includes:
after receiving the IPC request, the server creates a new service thread to process subsequent requests from that process, continuously parses the requests from the same query task process, performs the corresponding resource management and kernel invocation operations through the database core management layer, and passes the results back to the query task control process through the IPC mechanism.
Further, when different query processes use the same data, the service process determines, according to the working state of the system, whether to transmit the data to the GPU over PCIe or to directly use the data already stored in the GPU memory.
Further, the database core management layer comprises column storage data sharing, GPU hardware management and kernel calling logic.
The invention has the following beneficial effects:
(1) the invention uses a CPU and GPU hybrid query mode to reduce the data transmission pressure on the PCIe bus and allow the computer hardware to be used with maximum efficiency;
(2) the invention uses a learning-based cost model and can generate the execution plan according to the data characteristics and the operation, without the problems of having to refine the model after hardware resources change or requiring an administrator to maintain it;
(3) the query process uses the dynamic link library to communicate with the database service process through an IPC mechanism, so that the coupling degree between the query task process and the database process is reduced, and the system has good expansibility;
(4) when different query processes use the same data, the service process can determine, according to the working state of the system, whether to transmit the data to the GPU over PCIe or to directly use the data already stored in the GPU memory, which reduces the time wasted on data transmission.
Drawings
Fig. 1 is a schematic flow chart of the adaptive database hybrid query method of the present invention.
FIG. 2 is a schematic diagram of query plan generation according to the present invention;
FIG. 3 is a schematic diagram of a CPU and GPU hybrid query according to the present invention.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding of the invention by those skilled in the art, but it should be understood that the invention is not limited to the scope of these embodiments; to those skilled in the art, various changes that remain within the spirit and scope of the invention as defined in the appended claims will be apparent, and all matter produced using the inventive concept is protected.
As shown in fig. 1, an embodiment of the present invention provides an adaptive database hybrid query method, including the following steps S1 to S6:
s1, starting a database system service process and monitoring a query request;
In this embodiment, the present invention starts a database system service process, creates a CUDA context, and uses the CUDA context as a container to manage the life cycles of all objects used for calling CUDA functions; the column-store data in the folder specified by the startup parameter datadir, which indicates the location of the data to be loaded, are pre-loaded into system memory; the process then listens on a Unix domain socket to accept query requests.
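A minimal sketch in C++ of what the step S1 service process might look like is given below. The socket path, the helper routines preload_column_data and handle_query_connection, and the use of cudaFree(nullptr) to force early creation of the CUDA context are illustrative assumptions, not the patent's actual implementation.

    // Sketch of step S1: create the CUDA context, pre-load column data, listen for queries.
    #include <cuda_runtime.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <cstdio>
    #include <string>
    #include <thread>

    // Placeholder: read the column-store files under `datadir` into system memory.
    void preload_column_data(const std::string& datadir) { (void)datadir; }

    // Placeholder service thread body; a fuller sketch appears later in this description.
    void handle_query_connection(int client_fd) { close(client_fd); }

    int main(int argc, char** argv) {
        std::string datadir = (argc > 1) ? argv[1] : "./data";    // startup parameter "datadir"

        cudaFree(nullptr);                  // touch the runtime so the globally unique CUDA context exists
        preload_column_data(datadir);       // column-store data into system memory

        int listen_fd = socket(AF_UNIX, SOCK_STREAM, 0);          // Unix domain socket
        sockaddr_un addr{};
        addr.sun_family = AF_UNIX;
        std::snprintf(addr.sun_path, sizeof(addr.sun_path), "/tmp/gpu_db.sock");
        unlink(addr.sun_path);
        bind(listen_fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
        listen(listen_fd, 16);

        for (;;) {                          // monitor query requests
            int client_fd = accept(listen_fd, nullptr, nullptr);
            if (client_fd < 0) continue;
            // One service thread per query task control process.
            std::thread(handle_query_connection, client_fd).detach();
        }
    }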
S2, submitting an sql query statement to the database;
s3, generating a query plan according to the monitored sql query statement, and starting a query task control process;
In this embodiment, since conventional methods use an analytical model for cost evaluation, they have the disadvantages that the model must be refined after hardware resources change and that its parameters must be maintained and appropriately tuned by a database administrator. Thus, the present invention uses a learning-based approach to estimate the execution time of operations on different processing devices. The cost model maps data characteristics (such as data size, data skew, and selectivity) and operations to execution times, which are used as the decision basis for distributing database operations to the CPU and GPU devices according to cost.
The method generates a query plan based on a cost learning model by utilizing the query plan generation module according to the monitored sql query statement, sets the LD_PRELOAD environment variable to the database dynamic link library path, executes the query plan, and starts the query task control process.
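A sketch of how the LD_PRELOAD step might be carried out when launching the query task control process follows; the library path, the executable name parameter, and the fork/exec structure are assumptions, since the patent only states that the environment variable is set to the database dynamic link library path before the plan is executed.

    // Sketch: run the compiled query task control process with the interception library preloaded.
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdlib>

    int run_query_plan(const char* query_binary) {
        pid_t pid = fork();
        if (pid == 0) {
            // Child: CUDA Runtime API symbols in the query program now resolve to the
            // preloaded shim, which turns each call into an IPC request to the service process.
            setenv("LD_PRELOAD", "/usr/local/gpu_db/lib/libcuda_ipc_shim.so", 1);
            execl(query_binary, query_binary, static_cast<char*>(nullptr));
            _exit(127);                                   // exec failed
        }
        int status = 0;
        waitpid(pid, &status, 0);                         // wait for the query task to finish
        return status;
    }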
The cost learning model observes the correlation between the input data set and the resulting execution time and learns this correlation using a statistical method. After sufficient observations are collected, the statistical method computes an approximation function, which is then used to select the best algorithm for an operation.
As shown in fig. 2, the query plan generating module specifically includes:
A self-adjusting execution time estimator S is used for calculating the estimated execution times of all available algorithms capable of performing the operation, thereby providing estimated execution times for the database algorithms that are accurate, reliable, and independent of the database architecture and model. For an incoming algorithm A and data set D, the self-adjusting execution time estimator S computes an estimated execution time ExeTime(D, A).
The algorithm selector A is used for selecting an optimal algorithm according to the estimated execution times and an optimization heuristic. The algorithm selector A uses the self-adjusting execution time estimator S to obtain the estimated execution times of all available algorithms for the operation. For an operator OP(D, O) consisting of a data set D and an operation O, the algorithm selector A finds all available algorithms for O, calculates their estimated execution times ExeTime(D, A) with the self-adjusting execution time estimator S, and then determines the optimal algorithm A_opt. To optimize the response time, the system selects the algorithm with the shortest estimated execution time.
The hybrid query optimizer H is used for constructing a physical query plan Q_phy from the logical query plan Q_log determined by the optimal algorithm and dispatching the database operations to the CPU and GPU devices.
The generating of the query plan by the query plan generating module according to the monitored sql query statement based on the cost learning model specifically comprises the following steps:
s31, for an operator OP (D, O) consisting of the data set D and the operation O, searching all available algorithms capable of executing the operation O by using an algorithm selector;
s32, calculating an estimated execution time ExeTime (D, A) for each algorithm A and each data set D by using a self-adjusting execution time estimator;
s33, selecting the algorithm with the shortest estimated execution time as the optimal algorithm by using the algorithm selector;
and S34, using the hybrid query optimizer to construct a physical query plan from the logical query plan determined by the optimal algorithm, and dispatching the database operations to the CPU and GPU devices.
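The sketch below illustrates steps S31 to S34 in C++. The single-feature least-squares fit, the feature itself (row count scaled by selectivity), and all class and function names are illustrative assumptions; the patent only specifies that a statistical approximation function is learned from observed execution times and that the algorithm with the shortest estimate is selected and dispatched to the CPU or the GPU.

    // Sketch of steps S31-S34: learned execution-time estimation and algorithm selection.
    #include <cstddef>
    #include <limits>
    #include <string>
    #include <vector>

    struct DataSet { std::size_t rows; double selectivity; };

    // Self-adjusting execution time estimator S: refits a least-squares line on each observation.
    class ExecutionTimeEstimator {
    public:
        void observe(const DataSet& d, double seconds) {
            xs_.push_back(static_cast<double>(d.rows) * d.selectivity);
            ys_.push_back(seconds);
            refit();
        }
        double estimate(const DataSet& d) const {                 // ExeTime(D, A)
            return a_ + b_ * static_cast<double>(d.rows) * d.selectivity;
        }
    private:
        void refit() {                                            // ordinary least squares, one feature
            const std::size_t n = xs_.size();
            double sx = 0, sy = 0, sxx = 0, sxy = 0;
            for (std::size_t i = 0; i < n; ++i) {
                sx += xs_[i]; sy += ys_[i];
                sxx += xs_[i] * xs_[i]; sxy += xs_[i] * ys_[i];
            }
            const double denom = n * sxx - sx * sx;
            if (denom != 0) { b_ = (n * sxy - sx * sy) / denom; a_ = (sy - b_ * sx) / n; }
        }
        std::vector<double> xs_, ys_;
        double a_ = 0, b_ = 0;
    };

    enum class Device { CPU, GPU };
    struct Algorithm { std::string name; Device device; ExecutionTimeEstimator estimator; };

    // Algorithm selector A: for OP(D, O), pick the candidate with the shortest estimated time.
    const Algorithm* select_algorithm(const std::vector<Algorithm>& candidates, const DataSet& d) {
        const Algorithm* best = nullptr;
        double best_time = std::numeric_limits<double>::max();
        for (const Algorithm& a : candidates) {
            const double t = a.estimator.estimate(d);
            if (t < best_time) { best_time = t; best = &a; }
        }
        return best;   // the hybrid query optimizer H then dispatches the operation to best->device
    }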
The query task control process combines the query plan generated by the query plan generating module with a database schema (database object set), compiles and forms a CPU host program for controlling data reading, and the CPU host program calls a CUDA Runtime API to control the flow of the whole query task according to the query plan.
As shown in fig. 3, the query process in the present invention uses a dynamic link library to communicate with the database service process through an IPC (inter-process communication) mechanism, thereby reducing the coupling between the query task process and the database process. The system architecture is divided into a CPU part and a GPU part. The CPU part comprises the database service process, the query plan generation process (query plan generation module), and the query process. The GPU part mainly comprises the kernel (CUDA kernel function) implementations of the execution plan, which are loaded and invoked by the service process.
S4, intercepting and inquiring a CUDA Runtime API initiated by a task control process by using a dynamic link library, and converting the CUDA Runtime API into an IPC request of a service process;
In this embodiment, the dynamic link library intercepts the CUDA Runtime API calls initiated by the query task control process, which avoids the mechanism by which the CUDA Runtime implicitly creates a CUDA context; by having the service process use a globally unique CUDA context, the time that each query process would otherwise spend creating its own context is saved. To avoid blocking between the CUDA Runtime API calls of different query processes, when the service process starts each query service thread, a separate stream (GPU operation queue) is created in the CUDA context to issue the GPU calls requested by that query process. CUDA API calls issued on different streams do not block one another and can be executed by the GPU concurrently, so that PCIe data transfers and kernel calls of different query tasks can proceed in parallel. The CUDA Runtime API calls contained in the CPU host program are then mapped into IPC calls to the service process.
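Below is a minimal sketch of such an LD_PRELOAD interception library, assuming cudaMalloc is one of the intercepted calls; the socket path, the fixed-size request and response structures, and the handle-based reply are assumptions, since the patent does not describe the wire protocol. Returning an int rather than the cudaError_t enum keeps the sketch free of a CUDA toolkit dependency while remaining ABI-compatible for symbol interposition.

    // Sketch of the interception shim (built as a shared object, e.g.
    //   g++ -shared -fPIC -o libcuda_ipc_shim.so shim.cpp).
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>
    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    namespace {
    // One IPC connection to the service process per query task control process, opened lazily.
    int ipc_fd() {
        static int fd = [] {
            int s = socket(AF_UNIX, SOCK_STREAM, 0);
            sockaddr_un addr{};
            addr.sun_family = AF_UNIX;
            std::strncpy(addr.sun_path, "/tmp/gpu_db.sock", sizeof(addr.sun_path) - 1);
            connect(s, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
            return s;
        }();
        return fd;
    }
    struct Request  { uint32_t op; uint64_t arg; };        // op 1 = cudaMalloc, 99 = exit, ...
    struct Response { int32_t status; uint64_t handle; };  // handle names memory owned by the server
    }

    // The query program's call to cudaMalloc resolves here instead of in the CUDA runtime;
    // the service process performs the real allocation inside its own CUDA context.
    extern "C" int cudaMalloc(void** devPtr, size_t size) {
        Request req{1, static_cast<uint64_t>(size)};
        Response resp{};
        write(ipc_fd(), &req, sizeof(req));
        read(ipc_fd(), &resp, sizeof(resp));
        *devPtr = reinterpret_cast<void*>(resp.handle);
        return resp.status;                                // 0 (cudaSuccess) or an error code
    }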
S5, after the server receives the IPC request, a new service thread is created to process the follow-up request from the process, and the result is transmitted back to the query task control process through the IPC mechanism;
In this embodiment, after receiving an IPC request, the server of the present invention creates a new service thread to process subsequent requests from that process, continuously parses the requests from the same query task process, performs the corresponding resource management and kernel invocation operations through the database core management layer, and passes the results back to the query task control process through the IPC mechanism.
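A sketch of one such service thread is given below. The request codes, the exit opcode, and the dispatch loop are assumptions, and only the forwarded cudaMalloc case is shown, but the sketch illustrates the dedicated per-thread CUDA stream described above.

    // Sketch of a service thread in the service process: one dedicated CUDA stream per query task.
    #include <cuda_runtime.h>
    #include <unistd.h>
    #include <cstdint>

    struct Request  { uint32_t op; uint64_t arg; };
    struct Response { int32_t status; uint64_t handle; };

    void handle_query_connection(int client_fd) {
        cudaStream_t stream;
        cudaStreamCreate(&stream);                 // GPU operation queue for this query task only

        Request req{};
        while (read(client_fd, &req, sizeof(req)) == sizeof(req)) {
            Response resp{0, 0};
            switch (req.op) {
            case 1: {                              // cudaMalloc forwarded by the shim
                void* dev = nullptr;
                resp.status = static_cast<int32_t>(cudaMalloc(&dev, static_cast<size_t>(req.arg)));
                resp.handle = reinterpret_cast<uint64_t>(dev);
                break;
            }
            case 99:                               // exit request from the query task control process
                cudaStreamSynchronize(stream);
                cudaStreamDestroy(stream);
                close(client_fd);
                return;                            // the corresponding service thread finishes here
            default:
                resp.status = -1;                  // real code would also cover memcpy, kernel launches, ...
            }
            write(client_fd, &resp, sizeof(resp));
        }
        cudaStreamDestroy(stream);
        close(client_fd);
    }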
Through the IPC requests initiated by the dynamic link library, the service process calls functions of the database management layer to perform the actual GPU query execution and resource scheduling tasks, and returns the results to the query task client.
The database core management layer comprises logic for column-store data sharing, GPU hardware management, kernel invocation, and the like, and provides database management functions to the service process in the form of a function library. The service process uses a multi-threaded architecture in which different query task requests are served by independent threads, so concurrent queries are well supported. When different query processes use the same data, the service process determines, according to the working state of the system, whether to transmit the data to the GPU over PCIe or to directly use the data already resident in GPU memory, which reduces the time wasted on data transfer.
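The following sketch shows how the core management layer's column-store data sharing decision might look; the cache structure, the use of cudaMemGetInfo as the "working state" check, and all names are illustrative assumptions rather than the patent's concrete design.

    // Sketch: reuse a column already resident in GPU memory, or transfer it over PCIe once.
    #include <cuda_runtime.h>
    #include <mutex>
    #include <string>
    #include <unordered_map>

    struct GpuColumn { void* dev_ptr = nullptr; size_t bytes = 0; };

    class ColumnCache {
    public:
        // Returns a device pointer to the column; transfers over PCIe only if the column
        // is not yet resident and enough free GPU memory remains (nullptr = CPU fallback).
        void* acquire(const std::string& column, const void* host_data, size_t bytes,
                      cudaStream_t stream) {
            std::lock_guard<std::mutex> lock(mu_);
            auto it = cache_.find(column);
            if (it != cache_.end()) return it->second.dev_ptr;   // shared by concurrent query tasks

            size_t free_bytes = 0, total_bytes = 0;
            cudaMemGetInfo(&free_bytes, &total_bytes);           // part of the system "working state"
            if (free_bytes < bytes) return nullptr;

            GpuColumn col;
            col.bytes = bytes;
            cudaMalloc(&col.dev_ptr, bytes);
            cudaMemcpyAsync(col.dev_ptr, host_data, bytes, cudaMemcpyHostToDevice, stream);
            cache_[column] = col;
            return col.dev_ptr;
        }
    private:
        std::mutex mu_;
        std::unordered_map<std::string, GpuColumn> cache_;
    };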
And S6, when the query task control process completes the query, it initiates an exit request to the server, and the service process terminates the corresponding service thread.
According to the method, in which the CPU and the GPU cooperate to realize database queries, the query tasks on the hybrid platform are analyzed based on the cost model according to related factors such as the characteristics of the queried data, the operation type, the PCIe data transmission performance between the GPU and the CPU, and the GPU's parallel computing performance, so that a query optimization scheme is created, the overall query processing performance of the database is improved, the hardware performance of the computer is utilized to the maximum extent, and extra time overhead is reduced.
It will be appreciated by those of ordinary skill in the art that the embodiments described herein are intended to assist the reader in understanding the principles of the invention, and the invention is not to be construed as limited to the specifically recited embodiments and examples. Those skilled in the art can make various other specific changes and combinations based on the teachings of the present invention without departing from the spirit of the invention, and these changes and combinations are within the scope of the invention.

Claims (10)

1. An adaptive database hybrid query method is characterized by comprising the following steps:
s1, starting a database system service process and monitoring a query request;
s2, submitting an sql query statement to the database;
s3, generating a query plan according to the monitored sql query statement, and starting a query task control process;
s4, using a dynamic link library to intercept the CUDA Runtime API calls initiated by the query task control process, and converting them into IPC requests to the service process;
s5, after the server receives the IPC request, creating a new service thread to process subsequent requests from that process, and passing the results back to the query task control process through the IPC mechanism;
and S6, when the query task control process completes the query, initiating an exit request to the server, and the service process terminating the corresponding service thread.
2. The adaptive database hybrid query method according to claim 1, wherein the step S1 specifically includes:
starting a database system service process, creating a CUDA context, pre-loading the column-store data in the folder specified by the startup parameter datadir into system memory, and then monitoring query requests.
3. The adaptive database hybrid query method according to claim 1, wherein the step S3 specifically includes:
and generating a query plan based on a cost learning model by utilizing a query plan generation module according to the monitored sql query statement, setting the LD_PRELOAD environment variable to the database dynamic link library path, executing the query plan, and starting a query task control process.
4. The adaptive database hybrid query method according to claim 3, wherein the query plan generating module in step S3 specifically includes:
a self-adjusting execution time estimator for calculating estimated execution times of all available algorithms capable of performing the operation;
the algorithm selector is used for selecting an optimal algorithm according to the estimated execution times and an optimization heuristic;
and the hybrid query optimizer is used for constructing a physical query plan according to the logical query plan determined by the optimal algorithm and dispatching the database operations to the CPU and GPU devices.
5. The adaptive database hybrid query method according to claim 4, wherein the step S3 is to generate a query plan based on the cost learning model by using a query plan generation module according to the monitored sql query statement, and specifically includes the following sub-steps:
s31, for an operator OP (D, O) consisting of the data set D and the operation O, searching all available algorithms capable of executing the operation O by using an algorithm selector;
s32, calculating an estimated execution time ExeTime (D, A) for each algorithm A and each data set D by using a self-adjusting execution time estimator;
s33, selecting the algorithm with the shortest estimated execution time as the optimal algorithm by using the algorithm selector;
and S34, using the hybrid query optimizer to construct a physical query plan from the logical query plan determined by the optimal algorithm, and dispatching the database operations to the CPU and GPU devices.
6. The adaptive database hybrid query method according to claim 5, wherein the query task control process combines the query plan generated by the query plan generation module with the database schema, compiles and forms a CPU host program for controlling data reading, and calls a CUDA Runtime API to control the flow of the whole query task according to the query plan.
7. The adaptive database hybrid query method according to claim 6, wherein the step S4 specifically includes:
intercepting the CUDA Runtime API calls initiated by the query task control process by using a dynamic link library, using a globally unique CUDA context in the service process, creating a separate stream in the CUDA context when the service process starts each query service thread so as to issue the GPU calls requested by that query process, and mapping the CUDA Runtime API calls contained in the CPU host program into IPC calls to the service process.
8. The adaptive database hybrid query method according to claim 1, wherein the step S5 specifically includes:
after receiving the IPC request, the server creates a new service thread to process subsequent requests from that process, continuously parses the requests from the same query task process, performs the corresponding resource management and kernel invocation operations through the database core management layer, and passes the results back to the query task control process through the IPC mechanism.
9. The adaptive database hybrid query method according to claim 8, wherein when different query processes use the same data, the service process determines, according to the working state of the system, whether to transmit the data to the GPU over PCIe or to directly use the data already stored in the GPU memory.
10. The adaptive database hybrid query method of claim 8, wherein the database core management layer comprises column store data sharing, GPU hardware management, and kernel call logic.
CN202010581766.9A, filed 2020-06-23: Self-adaptive database hybrid query method (Active; granted as CN111767305B)

Priority Applications (1)

CN202010581766.9A, priority date 2020-06-23, filing date 2020-06-23: Self-adaptive database hybrid query method (granted as CN111767305B)

Applications Claiming Priority (1)

CN202010581766.9A, priority date 2020-06-23, filing date 2020-06-23: Self-adaptive database hybrid query method (granted as CN111767305B)

Publications (2)

CN111767305A (en), published 2020-10-13
CN111767305B (en), published 2023-04-07

Family

Family ID: 72721797

Family Applications (1)

CN202010581766.9A (Active), priority date 2020-06-23, filing date 2020-06-23: Self-adaptive database hybrid query method (granted as CN111767305B)

Country Status (1)

Country Link
CN (1) CN111767305B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641424A (en) * 2021-10-13 2021-11-12 北京安华金和科技有限公司 Database operation processing method and system

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101719935A (en) * 2009-12-04 2010-06-02 广州市聚晖电子科技有限公司 Security protection arming and disarming method and security protection system
CN102981807A (en) * 2012-11-08 2013-03-20 北京大学 Graphics processing unit (GPU) program optimization method based on compute unified device architecture (CUDA) parallel environment
US20130117305A1 (en) * 2010-07-21 2013-05-09 Sqream Technologies Ltd System and Method for the Parallel Execution of Database Queries Over CPUs and Multi Core Processors
CN106943679A (en) * 2017-04-24 2017-07-14 安徽慧软科技有限公司 Photon and electron dose calculate method under magnetic field based on GPU Monte carlo algorithms
CN109791493A (en) * 2016-09-29 2019-05-21 英特尔公司 System and method for the load balance in the decoding of out-of-order clustering
CN110569312A (en) * 2019-11-06 2019-12-13 创业慧康科技股份有限公司 big data rapid retrieval system based on GPU and use method thereof
US20200042076A1 (en) * 2018-07-31 2020-02-06 Nvidia Corporation Voltage/Frequency Scaling for Overcurrent Protection With On-Chip ADC
CN110795097A (en) * 2019-11-04 2020-02-14 腾讯科技(深圳)有限公司 Page processing method and device, computer equipment and storage medium


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
XINGXING JIN ET AL.: "Improved GPU SIMD control flow efficiency via hybrid warp size mechanism", Microprocessors and Microsystems *
LIU, YONG: "Research on In-Memory Database Index Technology Based on GPU", China Doctoral Dissertations Full-text Database, Information Science and Technology *
CHENG, WEI: "Research on CUDA Program Synthesis Method Based on Behavioral Feature Profiling", China Masters' Theses Full-text Database, Information Science and Technology *
XIE, LUNYI ET AL.: "Design of BSC Function Test System Based on ACE Reactor", Journal of Computer Applications *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113641424A (en) * 2021-10-13 2021-11-12 北京安华金和科技有限公司 Database operation processing method and system
CN113641424B (en) * 2021-10-13 2022-02-01 北京安华金和科技有限公司 Database operation processing method and system

Also Published As

CN111767305B (en), published 2023-04-07

Similar Documents

Publication Publication Date Title
US10545789B2 (en) Task scheduling for highly concurrent analytical and transaction workloads
CN109241191B (en) Distributed data source heterogeneous synchronization platform and synchronization method
US8209697B2 (en) Resource allocation method for a physical computer used by a back end server including calculating database resource cost based on SQL process type
US8352517B2 (en) Infrastructure for spilling pages to a persistent store
US20140156636A1 (en) Dynamic parallel aggregation with hybrid batch flushing
US9875186B2 (en) System and method for data caching in processing nodes of a massively parallel processing (MPP) database system
Minier et al. SaGe: Web preemption for public SPARQL query services
Breß et al. Why it is time for a HyPE: A hybrid query processing engine for efficient GPU coprocessing in DBMS
US11068506B2 (en) Selective dispatching of OLAP requests using execution statistics
CN107480202B (en) Data processing method and device for multiple parallel processing frameworks
CN112905339B (en) Task scheduling execution method, device and system
CN111767305B (en) Self-adaptive database hybrid query method
CN114327880A (en) Computing method of light code heterogeneous distributed system
CN116881192A (en) Cluster architecture for GPU and internal first-level cache management method thereof
CN115237885A (en) Parameter adjusting method and device of data storage system
CN112311695B (en) On-chip bandwidth dynamic allocation method and system
CN114820275A (en) Dynamic timer and VirtiO GPU performance optimization method
US10013353B2 (en) Adaptive optimization of second level cache
Liang et al. Correlation-aware replica prefetching strategy to decrease access latency in edge cloud
CN116188239B (en) Multi-request concurrent GPU (graphics processing unit) graph random walk optimization realization method and system
Zou et al. Live migration in Greenplum database based on SDN via improved gray wolf optimization algorithm
CN117234745B (en) Heterogeneous computing platform-oriented database load balancing method and device
US20220027369A1 (en) Query-based routing of database requests
CN115905306B (en) Local caching method, equipment and medium for OLAP analysis database
Cao et al. DLFaaS: Serverless Platform for Data-Intensive Tasks Based on Interval Access Patterns

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
GR01: Patent grant