CN109753306A - A kind of big data processing method based on a pre-compiled function caching engine - Google Patents
A kind of big data processing method based on a pre-compiled function caching engine
- Publication number
- CN109753306A CN201811628499.5A
- Authority
- CN
- China
- Prior art keywords
- function
- caching
- data processing
- big data
- precompiled
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
The embodiment of the invention discloses a big data processing method based on a pre-compiled function caching engine. The method comprises: setting the engine cache size and optimization strategy; generating an expression tree according to user requirements; traversing the expression tree to generate a feature string of the expression tree while extracting and caching constant node data; generating a unique value of the feature string; looking up, according to the unique value, whether a pre-compiled just-in-time function is stored in the cache; if no such function is stored, generating and caching a just-in-time function; and substituting the constant node data into the just-in-time function for processing. Under high-concurrency single-point query scenarios, the delay of mass data processing is kept at the millisecond level, which greatly accelerates big data processing, makes effective use of CPU and memory resources, relieves the query load of distributed OLAP or OLTP databases, and meets high-performance big data processing requirements.
Description
Technical field
Embodiments of the present invention relate to the technical field of computer algorithms, and in particular to a big data processing method based on a pre-compiled function caching engine.
Background technique
In the era of cloud computing and big data, business operations keep growing in scale and complexity, and the volume of data generated becomes ever larger. How to quickly retrieve the desired data from mass data under high-concurrency scenarios places higher demands on distributed databases. A typical user scenario involves mass data in which the same expression is evaluated repeatedly with only the constants changing — for example, a bank auditing its users: the output conditions are identical each time and only the user number changes. Traditional database software issues the same SQL (Structured Query Language) statement each time and repeats the full processing, so storage resources, memory resources, computing resources, and the like are applied for and released over and over, creating a query-time bottleneck.
Currently popular distributed, high-performance big data computing frameworks include the following:
(1) The MapReduce parallel computing framework based on Hadoop. A data set is split into blocks that Map (mapping) functions of a MapReduce job process in parallel; the framework sorts the data produced by the Map phase and then feeds it as input to the Reduce (reduction) functions. Typically, both the input and the output of a job are stored in a distributed file system (HDFS).
(2) The high-performance in-memory computing framework based on LLVM (Low Level Virtual Machine) just-in-time compilation. An expression tree is generated according to the user's requirements, and a just-in-time function is generated from the expression tree. The just-in-time function contains only the instructions relevant to the processing task at hand, which greatly reduces the instruction count and improves performance.
Existing computing models have the following disadvantages. The MR (MapReduce) model writes all intermediate task results to HDFS and then reads them back from HDFS for further computation, so its latency is much higher than that of memory-based models; MR is therefore mostly used in non-real-time computing scenarios. The LLVM high-performance in-memory computing framework does have low latency, but under high-concurrency scenarios the time to compile an LLVM JIT function is long, because the LLVM JIT engine must handle memory management, symbol relocation, external symbol resolution, and many other complex compiler back-end issues. The creation of each LLVM JIT function therefore contends for system resources such as CPU and memory, and performance degrades.
Summary of the invention
To this end, embodiments of the present invention provide a big data processing method based on a pre-compiled function caching engine, to solve the problem that existing computing frameworks suffer long single-point query latency under mass-data, high-concurrency scenarios and cannot meet high-performance requirements.
To achieve the above objects, embodiments of the present invention provide the following technical solutions:
In a first aspect of embodiments of the present invention, a big data processing method based on a pre-compiled function caching engine is provided, the method comprising:
setting the engine cache size and optimization strategy;
generating an expression tree according to user requirements;
traversing the expression tree to generate a feature string of the expression tree while extracting and caching constant node data;
generating a unique value of the feature string;
looking up, according to the unique value, whether a pre-compiled just-in-time function is stored in the cache;
if no such function is stored, generating and caching a just-in-time function;
substituting the constant node data into the just-in-time function for processing.
Further, setting the engine cache size and optimization strategy comprises: presetting the cache size and dividing the cache into buckets.
Further, dividing the cache into buckets comprises:
presetting the number of buckets according to a user configuration file;
or, estimating the memory FM required for each just-in-time function pointer according to the number of just-in-time function pointers in a historical-data statistics histogram, and calculating the number of buckets as BucketNum = M/FM, where BucketNum is the number of buckets and M is the preset size of the cache.
Further, generating the feature string of the expression tree comprises:
obtaining attribute information of the expression tree and merging the attribute information to generate the feature string of the expression tree, wherein the attribute information includes data type, data column width, or data index information.
Further, extracting and caching the constant node data comprises:
establishing an index for the constant node data, dividing cache space according to the index, and storing each piece of constant node data into the cache space corresponding to its index.
Further, generating the unique value of the feature string comprises:
calculating a message digest string of the first 32 characters of the feature string according to the MD5 message digest algorithm, calculating a Huffman string of the feature string according to the Huffman coding method, and merging the computed message digest string and Huffman string to obtain the unique value of the feature string.
Further, looking up, according to the unique value, whether a pre-compiled just-in-time function is stored in the cache comprises:
looking up the first 32 characters of the unique value in the cache as a keyword; if found, continuing to compare the remaining characters after the first 32 characters of the unique value with the found string using the memcmp function.
Further, generating the just-in-time function comprises:
establishing an instruction library in which a variety of intermediate representation instructions are stored;
determining a corresponding function execution flow according to user requirements;
selecting, from the instruction library, the intermediate representation instructions matching the function execution flow, and generating an intermediate instruction set;
generating the just-in-time function according to the intermediate instruction set.
Further, substituting the constant node data into the just-in-time function for processing further comprises:
executing a constant acquisition instruction to obtain the constant node data, the constant acquisition instruction being packaged in a static library and stored in the cache.
In a second aspect of embodiments of the present invention, a computer storage medium is provided. The computer storage medium stores computer program instructions for executing the pre-compiled function caching engine method described above.
The big data processing method based on a pre-compiled function caching engine proposed by the embodiments of the present invention has the following advantages: an expression tree is generated according to user requirements; traversing the expression tree produces its feature string; constant node data are extracted and cached; and finally a unique value of the feature string is generated, by which a pre-compiled just-in-time function is looked up in the cache and invoked directly for computation. This avoids the repeated application and release of storage, memory, and computing resources that occurs when the computing engine encounters identical expressions differing only in their constants. Compared with a server not using the pre-compiled function caching engine, a server using it keeps the delay of mass data processing at the millisecond level under high-concurrency single-point query scenarios, greatly accelerates big data processing, makes effective use of CPU and memory resources, relieves the query load of distributed OLAP or OLTP databases, and meets high-performance big data processing requirements.
Detailed description of the invention
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. It is apparent that the drawings in the following description are merely exemplary, and those of ordinary skill in the art can derive other implementation drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a big data processing method based on a pre-compiled function caching engine provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of the just-in-time function lookup in a big data processing method based on a pre-compiled function caching engine provided by an embodiment of the present invention.
Specific embodiment
Embodiments of the present invention are described below by way of specific examples, and those skilled in the art can readily understand other advantages and effects of the present invention from the content disclosed in this specification. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
As shown in Fig. 1, an embodiment of the present invention provides a big data processing method based on a pre-compiled function caching engine, the method comprising the following steps:
S10, setting the engine cache size and optimization strategy.
Further, the engine cache size is set by one of the following methods:
(1) Compute one eighth of the free memory available to the current system process, pre-allocate that amount of memory as the cache, and then allocate and manage the memory through the Slab Allocator mechanism.
The principle of slab allocation is to first request a large block of memory from the operating system, divide it into chunks of various sizes, and group chunks of the same size into a slab class, where a chunk is the smallest unit for storing key-value data. The slab allocator manages memory by object, grouping objects of the same type into one class (process descriptors, for example, form one class). Whenever such an object is requested, the slab allocator hands out one unit from a slab list; when the object is released, it is saved back into that list rather than returned directly to the buddy system, thereby avoiding internal fragmentation. The slab allocator does not discard allocated objects but keeps released ones in memory, so that when a new object of the same type is requested it can be obtained directly without repeated initialization.
(2) Read the cache size from the user configuration file, pre-allocate that amount of memory, and allocate and manage it through the Slab Allocator mechanism.
(3) Read the number of just-in-time function pointers from the user configuration file.
(4) Read from the user configuration file the maximum duration a cached object is allowed to remain idle.
(5) Read from the user configuration file the maximum duration a cached object is allowed to remain in the cache.
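As a rough illustration (not the patent's implementation), the slab-style reuse described in method (1) can be sketched as a per-size free list in which released chunks are kept in memory for reuse instead of being returned to the system allocator; `SlabClass` and its methods are hypothetical names:

```python
class SlabClass:
    """Keeps freed chunks of one fixed size on a free list for reuse."""

    def __init__(self, chunk_size):
        self.chunk_size = chunk_size
        self.free_list = []          # released chunks kept in memory, not discarded

    def alloc(self):
        # Reuse a cached chunk if one is available, otherwise create a new one.
        if self.free_list:
            return self.free_list.pop()
        return bytearray(self.chunk_size)

    def free(self, chunk):
        # Return the chunk to the slab class, not to the system allocator.
        self.free_list.append(chunk)


slab = SlabClass(64)
a = slab.alloc()
slab.free(a)
b = slab.alloc()      # the same chunk is handed out again, no re-initialization
```

Because the freed chunk is handed out again on the next allocation of the same class, repeated application and release of memory is avoided, which is the point the description makes about the buddy system.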
Further, setting the engine cache optimization strategy comprises:
dividing the cache into buckets, which splits the cache into multiple levels so as to achieve efficient retrieval. Further, dividing the cache into buckets comprises:
presetting the number of buckets according to the user configuration file;
or, estimating the memory FM required for each just-in-time function pointer according to the number of just-in-time function pointers in a historical-data statistics histogram, and calculating the number of buckets as BucketNum = M/FM, where BucketNum is the number of buckets and M is the preset size of the cache.
S20, generating an expression tree according to user requirements.
S30, traversing the expression tree to generate its feature string TCS (Tree Character String), while extracting and caching constant node data.
Further, generating the feature string TCS of the expression tree comprises: obtaining the attribute information of the expression tree and merging the attribute information to generate the feature string of the expression tree, wherein the attribute information includes data type, data column width, or data index information.
Further, extracting and caching the constant node data comprises: establishing an index for the constant node data, dividing cache space according to the index, and storing each piece of constant node data into the cache space corresponding to its index.
Specifically, while traversing the expression tree, an index is assigned to each constant node as it is visited; a block of memory corresponding to that index is then carved out of the constant memory manager, and the constant node data corresponding to the index are saved into that cache space. For example, when an int-type constant node with the value 10 is visited for the first time, its index is set to 1, an int-sized cache space is opened up, and the int value 10 is placed in it; when a long-type constant node with the value 20 is visited next, its index is set to 2, a long-sized cache space is opened up, and the long value 20 is placed in it. In this way an index is generated for every constant node and the constant node data are stored into the cache spaces corresponding to their indexes. The constant data of these nodes are subsequently always written into these same cache blocks, which reduces memory application and release and thus effectively improves performance.
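A minimal sketch of the S30 traversal, under an invented dictionary-based node layout: constant nodes contribute only their type to the feature string (so expressions differing only in constants share one feature), while their values are cached by a running index starting at 1, as in the example above:

```python
def traverse(node, feature_parts, constants):
    """Collect feature-string parts and cache constant node data by index."""
    kind, payload, dtype = node["kind"], node["value"], node["dtype"]
    if kind == "const":
        index = len(constants) + 1           # indexes start at 1, as in the example
        constants[index] = (dtype, payload)  # cached in its own slot by index
        feature_parts.append(f"const:{dtype}")  # the value is NOT part of the feature
    else:
        feature_parts.append(f"{kind}:{payload}:{dtype}")
    for child in node.get("children", []):
        traverse(child, feature_parts, constants)

# hypothetical tree for the predicate product_id = 1
tree = {"kind": "eq", "value": "=", "dtype": "bool", "children": [
    {"kind": "col", "value": "product_id", "dtype": "int", "children": []},
    {"kind": "const", "value": 1, "dtype": "int", "children": []},
]}
parts, consts = [], {}
traverse(tree, parts, consts)
tcs = "|".join(parts)
print(tcs)      # eq:=:bool|col:product_id:int|const:int
print(consts)   # {1: ('int', 1)}
```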
S40, generating the unique value TCSUV (Tree Character String Unique Value) of the feature string. This step guarantees that feature strings with the same features generate the same unique value, without ambiguity.
Further, generating the unique value of the feature string comprises:
calculating the message digest string MS (MD5 String) of the first 32 characters of the feature string according to the MD5 message digest algorithm, calculating the Huffman string HS (Huffman String) of the feature string according to the Huffman coding method, and merging the computed message digest string MS and Huffman string HS to obtain the unique value TCSUV of the feature string.
The MD5 message digest algorithm is a widely used cryptographic hash function that produces a 128-bit (16-byte) hash value. Huffman coding is a form of variable-length coding (VLC) that constructs distinct prefix code words of minimal average length according to the occurrence probability of each character.
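A sketch of S40 under the stated scheme — the MD5 digest of the first 32 characters concatenated with a Huffman encoding of the whole string. The Huffman tie-breaking and the exact concatenation format are illustrative choices, not dictated by the patent:

```python
import hashlib
import heapq
from collections import Counter

def huffman_codes(text):
    """Build prefix codes from character frequencies (ties broken by character)."""
    heap = [(freq, ch, {ch: ""}) for ch, freq in sorted(Counter(text).items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, k1, c1 = heapq.heappop(heap)
        f2, k2, c2 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged.update({ch: "1" + code for ch, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, min(k1, k2), merged))
    return heap[0][2]

def unique_value(feature_string):
    # MS: MD5 of the first 32 characters (32 hex characters long)
    ms = hashlib.md5(feature_string[:32].encode()).hexdigest()
    # HS: Huffman encoding of the full feature string
    codes = huffman_codes(feature_string)
    hs = "".join(codes[ch] for ch in feature_string)
    return ms + hs   # first 32 chars = digest, remainder = Huffman bits
```

The key property used later in S50 is that the first 32 characters of the unique value form a fixed-length keyword, while the Huffman tail disambiguates MD5-prefix collisions.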
S50, looking up, according to the unique value, whether a pre-compiled just-in-time function is stored in the cache.
Further, looking up whether a pre-compiled just-in-time function is stored in the cache according to the unique value comprises checking whether TCSUV is stored in the engine cache, as shown in Fig. 2.
Specifically: (1) the first 32 characters of the unique value are looked up in the cache as the keyword KEY; if found, the remaining characters after the first 32 characters of the unique value are compared with the found string using the memcmp function. If the memcmp comparison result is true, the final result is true, indicating that a pre-compiled just-in-time function is stored in the cache; if the memcmp comparison result is false, the final result is false, indicating that no pre-compiled just-in-time function is stored in the cache.
(2) If the first 32 characters of the unique value, used as the keyword KEY, are not found in the cache, the final result is false, indicating that no pre-compiled just-in-time function is stored in the cache.
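The two-stage lookup of S50 can be sketched with a dictionary standing in for the engine cache; Python's `==` on the tail stands in for the `memcmp` comparison, and the cache layout is an illustrative assumption:

```python
def lookup_jit(cache, unique_value):
    """Two-stage lookup: keyed lookup on the 32-char prefix,
    then a comparison of the remainder (the memcmp step)."""
    key, rest = unique_value[:32], unique_value[32:]
    entry = cache.get(key)
    if entry is None:
        return None                  # prefix not found: no pre-compiled function
    stored_rest, jit_fn = entry
    if stored_rest == rest:          # stands in for memcmp on the tail
        return jit_fn
    return None                      # prefix hit, but tail mismatch

cache = {}
uv = "a" * 32 + "0101"
cache[uv[:32]] = (uv[32:], "compiled-fn")
print(lookup_jit(cache, uv))                    # compiled-fn
print(lookup_jit(cache, "a" * 32 + "0110"))     # None (prefix hit, tail mismatch)
```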
S60, if no such function is stored, generating and caching a just-in-time function.
Further, generating the just-in-time function comprises:
establishing an instruction library in which a variety of intermediate representation instructions are stored;
wherein the intermediate representation instructions include addition, subtraction, multiplication, division, comparison, bit operation, logical operation, standard-library call, static-library call, variable declaration, load, store, and similar instructions;
determining a corresponding function execution flow according to user requirements;
selecting, from the instruction library, the intermediate representation instructions matching the function execution flow, and generating an intermediate instruction set;
generating the just-in-time function according to the intermediate instruction set.
Wherein determining the corresponding function execution flow comprises:
traversing the expression tree to obtain the attribute information of the expression tree;
determining the function execution flow according to the expression tree and its attribute information. Here, the function execution flow specifies which operations are performed, which step comes first and which follows, what is done when a condition holds, what is done when it does not, and so on.
Further, the intermediate representation instruction set is compiled into the just-in-time function using low-level virtual machine just-in-time compilation technology (Low Level Virtual Machine Just-in-time Compilation, abbreviated LLVM-JIT). Just-in-time compilation is also referred to as on-the-fly or real-time compilation.
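A toy illustration of selecting intermediate representation instructions from an instruction library according to an execution flow; the LLVM-IR-like templates and the `flow` structure are invented for illustration and are not the patent's actual instruction library:

```python
# Hypothetical instruction library: op name -> intermediate-representation template
INSTRUCTION_LIBRARY = {
    "load":    "%{dst} = load {ty}, ptr %{src}",
    "icmp_eq": "%{dst} = icmp eq {ty} %{a}, %{b}",
    "ret":     "ret {ty} %{src}",
}

def build_instruction_set(execution_flow):
    """Pick the templates matching each step of the function execution flow."""
    return [INSTRUCTION_LIBRARY[step["op"]].format(**step["args"])
            for step in execution_flow]

# execution flow for a predicate like product_id = X
flow = [
    {"op": "load",    "args": {"dst": "v", "ty": "i32", "src": "col"}},
    {"op": "icmp_eq", "args": {"dst": "c", "ty": "i32", "a": "v", "b": "x"}},
    {"op": "ret",     "args": {"ty": "i1", "src": "c"}},
]
for line in build_instruction_set(flow):
    print(line)
```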
Further, in this embodiment, following the scheme above, constant node data are stored, managed, and read through a constant memory manager. The operation of reading constant node data through the constant memory manager is turned into a constant acquisition instruction, which is encapsulated in the form of a static library and stored in the cache. When LLVM generates the constant acquisition instruction set it does so by calling the static library, and the instruction set determined according to the user requirements is finally compiled into the LLVM-JIT function.
S70, substituting the constant node data into the just-in-time function for processing.
Through the above steps, an assembled LLVM JIT function is obtained. However, this LLVM JIT function only describes which values to fetch and how to compute; to actually perform a computation with it, the constant node data used in the computation must also be passed in as parameters and substituted into the LLVM JIT function.
Further, substituting the constant node data into the just-in-time function for processing further comprises:
executing the constant acquisition instruction to obtain the constant node data, the constant acquisition instruction being packaged in a static library and stored in the cache. The constant acquisition instruction is generated as described above, and LLVM obtains the constant data by calling the static library.
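The compile-once, substitute-constants-per-call pattern of S60/S70 can be mimicked in Python, with `compile`/`eval` standing in for the far more expensive LLVM-JIT step; the cache key, expression template, and names are hypothetical:

```python
jit_cache = {}

def get_or_compile(unique_value, expression_template):
    """Compile the abstract expression once; later calls reuse the cached function."""
    if unique_value not in jit_cache:
        # stands in for the expensive LLVM-JIT compilation step
        code = compile(expression_template, "<jit>", "eval")
        jit_cache[unique_value] = lambda **consts: eval(code, {}, consts)
    return jit_cache[unique_value]

fn = get_or_compile("uv-001", "product_id == X")
print(fn(product_id=1, X=1))   # True
print(fn(product_id=2, X=1))   # False — same cached function, different constants
```

The point of the pattern is that only the constant parameters change between calls; the compiled body is fetched from the cache and reused.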
To facilitate understanding, an example application of the pre-compiled function caching engine method is given below. It should be understood that this example is not a limitation on the above technical solution.
For example, a bank performs mass data processing to build customer profiles, and each account manager operates the same distributed database. Suppose a hundred or more account managers operate the distributed database simultaneously to generate customer profiles; the concurrency of the distributed database then reaches one hundred or more. Each account manager wants to find out, through the distributed database, which users hold a particular product, so SQL (Structured Query Language) statements such as the following are issued to the database:
Select name from product where product_id='1';
Select name from product where product_id='2';
In a traditional distributed database, every query issues a similar SQL statement and the full processing is repeated each time, so the computing engine repeatedly applies for and releases storage, memory, and computing resources, creating a query-time bottleneck.
With the pre-compiled function caching engine method of this embodiment, the predicate part of the SQL (the part that requires expression evaluation) is scanned: constants are recorded when encountered, features are extracted from non-constants, and a unique value is generated for this type of expression. The pre-compiled function caching engine abstracts the original expressions (product_id='1', product_id='2', and so on) into an expression with the type feature (product_id=X); the LLVM JIT function for this type feature is pre-compiled and stored in the cache, and when other SQL statements issue expressions with the same feature, the LLVM JIT function is fetched from the cache by the expression's unique value and used for the computation.
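The abstraction of the predicate into a type-feature expression such as (product_id=X) can be sketched as replacing quoted literals with a placeholder and collecting them; the regex handles only single-quoted literals and is purely illustrative:

```python
import re

def abstract_predicate(sql):
    """Replace literal constants in the predicate with a placeholder and
    collect them, so queries differing only in constants share one feature."""
    constants = re.findall(r"'([^']*)'", sql)
    template = re.sub(r"'[^']*'", "X", sql)
    return template, constants

t1, c1 = abstract_predicate("select name from product where product_id='1'")
t2, c2 = abstract_predicate("select name from product where product_id='2'")
print(t1 == t2)   # True: both queries map to the same type-feature expression
print(c1, c2)     # ['1'] ['2']
```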
The performance with and without the pre-compiled function caching engine was tested on servers with the same hardware configuration; the test results are shown in the following tables:
1) Performance test results using the pre-compiled function caching engine:
2) Performance test results without the pre-compiled function caching engine:
| Data volume | Concurrency | TPS | Average response time (seconds) |
|---|---|---|---|
| 700,000,000 rows of data | 100 | 135 | 0.8 |
| 700,000,000 rows of data | 200 | 155 | 1.3 |
| 700,000,000 rows of data | 300 | 172 | 2.4 |
The test data show that, under the same hardware configuration, the ratio of TPS (transactions processed per second) between the server using the pre-compiled function caching engine and the server not using it is about 4:1, and the ratio of average response times is about 9:40. Compared with not using the engine, the server using the pre-compiled function caching engine keeps the delay of mass data processing at the millisecond level under high-concurrency single-point query scenarios, greatly accelerates big data processing, makes effective use of CPU and memory resources, and relieves the query load of distributed OLAP or OLTP databases.
In another embodiment of the present invention, a computer storage medium is further provided. The computer storage medium stores computer program instructions for executing the pre-compiled function caching engine method described above.
Although the present invention has been described in detail above by means of general descriptions and specific embodiments, modifications or improvements may be made on the basis of the present invention, as will be apparent to those skilled in the art. Therefore, such modifications or improvements made without departing from the spirit of the present invention shall fall within the protection scope claimed by the present invention.
Claims (10)
1. A big data processing method based on a pre-compiled function caching engine, characterized in that the method comprises:
setting the engine cache size and optimization strategy;
generating an expression tree according to user requirements;
traversing the expression tree to generate a feature string of the expression tree while extracting and caching constant node data;
generating a unique value of the feature string;
looking up, according to the unique value, whether a pre-compiled just-in-time function is stored in the cache;
if no such function is stored, generating and caching a just-in-time function;
substituting the constant node data into the just-in-time function for processing.
2. The big data processing method based on a pre-compiled function caching engine according to claim 1, characterized in that setting the engine cache size and optimization strategy comprises: presetting the cache size and dividing the cache into buckets.
3. The big data processing method based on a pre-compiled function caching engine according to claim 2, characterized in that dividing the cache into buckets comprises:
presetting the number of buckets according to a user configuration file;
or, estimating the memory FM required for each just-in-time function pointer according to the number of just-in-time function pointers in a historical-data statistics histogram, and calculating the number of buckets as BucketNum = M/FM, where BucketNum is the number of buckets and M is the preset size of the cache.
4. The big data processing method based on a pre-compiled function caching engine according to claim 1, characterized in that generating the feature string of the expression tree comprises:
obtaining attribute information of the expression tree and merging the attribute information to generate the feature string of the expression tree, wherein the attribute information includes data type, data column width, or data index information.
5. The big data processing method based on a pre-compiled function caching engine according to claim 1, characterized in that extracting and caching the constant node data comprises:
establishing an index for the constant node data, dividing cache space according to the index, and storing each piece of constant node data into the cache space corresponding to its index.
6. The big data processing method based on a pre-compiled function caching engine according to claim 1, characterized in that generating the unique value of the feature string comprises:
calculating a message digest string of the first 32 characters of the feature string according to the MD5 message digest algorithm, calculating a Huffman string of the feature string according to the Huffman coding method, and merging the computed message digest string and Huffman string to obtain the unique value of the feature string.
7. The big data processing method based on a pre-compiled function caching engine according to claim 1, characterized in that looking up, according to the unique value, whether a pre-compiled just-in-time function is stored in the cache comprises:
looking up the first 32 characters of the unique value in the cache as a keyword; if found, continuing to compare the remaining characters after the first 32 characters of the unique value with the found string using the memcmp function.
8. The big data processing method based on a pre-compiled function caching engine according to claim 1, characterized in that generating the just-in-time function comprises:
establishing an instruction library in which a variety of intermediate representation instructions are stored;
determining a corresponding function execution flow according to user requirements;
selecting, from the instruction library, the intermediate representation instructions matching the function execution flow, and generating an intermediate instruction set;
generating the just-in-time function according to the intermediate instruction set.
9. The big data processing method based on a pre-compiled function caching engine according to claim 1, characterized in that substituting the constant node data into the just-in-time function for processing further comprises:
executing a constant acquisition instruction to obtain the constant node data, the constant acquisition instruction being packaged in a static library and stored in the cache.
10. A computer storage medium, characterized in that the computer storage medium stores computer program instructions, the computer program instructions being for executing the method according to any one of claims 1 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811628499.5A CN109753306A (en) | 2018-12-28 | 2018-12-28 | A kind of big data processing method based on a pre-compiled function caching engine
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811628499.5A CN109753306A (en) | 2018-12-28 | 2018-12-28 | A kind of big data processing method based on a pre-compiled function caching engine
Publications (1)
Publication Number | Publication Date |
---|---|
CN109753306A (en) | 2019-05-14 |
Family
ID=66404223
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811628499.5A Pending CN109753306A (en) | 2018-12-28 | 2018-12-28 | Big data processing method based on a precompiled function caching engine
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109753306A (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103440244A (en) * | 2013-07-12 | 2013-12-11 | 广东电子工业研究院有限公司 | Large-data storage and optimization method |
CN104965687A (en) * | 2015-06-04 | 2015-10-07 | 北京东方国信科技股份有限公司 | Big data processing method and apparatus based on instruction set generation |
WO2016018944A1 (en) * | 2014-07-29 | 2016-02-04 | Metanautix, Inc. | Systems and methods for a distributed query execution engine |
CN105979268A (en) * | 2016-05-05 | 2016-09-28 | 北京智捷伟讯科技有限公司 | Safe information transmission method based on lossless watermark embedding and safe video hiding |
CN107250983A (en) * | 2015-04-15 | 2017-10-13 | 华为技术有限公司 | The apparatus and method for parameterizing intermediate representation progress Just-In-Time are utilized in data base querying enforcement engine |
CN108027838A (en) * | 2015-09-24 | 2018-05-11 | 华为技术有限公司 | Database inquiry system and method |
2018-12-28 CN CN201811628499.5A patent/CN109753306A/en active Pending
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110502530A (en) * | 2019-07-03 | 2019-11-26 | 平安科技(深圳)有限公司 | Database functions call method, system, computer equipment and storage medium |
CN112445316A (en) * | 2019-08-27 | 2021-03-05 | 无锡江南计算技术研究所 | Compile-time low-power-consumption optimization method based on vector calculation |
WO2021184304A1 (en) * | 2020-03-19 | 2021-09-23 | 深圳市欢太科技有限公司 | Distributed cache compilation method and system |
CN114003629A (en) * | 2021-10-29 | 2022-02-01 | 深圳壹账通智能科技有限公司 | Efficient pre-compiling type cache data management method, device, equipment and medium |
CN116610455A (en) * | 2023-07-18 | 2023-08-18 | 之江实验室 | Resource constraint description system and method of programmable network element equipment |
CN116610455B (en) * | 2023-07-18 | 2023-12-05 | 之江实验室 | Resource constraint description system and method of programmable network element equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109753306A (en) | Big data processing method based on a precompiled function caching engine | |
CN107247808B (en) | Distributed NewSQL database system and picture data query method | |
Kim et al. | FAST: fast architecture sensitive tree search on modern CPUs and GPUs | |
Shi et al. | Oblivious RAM with O ((log N) 3) worst-case cost | |
EP3591547A1 (en) | Query optimization method and related device | |
CN108536692B (en) | Execution plan generation method and device and database server | |
US20120011144A1 (en) | Aggregation in parallel computation environments with shared memory | |
US20140351239A1 (en) | Hardware acceleration for query operators | |
Hentschel et al. | Column sketches: A scan accelerator for rapid and robust predicate evaluation | |
István et al. | Runtime parameterizable regular expression operators for databases | |
EP2469423B1 (en) | Aggregation in parallel computation environments with shared memory | |
Kim et al. | Designing fast architecture-sensitive tree search on modern multicore/many-core processors | |
Williams et al. | Enabling fine-grained HTTP caching of SPARQL query results | |
CN115269631A (en) | Data query method, data query system, device and storage medium | |
Wang et al. | Rencoder: A space-time efficient range filter with local encoder | |
Theocharidis et al. | SRX: efficient management of spatial RDF data | |
Müller et al. | An in-depth analysis of data aggregation cost factors in a columnar in-memory database | |
Shen et al. | An efficient LSM-tree-based SQLite-like database engine for mobile devices | |
Alam et al. | Performance of point and range queries for in-memory databases using radix trees on GPUs | |
Romero et al. | Bolt: Fast inference for random forests | |
Wang et al. | Optimization of LevelDB by separating key and value | |
do Carmo Oliveira et al. | Set similarity joins with complex expressions on distributed platforms | |
US11868331B1 (en) | Systems and methods for aligning big data tables in linear time | |
Carter et al. | Nanosecond indexing of graph data with hash maps and VLists | |
Gu et al. | Improving in-memory file system reading performance by fine-grained user-space cache mechanisms |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190514 ||