CN103020151B - Big data quantity batch processing system and big data quantity batch processing method - Google Patents


Info

Publication number
CN103020151B
CN103020151B CN201210480063.2A
Authority
CN
China
Prior art keywords
major key
data
paging
cache device
level
Prior art date
Legal status
Active
Application number
CN201210480063.2A
Other languages
Chinese (zh)
Other versions
CN103020151A (en)
Inventor
张�成
Current Assignee
Yonyou Network Technology Co Ltd
Original Assignee
Yonyou Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Yonyou Network Technology Co Ltd
Priority to CN201210480063.2A
Publication of CN103020151A
Application granted
Publication of CN103020151B


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention provides a big data quantity batch processing system, comprising: a middleware unit for sending a query request to a first-level cache device, receiving a second-level paging primary-key set from a second-level cache device, querying the database for pending data according to the second-level paging primary-key set and, after performing computation on the pending data, sending a data-persistence request to the database; the first-level cache device, for querying the database for the primary-key set satisfying the query request, generating a first-level paging primary-key set from that primary-key set, and returning the first-level paging primary-key set to the second-level cache device; and the second-level cache device, for generating the second-level paging primary-key set from the first-level paging primary-key set and returning it to the middleware unit. The invention also provides a big data quantity batch processing method. The technical scheme of the invention can greatly increase the speed at which the system processes massive data and reduce system processing time, thereby improving the overall performance of the system.

Description

Big data quantity batch processing system and big data quantity batch processing method
Technical field
The present invention relates to the field of computer technology, and in particular to a big data quantity batch processing system and a big data quantity batch processing method.
Background technology
In today's large-scale online transaction processing (OLTP) systems, a key index of system performance is the processing speed of certain core algorithms under large-data-volume scenarios, and this processing speed directly affects the performance of the whole system.
A large-scale information system often contains fairly complex business processing logic and algorithms. Under small data volumes their processing efficiency is usually ignored, because the system still responds quickly in that scenario; under large data volumes, however, performance bottlenecks appear, up to long periods without response or outright crashes. The common, core problems are these: first, if the data volume is too large, reading it into memory in one pass may overflow the system's memory; second, if the data are instead read and processed record by record in a loop, the batch algorithm degenerates into serial single-record processing, which also severely hurts system performance. The prior art uses background paging techniques to address such problems.
Existing paging techniques are all implemented on the database side. One approach pages directly with SQL statements: for example, the first query fetches records 1-50, the second fetches records 51-100, and so on. Although each read loads only a limited number of records into memory, the pressure on the database remains high, because every such SQL query still scans the complete result set, so processing speed is not optimized. Another approach pages in code; in JAVA, for example, a ResultSet is traversed in a loop: the first pass traverses records 1-50 and takes them out, while the second pass traverses records 1-100 but takes out only records 51-100. This approach still queries all preceding records on every pass. A third approach first finds the primary keys (PKs) of the result set satisfying the condition and stores them, numbered with sequence numbers, in a temporary table; the PK sets are then read out in batches by sequence number, and each PK set is used to query the data. Although this solves the problems above, the data must be read from the database temporary table in many batches, so under high concurrency the pressure on the database side is still large, and each batch involves a middleware-to-database connection, query, and network data transmission; in narrowband environments efficiency bottlenecks remain, and middleware resources are not reasonably utilized. Finally, none of these three schemes addresses how, once the data are loaded into memory, processing speed can be further optimized in a general way; they all consider only the data-loading bottleneck of the whole algorithm, whereas large-data-volume batch algorithms usually comprise both a query/loading phase and a processing/persistence phase. Nor do they address how the paging process can automatically adapt to multiple database types. All of these remain open problems.
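For concreteness, the first two database-side approaches described above can be sketched in JAVA (which the description itself references). This is a minimal illustration only; the table name orders, the column pk, and the MySQL-style LIMIT/OFFSET syntax are assumptions for the example, not taken from the patent:

```java
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class NaivePaging {
    // Approach 1: SQL-level paging. Each call still forces the database
    // to scan the result set up to the requested offset.
    static List<String> pageBySql(Connection conn, int pageNo, int pageSize) throws SQLException {
        String sql = "SELECT pk FROM orders ORDER BY pk LIMIT ? OFFSET ?"; // MySQL-style dialect
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, pageSize);
            ps.setInt(2, (pageNo - 1) * pageSize);
            try (ResultSet rs = ps.executeQuery()) {
                List<String> keys = new ArrayList<>();
                while (rs.next()) keys.add(rs.getString(1));
                return keys;
            }
        }
    }

    // Approach 2: code-level paging. The ResultSet is re-read from row 1
    // on every call; earlier pages are merely skipped, not avoided.
    static List<String> pageByResultSet(Connection conn, int pageNo, int pageSize) throws SQLException {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT pk FROM orders ORDER BY pk")) {
            List<String> keys = new ArrayList<>();
            int row = 0;
            while (rs.next() && keys.size() < pageSize) {
                row++;
                if (row > (pageNo - 1) * pageSize) keys.add(rs.getString(1)); // skip earlier pages
            }
            return keys;
        }
    }
}
```

Both sketches make the drawback visible: the work done per page grows with the page offset, which is exactly the inefficiency the two-level cache scheme below avoids.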
Therefore, how to make reasonable use of middleware resources and database resources during large-data loading, how to make the paging layer automatically adapt to multiple database types, and how to propose a complete solution and system that prevents middleware memory overflow, relieves database-side processing pressure, and reduces the volume of data transmitted over the network between middleware and database, are technical problems urgently awaiting solution.
Summary of the invention
Based on the problems above, the present invention proposes a big data quantity batch processing system that can prevent middleware memory overflow and relieve the processing pressure on the database side.
According to one aspect of the present invention, there is provided a big data quantity batch processing system comprising a middleware unit, a first-level cache device and a second-level cache device, wherein the middleware unit is configured to send a query request to the first-level cache device, receive a second-level paging primary-key set from the second-level cache device, query the database for the pending data according to the second-level paging primary-key set and, after performing computation on the pending data, send a data-persistence request to the database; the first-level cache device is configured to query the database for the primary-key set satisfying the query request, generate a first-level paging primary-key set from that primary-key set and return it to the second-level cache device; and the second-level cache device is configured to generate the second-level paging primary-key set from the first-level paging primary-key set and return it to the middleware unit.
With the above technical scheme, a two-level cache structure is added to the middleware's data-reading process, which greatly optimizes data reading and solves the technical problem of middleware memory overflow.
In the above technical scheme, preferably, the system may further comprise a first setting unit that sets the first-level cache threshold of the first-level cache device; the first-level cache device is further configured, when the data volume of the primary-key set is less than or equal to the first-level cache threshold, to return the first-level paging primary-key set directly to the second-level cache device, and, when the data volume of the primary-key set is greater than the first-level cache threshold, to create and populate a temporary table, page through the temporary table, and return the obtained primary keys to the second-level cache device.
If only a first-level cache structure were used to solve the middleware memory-overflow problem, the primary-key volume of each page would have to be controlled at a much finer granularity. With the two-level cache structure, the first level returns only primary keys, and each primary key is a fixed-length string occupying little memory, so the total primary-key volume per page of the first-level cache structure can be greatly increased.
In the above technical scheme, preferably, the system may further comprise a second setting unit that sets the second-level cache threshold of the second-level cache device; the second-level cache device is further configured, when the data volume of the first-level paging primary-key set is less than or equal to the second-level cache threshold, to return the second-level paging primary-key set directly to the middleware unit, and, when the data volume is greater than the second-level cache threshold, to store the second-level paging primary-key set temporarily in memory, take out one page of primary keys at a time from memory, and query the pending data according to each page of primary keys.
The second-level cache threshold of the second-level cache device is set according to the memory occupancy of the data actually processed by the middleware; setting the storage thresholds of each cache level reasonably maximizes the processing efficiency of the system.
In the above technical scheme, preferably, the middleware unit comprises: a transaction establishing subunit for establishing an independent transaction; and a lock subunit for applying a middleware-level primary-key lock to the pending data, processing the pending data, and releasing the middleware-level lock after processing ends.
Each page of data is processed in an independent transaction, i.e., the transaction is committed immediately after each page is processed, rather than a single transaction being opened at the outermost layer of the whole algorithm. This avoids holding locks on all affected database data for long periods, thereby improving the database's overall concurrent processing capacity and reducing the pressure on the database side.
In any of the above technical schemes, preferably, the system may further comprise an identification device that enables the first-level cache device to adapt to multiple database types.
According to a further aspect of the present invention, there is provided a big data quantity batch processing method comprising the following steps: step 402, a middleware unit sends a query request to a first-level cache device, and the database returns the primary-key set satisfying the query request to the first-level cache device; step 404, the first-level cache device generates a first-level paging primary-key set from the primary-key set and returns it to a second-level cache device; step 406, the second-level cache device generates a second-level paging primary-key set from the first-level paging primary-key set and returns it to the middleware unit; step 408, the middleware unit queries the database for the pending data according to the second-level paging primary-key set, performs computation on the pending data, and then sends a data-persistence request to the database.
With the above technical scheme, a two-level cache structure is added to the middleware's data-reading process, which greatly optimizes data reading and solves the technical problem of middleware memory overflow.
In the above technical scheme, preferably, step 404 specifically comprises: setting the first-level cache threshold of the first-level cache device; when the data volume of the primary-key set is less than or equal to the first-level cache threshold, returning the first-level paging primary-key set directly to the second-level cache device; and when the data volume of the primary-key set is greater than the first-level cache threshold, creating and populating a temporary table, paging through the temporary table, and returning the obtained primary keys to the second-level cache device.
If only a first-level cache structure were used to solve the middleware memory-overflow problem, the primary-key volume of each page would have to be controlled at a much finer granularity. With the two-level cache structure, the first level returns only primary keys, and each primary key is a fixed-length string occupying little memory, so the total primary-key volume per page of the first-level cache structure can be greatly increased.
In the above technical scheme, preferably, step 406 specifically comprises: setting the second-level cache threshold of the second-level cache device; when the data volume of the first-level paging primary-key set is less than or equal to the second-level cache threshold, returning the second-level paging primary-key set directly to the middleware unit; and when the data volume is greater than the second-level cache threshold, storing the second-level paging primary-key set temporarily in memory, taking out one page of primary keys at a time from memory, and querying the pending data according to each page of primary keys.
The second-level cache threshold of the second-level cache device is set according to the memory occupancy of the data actually processed by the middleware; setting the storage thresholds of each cache level reasonably maximizes the processing efficiency of the system.
In the above technical scheme, preferably, step 408 specifically comprises: establishing an independent transaction in the middleware unit, applying a middleware-level primary-key lock to the pending data, processing the pending data, and releasing the middleware-level lock after processing ends.
Each page of data is processed in an independent transaction, i.e., the transaction is committed immediately after each page is processed, rather than a single transaction being opened at the outermost layer of the whole algorithm. This avoids holding locks on all affected database data for long periods, thereby improving the database's overall concurrent processing capacity and reducing the pressure on the database side.
In any of the above technical schemes, preferably, step 404 may further comprise, at the first-level cache device, employing an identification device to adapt to multiple database types.
Therefore, the big data quantity batch processing method according to the present invention can greatly increase the speed at which the system processes large-data operations, balancing the use of middleware and database resources to the greatest extent: it reduces the load on each while making full use of each, so as to maximize system performance.
Brief description of the drawings
Fig. 1 shows a schematic diagram of large-data-volume batch processing in the related art;
Fig. 2 shows a block diagram of a big data quantity batch processing system according to an embodiment of the present invention;
Fig. 3 shows a schematic diagram of large-data-volume batch processing according to an embodiment of the present invention;
Fig. 4 shows a flow chart of a big data quantity batch processing method according to an embodiment of the present invention;
Fig. 5 shows a flow chart of a big data quantity batch processing method according to an embodiment of the present invention.
Detailed description of the embodiments
In order that the above objects, features and advantages of the present invention may be understood more clearly, the present invention is described in further detail below with reference to the drawings and specific embodiments.
Many specific details are set forth in the following description to facilitate a full understanding of the present invention, but the present invention can also be implemented in other ways different from those described here; therefore, the present invention is not limited to the specific embodiments disclosed below.
Before describing the big data quantity batch processing system according to the present invention, the existing large-data batch processing procedure is first briefly introduced.
As shown in Fig. 1, in a typical large-data-volume batch processing business scenario, all processing logic and algorithms roughly divide into the following phases: the middleware sends the database a request to query and load data; the middleware obtains the data and computes on it in memory; after processing ends, the middleware finally sends the database a data-persistence request, and the database completes the persistence operation. Such a procedure easily causes the middleware's memory to overflow. To solve this technical problem, the big data quantity batch processing system according to the present invention is disclosed.
Fig. 2 shows a block diagram of a big data quantity batch processing system according to an embodiment of the present invention.
As shown in Fig. 2, the big data quantity batch processing system 200 according to the embodiment of the present invention comprises: a middleware unit 202, a first-level cache device 204 and a second-level cache device 206, wherein the middleware unit 202 is configured to send a query request to the first-level cache device 204, receive a second-level paging primary-key set from the second-level cache device 206, query the database for the pending data according to the second-level paging primary-key set and, after performing computation on the pending data, send a data-persistence request to the database; the first-level cache device 204 is configured to query the database for the primary-key set satisfying the query request, generate a first-level paging primary-key set from that primary-key set and return it to the second-level cache device 206; and the second-level cache device 206 is configured to generate the second-level paging primary-key set from the first-level paging primary-key set and return it to the middleware unit 202.
With the above technical scheme, a two-level cache structure is added to the middleware's data-reading process, which greatly optimizes data reading and solves the technical problem of middleware memory overflow.
In the above technical scheme, preferably, the system may further comprise a first setting unit 208 that sets the first-level cache threshold of the first-level cache device 204; the first-level cache device 204 is further configured, when the data volume of the primary-key set is less than or equal to the first-level cache threshold, to return the first-level paging primary-key set directly to the second-level cache device 206, and, when the data volume of the primary-key set is greater than the first-level cache threshold, to create and populate a temporary table, page through the temporary table, and return the obtained primary keys to the second-level cache device 206.
If only a first-level cache structure were used to solve the middleware memory-overflow problem, the primary-key volume of each page would have to be controlled at a much finer granularity. With the two-level cache structure, the first level returns only primary keys, and each primary key is a fixed-length string occupying little memory, so the total primary-key volume per page of the first-level cache structure can be greatly increased.
Preferably, the big data quantity batch processing system 200 may further comprise a second setting unit 210 that sets the second-level cache threshold of the second-level cache device 206; the second-level cache device 206 is further configured, when the data volume of the first-level paging primary-key set is less than or equal to the second-level cache threshold, to return the second-level paging primary-key set directly to the middleware unit 202, and, when the data volume is greater than the second-level cache threshold, to store the second-level paging primary-key set temporarily in memory, take out one page of primary keys at a time from memory, and query the pending data according to each page of primary keys.
The second-level cache threshold of the second-level cache device 206 is set according to the memory occupancy of the data actually processed by the middleware; setting the storage thresholds of each cache level reasonably maximizes the processing efficiency of the system.
In the above technical scheme, preferably, the middleware unit 202 comprises: a transaction establishing subunit 2022 for establishing an independent transaction; and a lock subunit 2024 for applying a middleware-level primary-key lock to the pending data, processing the pending data, and releasing the middleware-level lock after processing ends.
Each page of data is processed in an independent transaction, i.e., the transaction is committed immediately after each page is processed, rather than a single transaction being opened at the outermost layer of the whole algorithm. This avoids holding locks on all affected database data for long periods, thereby improving the database's overall concurrent processing capacity and reducing the pressure on the database side.
Preferably, the big data quantity batch processing system 200 may further comprise an identification device 212 that enables the first-level cache device 204 to adapt to multiple database types.
In summary, the whole big data quantity batch processing system can be divided into the following modules that pass data to each other and work in coordination: the first-level cache device solves the middleware memory bottleneck; the database identification device automatically adapts to multiple database types; the second-level cache device reduces database load while making rational use of middleware resources; and the independent-transaction processing device further improves the concurrent processing capacity of both the database and the middleware, thereby raising overall system efficiency.
The working principle of the big data quantity batch processing system according to the present invention is described in detail below with reference to Fig. 3. Fig. 3 shows a schematic diagram of large-data-volume batch processing according to an embodiment of the present invention.
As shown in Fig. 3, a two-level cache structure is added to the process by which the middleware (the middleware unit in Fig. 2) reads data, optimizing data reading, and independent transactions are adopted when the middleware initiates persistence processing, relieving database-side pressure. As can be seen from the figure, the whole process is as follows:
1. The middleware initiates a query request to the first-level cache device.
2. The first-level cache device queries the database for the primary keys satisfying the query condition, and the database returns the complete primary-key set to the first-level cache structure.
3. The second-level cache device requests the first-level paging primary-key set from the first-level cache device.
4. The second-level cache device receives the first-level paging primary-key set returned by the first-level cache device.
5. The middleware requests the second-level paging primary-key set from the second-level cache device.
6. The second-level cache device returns the second-level paging primary-key set to the middleware.
7. The middleware queries the database for the pending data according to the second-level paging primary-key set.
8. The data are computed within an independent transaction.
9. The second-level page of data is persisted.
Steps 2 to 4 constitute the first-level paging loop, and steps 5 to 9 constitute the second-level paging loop, which internally adopts independent transactions and thereby also relieves database-side processing pressure. In the system, the first-level cache device is implemented internally with a database temporary table and contains an identification device 212 that automatically adapts to the underlying database, making it suitable for multiple database types. The second-level cache structure is a memory-level cache established at the middleware level that temporarily stores the primary-key information of the data.
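The two nested paging loops can be summarized in a short JAVA sketch. The interfaces and method names below are hypothetical scaffolding chosen for illustration, not the patented implementation itself:

```java
import java.util.List;

// Hypothetical interfaces mirroring the Fig. 3 flow.
interface FirstLevelCache {
    int pageCount();                        // pages available after querying all matching PKs
    List<String> firstLevelPage(int page);  // steps 2-4: one first-level page of primary keys
}

interface SecondLevelCache {
    List<List<String>> secondLevelPages(List<String> firstLevelPage); // split in memory
}

interface Middleware {
    List<Object> load(List<String> pks);        // step 7: query pending data by PK set
    void processAndPersist(List<Object> rows);  // steps 8-9: one independent transaction
}

public class TwoLevelBatch {
    public static void run(FirstLevelCache l1, SecondLevelCache l2, Middleware mw) {
        for (int p = 0; p < l1.pageCount(); p++) {                     // first-level paging loop
            List<String> pkPage = l1.firstLevelPage(p);
            for (List<String> subPage : l2.secondLevelPages(pkPage)) { // second-level paging loop
                mw.processAndPersist(mw.load(subPage));                // transaction per page
            }
        }
    }
}
```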
If only the first-level cache device were used to prevent middleware memory overflow, the primary-key volume of each page would have to be controlled at a fine granularity. For example, to process a total of 10 million records while ensuring that memory does not overflow, each page might have to be limited to at most 5,000 records, with 5,000 corresponding primary keys, requiring 2,000 pages, i.e., 2,000 paging queries.
With the two-level cache structure, each second-level page likewise holds at most 5,000 records with 5,000 corresponding primary keys, but because the first-level cache device returns only primary keys, each a fixed-length string occupying little memory, each first-level page can hold, say, 40,000 primary keys. Since second-level paging fetches entirely from memory, with no remote query, the query cost incurred by the first level is only 10,000,000 / 40,000 = 250, i.e., 250 paging queries in total. Compared with the 2,000 paging queries of the first-level-only scheme, the two-level cache structure puts less pressure on the database side and also reduces the network traffic between middleware and database, thereby further improving the whole system's large-data-volume batch processing capacity.
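Using the figures from this example, the page-count comparison works out as:

```latex
\[
N_{\text{pages}} = \left\lceil \frac{N_{\text{total}}}{N_{\text{per page}}} \right\rceil,
\qquad
N_{\text{one-level}} = \frac{10\,000\,000}{5\,000} = 2\,000,
\qquad
N_{\text{two-level}} = \frac{10\,000\,000}{40\,000} = 250.
\]
```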
The big data quantity batch processing method according to the present invention is described in detail below with reference to Fig. 4 and Fig. 5.
Fig. 4 shows a flow chart of a big data quantity batch processing method according to an embodiment of the present invention.
As shown in Fig. 4, the big data quantity batch processing method according to an embodiment of the present invention comprises the following steps: step 402, the middleware unit sends a query request to the first-level cache device, and the database returns the primary-key set satisfying the query request to the first-level cache device; step 404, the first-level cache device generates a first-level paging primary-key set from the primary-key set and returns it to the second-level cache device; step 406, the second-level cache device generates a second-level paging primary-key set from the first-level paging primary-key set and returns it to the middleware unit; step 408, the middleware unit queries the database for the pending data according to the second-level paging primary-key set, performs computation on the pending data, and then sends a data-persistence request to the database.
With the above technical scheme, a two-level cache structure is added to the middleware's data-reading process, which greatly optimizes data reading and solves the technical problem of middleware memory overflow.
In the above technical scheme, preferably, step 404 specifically comprises: setting the first-level cache threshold of the first-level cache device; when the data volume of the primary-key set is less than or equal to the first-level cache threshold, returning the first-level paging primary-key set directly to the second-level cache device; and when the data volume of the primary-key set is greater than the first-level cache threshold, creating and populating a temporary table, paging through the temporary table, and returning the obtained primary keys to the second-level cache device.
If only a first-level cache structure were used to solve the middleware memory-overflow problem, the primary-key volume of each page would have to be controlled at a much finer granularity. With the two-level cache structure, the first level returns only primary keys, and each primary key is a fixed-length string occupying little memory, so the total primary-key volume per page of the first-level cache structure can be greatly increased.
In the above technical scheme, preferably, step 406 specifically comprises: setting the second-level cache threshold of the second-level cache device; when the data volume of the first-level paging primary-key set is less than or equal to the second-level cache threshold, returning the second-level paging primary-key set directly to the middleware unit; and when the data volume is greater than the second-level cache threshold, storing the second-level paging primary-key set temporarily in memory, taking out one page of primary keys at a time from memory, and querying the pending data according to each page of primary keys.
The second-level cache threshold of the second-level cache device is set according to the memory occupancy of the data actually processed by the middleware; setting the storage thresholds of each cache level reasonably maximizes the processing efficiency of the system.
In the above technical scheme, preferably, step 408 specifically comprises: establishing an independent transaction in the middleware unit, applying a middleware-level primary-key lock to the pending data, processing the pending data, and releasing the middleware-level lock after processing ends.
Each page of data is processed in an independent transaction, i.e., the transaction is committed immediately after each page is processed, rather than a single transaction being opened at the outermost layer of the whole algorithm. This avoids holding locks on all affected database data for long periods, thereby improving the database's overall concurrent processing capacity and reducing the pressure on the database side.
In any of the above technical schemes, preferably, step 404 may further comprise, at the first-level cache device, employing an identification device to adapt to multiple database types.
As shown in Fig. 5, the large-data-volume batch process divides roughly into three phases: 1) the first-level cache pattern handles the query request; 2) the second-level cache pattern handles the query request; 3) the final data are committed using independent transactions.
1) The first-level cache pattern handles the query request.
1. After the first-level cache structure receives the query request, it first executes the SQL statement and obtains a result set.
2. The result set is traversed; if its total data volume does not exceed the first-level cache threshold, the result set is returned directly.
3. If the total data volume of the result set exceeds the first-level cache threshold, the temporary-table cache takes over processing of the SQL statement.
4. The temporary-table cache automatically generates, according to the underlying data-source type, the database-specific SQL statements for creating and populating the temporary table.
The fields of the temporary table are: number (auto-increment type), primary key, and the fields carried in the temporary-table cache SQL. The number field is used for the subsequent paging; since each database implements auto-increment fields differently, they are handled automatically according to the database type. The resulting insert-into-temporary-table SQL statement is similar to: insert into temp (select rownum ... from ...).
5. The first-level cache structure fetches the pending data page by page from its internal temporary table. The paging principle is simply to use the number field of the temporary table: since the number field is auto-incrementing (e.g., 1, 2, 3, 4, ...), the page-fetch SQL is similar to: select pk from temp where no >= 1 and no <= 50.
6. Finally, the primary-key set fetched by each paging step is passed to the second-level cache structure (i.e., the second-level cache device).
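A minimal JDBC sketch of this temporary-table paging follows. The table name temp_pk, the Oracle-style ROWNUM numbering, and the illustrative source query are assumptions; in the described system, the identification device would emit the dialect of the actual underlying database:

```java
import java.sql.*;
import java.util.ArrayList;
import java.util.List;

public class TempTablePaging {
    // Populate the temporary table with a numbered snapshot of the matching PKs,
    // as in: insert into temp (select rownum ... from ...).
    static void fillTempTable(Connection conn) throws SQLException {
        try (Statement st = conn.createStatement()) {
            st.execute("INSERT INTO temp_pk (no, pk) " +
                       "SELECT ROWNUM, pk FROM orders WHERE status = 'PENDING'");
        }
    }

    // Fetch one page of primary keys by the auto-incremented number field,
    // as in: select pk from temp where no >= 1 and no <= 50.
    static List<String> fetchPage(Connection conn, int pageNo, int pageSize) throws SQLException {
        int lo = (pageNo - 1) * pageSize + 1;
        int hi = pageNo * pageSize;
        String sql = "SELECT pk FROM temp_pk WHERE no >= ? AND no <= ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, lo);
            ps.setInt(2, hi);
            try (ResultSet rs = ps.executeQuery()) {
                List<String> pks = new ArrayList<>();
                while (rs.next()) pks.add(rs.getString(1));
                return pks;
            }
        }
    }
}
```

Because the page is selected by a contiguous range of the number field, each fetch touches only that range rather than rescanning the full result set.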
2) The second-level cache pattern handles the query request.
1. The second-level cache structure receives the primary-key data returned by the first-level cache structure.
2. If the total data volume does not exceed the second-level cache threshold, the primary-key result set is returned directly.
3. If the total data volume exceeds the second-level cache threshold, the primary-key data are stored temporarily in memory.
4. The second-level paging takes one page of primary-key data at a time out of the memory-level cache.
5. The pending data are queried from the database according to each page of primary-key data.
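A minimal sketch of this second-level behaviour, assuming the pass-through/buffer logic described above; the class name and structure are illustrative, not prescribed by the patent:

```java
import java.util.ArrayList;
import java.util.List;

public class SecondLevelCacheSketch {
    private final int threshold;   // second-level cache threshold, e.g. 5000
    private List<String> buffered; // PKs held temporarily in middleware memory

    SecondLevelCacheSketch(int threshold) { this.threshold = threshold; }

    // Receive a first-level page of PKs; small sets pass straight through,
    // large sets are buffered and handed out page by page from memory.
    List<List<String>> accept(List<String> firstLevelPage) {
        List<List<String>> pages = new ArrayList<>();
        if (firstLevelPage.size() <= threshold) {
            pages.add(firstLevelPage);   // under threshold: return directly
            return pages;
        }
        buffered = firstLevelPage;        // over threshold: park in memory
        for (int i = 0; i < buffered.size(); i += threshold) {
            pages.add(buffered.subList(i, Math.min(i + threshold, buffered.size())));
        }
        return pages;                     // each sub-list drives one database query
    }
}
```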
3) Final data are committed using independent transactions.
1. An independent transaction is created at the middleware layer.
2. A middleware-level primary-key lock is applied to the pending data.
3. The data are computed, and finally persisted.
4. The lock is released.
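In JDBC, the per-page independent transaction with a middleware-level primary-key lock might look as follows; the in-memory lock table is one plausible realization, an assumption on our part, since the patent does not fix the lock mechanism:

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class PerPageTransaction {
    // Middleware-level primary-key lock, realized here as a simple in-memory lock table.
    private static final Set<String> LOCKED = new HashSet<>();

    private static synchronized void lock(List<String> pks) {
        for (String pk : pks)
            if (LOCKED.contains(pk))
                throw new IllegalStateException("PK already locked: " + pk);
        LOCKED.addAll(pks);
    }

    private static synchronized void unlock(List<String> pks) {
        LOCKED.removeAll(pks);
    }

    // Steps 1-4: one independent transaction per page, committed immediately.
    static void processPage(Connection conn, List<String> pks) throws SQLException {
        lock(pks);                           // 2. apply the middleware-level PK lock
        try {
            conn.setAutoCommit(false);       // 1. open an independent transaction
            computeAndPersist(conn, pks);    // 3. compute, then persist
            conn.commit();                   //    commit right after this page
        } catch (SQLException e) {
            conn.rollback();                 // only this page's work rolls back
            throw e;
        } finally {
            unlock(pks);                     // 4. release the lock
        }
    }

    static void computeAndPersist(Connection conn, List<String> pks) throws SQLException {
        // placeholder for the page's business computation and persistence SQL
    }
}
```

Committing per page keeps database locks short-lived, which is exactly the concurrency benefit the description claims for independent transactions.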
4) Setting the threshold of each cache structure.
1. The default paging threshold of the first-level cache structure is 20,000.
2. The default paging threshold of the second-level cache structure is 5,000.
3. The thresholds of the first-level and second-level cache structures can be set dynamically according to the hardware conditions of the middleware.
4. The first-level cache threshold is set mainly in consideration of the memory occupied by the primary-key strings.
5. The second-level cache threshold is set mainly in consideration of the memory occupancy of the data actually processed in the middleware. Setting the thresholds of each cache level reasonably maximizes the processing efficiency of the system.
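As a sketch, the two defaults and their dynamic override might be wired up as follows; the property names are hypothetical, while the default values come from the description above:

```java
// Minimal configuration sketch for the two cache thresholds.
public class CacheThresholds {
    // Defaults from the description: 20,000 PKs per first-level page,
    // 5,000 records per second-level page.
    static final int DEFAULT_L1 = 20_000;
    static final int DEFAULT_L2 = 5_000;

    final int firstLevel;
    final int secondLevel;

    CacheThresholds() {
        // Allow dynamic tuning to the middleware host, e.g. via JVM system properties.
        firstLevel = Integer.getInteger("batch.cache.l1.threshold", DEFAULT_L1);
        secondLevel = Integer.getInteger("batch.cache.l2.threshold", DEFAULT_L2);
    }
}
```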
Therefore, the big data quantity batch processing method according to the present invention can greatly increase the speed at which the system processes large-data operations, balancing the use of middleware and database resources to the greatest extent: while reducing the load on each, it makes full use of each, so as to maximize system performance. In sum, the method enables an information system to adapt better to more numerous and harsher network environments and larger data volumes, and allows customers to operate the system under larger business-data scenarios.
The foregoing are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may have various modifications and variations. Any amendment, equivalent replacement, improvement and the like made within the spirit and principle of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A big data quantity batch processing system, characterized by comprising: a middleware unit, a first-level cache device and a second-level cache device, wherein
the middleware unit is configured to send a query request to the first-level cache device, receive a second-level paging primary-key set from the second-level cache device, query a database for pending data according to the second-level paging primary-key set and, after performing computation on the pending data, send a data-persistence request to the database;
the first-level cache device is configured to query the database for the primary-key set satisfying the query request, generate a first-level paging primary-key set from the primary-key set satisfying the query request, and return the first-level paging primary-key set to the second-level cache device;
the second-level cache device is configured to generate the second-level paging primary-key set from the first-level paging primary-key set and return the second-level paging primary-key set to the middleware unit.
2. The big data quantity batch processing system according to claim 1, characterized by further comprising: a first setting unit that sets the first-level cache threshold of the first-level cache device;
wherein the first-level cache device is further configured, when the data volume of the primary-key set satisfying the query request is less than or equal to the first-level cache threshold, to return the first-level paging primary-key set directly to the second-level cache device, and, when the data volume of the primary-key set satisfying the query request is greater than the first-level cache threshold, to create and populate a temporary table, page through the temporary table, and return the obtained primary keys to the second-level cache device.
3. The big data quantity batch processing system according to claim 1, characterized by further comprising:
a second setting unit that sets the second-level cache threshold of the second-level cache device;
wherein the second-level cache device is further configured, when the data volume of the first-level paging primary-key set is less than or equal to the second-level cache threshold, to return the second-level paging primary-key set directly to the middleware unit, and, when the data volume of the first-level paging primary-key set is greater than the second-level cache threshold, to store the second-level paging primary-key set temporarily in memory, take out one page of primary keys at a time from memory, and query the pending data according to each page of primary keys.
4. The big data quantity batch processing system according to claim 3, characterized in that the middleware unit comprises:
a transaction establishing subunit for establishing an independent transaction;
a lock subunit for applying a middleware-level primary-key lock to the pending data, processing the pending data, and releasing the middleware-level lock after processing ends.
5. The big data quantity batch processing system according to any one of claims 1 to 4, characterized by further comprising: an identification device that enables the first-level cache device to adapt to multiple database types.
6. A big data quantity batch processing method, characterized by comprising the following steps:
step 402, a middleware unit sends a query request to a first-level cache device, and a database returns the primary-key set satisfying the query request to the first-level cache device;
step 404, the first-level cache device generates a first-level paging primary-key set from the primary-key set satisfying the query request and returns the first-level paging primary-key set to a second-level cache device;
step 406, the second-level cache device generates a second-level paging primary-key set from the first-level paging primary-key set and returns the second-level paging primary-key set to the middleware unit;
step 408, the middleware unit queries the database for pending data according to the second-level paging primary-key set, performs computation on the pending data, and then sends a data-persistence request to the database.
7. The big data quantity batch processing method according to claim 6, characterized in that step 404 specifically comprises: setting the first-level cache threshold of the first-level cache device;
when the data volume of the primary-key set satisfying the query request is less than or equal to the first-level cache threshold, returning the first-level paging primary-key set directly to the second-level cache device;
when the data volume of the primary-key set satisfying the query request is greater than the first-level cache threshold, creating and populating a temporary table, paging through the temporary table, and returning the obtained primary keys to the second-level cache device.
8. The big data quantity batch processing method according to claim 6, characterized in that step 406 specifically comprises: setting the second-level cache threshold of the second-level cache device;
when the data volume of the first-level paging primary-key set is less than or equal to the second-level cache threshold, returning the second-level paging primary-key set directly to the middleware unit;
when the data volume of the first-level paging primary-key set is greater than the second-level cache threshold, storing the second-level paging primary-key set temporarily in memory, taking out one page of primary keys at a time from memory, and querying the pending data according to each page of primary keys.
9. The big data quantity batch processing method according to claim 6, characterized in that step 408 specifically comprises: establishing an independent transaction in the middleware unit, applying a middleware-unit-level primary-key lock to the pending data, processing the pending data, and releasing the middleware-unit-level lock after processing ends.
10. The big data quantity batch processing method according to any one of claims 6 to 9, characterized in that step 404 further comprises, at the first-level cache device, employing an identification device to adapt to multiple database types.
CN201210480063.2A 2012-11-22 2012-11-22 Big data quantity batch processing system and big data quantity batch processing method Active CN103020151B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210480063.2A CN103020151B (en) 2012-11-22 2012-11-22 Big data quantity batch processing system and big data quantity batch processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210480063.2A CN103020151B (en) 2012-11-22 2012-11-22 Big data quantity batch processing system and big data quantity batch processing method

Publications (2)

Publication Number Publication Date
CN103020151A CN103020151A (en) 2013-04-03
CN103020151B true CN103020151B (en) 2015-12-02

Family

ID=47968755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210480063.2A Active CN103020151B (en) 2012-11-22 2012-11-22 Big data quantity batch processing system and big data quantity batch processing method

Country Status (1)

Country Link
CN (1) CN103020151B (en)

Families Citing this family (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104111962B (en) * 2013-04-22 2018-09-18 Sap欧洲公司 Enhanced affairs cache with batch operation
CN103218179A (en) * 2013-04-23 2013-07-24 深圳市京华科讯科技有限公司 Second-level system acceleration method based on virtualization
CN104424319A (en) * 2013-09-10 2015-03-18 镇江金钛软件有限公司 Method for temporarily storing general data
CN103886022B (en) * 2014-02-24 2019-01-18 上海上讯信息技术股份有限公司 A kind of query facility and its method carrying out paging query based on major key field
CN103888378B (en) * 2014-04-09 2017-08-25 北京京东尚科信息技术有限公司 A kind of data exchange system and method based on caching mechanism
CN107273522B (en) * 2015-06-01 2020-01-14 明算科技(北京)股份有限公司 Multi-application-oriented data storage system and data calling method
CN108090086B (en) * 2016-11-21 2022-02-22 迈普通信技术股份有限公司 Paging query method and device
CN106407020A (en) * 2016-11-23 2017-02-15 青岛海信移动通信技术股份有限公司 Database processing method of mobile terminal and mobile terminal thereof
CN106407019A (en) * 2016-11-23 2017-02-15 青岛海信移动通信技术股份有限公司 Database processing method of mobile terminal and mobile terminal thereof
CN107609068B (en) * 2017-08-30 2021-03-16 企查查科技有限公司 Data non-inductive migration method
CN109165090B (en) * 2018-09-27 2019-07-05 苏宁消费金融有限公司 Batch processing method and system based on statement
CN109710639A (en) * 2018-11-26 2019-05-03 厦门市美亚柏科信息股份有限公司 A kind of search method based on pair buffers, device and storage medium
CN109828834A (en) * 2018-12-14 2019-05-31 泰康保险集团股份有限公司 The method and system and its computer-readable intermediate value and electronic equipment of batch processing
CN109889336B (en) * 2019-03-08 2022-06-14 浙江齐治科技股份有限公司 Method, device and system for middleware to acquire password
CN110457540B (en) * 2019-06-28 2020-07-14 卓尔智联(武汉)研究院有限公司 Data query method, service platform, terminal device and storage medium
CN113312382A (en) * 2021-05-31 2021-08-27 上海万物新生环保科技集团有限公司 Method, device and system for database paging query

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101216840A (en) * 2008-01-21 2008-07-09 金蝶软件(中国)有限公司 Data enquiry method and data enquiry system
CN101860449A (en) * 2009-04-09 2010-10-13 华为技术有限公司 Data query method, device and system
CN201993755U (en) * 2011-01-30 2011-09-28 上海振华重工(集团)股份有限公司 Data filtration, compression and storage system of real-time database

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7076608B2 (en) * 2003-12-02 2006-07-11 Oracle International Corp. Invalidating cached data using secondary keys
US7647312B2 (en) * 2005-05-12 2010-01-12 Microsoft Corporation System and method for automatic generation of suggested inline search terms


Also Published As

Publication number Publication date
CN103020151A (en) 2013-04-03


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 100094 Haidian District North Road, Beijing, No. 68

Applicant after: Yonyou Network Technology Co., Ltd.

Address before: 100094 Beijing city Haidian District North Road No. 68, UFIDA Software Park

Applicant before: UFIDA Software Co., Ltd.

COR Change of bibliographic data
C14 Grant of patent or utility model
GR01 Patent grant