CN105653203A - Data instruction processing method, device and system - Google Patents

Data instruction processing method, device and system

Info

Publication number
CN105653203A
Authority
CN
China
Prior art keywords
data
compression
instruction
reading
procedure
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510980382.3A
Other languages
Chinese (zh)
Other versions
CN105653203B (en)
Inventor
王文铎
陈宗志
彭信东
宋昭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qihoo Technology Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qihoo Technology Co Ltd, Qizhi Software Beijing Co Ltd filed Critical Beijing Qihoo Technology Co Ltd
Priority to CN201510980382.3A priority Critical patent/CN105653203B/en
Publication of CN105653203A publication Critical patent/CN105653203A/en
Application granted granted Critical
Publication of CN105653203B publication Critical patent/CN105653203B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/061Improving I/O performance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0602Interfaces specially adapted for storage systems specifically adapted to achieve a particular effect
    • G06F3/0608Saving storage space on storage systems
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/06Digital input from, or digital output to, record carriers, e.g. RAID, emulated record carriers or networked record carriers
    • G06F3/0601Interfaces specially adapted for storage systems
    • G06F3/0668Interfaces specially adapted for storage systems adopting a particular infrastructure
    • G06F3/067Distributed or networked storage systems, e.g. storage area networks [SAN], network attached storage [NAS]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The invention discloses a data instruction processing method, apparatus and system. The method includes the following steps: receiving a data compression instruction sent by a scheduling system; creating a compression process for handling the data compression instruction, the compression process being independent of a pre-created data read/write process that handles data read/write instructions; and starting several concurrent threads in a stand-alone storage engine, the concurrent threads reading data read/write instructions and data compression instructions from the data read/write process and the compression process respectively, so that the stand-alone storage engine can process both kinds of instruction. With this scheme, data in a distributed data storage system can be compressed online, and data read/write and data compression can proceed at the same time, so users do not need to shut down all business services provided by the distributed data storage system and the system's business is not affected.

Description

Data instruction processing method, apparatus and system
Technical field
The present invention relates to the field of computer technology, and in particular to a data instruction processing method, apparatus and system.
Background art
A stand-alone storage engine is a storage engine that supports online data compression, but while it is providing a data read/write service its single handle is occupied and it cannot provide a data compression service at the same time. In the prior art, data read/write instructions and data compression instructions are handled by a single process. Data compression generally takes a long time, so while a data compression instruction is being processed, read/write instructions are easily left unprocessed; or, while the stand-alone storage engine is processing a data compression instruction, the data read/write process keeps storing read/write instructions and compression instructions into its task queue, and because processing a compression instruction is relatively time-consuming the task queue fills up, so that either data write instructions can no longer be accepted and the data becomes inconsistent, or the data read/write process can no longer accept read/write instructions, which are then treated as timed out and discarded.
In a distributed data storage system, data is distributed across many independent devices. Although a stand-alone storage engine can be used in a distributed data storage system, because of the drawbacks described above existing distributed data storage systems do not support online data compression: when data compression is needed, all business services provided by the distributed data storage system must first be stopped, and then each node is compressed.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a data instruction processing method, a data instruction processing apparatus and a corresponding data instruction processing system that overcome, or at least partly solve, the above problems.
According to one aspect of the invention, a data instruction processing method is provided, including:
receiving a data compression instruction sent by a scheduling system;
creating a compression process for handling the data compression instruction, the compression process being independent of a pre-created data read/write process that handles data read/write instructions;
wherein several concurrent threads are started in a stand-alone storage engine, and the several concurrent threads read data read/write instructions and data compression instructions from the data read/write process and the compression process respectively, so that the stand-alone storage engine processes the data read/write instructions and the data compression instructions.
According to another aspect of the invention, a data instruction processing apparatus is provided, including a process handling system and a stand-alone storage engine;
the process handling system includes:
a read/write scheduling module, adapted to pre-create a data read/write process to handle received data read/write instructions; and
a compression process module, adapted to create, after a data compression instruction sent by the scheduling system is received, a compression process for handling the data compression instruction, the compression process being independent of the data read/write process;
the stand-alone storage engine is adapted to start several concurrent threads, the several concurrent threads reading data read/write instructions and data compression instructions from the data read/write process and the compression process respectively, and to process the data read/write instructions and the data compression instructions.
According to a further aspect of the invention, a data instruction processing system is provided, including the above data instruction processing apparatus and a scheduling system.
According to the scheme provided by the invention, after a data compression instruction sent by the scheduling system is received, a compression process is created to handle it. This prevents the time-consuming handling of data compression instructions from affecting data reads and writes, avoids the data inconsistency caused by read/write instructions not being processed in time, and achieves online compression of data in a distributed data storage system: data can be read and written while it is being compressed, without shutting down all business services provided by the distributed data storage system and without affecting its business.
The above is only an overview of the technical solution of the present invention. In order to make the technical means of the present invention clearer, so that it can be implemented according to the contents of the specification, and in order to make the above and other objects, features and advantages of the present invention more apparent, specific embodiments of the present invention are set out below.
Brief description of the drawings
By reading the following detailed description of the preferred embodiments, various other advantages and benefits will become clear to those of ordinary skill in the art. The drawings are only for the purpose of illustrating the preferred embodiments and are not to be considered as limiting the present invention. Throughout the drawings, the same reference numerals denote the same parts. In the drawings:
Fig. 1 shows a schematic flow chart of a data instruction processing method according to an embodiment of the present invention;
Fig. 2 shows a schematic flow chart of a data instruction processing method according to another embodiment of the present invention;
Fig. 3 shows a functional block diagram of a data instruction processing apparatus according to an embodiment of the present invention;
Fig. 4 shows a functional block diagram of a data instruction processing apparatus according to another embodiment of the present invention;
Fig. 5 shows a functional block diagram of a data instruction processing system according to an embodiment of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be understood more thoroughly and so that its scope can be fully conveyed to those skilled in the art.
Fig. 1 shows a schematic flow chart of a data instruction processing method according to an embodiment of the present invention. As shown in Fig. 1, the method includes the following steps:
Step S100: receive a data compression instruction sent by the scheduling system.
The scheduling system manages the operation instructions issued for data, for example data read instructions, data write instructions or data compression instructions; that is, it specifies which operation is to be performed on the data: reading data, writing data or compressing data.
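The disclosure does not define a concrete format for these operation instructions. Purely as an illustration, a minimal Python sketch of the three instruction kinds the scheduling system might send could look like this; the OpType and DataInstruction names and fields are assumptions of the sketch, not part of the patent:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class OpType(Enum):
    """Kinds of data operation instruction the scheduling system may issue."""
    READ = "read"
    WRITE = "write"
    COMPRESS = "compress"


@dataclass
class DataInstruction:
    """Hypothetical instruction envelope; key/value are unused for COMPRESS."""
    op: OpType
    key: Optional[bytes] = None
    value: Optional[bytes] = None


# Example: a write, a read, and an online-compression instruction.
instructions = [
    DataInstruction(OpType.WRITE, b"user:42", b'{"name": "alice"}'),
    DataInstruction(OpType.READ, b"user:42"),
    DataInstruction(OpType.COMPRESS),
]
```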
Step S101: create a compression process for handling the data compression instruction.
In the prior art, only one data read/write process is created, and both the data read/write instructions and the data compression instructions sent by the scheduling system are handled by this process. The data read/write process stores the data operation instructions it receives into a task queue, and the stand-alone storage engine fetches pending data operation instructions from the task queue started by the data read/write process and operates on the data. In general, the stand-alone storage engine takes a long time to process a data compression instruction, while it processes read/write instructions quickly. Therefore, while a data compression instruction is being processed, read/write instructions are easily left unprocessed; or, while the stand-alone storage engine is processing a data compression instruction, the data read/write process keeps storing read/write instructions and compression instructions into the task queue, and because processing a compression instruction is relatively time-consuming the task queue fills up and can no longer accept data read/write instructions. The sketch below illustrates this failure mode.
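To make the prior-art problem concrete, the following toy sketch shows how a single bounded task queue serving both kinds of instruction fills up behind a slow compression and starts rejecting writes. The ten-slot queue and the one-second stand-in for compaction are arbitrary assumptions of the sketch:

```python
import queue
import threading
import time

task_queue = queue.Queue(maxsize=10)  # single shared queue, bounded at 10 slots


def single_engine_worker() -> None:
    """Prior-art style: one consumer handles compression and read/write alike."""
    while True:
        op, *_ = task_queue.get()
        time.sleep(1.0 if op == "compress" else 0.001)  # compaction is slow
        task_queue.task_done()


threading.Thread(target=single_engine_worker, daemon=True).start()

task_queue.put(("compress",))          # long-running compaction goes first
rejected = 0
for i in range(100):                   # read/write instructions keep arriving
    try:
        task_queue.put(("write", f"key-{i}", b"v"), timeout=0.01)
    except queue.Full:                 # queue filled up behind the compaction
        rejected += 1

print(f"{rejected} write instructions could not be accepted")
```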
Therefore, in an embodiment of the present invention, after the data compression instruction sent by the scheduling system is received, a compression process is created to handle it. The compression process is independent of the pre-created data read/write process that handles data read/write instructions; that is, the compression process handles only data compression instructions and the data read/write process handles only data read/write instructions. This prevents a single data read/write process that handles both kinds of instruction from affecting data reads and writes, and avoids the data inconsistency caused by read/write instructions not being processed in time.
Step S102: several concurrent threads are started in the stand-alone storage engine; the concurrent threads read data read/write instructions and data compression instructions from the data read/write process and the compression process respectively, so that the stand-alone storage engine can process the data read/write instructions and the data compression instructions.
Specifically, the stand-alone storage engine mainly refers to the interface to a storage medium inside a single machine. The stand-alone storage engine supports online data compression and is applied to the distributed data storage system, so the data in the distributed data storage system can be compressed online without stopping external services.
The stand-alone storage engine starts several concurrent threads. These concurrent threads read data read/write instructions and data compression instructions from the data read/write process and the compression process respectively and store the instructions they read into the corresponding task queues, so that the stand-alone storage engine can process the data read/write instructions and the data compression instructions. The sketch below shows this arrangement end to end.
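A minimal sketch of the claimed arrangement, using Python's standard multiprocessing and threading modules as stand-ins for the process handling system and the storage engine's concurrent threads. For simplicity the task queues are created by the parent rather than started by each process, and the "engine" merely sleeps instead of touching storage; both simplifications are assumptions of the sketch, not the embodiment:

```python
import multiprocessing as mp
import threading
import time


def read_write_process(rw_queue) -> None:
    """Pre-created data read/write process: receives read/write instructions
    from the scheduling system and stores them in its own task queue."""
    for i in range(5):
        rw_queue.put(("write", f"key-{i}", f"value-{i}"))
    rw_queue.put(None)  # sentinel: no more read/write work in this demo


def compression_process(comp_queue) -> None:
    """Compression process created on demand: stores the data compression
    instruction in the compression task queue."""
    comp_queue.put(("compress", None, None))
    comp_queue.put(None)


def engine_worker(name, task_queue) -> None:
    """One of the storage engine's concurrent threads: drains a single task
    queue, so a slow compaction never blocks the read/write path."""
    while True:
        item = task_queue.get()
        if item is None:
            break
        op = item[0]
        time.sleep(1.0 if op == "compress" else 0.01)  # compaction is slow
        print(f"[{name}] handled {op}")


if __name__ == "__main__":
    rw_queue = mp.Queue()
    comp_queue = mp.Queue()

    # Independent processes, one per instruction kind.
    rw_proc = mp.Process(target=read_write_process, args=(rw_queue,))
    comp_proc = mp.Process(target=compression_process, args=(comp_queue,))
    rw_proc.start()
    comp_proc.start()

    # The stand-alone storage engine starts several concurrent threads: one
    # reads from the read/write task queue, the other from the compression
    # task queue, so both kinds of instruction are processed at the same time.
    threads = [
        threading.Thread(target=engine_worker, args=("rw-thread", rw_queue)),
        threading.Thread(target=engine_worker, args=("compact-thread", comp_queue)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    rw_proc.join()
    comp_proc.join()
```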
According to the method provided by the above embodiment of the present invention, after a data compression instruction sent by the scheduling system is received, a compression process is created to handle it. This prevents a single data read/write process that handles both read/write and compression instructions from affecting data reads and writes, avoids the data inconsistency caused by read/write instructions not being processed in time, and achieves online compression of data in a distributed data storage system: data can be read and written while it is being compressed, without shutting down all business services provided by the distributed data storage system and without affecting its business.
Fig. 2 shows a schematic flow chart of a data instruction processing method according to another embodiment of the present invention. As shown in Fig. 2, the method includes the following steps:
Step S200: receive a data compression instruction sent by the scheduling system.
The scheduling system manages the operation instructions issued for data, for example data read instructions, data write instructions or data compression instructions; that is, it specifies which operation is to be performed on the data: reading data, writing data or compressing data.
Step S201: create a compression process for handling the data compression instruction.
In an embodiment of the present invention, after the data compression instruction sent by the scheduling system is received, a compression process is created to handle it. The compression process is independent of the pre-created data read/write process that handles data read/write instructions, and the data read/write instructions sent by the scheduling system are handled by a single data read/write process only; that is, the compression process handles only data compression instructions and the data read/write process handles only data read/write instructions. This prevents a single data read/write process that handles both kinds of instruction from affecting data reads and writes, and avoids the data inconsistency caused by read/write instructions not being processed in time.
Step S202: store the data compression instruction into the compression task queue started by the compression process.
After the compression process is created in step S201, it starts a compression task queue for storing compression instructions, and the data compression instruction sent by the scheduling system is stored into the compression task queue so that a thread in the stand-alone storage engine can read the data compression instruction from the compression task queue.
In addition, the data read/write process handles the data read/write instructions sent by the scheduling system and stores them into the read/write task queue started by the data read/write process.
Step S203: several concurrent threads are started in the stand-alone storage engine; the concurrent threads read data read/write instructions and data compression instructions from the data read/write process and the compression process respectively, so that the stand-alone storage engine can process the data read/write instructions and the data compression instructions.
In an embodiment of the present invention, the stand-alone storage engine includes LevelDB, RocksDB and/or SSDB. All three stand-alone storage engines support online data compression, which means that data reads and writes and data compression can proceed at the same time without affecting the business services of the distributed data storage system.
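The embodiment only names LevelDB, RocksDB and SSDB as engines that support online compression. As one concrete illustration, the sketch below assumes the third-party plyvel binding for LevelDB and its compact_range call; the database path and key layout are arbitrary. It runs a manual compaction in one thread while writes continue in another, which is the online-compression property the embodiment relies on:

```python
import threading

import plyvel  # third-party LevelDB binding (`pip install plyvel`), assumed available

db = plyvel.DB("/tmp/demo-leveldb", create_if_missing=True)

# Kick off a manual compaction of the whole key range in a separate thread;
# writes keep flowing in the main thread while it runs.
compaction = threading.Thread(target=db.compact_range)
compaction.start()

for i in range(1000):
    db.put(f"key-{i:04d}".encode(), b"x" * 128)

compaction.join()
print(db.get(b"key-0999"))
db.close()
```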
The stand-alone storage engine supports multi-threaded handling of data operation instructions; that is, threads in the stand-alone storage engine can concurrently read data read/write instructions and data compression instructions from the data read/write process and the compression process, and the stand-alone storage engine can process multiple data read/write instructions and/or data compression instructions at the same time, so that both kinds of instruction are handled promptly and blocking is avoided.
While the stand-alone storage engine is processing a data compression instruction, the scheduling system may send further data compression instructions. If one or more additional data compression instructions sent by the scheduling system are received, one or more compression processes are created separately to handle them respectively; that is, compression processes are created in the same number as the data compression instructions received, so that data compression instructions sent by the scheduling system are not left unprocessed and do not cause blocking. A sketch of this policy follows.
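A sketch of the "one compression process per newly received compression instruction" policy described above; the compression_process helper, the queue layout and the shard names are illustrative assumptions:

```python
import multiprocessing as mp


def compression_process(instruction, comp_queue) -> None:
    """Each compression process stores its own instruction in a freshly
    started compression task queue for the engine's threads to consume."""
    comp_queue.put(instruction)
    comp_queue.put(None)  # sentinel


def on_compression_instructions(instructions):
    """Create one compression process per compression instruction received
    from the scheduling system, so none of them waits behind another."""
    workers = []
    for instr in instructions:
        q = mp.Queue()
        p = mp.Process(target=compression_process, args=(instr, q))
        p.start()
        workers.append((p, q))
    return workers


if __name__ == "__main__":
    pending = [("compress", "shard-1"), ("compress", "shard-2")]
    for proc, q in on_compression_instructions(pending):
        print(q.get())  # the engine would drain this queue and compact
        proc.join()
```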
Step S204: after the data compression instruction has been processed, destroy the compression process according to the result fed back by the stand-alone storage engine.
Once the data has been compressed, the scheduling system will not resend the data compression instruction. After the stand-alone storage engine has processed the data compression instruction in the compression task queue started by the compression process, it feeds back a result indicating that compression is complete, and the compression process is destroyed according to this result so that it does not affect the distributed data storage system.
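A minimal sketch of tearing the compression process down once the engine feeds back its result; the "done" string stands in for whatever result the stand-alone storage engine actually returns and is an assumption of the sketch:

```python
import multiprocessing as mp


def compression_process(comp_queue, feedback) -> None:
    """Runs until the storage engine feeds back that compression finished."""
    comp_queue.put(("compress", None))
    result = feedback.get()  # block until the engine reports its result
    print(f"compression process exiting, engine reported: {result}")


if __name__ == "__main__":
    comp_queue = mp.Queue()
    feedback = mp.Queue()
    proc = mp.Process(target=compression_process, args=(comp_queue, feedback))
    proc.start()

    # ... the engine's thread drains comp_queue and performs the compaction ...
    comp_queue.get()       # stand-in for the engine consuming the instruction
    feedback.put("done")   # engine feedback: compression has been processed

    proc.join()            # destroy the compression process
    assert not proc.is_alive()
```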
According to the method provided by the above embodiment of the present invention, after a data compression instruction sent by the scheduling system is received, a compression process is created to handle it and the data compression instruction is stored into the compression task queue started by the compression process. The stand-alone storage engine starts several concurrent threads that read data read/write instructions and data compression instructions from the data read/write process and the compression process respectively, so that the stand-alone storage engine processes both kinds of instruction. This prevents a single data read/write process that handles both read/write and compression instructions from affecting data reads and writes, avoids the data inconsistency caused by read/write instructions not being processed in time, ensures that read/write and compression instructions are handled promptly without blocking, and achieves online compression of data in a distributed data storage system: data can be read and written while it is being compressed, without shutting down all business services provided by the distributed data storage system and without affecting its business.
Fig. 3 shows a functional block diagram of a data instruction processing apparatus according to an embodiment of the present invention. As shown in Fig. 3, the apparatus 300 includes a process handling system 310 and a stand-alone storage engine 320.
The process handling system 310 includes a read/write scheduling module 311 and a compression process module 312.
The read/write scheduling module 311 is adapted to pre-create a data read/write process to handle received data read/write instructions.
The compression process module 312 is adapted to create, after a data compression instruction sent by the scheduling system is received, a compression process for handling the data compression instruction, the compression process being independent of the data read/write process.
The stand-alone storage engine 320 is adapted to start several concurrent threads that read data read/write instructions and data compression instructions from the data read/write process and the compression process respectively, and to process the data read/write instructions and the data compression instructions.
According to the apparatus provided by the above embodiment of the present invention, after a data compression instruction sent by the scheduling system is received, a compression process is created to handle it. This prevents a single data read/write process that handles both read/write and compression instructions from affecting data reads and writes, avoids the data inconsistency caused by read/write instructions not being processed in time, and achieves online compression of data in a distributed data storage system: data can be read and written while it is being compressed, without shutting down all business services provided by the distributed data storage system and without affecting its business.
Fig. 4 shows a functional block diagram of a data instruction processing apparatus according to another embodiment of the present invention. As shown in Fig. 4, the apparatus 400 includes a process handling system 410 and a stand-alone storage engine 420.
The process handling system 410 includes a read/write scheduling module 411 and a compression process module 412.
The read/write scheduling module 411 is adapted to pre-create a data read/write process to handle received data read/write instructions.
The compression process module 412 is adapted to create, after a data compression instruction sent by the scheduling system is received, a compression process for handling the data compression instruction, the compression process being independent of the data read/write process.
The stand-alone storage engine 420 is adapted to start several concurrent threads that read data read/write instructions and data compression instructions from the data read/write process and the compression process respectively, and to process the data read/write instructions and the data compression instructions.
Optionally, the apparatus further includes a destroy module 430, adapted to destroy the compression process after the data compression instruction has been processed, according to the processing result fed back by the stand-alone storage engine.
Optionally, the apparatus further includes a storage module 440, adapted to store the data compression instruction into the compression task queue started by the compression process.
In addition, the data read/write process handles the data read/write instructions sent by the scheduling system, and the storage module 440 also stores the data read/write instructions into the read/write task queue started by the data read/write process.
Optionally, the compression process module 412 is further adapted to: while the data compression instruction is being processed, if one or more further data compression instructions sent by the scheduling system are received, separately create one or more compression processes to handle the one or more data compression instructions respectively.
Optionally, the stand-alone storage engine 420 includes LevelDB, RocksDB and/or SSDB.
According to the apparatus provided by the above embodiment of the present invention, after a data compression instruction sent by the scheduling system is received, a compression process is created to handle it and the data compression instruction is stored into the compression task queue started by the compression process. The stand-alone storage engine starts several concurrent threads that read data read/write instructions and data compression instructions from the data read/write process and the compression process respectively, so that both kinds of instruction are processed in time without blocking each other. This prevents a single data read/write process that handles both kinds of instruction from affecting data reads and writes, avoids the data inconsistency caused by read/write instructions not being processed in time, and achieves online compression of data in a distributed data storage system: data can be read and written while it is being compressed, without shutting down all business services provided by the distributed data storage system and without affecting its business.
Fig. 5 shows a functional block diagram of a data instruction processing system according to an embodiment of the present invention. As shown in Fig. 5, the system 500 includes a data instruction processing apparatus 400 and a scheduling system 510.
The data instruction processing apparatus may be provided in plurality, as sketched below.
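In the system of Fig. 5, a single scheduling system can drive several data instruction processing apparatuses, for example one per storage node of the distributed data storage system. A toy sketch of that fan-out follows; the node addresses and the send_compression_instruction helper are illustrative assumptions, with print standing in for a real RPC:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical storage nodes of the distributed data storage system.
NODES = ["node-1:7000", "node-2:7000", "node-3:7000"]


def send_compression_instruction(node: str) -> str:
    """Stand-in for the scheduling system sending a data compression
    instruction to one apparatus; a real system would use RPC or HTTP."""
    print(f"scheduling system -> {node}: compress")
    return f"{node}: compression process created"


# The scheduling system fans the instruction out to every node; each node
# creates its own compression process and keeps handling reads and writes.
with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
    for result in pool.map(send_compression_instruction, NODES):
        print(result)
```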
According to the system provided by the above embodiment of the present invention, a single data read/write process no longer has to handle both read/write and compression instructions, so data reads and writes are not affected, the data inconsistency caused by read/write instructions not being processed in time is avoided, both kinds of instruction are processed promptly without blocking, and data in the distributed data storage system can be compressed online: data can be read and written while it is being compressed, without shutting down all business services provided by the distributed data storage system and without affecting its business.
The algorithms and displays provided here are not inherently related to any particular computer, virtual system or other device. Various general-purpose systems may also be used with the teaching herein. From the description above, the structure required to construct such systems is apparent. Moreover, the present invention is not directed to any particular programming language. It should be understood that the content of the invention described herein may be implemented in a variety of programming languages, and the description above of a specific language is given in order to disclose the best mode of the invention.
Numerous specific details are set forth in the description provided here. It should be understood, however, that embodiments of the invention may be practised without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and help in understanding one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure or description thereof in the foregoing description of exemplary embodiments of the invention. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules or units or components of an embodiment may be combined into one module or unit or component, and they may in addition be divided into a plurality of sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, an equivalent or a similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include some features included in other embodiments rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the present invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will understand that a microprocessor or digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the data instruction processing device according to embodiments of the present invention. The present invention may also be implemented as a device or apparatus program (for example, a computer program and a computer program product) for performing part or all of the method described herein. Such a program implementing the present invention may be stored on a computer-readable medium, or may have the form of one or more signals. Such signals may be downloaded from an Internet website, or provided on a carrier signal, or provided in any other form.
It should be noted that the above embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices may be embodied by one and the same item of hardware. The use of the words first, second and third does not indicate any ordering. These words may be interpreted as names.
The invention discloses: A1. A data instruction processing method, including:
receiving a data compression instruction sent by a scheduling system;
creating a compression process for handling the data compression instruction, the compression process being independent of a pre-created data read/write process that handles data read/write instructions;
wherein several concurrent threads are started in a stand-alone storage engine, and the several concurrent threads read data read/write instructions and data compression instructions from the data read/write process and the compression process respectively, so that the stand-alone storage engine processes the data read/write instructions and the data compression instructions.
A2. The method according to A1, wherein the method further includes:
after the data compression instruction has been processed, destroying the compression process according to the processing result fed back by the stand-alone storage engine.
A3. The method according to A1 or A2, wherein after the creating of the compression process for handling the data compression instruction, the method further includes:
storing the data compression instruction into the compression task queue started by the compression process.
A4. The method according to A1 or A2, wherein the method further includes:
while the data compression instruction is being processed, if one or more further data compression instructions sent by the scheduling system are received, separately creating one or more compression processes to handle the one or more data compression instructions respectively.
A5. The method according to any one of A1-A4, wherein the stand-alone storage engine includes: LevelDB, RocksDB and/or SSDB.
The invention also discloses: B6. A data instruction processing apparatus, including: a process handling system and a stand-alone storage engine;
the process handling system includes:
a read/write scheduling module, adapted to pre-create a data read/write process to handle received data read/write instructions; and
a compression process module, adapted to create, after a data compression instruction sent by a scheduling system is received, a compression process for handling the data compression instruction, the compression process being independent of the data read/write process;
the stand-alone storage engine, adapted to start several concurrent threads, the several concurrent threads reading data read/write instructions and data compression instructions from the data read/write process and the compression process respectively, and to process the data read/write instructions and the data compression instructions.
B7. The apparatus according to B6, wherein the apparatus further includes:
a destroy module, adapted to destroy the compression process after the data compression instruction has been processed, according to the processing result fed back by the stand-alone storage engine.
B8. The apparatus according to B6 or B7, wherein the apparatus further includes:
a storage module, adapted to store the data compression instruction into the compression task queue started by the compression process.
B9. The apparatus according to B6 or B7, wherein the compression process module is further adapted to: while the data compression instruction is being processed, if one or more further data compression instructions sent by the scheduling system are received, separately create one or more compression processes to handle the one or more data compression instructions respectively.
B10. The apparatus according to any one of B6-B9, wherein the stand-alone storage engine includes: LevelDB, RocksDB and/or SSDB.
The invention also discloses: C11. A data instruction processing system, including the data instruction processing apparatus according to any one of B6-B10 and a scheduling system.
C12. The system according to C11, wherein there are a plurality of the data instruction processing apparatuses.

Claims (10)

1. A data instruction processing method, including:
receiving a data compression instruction sent by a scheduling system;
creating a compression process for handling the data compression instruction, the compression process being independent of a pre-created data read/write process that handles data read/write instructions;
wherein several concurrent threads are started in a stand-alone storage engine, and the several concurrent threads read data read/write instructions and data compression instructions from the data read/write process and the compression process respectively, so that the stand-alone storage engine processes the data read/write instructions and the data compression instructions.
2. The method according to claim 1, wherein the method further includes:
after the data compression instruction has been processed, destroying the compression process according to the processing result fed back by the stand-alone storage engine.
3. The method according to claim 1 or 2, wherein after the creating of the compression process for handling the data compression instruction, the method further includes:
storing the data compression instruction into the compression task queue started by the compression process.
4. The method according to claim 1 or 2, wherein the method further includes:
while the data compression instruction is being processed, if one or more further data compression instructions sent by the scheduling system are received, separately creating one or more compression processes to handle the one or more data compression instructions respectively.
5. The method according to any one of claims 1-4, wherein the stand-alone storage engine includes: LevelDB, RocksDB and/or SSDB.
6. A data instruction processing apparatus, including: a process handling system and a stand-alone storage engine;
the process handling system includes:
a read/write scheduling module, adapted to pre-create a data read/write process to handle received data read/write instructions; and
a compression process module, adapted to create, after a data compression instruction sent by a scheduling system is received, a compression process for handling the data compression instruction, the compression process being independent of the data read/write process;
the stand-alone storage engine, adapted to start several concurrent threads, the several concurrent threads reading data read/write instructions and data compression instructions from the data read/write process and the compression process respectively, and to process the data read/write instructions and the data compression instructions.
7. The apparatus according to claim 6, wherein the apparatus further includes:
a destroy module, adapted to destroy the compression process after the data compression instruction has been processed, according to the processing result fed back by the stand-alone storage engine.
8. The apparatus according to claim 6 or 7, wherein the apparatus further includes:
a storage module, adapted to store the data compression instruction into the compression task queue started by the compression process.
9. The apparatus according to claim 6 or 7, wherein the compression process module is further adapted to: while the data compression instruction is being processed, if one or more further data compression instructions sent by the scheduling system are received, separately create one or more compression processes to handle the one or more data compression instructions respectively.
10. A data instruction processing system, including the data instruction processing apparatus according to any one of claims 6-9 and a scheduling system.
CN201510980382.3A 2015-12-23 2015-12-23 Data command processing method, apparatus and system Active CN105653203B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510980382.3A CN105653203B (en) 2015-12-23 2015-12-23 Data command processing method, apparatus and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510980382.3A CN105653203B (en) 2015-12-23 2015-12-23 Data command processing method, apparatus and system

Publications (2)

Publication Number Publication Date
CN105653203A true CN105653203A (en) 2016-06-08
CN105653203B CN105653203B (en) 2019-06-07

Family

ID=56476796

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510980382.3A Active CN105653203B (en) 2015-12-23 2015-12-23 Data command processing method, apparatus and system

Country Status (1)

Country Link
CN (1) CN105653203B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102902512A (en) * 2012-08-31 2013-01-30 浪潮电子信息产业股份有限公司 Multi-thread parallel processing method based on multi-thread programming and message queue
US20140365597A1 (en) * 2013-06-07 2014-12-11 International Business Machines Corporation Processing Element Data Sharing
CN104424326A (en) * 2013-09-09 2015-03-18 华为技术有限公司 Data processing method and device
CN103984528A (en) * 2014-05-15 2014-08-13 中国人民解放军国防科学技术大学 Multithread concurrent data compression method based on FT processor platform

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
祝君 et al.: "Research on Parallelization of Compression Technology in Real-Time Historical Database" (实时历史数据库中压缩技术的并行化研究), Computer Technology and Development (计算机技术与发展) *

Also Published As

Publication number Publication date
CN105653203B (en) 2019-06-07

Similar Documents

Publication Publication Date Title
CN102289378B (en) Method for automatically generating APP (Application)
KR101699981B1 (en) Memory optimization of virtual machine code by partitioning extraneous information
CN108205481B (en) Application container instance creation method and device
KR101955145B1 (en) Application loading method and device
CN105512029A (en) Method, server and system for testing intelligent terminal
CN106648724B (en) Application program hot repair method and terminal
CN102904878A (en) Method and system for download of data package
CN109885324A (en) A kind of processing method, device, terminal and the storage medium of application program installation kit
CN105553738A (en) Heat loading method and device of configuration information and distributed cluster system
CN102799444B (en) The method of cross-platform packing program and device
CN103577225A (en) Software installation method and device
CN109344619A (en) The hot restorative procedure and device of application program
CN105630818A (en) Method and device for renaming files in batches
CN103761107A (en) Software package customizing device and method
CN109933350A (en) The method, apparatus and electronic equipment of embedded code in the application
CN105656970A (en) RAID (Redundant Array of Independent Disk) card configuring method and system and relevant device
CN103631869A (en) Method and device for releasing access pressure of server-side database
CN102685481A (en) Method and system for processing media files
CN105653203A (en) Data instruction processing method, device and system
CN101834885A (en) Method and device for downloading software
CN107124446A (en) Application program method for down loading, server and terminal
CN104954450A (en) File processing method and file processing device
US20150033204A1 (en) System-construction-procedure generating device, system-construction-procedure generating method, and program thereof
CN113807908A (en) Logistics declaration method, system, device, electronic equipment and storage medium thereof
CN101561884B (en) Method and device for achieving script in variable data printing process

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20220718

Address after: Room 801, 8th floor, No. 104, floors 1-19, building 2, yard 6, Jiuxianqiao Road, Chaoyang District, Beijing 100015

Patentee after: BEIJING QIHOO TECHNOLOGY Co.,Ltd.

Address before: 100088 room 112, block D, 28 new street, new street, Xicheng District, Beijing (Desheng Park)

Patentee before: BEIJING QIHOO TECHNOLOGY Co.,Ltd.

Patentee before: Qizhi software (Beijing) Co.,Ltd.