CN105843911B - Data caching implementation method, system and data server


Info

Publication number
CN105843911B
Authority
CN
China
Prior art keywords
data
user data
header information
command
command header
Prior art date
Legal status
Active
Application number
CN201610171603.7A
Other languages
Chinese (zh)
Other versions
CN105843911A (en)
Inventor
王旋
Current Assignee
Sina Technology China Co Ltd
Original Assignee
Sina Technology China Co Ltd
Priority date
Filing date
Publication date
Application filed by Sina Technology China Co Ltd filed Critical Sina Technology China Co Ltd
Priority to CN201610171603.7A priority Critical patent/CN105843911B/en
Publication of CN105843911A publication Critical patent/CN105843911A/en
Application granted granted Critical
Publication of CN105843911B publication Critical patent/CN105843911B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/24Querying
    • G06F16/245Query processing
    • G06F16/2455Query execution
    • G06F16/24552Database cache management

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

An embodiment of the present invention provides a data caching implementation method, system, and data server. The method comprises: obtaining user data submitted in the local region or user data forwarded by the data servers of other regions; adding command header information and an end mark to the user data to obtain cached data, and appending the cached data to the buffer queue of the local region, the cached data comprising the command header information, the user data, and the end mark, and the command header information comprising a command number indicating the action to be executed; and reading cached data from the buffer queue of the local region, parsing the command header information and the user data, and executing the corresponding action on the parsed user data according to the command number. The method ensures that data is synchronized to the caches of servers in different locations in a timely and accurate manner, improves the speed and efficiency of data caching, and guarantees the synchronism of the caches.

Description

Data caching implementation method, system and data server
Technical field
The present invention relates to the field of data processing, and in particular to a data caching implementation method, system, and data server suitable for large-scale data caching.
Background art
PHP (Hypertext Preprocessor), as a general-purpose open-source scripting language, can provide interfaces for accessing the relational database management system MySQL, and MySQL can use the SQL language to access the database.
To give access to large-scale data from different locations, in addition to server support it is also necessary to copy the data multiple times by various means and cache the copies on the servers of each machine room - for example using a Redundant Array of Independent Disks (RAID), the synchronization mechanism of MySQL, the synchronization mechanism of the Lightweight Directory Access Protocol (LDAP), or the Google File System (GFS). All of these schemes solve the data replication problem at some level.
Different caching schemes can be used for the needs of different applications, for example:
1. After reading data from the database, cache it in memory or on disk and set an expiration time; once the cache expires, regenerate it.
2. After reading data from the database, cache it in memory or on disk; when the data is updated, delete the cache and regenerate it, or regenerate it on the next read.
These caching schemes are all fairly simple to implement, and existing code is full of their various realizations. However, when such caches cannot solve the problem, caches have to be added on top of caches, and caches have to be built on different servers and in different machine rooms.
At this point the situation becomes complicated: the particular caching scheme is no longer what matters most; what matters most is a reliable mechanism that keeps the data synchronized across caches of various forms in different locations. How to cache data to different locations in a timely and synchronized manner, so that users can obtain the data they need in time, has therefore always been a technical problem in urgent need of a solution.
Summary of the invention
Embodiments of the present invention provide a data caching implementation method, system, and data server, so as to solve the prior-art problem that data cannot be guaranteed to be cached synchronously at different locations; they can improve the speed and efficiency of data caching and guarantee the synchronism of the caches.
In one aspect, an embodiment of the present invention provides a data caching implementation method, comprising:
obtaining user data submitted in the local region or user data forwarded by the data servers of other regions;
adding command header information and an end mark to the user data to obtain cached data, and appending the cached data to the buffer queue of the local region, wherein the cached data comprises the command header information, the user data, and the end mark, and the command header information comprises a command number indicating the action to be executed;
reading cached data from the buffer queue of the local region, parsing the command header information and the user data, and executing the corresponding action on the parsed user data according to the command number.
In some optional embodiments, executing the corresponding action on the parsed user data according to the command number specifically comprises:
if it is determined from the command number that the action to be executed on the parsed user data is pipeline output, outputting the parsed user data to the corresponding application program;
if it is determined from the command number that the action to be executed on the parsed user data is forwarding, forwarding the parsed user data to the data server of another region.
In some optional embodiments, outputting the parsed user data to the corresponding application program specifically comprises:
determining the command header information for pipeline output corresponding to the command number, adding that command header information to the parsed user data, and then outputting it to the corresponding application program;
and forwarding the parsed user data to the data server of another region specifically comprises:
determining the forwarding destination address corresponding to the command number and the new command header information to be used after forwarding, and supplying the parsed user data and the new command header information to the data server of the other region corresponding to the forwarding destination address.
In some optional embodiments, cached data is read from the buffer queue of the local region by a read-queue tool;
the read-queue tool is configured with a q file that records the file path and file name of the local buffer queue, a p file that records the processing position within the local buffer queue, an l file that guarantees that only a single instance runs, and a q.conf file that configures the command header information for pipeline output, the forwarding destination address, and the new command header information to be used after forwarding.
In some optional embodiments, the command header information and the end mark are added to the user data by a write-queue tool, wherein the command header information comprises a version number, a command number, reserved bytes, and a data length.
In another aspect, an embodiment of the present invention provides a data server, comprising:
an obtaining module, configured to obtain user data submitted in the local region or user data forwarded by the data servers of other regions;
a caching module, configured to add command header information and an end mark to the user data to obtain cached data and to append the cached data to the buffer queue of the local region, wherein the cached data comprises the command header information, the user data, and the end mark, and the command header information comprises a command number indicating the action to be executed;
an execution module, configured to read cached data from the buffer queue of the local region, parse the command header information and the user data, and execute the corresponding action on the parsed user data according to the command number.
In some optional embodiments, the execution module is specifically configured to:
output the parsed user data to the corresponding application program if it is determined from the command number that the action to be executed on the parsed user data is pipeline output;
forward the parsed user data to the data server of another region if it is determined from the command number that the action to be executed on the parsed user data is forwarding.
In some optional embodiments, the execution module is specifically configured to:
determine the command header information for pipeline output corresponding to the command number, add that command header information to the parsed user data, and then output it to the corresponding application program;
and, when forwarding the parsed user data to the data server of another region,
determine the forwarding destination address corresponding to the command number and the new command header information to be used after forwarding, and supply the parsed user data and the new command header information to the data server of the other region corresponding to the forwarding destination address.
In some optional embodiments, a read-queue tool is provided in the execution module, and cached data is read from the buffer queue of the local region by the read-queue tool;
the read-queue tool is configured with a q file that records the file path and file name of the local buffer queue, a p file that records the processing position within the local buffer queue, an l file that guarantees that only a single instance runs, and a q.conf file that configures the command header information for pipeline output, the forwarding destination address, and the new command header information to be used after forwarding.
In some optional embodiments, a write-queue tool is provided in the caching module, and the command header information and the end mark are added to the user data by the write-queue tool, wherein the command header information comprises a version number, a command number, reserved bytes, and a data length.
In another aspect, an embodiment of the present invention provides a data caching implementation system comprising at least two of the above data servers.
The above technical solutions have the following beneficial effects: a data server can obtain user data submitted in its local region as well as user data forwarded by the data servers of other regions; it adds command header information and an end mark to the user data to obtain cached data and appends the cached data to the local buffer queue; the command number contained in the command header information indicates the action to be executed, which facilitates later reading and use of the cached data. The user data cached in the local buffer queue includes data submitted locally as well as data forwarded by other servers; when cached data is read from the local buffer queue, the action corresponding to the command number in the command header information is executed on the parsed user data, so that the data can be output in time or forwarded to other data servers for caching. Data is thus cached synchronously to data servers in different locations, with relatively high caching speed and efficiency and good synchronization.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of the data caching implementation method in Embodiment one of the present invention;
Fig. 2 is a flowchart of the data caching implementation method in Embodiment two of the present invention;
Fig. 3 is a flowchart of the data caching implementation method in Embodiment two of the present invention;
Fig. 4 is a structural diagram of the data caching implementation system in an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of the data server in an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments of the present invention. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To solve the prior-art problem that data cannot be guaranteed to be cached synchronously at different locations, and to guarantee efficient and accurate synchronized caching of data, embodiments of the present invention provide a data caching implementation method that achieves cache synchronization through a queue mechanism and, at the same time, modularizes each complex functional unit, so as to guarantee the synchronism of the caches. Detailed descriptions are given below through specific embodiments.
Embodiment one
Embodiment one of the present invention provides a data caching implementation method whose flow is shown in Fig. 1 and which comprises the following steps:
Step S101: obtain user data submitted in the local region or user data forwarded by a data server of another region.
The data server can obtain user data submitted in its local region and cache it for subsequent use; it can also obtain user data forwarded by the data servers of other regions and cache it for subsequent use.
Step S102: add command header information and an end mark to the user data to obtain cached data. The cached data comprises the command header information, the user data, and the end mark; the command header information contains a command number indicating the action to be executed.
After the data server obtains the user data and before caching it, command header information is added to the user data so that the cached data can be identified when it is later read, and an end mark is added to the user data to mark the end of the data.
The command header information and the end mark are added to the user data by a write-queue tool; the command header information comprises a version number, a command number, reserved bytes, and a data length.
Step S103: append the cached data to the buffer queue of the local region.
After the cached data has been appended to the buffer queue, it can be read and used by a read-queue tool.
Step S104: read cached data from the buffer queue of the local region.
Cached data is read from the buffer queue of the local region by the read-queue tool.
Preferably, the following files can be configured for the read-queue tool to control the reading of cached data from the buffer queue:
the q file, which records the file path and file name of the local buffer queue;
the p file, which records the processing position within the local buffer queue;
the l file, which guarantees that only a single instance runs;
the q.conf file, which configures the command header information for pipeline output, the forwarding destination address, and the new command header information to be used after forwarding.
Step S105: parse the command header information and the user data.
After reading the cached data, the read-queue tool parses it into the command header information and the user data, and parses the command number contained in the command header information.
Step S106: execute the corresponding action on the parsed user data according to the command number.
The action to be executed on the parsed user data can be determined from the parsed command number; the action indicated by the command number may include at least one of pipeline output and forwarding.
If it is determined from the parsed command number that the action to be executed on the parsed user data is pipeline output, the parsed user data is output to the corresponding application program.
Specifically and optionally, the command header information for pipeline output corresponding to the command number is determined, that command header information is added to the parsed user data, and the result is output to the corresponding application program.
If it is determined from the parsed command number that the action to be executed on the parsed user data is forwarding, the parsed user data is forwarded to the data server of another region.
Specifically and optionally, the forwarding destination address corresponding to the command number and the new command header information to be used after forwarding are determined, and the parsed user data together with the new command header information is supplied to the data server of the other region corresponding to the forwarding destination address.
The method of Embodiment one above describes, from the perspective of a single data server, how the user data submitted in the local region and the user data forwarded by other data servers are cached and processed. A data system may contain multiple such data servers, and each data server can act both as a cache server and as a forwarding server. Embodiment two below gives a detailed description using a system that contains at least two data servers as an example.
Embodiment two
The principle by which the data caching implementation method provided by Embodiment two of the present invention achieves large-scale cache synchronization between the data servers is shown in Fig. 2.
A user in the region to which the first data server belongs submits user data to the first data server. The write-queue (qw) tool in the first data server processes the obtained user data to produce cached data and appends it to the buffer queue of the local region.
The write-queue (qw) tool is responsible for keeping the user data submitted to it safe and for confirming intact receipt to the submitting side (i.e., returning ok). The qw tool serves as a Common Gateway Interface (CGI) module of the data server (for example an Apache server or another data server; the name Apache originally referred to 'a patchy server', a server made of patches), so it can directly receive HyperText Transfer Protocol (HTTP) requests. The file name of the buffer-queue file in which cached data is saved can be specified in the write-queue configuration file (the qw.conf configuration file).
Each time the qw tool receives user data, it adds the command header information and the end mark and places the resulting cached data at the end of the buffer queue. That is, the qw tool prepends a command header of fixed length to each piece of user data and appends 'QEND' at the end of the user data to mark the end of the data, so that individual pieces of data can be distinguished.
The format of the cached data is shown in Table 1 below.
Table 1
Command header information (its last 4 bytes record the data length): 16 bytes
User data: indefinite length, as specified by the preceding 4-byte data length field
End mark 'Q' 'E' 'N' 'D': 1 byte each, 4 bytes in total
For example, after a typical piece of user data is submitted to the data server, the Uniform Resource Locator (URL) obtained after the script conversion is as follows:
http://127.0.0.1/cgi-bin/qw?data=%01%00%00%00%D2%07%00%00%00%00%00%00%03%00%00%00abc
The command header information is set so that the subsequent read-queue (qr) tool can conveniently identify each piece of cached data when reading from the buffer queue. A piece of cached data contains at least two parts: the command header information and the user data itself; an end mark can also be used to indicate that this piece of cached data ends here.
As shown in Table 1, the command header information has 16 bytes: bytes 1-4 are the version number, which after encoding is %01%00%00%00; bytes 5-8 are the command number, which is agreed with the qr tool; bytes 9-12 are reserved for the time being; bytes 13-16 are the length of the user data.
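For illustration only, the following minimal Python sketch builds and decodes such a record under the assumptions above; the little-endian byte order is inferred from the example URL, and the function names are illustrative rather than taken from the patent:
    import struct

    HEADER_FMT = '<IIII'   # version, command number, reserved, data length: 4 bytes each, little-endian
    END_MARK = b'QEND'

    def pack_record(version, command, user_data):
        # 16-byte command header + user data + 'QEND' end mark, as in Table 1
        header = struct.pack(HEADER_FMT, version, command, 0, len(user_data))
        return header + user_data + END_MARK

    def unpack_header(header_bytes):
        # split the 16-byte command header into its four fields
        return struct.unpack(HEADER_FMT, header_bytes)

    record = pack_record(1, 2002, b'abc')
    print(unpack_header(record[:16]))   # (1, 2002, 0, 3), matching %01.., %D2%07.., %00.., %03.. in the URL above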
The qr tool can be configured with four basic control files:
q file: records the file path and file name of the current buffer queue, so that the qr tool knows where to read.
p file: records how far processing of the current buffer queue has progressed, so that the qr tool knows which piece of cached data is currently being processed and which piece is to be processed next.
l file: a lock file ensuring that only one instance runs, which guarantees that the qr tool processes only one piece of cached data at a time and fetches the next piece only after the current one has been handled.
q.conf file: configures the action that the qr tool executes for each kind of command header information, for example whether the action is forwarding or pipeline output; when the action is forwarding, the forwarding destination address and the new command header; when the action is pipeline output, the corresponding pipeline-output command header.
As shown in Fig. 2, after cached data has been appended to the buffer queue of the local region, the read-queue (qr) tool can read and process each piece of cached data in the buffer queue. Specifically, the currently readable cached data and its related information can be read from the buffer-queue file.
When reading cached data from the buffer queue, the qr tool performs the following operations: for the currently readable cached data, after confirming that the command header information is correct, it reads from the buffer-queue file the data length recorded by the qw tool; it reads the user data from the buffer-queue file according to that data length; it reads the 'QEND' mark from the buffer-queue file, which indicates that reading of this piece of user data is finished; it then updates the position in the p file and goes on to read the next piece of cached data. After reading, the qr tool processes the user data according to the command header information and the agreed action. Two actions are currently supported: forwarding and pipeline output.
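A minimal Python sketch of this read loop follows, under the same assumptions as above (16-byte little-endian header, 'QEND' end mark); the file handling and the handle callback are illustrative only:
    import struct

    HEADER_SIZE = 16
    END_MARK = b'QEND'

    def read_records(queue_path, pos_path, handle):
        # resume from the offset stored in the p file, then scan records until the end of the queue file
        try:
            with open(pos_path) as p:
                pos = int(p.read() or 0)
        except FileNotFoundError:
            pos = 0
        with open(queue_path, 'rb') as q:
            q.seek(pos)
            while True:
                header = q.read(HEADER_SIZE)
                if len(header) < HEADER_SIZE:
                    break                                    # no complete record left to read
                version, command, _reserved, length = struct.unpack('<IIII', header)
                user_data = q.read(length)                   # read exactly 'data length' bytes of user data
                assert q.read(4) == END_MARK                 # 'QEND' marks the end of this record
                handle(command, user_data)                   # forward or pipe out, per the q.conf configuration
                with open(pos_path, 'w') as p:
                    p.write(str(q.tell()))                   # update the p file so processing can resume here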
Forwarding means sending this piece of user data as-is to another buffer queue; the command header can be modified during forwarding. Pipeline output means handing the user data to an external application program through standard output; the external application in turn reads the data from its standard input in the following format: the first 4 bytes are the data length, followed by the data content, which contains the data command header information. After processing the data, the application returns the result through standard output, and the result occupies one byte.
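A minimal sketch of the external application's side of this pipeline protocol, in Python, assuming the 4-byte length covers the command header plus the user data and that a single byte '0' signals success (the patent does not define the result value):
    import struct
    import sys

    def serve():
        # read framed records from standard input: 4-byte length, then that many bytes (command header + user data)
        while True:
            length_bytes = sys.stdin.buffer.read(4)
            if len(length_bytes) < 4:
                break
            (length,) = struct.unpack('<I', length_bytes)
            payload = sys.stdin.buffer.read(length)
            header, user_data = payload[:16], payload[16:]
            # ... process user_data here ...
            sys.stdout.buffer.write(b'0')                    # one-byte result, as described above
            sys.stdout.buffer.flush()

    if __name__ == '__main__':
        serve()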
When the qr tool reads a data record from the buffer queue, it processes it according to the command header information. If the action is pipeline output, the other logic modules of the local region read the data from standard input and then perform the relevant processing on the record. If the action indicated by the command header information is forwarding, the qr tool forwards the record according to the forwarding destination address in the q.conf file; after receiving the user data, the qw tool on the corresponding data server writes it into that server's buffer queue, where it is read out by that server's qr tool and processed according to the same format.
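Putting the two actions together, the following Python sketch shows how such a dispatcher might handle one record; the structure of the actions table, its keys, and the consumer command are illustrative assumptions rather than interfaces defined by the patent:
    import subprocess
    import urllib.parse
    import urllib.request

    def execute_action(command, user_data, actions):
        # actions maps a command number to its configured handling, in the spirit of the q.conf file
        entry = actions[command]
        if entry['op'] == 'pipe':
            # pipeline output: prepend the configured pipeline-output command header and hand the framed
            # record (4-byte length followed by the content) to the consuming application's standard input
            content = entry['out_header'] + user_data
            subprocess.run(entry['app'], input=len(content).to_bytes(4, 'little') + content)
        elif entry['op'] == 'forward':
            # forwarding: attach the new command header and submit the data to the destination server's qw CGI
            payload = urllib.parse.quote_from_bytes(entry['new_header'] + user_data)
            urllib.request.urlopen('http://%s:%d/cgi-bin/qw?data=%s' % (entry['host'], entry['port'], payload))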
As shown in Fig. 2, the first data server can supply the user data to the other logic modules of the local region, and it can also forward the user data to the second data server. The qw tool of the second data server converts the user data into cached data and writes it into the buffer queue of its own region for its qr tool to process; the qr tool of the second data server can likewise deliver the user data to the other logic modules of its region for output and use, or forward the user data again to a third data server.
Data can be forwarded to the server of a central region or to the servers of other regions. When it is forwarded to the server of the central region, the qr tool of the central-region server takes the data, executes the action according to the command header information forwarded from the previous buffer queue, and updates the user data in its buffer queue. When it is forwarded to the server of another region, the qr tool of that region's server likewise takes the data, executes the action according to the command header information forwarded from the previous buffer queue, and updates the user data in its buffer queue.
Optionally, if forwarding to more data servers is required, the corresponding forwarding destination addresses and command header information are configured in the q.conf file, and the qr tool and the qw tool are deployed on the data servers corresponding to those forwarding destination addresses; the caches can then be synchronized according to the queue mechanism described above.
For example, a typical q.conf file contains entries with the following fields:
In the q.conf file, op denotes the action, func is the interface function that outputs the data, host indicates the forwarding destination address, port is the port to which the data is forwarded, and newid indicates the new command header given to the data after forwarding.
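The configuration listing itself is not reproduced in this text; the following is only a plausible reconstruction of one such entry using the field names just described (the syntax, the func name, and the port value are assumptions):
    id = 1001
    op = pipe, forward
    func = output_data        # interface function used for pipeline output (name assumed)
    host = xxx.xxx.xxx.xxx    # forwarding destination address
    port = 8080               # port to forward to (value assumed)
    newid = 1002              # new command number given to the data after forwarding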
From the q.conf configuration it can be seen that this buffer queue supports only one command ID, 1001, and that the action sequence corresponding to that command ID contains two actions: the first outputs the data content, i.e. pipeline output, and the second forwards the data to the buffer queue of the history-record data server while changing the command ID to 1002.
Whenever a data server needs to forward data to the forwarding destination address 'xxx.xxx.xxx.xxx', it hands the user data, under command 1001, to the buffer queue of its local region, and the local buffer queue then forwards the user data, under command 1002, to the data server corresponding to the forwarding destination address for unified processing.
This queuing approach ensures that, when a user submits user data, every data server can synchronously update and process its local cache.
Besides handling cache synchronization, when data processing is performed on each data server the overall processing task is divided into several different buffer queues. Following the mechanism above, the overall function can be cut into several small functions, each of which is independent, and other modules can directly ask a queue to do the processing; this reduces the coupling of complex functions, lowers the interdependence between modules, and makes the system more reliable and stable. As shown in Fig. 3, data processing can be divided into n processing queues - queue 1, queue 2, ..., queue n - and these n queues handle the processing and output the data results.
In the above method of the embodiments of the present invention, the synchronized-caching scheme is to design a queue mechanism that synchronizes the caches, while each complex functional unit can also be modularized into queues. The method implements data caching through the read-write mechanism of the queues and can handle cache synchronization in a first-in-first-out manner; the logic of the individual data servers is mutually independent, the queue mechanism can also be used inside a single data server, complex functions can be divided up, and the system is more stable and robust.
Based on the same inventive concept, an embodiment of the present invention also provides a data caching implementation system whose structure is shown in Fig. 4 and which includes at least two data servers 10.
The data server 10 is configured to obtain user data submitted in the local region or user data forwarded by the data servers of other regions; to add command header information and an end mark to the user data to obtain cached data and to append the cached data to the buffer queue of the local region, wherein the cached data comprises the command header information, the user data, and the end mark, and the command header information comprises a command number indicating the action to be executed; and to read cached data from the buffer queue of the local region, parse the command header information and the user data, and execute the corresponding action on the parsed user data according to the command number.
Preferably, the above data server 10 is specifically configured to output the parsed user data to the corresponding application program if it is determined from the command number that the action to be executed on the parsed user data is pipeline output, and to forward the parsed user data to the data server of another region if it is determined from the command number that the action to be executed on the parsed data is forwarding.
Preferably, the above data server 10 is specifically configured to determine the command header information for pipeline output corresponding to the command number, add that command header information to the parsed user data, and output it to the corresponding application program.
Preferably, the above data server 10 is specifically configured to determine the forwarding destination address corresponding to the command number and the new command header information to be used after forwarding, and to supply the parsed user data and the new command header information to the data server of the other region corresponding to the forwarding destination address.
Preferably, a read-queue tool is provided in the above data server 10, and cached data is read from the local buffer queue by the read-queue tool; the read-queue tool is configured with a q file that records the file path and file name of the local buffer queue, a p file that records the processing position within the local buffer queue, an l file that guarantees that only a single instance runs, and a q.conf file that configures the command header information for pipeline output, the forwarding destination address, and the new command header information to be used after forwarding.
Preferably, a write-queue tool is provided in the above data server 10, and the command header information and the end mark are added to the user data by the write-queue tool, wherein the command header information comprises a version number, a command number, reserved bytes, and a data length.
The structure of the data server 10 is shown in Fig. 5 and comprises an obtaining module 101, a caching module 102, and an execution module 103.
The obtaining module 101 is configured to obtain user data submitted in the local region or user data forwarded by the data servers of other regions.
The caching module 102 is configured to add command header information and an end mark to the user data to obtain cached data and to append the cached data to the buffer queue of the local region, wherein the cached data comprises the command header information, the user data, and the end mark, and the command header information comprises a command number indicating the action to be executed.
The execution module 103 is configured to read cached data from the buffer queue of the local region, parse the command header information and the user data, and execute the corresponding action on the parsed user data according to the command number.
Preferably, the above execution module 103 is specifically configured to output the parsed user data to the corresponding application program if it is determined from the command number that the action to be executed on the parsed user data is pipeline output, and to forward the parsed user data to the data server of another region if it is determined from the command number that the action to be executed on the parsed data is forwarding.
Preferably, the above execution module 103 is specifically configured to determine the command header information for pipeline output corresponding to the command number, add that command header information to the parsed user data, and output it to the corresponding application program.
Preferably, the above execution module 103 is specifically configured to determine the forwarding destination address corresponding to the command number and the new command header information to be used after forwarding, and to supply the parsed user data and the new command header information to the data server of the other region corresponding to the forwarding destination address.
Preferably, a read-queue tool is provided in the above execution module 103, and cached data is read from the local buffer queue by the read-queue tool; the read-queue tool is configured with a q file that records the file path and file name of the local buffer queue, a p file that records the processing position within the local buffer queue, an l file that guarantees that only a single instance runs, and a q.conf file that configures the command header information for pipeline output, the forwarding destination address, and the new command header information to be used after forwarding.
Preferably, a write-queue tool is provided in the above caching module 102, and the command header information and the end mark are added to the user data by the write-queue tool, wherein the command header information comprises a version number, a command number, reserved bytes, and a data length.
Those skilled in the art will further appreciate that the various illustrative logical blocks, units, and steps listed in the embodiments of the present invention can be implemented by electronic hardware, computer software, or a combination of both. To clearly show the interchangeability of hardware and software, the various illustrative components, units, and steps above have been described generally in terms of their functions. Whether such functions are implemented by hardware or by software depends on the specific application and the design requirements of the overall system. Those skilled in the art can use various methods to implement the described functions for each specific application, but such implementations should not be understood as going beyond the protection scope of the embodiments of the present invention.
The various illustrative logical blocks or units described in the embodiments of the present invention can be implemented or operated by a general-purpose processor, a digital signal processor, an application-specific integrated circuit (ASIC), a field-programmable gate array or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination of the above. A general-purpose processor may be a microprocessor; optionally, it may also be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented by a combination of computing devices, for example a digital signal processor and a microprocessor, multiple microprocessors, one or more microprocessors together with a digital signal processor core, or any other similar configuration.
The steps of the method or algorithm described in the embodiments of the present invention may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM, flash memory, ROM, EPROM, EEPROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. Illustratively, the storage medium may be connected to the processor so that the processor can read information from, and write information to, the storage medium. Optionally, the storage medium may also be integrated into the processor. The processor and the storage medium may be arranged in an ASIC, and the ASIC may be arranged in a user terminal. Optionally, the processor and the storage medium may also be arranged in different components in the user terminal.
In one or more exemplary designs, the functions described in the embodiments of the present invention may be implemented in hardware, software, firmware, or any combination of the three. If implemented in software, these functions may be stored on a computer-readable medium or transferred onto a computer-readable medium in the form of one or more instructions or code. Computer-readable media include computer storage media and communication media that facilitate the transfer of a computer program from one place to another. A storage medium may be any available medium that a general-purpose or special-purpose computer can access. For example, such computer-readable media may include, but are not limited to, RAM, ROM, EEPROM, CD-ROM or other optical disc storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store program code in the form of instructions or data structures and that can be read by a general-purpose or special-purpose computer or processor. In addition, any connection may properly be termed a computer-readable medium; for example, if the software is transmitted from a web site, server, or other remote source through a coaxial cable, fiber-optic cable, twisted pair, or digital subscriber line (DSL), or wirelessly via infrared, radio, or microwave, these are also included in the definition of computer-readable medium. Disks and discs include compact discs, laser discs, optical discs, DVDs, floppy disks, and Blu-ray discs; disks usually reproduce data magnetically, while discs usually reproduce data optically with lasers. Combinations of the above may also be included in computer-readable media.
The specific embodiments described above further explain in detail the purpose, technical solutions, and beneficial effects of the present invention. It should be understood that the above are merely specific embodiments of the present invention and are not intended to limit the protection scope of the present invention; any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (6)

1. A data caching implementation method, characterized by comprising:
obtaining user data submitted in the local region or user data forwarded by the data servers of other regions;
adding command header information and an end mark to the user data to obtain cached data, and appending the cached data to the buffer queue of the local region, wherein the cached data comprises the command header information, the user data, and the end mark, and the command header information comprises a command number indicating the action to be executed;
reading cached data from the buffer queue of the local region, parsing the command header information and the user data, and executing the corresponding action on the parsed user data according to the command number,
wherein, if it is determined from the command number that the action to be executed on the parsed user data is pipeline output, the command header information for pipeline output corresponding to the command number is determined, that command header information is added to the parsed user data, and the result is output to the corresponding application program;
and, if it is determined from the command number that the action to be executed on the parsed user data is forwarding, the forwarding destination address corresponding to the command number and the new command header information to be used after forwarding are determined, and the parsed user data and the new command header information are supplied to the data server of the other region corresponding to the forwarding destination address.
2. The method according to claim 1, characterized in that cached data is read from the buffer queue of the local region by a read-queue tool;
the read-queue tool is configured with a q file that records the file path and file name of the local buffer queue, a p file that records the processing position within the local buffer queue, an l file that guarantees that only a single instance runs, and a q.conf file that configures the command header information for pipeline output, the forwarding destination address, and the new command header information to be used after forwarding.
3. The method according to any one of claims 1-2, characterized in that the command header information and the end mark are added to the user data by a write-queue tool, wherein the command header information comprises a version number, a command number, reserved bytes, and a data length.
4. A data server, characterized by comprising:
an obtaining module, configured to obtain user data submitted in the local region or user data forwarded by the data servers of other regions;
a caching module, configured to add command header information and an end mark to the user data to obtain cached data and to append the cached data to the buffer queue of the local region, wherein the cached data comprises the command header information, the user data, and the end mark, and the command header information comprises a command number indicating the action to be executed;
an execution module, configured to read cached data from the buffer queue of the local region, parse the command header information and the user data, and execute the corresponding action on the parsed user data according to the command number, wherein the parsed user data is output to the corresponding application program if it is determined from the command number that the action to be executed on the parsed user data is pipeline output, and the parsed user data is forwarded to the data server of another region if it is determined from the command number that the action to be executed on the parsed user data is forwarding;
wherein a read-queue tool is provided in the execution module, and cached data is read from the buffer queue of the local region by the read-queue tool;
the read-queue tool is configured with a q file that records the file path and file name of the local buffer queue, a p file that records the processing position within the local buffer queue, an l file that guarantees that only a single instance runs, and a q.conf file that configures the command header information for pipeline output, the forwarding destination address, and the new command header information to be used after forwarding.
5. The data server according to claim 4, characterized in that a write-queue tool is provided in the caching module, and the command header information and the end mark are added to the user data by the write-queue tool, wherein the command header information comprises a version number, a command number, reserved bytes, and a data length.
6. A data caching implementation system, characterized by comprising at least two data servers according to any one of claims 4-5.
CN201610171603.7A 2016-03-24 2016-03-24 Data buffer storage implementation method, system and data server Active CN105843911B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610171603.7A CN105843911B (en) 2016-03-24 2016-03-24 Data buffer storage implementation method, system and data server

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610171603.7A CN105843911B (en) 2016-03-24 2016-03-24 Data buffer storage implementation method, system and data server

Publications (2)

Publication Number Publication Date
CN105843911A CN105843911A (en) 2016-08-10
CN105843911B true CN105843911B (en) 2019-05-17

Family

ID=56583218

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610171603.7A Active CN105843911B (en) 2016-03-24 2016-03-24 Data buffer storage implementation method, system and data server

Country Status (1)

Country Link
CN (1) CN105843911B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110990343B (en) * 2019-11-25 2023-08-04 中国银行股份有限公司 Parameter undistorted transmission method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1725186A (en) * 2004-07-23 2006-01-25 中兴通讯股份有限公司 Multiple data base data syne method
CN101431476A (en) * 2008-12-12 2009-05-13 中国工商银行股份有限公司 Data transmission method based on message queue, server and system
CN104202375A (en) * 2014-08-22 2014-12-10 广州华多网络科技有限公司 Method and system for synchronous data
CN104363303A (en) * 2014-11-28 2015-02-18 东莞中国科学院云计算产业技术创新与育成中心 Method for synchronizing asynchronously cached data


Also Published As

Publication number Publication date
CN105843911A (en) 2016-08-10

Similar Documents

Publication Publication Date Title
JP5632010B2 (en) Virtual hard drive management as a blob
US9208168B2 (en) Inter-protocol copy offload
US9569400B2 (en) RDMA-optimized high-performance distributed cache
EP3637275B1 (en) Directory leasing
CN102402596B (en) A kind of reading/writing method of master slave separation database and system
US9645753B2 (en) Overlapping write detection and processing for sync replication
CN109074297A (en) Data integrity inspection and faster application recovery are enabled in the data set of synchronous duplication
US10318194B2 (en) Method and an apparatus, and related computer-program products, for managing access request in multi-tenancy environments
CN102272751B (en) Data integrity in a database environment through background synchronization
US9244822B2 (en) Automatic object model generation
JP5611889B2 (en) Data transfer device, data transmission system, and data transmission method
WO2014186940A1 (en) Hard disk and data processing method
CN109710185A (en) Data processing method and device
WO2018022931A1 (en) Multi-part upload
CN109005226A (en) The acquisition methods of sensing data, acquisition system and relevant apparatus in server
CN105138679A (en) Data processing system and method based on distributed caching
CN104580501A (en) Http interface dynamic publishing method and system based on reflex mechanism
JP2016053946A (en) Supporting rma api over active message
CN101741866A (en) On-line storage system and method
WO2005046146A1 (en) Method, system, and program for constructing a packet
CN105843911B (en) Data buffer storage implementation method, system and data server
US9111598B2 (en) Increased I/O rate for solid state storage
US20150381727A1 (en) Storage functionality rule implementation
CN106534249A (en) File transmission system based on file straight-through technology
WO2015055036A1 (en) Backup object sending and backup method, production end, disaster recovery end, and system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right
TR01 Transfer of patent right

Effective date of registration: 20230427

Address after: Room 501-502, 5/F, Sina Headquarters Scientific Research Building, Block N-1 and N-2, Zhongguancun Software Park, Dongbei Wangxi Road, Haidian District, Beijing, 100193

Patentee after: Sina Technology (China) Co.,Ltd.

Address before: 100080, International Building, No. 58 West Fourth Ring Road, Haidian District, Beijing, 20 floor

Patentee before: Sina.com Technology (China) Co.,Ltd.