CN102426553A - Method and device for transmitting data to user based on double-cache pre-reading - Google Patents

Method and device for transmitting data to user based on double-cache pre-reading

Info

Publication number
CN102426553A
Authority
CN
China
Prior art keywords
user
free buffer
data
buffer
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011103576122A
Other languages
Chinese (zh)
Other versions
CN102426553B (en)
Inventor
李俊
韩坤鹏
马书超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201110357612.2A priority Critical patent/CN102426553B/en
Publication of CN102426553A publication Critical patent/CN102426553A/en
Application granted granted Critical
Publication of CN102426553B publication Critical patent/CN102426553B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a method and device for transmitting data to a user based on double-cache pre-reading. The method mainly comprises the following steps: establishing a plurality of groups of double caches and allocating one group of double caches to each user, wherein the group of double caches comprises a working cache and an idle cache and is used for storing the data pre-read for the user; establishing a plurality of processing threads, wherein each processing thread is respectively in charge of the data reading of a designated disk and serves designated users; and, for each user served by each processing thread, transmitting the data to that user through the working cache in the group of double caches allocated to the user, and, after the transmission of the data cached in the working cache is completed, controlling the switching between the working cache and the idle cache. The embodiments of the invention improve the throughput of conventional disk storage and eliminate the delay brought about by the use of a single cache.

Description

Method and apparatus for transmitting data to a user based on double-buffer pre-reading
Technical field
The present invention relates to the field of communication technology, and more particularly to a method and apparatus for transmitting data to a user based on double-buffer pre-reading.
Background technology
Streaming media refers to media formats played on the Internet by means of streaming transmission. Streaming media is also called streaming video: a provider typically packages a program into streaming media data packets with a video delivery server and sends them onto the network. After a user decompresses these packets with a decompression apparatus, the program can be played as it was before transmission. Streaming media applications are now widespread and have become a focus of the Internet.
At present, the principal factors limiting streaming media service performance include: the network storage bottleneck caused by the lagging development of hard disks; the technical bottleneck of streaming media on demand; and Internet bandwidth bottlenecks. Service performance here mainly comprises the speed and quality of the program service.
At present, mainstream 7200 RPM hard disks on the market read at about 150 MB/s, and the I/O (Input/Output) throughput of a disk array is generally about 400 MB/s. Although this is a large improvement over the past, compared with the growth of other aspects of computer performance and of the amount of digital information, hard disks still lag behind. The slow development of hard disks limits network storage capability and has become one of the principal factors affecting streaming media service speed.
Summary of the invention
Embodiments of the invention provide a method and apparatus for transmitting data to a user based on double-buffer pre-reading, so as to improve the throughput of existing disk storage and eliminate the delay caused by using a single buffer.
A method for transmitting data to a user based on double-buffer pre-reading comprises:
creating multiple groups of double buffers and allocating one group of double buffers to each user, wherein the group of double buffers comprises a working buffer and an idle buffer and is used to store the data pre-read for that user;
creating a plurality of processing threads, each processing thread being responsible for the data reading of a designated disk and serving designated users;
each processing thread, for each user it serves, transmitting data to the user through the working buffer in the group of double buffers allocated to that user, and, after the data buffered in the working buffer has been transmitted, controlling the switch between the working buffer and the idle buffer.
An apparatus for transmitting data to a user based on double-buffer pre-reading is characterized in that it comprises:
a buffer management module, used to create multiple groups of double buffers and allocate one group of double buffers to each user, the group of double buffers comprising a working buffer and an idle buffer and being used to store the data pre-read for that user;
a thread management module, used to create a plurality of processing threads, each processing thread being responsible for the data reading of a designated disk and serving designated users;
a data processing module, used, for each user served by each processing thread, to transmit data to the user through the working buffer in the group of double buffers allocated to that user and, after the data buffered in the working buffer has been transmitted, to control the switch between the working buffer and the idle buffer.
By allocating one group of double buffers to each user and storing in them the data pre-read for that user, the embodiments of the invention can significantly improve the throughput of existing disk storage without changing disk storage capacity, and effectively eliminate the delay caused by using a single buffer, thereby improving the overall service capability of a streaming media system.
Description of drawings
To illustrate the technical solutions of the embodiments of the invention more clearly, the accompanying drawings required for describing the embodiments are introduced briefly below. Obviously, the drawings described below are only some embodiments of the invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a processing flowchart of a method for transmitting data to a user based on double buffering and pre-reading provided by Embodiment 1 of the invention;
Fig. 2 is a time-axis diagram of the random fill strategy for the idle buffer provided by Embodiment 1;
Fig. 3 is a time-axis diagram of the exponential-backoff fill strategy provided by Embodiment 1;
Fig. 4 is a workflow diagram of a simulator program for double-buffer pre-reading and idle-buffer filling provided by Embodiment 2;
Fig. 5 is a structural diagram of an apparatus for transmitting data to a user based on double buffering and pre-reading provided by Embodiment 3.
Detailed Description of the Embodiments
To make the purpose, technical solutions, and advantages of the embodiments of the invention clearer, the technical solutions are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the invention, not all of them. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention, without creative effort, fall within the protection scope of the invention.
To ease understanding, the embodiments of the invention are further explained below with several specific examples in combination with the drawings; the examples do not limit the embodiments of the invention.
Embodiment 1
This embodiment provides a method for transmitting data to a user based on double buffering and pre-reading; its processing flow is shown in Fig. 1 and comprises the following steps:
Step 11: Create multiple groups of double buffers in the system in advance and allocate one group of double buffers to each user; the group of double buffers comprises a working buffer and an idle buffer and is used to store the data pre-read for that user.
From the user's perspective, streaming media is in most cases watched in order; even when the user jumps, playback over a short span of time is still mostly sequential.
Given the hierarchy of computer storage, the fact that disks are best suited to large sequential reads and writes, the user's habit of mostly continuous playback, and the relatively low overall read speed and access efficiency of hard disks, the embodiment of the invention proposes an informed file pre-reading method. The content to pre-read is selected in the storage order of the media file: the system reads data that may be used in the near future into the double buffers allocated to each user ahead of time, thereby improving hard disk access efficiency and the quality of user service.
If a single buffer were used to store the pre-read data, the system would experience a delay every time the buffered data ran out, waiting for the program to read the disk again and refill the buffer. To use the pre-read data effectively, the embodiment of the invention adopts double buffering: when the buffered data runs out, the system switches directly to the other buffer, which then supplies the data, eliminating the delay of a single buffer.
At initialization the system creates multiple groups of double buffers in advance and links them into a working list and an idle list; when no user is connected, all double-buffer groups belong to the idle list. When a user connects, one group of double buffers is taken from the idle list, allocated to that user to serve them, and added to the working list; when the user exits, the double buffers allocated to that user are removed from the working list and returned to the idle list. A group of double buffers comprises a working buffer and an idle buffer.
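A minimal C sketch of this idle-list/working-list bookkeeping might look as follows; the node layout and the names BufferGroup, user_connect, and user_exit are assumptions, not taken from the patent:

#include <stddef.h>

/* Idle-list/working-list bookkeeping for the double-buffer groups. */
typedef struct BufferGroup {
    struct BufferGroup *next;
    /* the double-buffer members follow (see the structure sketch below) */
} BufferGroup;

static BufferGroup *idle_list;   /* all groups start on the idle list */
static BufferGroup *work_list;

static BufferGroup *user_connect(void)
{
    BufferGroup *g = idle_list;
    if (g == NULL)
        return NULL;             /* no free double-buffer group left  */
    idle_list = g->next;         /* take the group off the idle list  */
    g->next   = work_list;       /* and add it to the working list    */
    work_list = g;
    return g;
}

static void user_exit(BufferGroup *g)
{
    BufferGroup **p = &work_list;
    while (*p && *p != g)        /* unlink g from the working list    */
        p = &(*p)->next;
    if (*p)
        *p = g->next;
    g->next   = idle_list;       /* return the group to the idle list */
    idle_list = g;
}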
A group of double buffers here means two dynamically allocated memory areas of identical size. The association established between the server and a user's client program is called a session; each session not only stores information such as the user's client address and port, the session state, and the media file, but is also the entry point to the double buffers, which exist as a structure member of the session. When a new user session is created, the malloc function is called to dynamically allocate address space for the double buffers. The members of the double-buffer structure are as follows:
(The structure definition is shown as an image in the original publication; its members are described below.)
As the structure shows, it records only the total size of the double buffers, the remaining data amount, the size of the single buffer currently in service, the start addresses of the two single buffers, and a pointer to the data to send next. LeftSize and SendPtr are updated after every send (LeftSize -= DATA_SIZE; SendPtr += DATA_SIZE, where DATA_SIZE is the amount of data sent each time). When LeftSize drops to 0, the double buffers are switched: BufferInUse is updated immediately (from 0 to 1, or from 1 to 0) and SendPtr is pointed at the start address of the single buffer switched to, i.e. SendPtr = BufferPtr[BufferInUse].
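A minimal C sketch consistent with that description follows. The member names LeftSize, SendPtr, BufferInUse, and BufferPtr come from the text; the exact types, the TotalSize field, and the assumption that the idle buffer is always full at switch time are mine:

#include <stddef.h>

#define BUF_SIZE  (1024 * 1024)  /* size of one single buffer (example) */
#define DATA_SIZE (8 * 1024)     /* amount sent each time (example)     */

typedef struct DoubleBuffer {
    size_t TotalSize;       /* total size of the two buffers            */
    size_t LeftSize;        /* data remaining in the buffer in service  */
    int    BufferInUse;     /* 0 or 1: which single buffer is serving   */
    char  *BufferPtr[2];    /* start addresses of the two buffers       */
    char  *SendPtr;         /* data to transmit next                    */
} DoubleBuffer;

/* Update after each send, switching buffers when LeftSize reaches 0;
 * assumes the idle buffer was pre-filled before the switch. */
static void after_send(DoubleBuffer *db)
{
    db->LeftSize -= DATA_SIZE;
    db->SendPtr  += DATA_SIZE;

    if (db->LeftSize == 0) {
        db->BufferInUse = 1 - db->BufferInUse;   /* 0 -> 1 or 1 -> 0    */
        db->SendPtr     = db->BufferPtr[db->BufferInUse];
        db->LeftSize    = BUF_SIZE;              /* idle buffer is full */
    }
}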
Step 12: Create one main thread and a plurality of processing threads. Each processing thread is responsible for the periodic data reading of a designated disk's I/O and periodically serves designated users; the main thread also judges in real time, from the system's current CPU (Central Processing Unit) occupancy and the average transfer rate of the total hard disk I/O, whether the current system state is busy or idle.
First, based on a chosen strategy, one main thread and a plurality of processing threads are created in the system. The main thread assigns each processing thread to handle the periodic data reading of part of the disks and, at the same time, to periodically serve designated users, filling the double buffers and transmitting data for them. The main thread redirects the data of any user session request to a processing thread, which performs the concrete disk reads and writes, the double-buffer pre-reading, and the sending of data to the user. For example, all users may be divided into N groups, each served by one of N processing threads. The main thread also calculates the average transfer rate of each hard disk's I/O from each processing thread's data reading and transmission, and sums the average transfer rates of all hard disks to obtain the average transfer rate of the total hard disk I/O. If the main thread then finds that the system's current CPU occupancy and the average transfer rate of the total hard disk I/O are both within a low range, it judges the current system state in real time to be idle; otherwise it judges the current system state to be busy.
Step 13: Each processing thread periodically traverses all the users it serves, performing data transmission and double-buffer switching for each of them, and fills all users' idle buffers according to a predefined idle-buffer fill strategy.
Each processing thread sets its timer according to the required bit rate. Specifically, if data is sent COUNT times per second and each send carries DATA_SIZE KB, the bit rate is COUNT*DATA_SIZE KB/s, so the required bit rate can be reached by adjusting COUNT and DATA_SIZE; conversely, given the bit rate, COUNT or DATA_SIZE can be deduced. From the bit rate and the single-buffer size BUF_SIZE of the double buffers, the length of time one single buffer can serve can be calculated.
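The arithmetic is simple enough to show directly; the numeric values below (2 Mbps video, 8 KB per send, 1 MB single buffer, hence COUNT = 32) are assumptions matching the example in Embodiment 2:

#include <stdio.h>

#define COUNT     32              /* sends per second (assumed)          */
#define DATA_SIZE (8 * 1024)      /* bytes per send: 8 KB                */
#define BUF_SIZE  (1024 * 1024)   /* single buffer: 1 MB                 */

int main(void)
{
    double rate_bps = (double)COUNT * DATA_SIZE * 8;     /* ~2 Mbps      */
    double serve_s  = (double)BUF_SIZE / (COUNT * DATA_SIZE);
    int    ticks    = BUF_SIZE / DATA_SIZE;              /* SERVE_COUNT  */

    printf("bit rate %.0f bps; one buffer serves %.2f s (%d sends)\n",
           rate_bps, serve_s, ticks);
    return 0;
}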
When the timer fires, the thread traverses all the users it serves. For each user it determines the buffer in use; while that buffer is not empty, data is taken from it, sent to the user over the network, and the associated records are updated. When the buffer in use becomes empty, the working buffer and the idle buffer are switched: the former idle buffer becomes the buffer in use, and the former buffer in use becomes the idle buffer.
Each user's idle buffer is filled according to a predefined idle-buffer fill strategy. The strategy must guarantee that the idle buffer is filled before the double-buffer switch time arrives, must equalize the read operations of the disks to balance the load across them, and should, as far as possible, perform the idle-buffer filling while the system is idle.
The idle-buffer fill strategies provided by the embodiment of the invention mainly include the random fill strategy, the state-detection fill strategy, and the exponential-backoff fill strategy, introduced in turn below.
1. Random fill strategy.
To cope with the pressure of many simultaneous disk reads when the server serves a large number of concurrent users, the embodiment of the invention proposes a random fill strategy for the idle buffer. The main idea is that after a user's last double-buffer switch the idle buffer is not filled immediately; instead a fill time is chosen at random, while still guaranteeing that the idle buffer is filled before the next double-buffer switch. The defining characteristic of this strategy is randomization: macroscopically, the disk read operations are spread evenly along the time axis.
On the time axis of the random fill strategy shown in Fig. 2, time point T1 is the last double-buffer switch; T2 is the time the idle buffer is filled; T3 is the safety time reserved for the idle buffer, whose purpose is to prevent the filling from not completing before the switch time T4 because the system is busy or the fill time T2 is too close to T4; and T4 is the next double-buffer switch. The switch action simply points the transmission pointer of the buffer structure in the session node at the other buffer and updates the relevant statistics.
The randomized filling of the idle buffer takes place between T1 and T3. Concretely: first calculate the number of timer interrupts INTR_NUM in the interval T3-T1; then, when a double-buffer switch occurs at T1, generate a random number m for the idle buffer of each user session, with 0 < m < INTR_NUM. On every timer expiry all users are traversed, and the interrupt handler decrements each visited user's m by 1. When a user's m reaches 0, the idle buffer is filled: a message is constructed for that user and passed to the read message queue, and the concrete disk read is carried out by the reading thread pool.
The effect of randomization here is to prevent all user sessions from filling their idle buffers at the same time, avoiding the high load that massed concurrent disk reads would bring, and thereby achieving load balancing.
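A compact C sketch of this countdown mechanism, assuming a hypothetical Session type and leaving the message-queue hand-off as a comment:

#include <stdlib.h>

/* Random fill countdown: at each double-buffer switch (T1) draw m with
 * 0 < m < INTR_NUM; every timer tick decrements m, and when it reaches
 * 0 the idle buffer is filled. */
typedef struct Session {
    int fill_countdown;    /* the random number m                     */
    /* ... client address/port, session state, DoubleBuffer, etc. ... */
} Session;

static void on_buffer_switch(Session *s, int intr_num)
{
    /* intr_num = timer interrupts in the interval T3 - T1; assumes >= 2 */
    s->fill_countdown = 1 + rand() % (intr_num - 1);  /* 0 < m < INTR_NUM */
}

static void on_timer_tick(Session *users[], int n)
{
    for (int i = 0; i < n; i++) {
        Session *s = users[i];
        if (s->fill_countdown > 0 && --s->fill_countdown == 0) {
            /* m hit 0: construct a fill message for this user and pass
             * it to the read message queue; the reading thread pool
             * performs the actual disk read into the idle buffer. */
        }
    }
}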
2. State-detection fill strategy.
Although the random fill strategy balances disk I/O load macroscopically, it ignores the state of the disk I/O and the CPU at the moment the double buffers switch, and may therefore miss the best opportunity to fill the idle buffer.
Combining the real-time state information of the disks and the CPU, the embodiment of the invention proposes an improved state-detection idle-buffer fill strategy. After the last double-buffer switch, the processing thread first obtains the current system state in real time from the main thread; if the state is idle, idle-buffer filling is carried out for all users immediately. If the state obtained in real time from the main thread is busy, idle-buffer filling for all users is stopped at once and handled by the random fill strategy described above.
In other words, if the system state obtained in real time from the main thread right after a double-buffer switch is busy, processing falls back directly to the random fill strategy.
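Building on the Session type and on_buffer_switch() of the previous sketch, the decision reduces to a few lines of C. The idle thresholds are illustrative assumptions, since the text only requires CPU occupancy and total disk transfer rate to be "within a low range":

#include <stdbool.h>

#define CPU_IDLE_LIMIT 0.5      /* 50 % CPU occupancy (assumed)  */
#define IO_IDLE_LIMIT  200.0    /* MB/s total disk I/O (assumed) */

static bool system_is_idle(double cpu_occupancy, double io_rate_mbs)
{
    return cpu_occupancy < CPU_IDLE_LIMIT && io_rate_mbs < IO_IDLE_LIMIT;
}

/* State-detection fill: right after a double-buffer switch, fill at once
 * if the system is idle, otherwise fall back to the random countdown. */
static void after_switch(Session *s, int intr_num,
                         double cpu_occupancy, double io_rate_mbs)
{
    if (system_is_idle(cpu_occupancy, io_rate_mbs)) {
        /* fill this user's idle buffer immediately */
    } else {
        on_buffer_switch(s, intr_num);   /* random fill strategy */
    }
}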
3. Exponential-backoff fill strategy.
After the last double-buffer switch, the processing thread first obtains the current system state in real time from the main thread; if the state is idle, idle-buffer filling is carried out for all users immediately. If the state obtained in real time from the main thread is busy, idle-buffer filling for all users is stopped at once.
When the current state of the overall system obtained in real time by a processing thread is busy, the thread stops the idle-buffer filling of the users it serves, sets for each user a safety time point T5 for idle-buffer filling before the next switch time between the working buffer and the idle buffer, and selects for each user an actual fill time T2 between T1 and T5.
When a user's actual fill time T2 arrives, the current state of the overall system is judged again: if it is idle, idle-buffer filling is performed for that user; if it is busy, a new actual fill time T3 is selected for that user between T2 and T5 according to the principle of exponential backoff in time.
The operation is repeated until, at the user's actual fill time point Tn, the current system state is idle, in which case the idle buffer is filled at Tn; or until the user's actual fill time point Tn would exceed the safety time point T5, in which case the idle buffer is filled at T5.
As shown in Fig. 3, T1 is the last double-buffer switch and T2 to T4 are called backoff time points; T5 is the safety time point and T6 is the next double-buffer switch. The periods before the safety time point T5 are called backoff periods, and their lengths grow exponentially: T2-T1 = UNIT_TIME (some small time period), T3-T2 = 2*UNIT_TIME, T4-T3 = 2^2*UNIT_TIME, T5-T4 <= 2^3*UNIT_TIME. Except for T1, each of T2 to T4 is a random time point within its backoff period.
The exponential-backoff procedure is: at time point T1, when the double-buffer switch completes, first judge whether the current system is idle; if so, perform the idle-buffer filling for all users at once; if not, enter the exponential-backoff stage. In this stage a fill time point is first picked at random for each user within the period T1 to T2; whenever a user's fill time arrives, the system is checked for busyness again, and if it is busy the process repeats in the remaining periods. In the last period before the safety time point T5 (T4 to T5 here), however, the idle-buffer filling is carried out at once whether or not the system is detected as busy.
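A self-contained C sketch of this backoff schedule; times are in timer ticks, UNIT_TIME is an assumed window length, and the busy_at callback stands in for "wait until tick t, then ask the main thread whether the system is busy":

#include <stdbool.h>
#include <stdlib.h>

#define UNIT_TIME 4   /* length of the first backoff window (assumed) */

static long pick_fill_tick(long t1, long t5, bool (*busy_at)(long tick))
{
    long window_start = t1;
    long window_len   = UNIT_TIME;

    for (;;) {
        long window_end = window_start + window_len;

        if (window_end >= t5) {
            /* last window before T5: fill unconditionally, no later than T5 */
            long span = t5 - window_start;
            return window_start + 1 + rand() % span;
        }

        long t = window_start + 1 + rand() % window_len;  /* random point */
        if (!busy_at(t))
            return t;                 /* system idle at t: fill there      */

        window_start = window_end;    /* still busy: back off              */
        window_len  *= 2;             /* window length grows exponentially */
    }
}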
The exponential-backoff strategy improves on the random and state-detection strategies: the idle-buffer fill time is chosen more reasonably, making better and more balanced use of the disks' I/O throughput and the CPU's capacity. It also has a drawback: the disk transfer rate and CPU occupancy must be obtained many times, and if obtaining this information is expensive, it may eat into the time available for serving users.
Embodiment 2
The fundamental purpose of the pre-reading simulator is to test, for a particular buffer size, the disk output capability that concurrent multi-user access can produce. It implements multithreaded processing, the double-buffer algorithm, random generation of media file names, and microsecond-level timing; the program is short, concise, and easy to understand.
Taking a server with three hard disks, one main process, and three processing threads as an example, the workflow of the double-buffer pre-reading and idle-buffer filling simulator provided by this embodiment is shown in Fig. 4; the concrete processing is as follows:
1) The program divides the users into three equal parts: users whose numbers are congruent to 0, 1, and 2 modulo 3 each form one group, handled by threads 0, 1, and 2 respectively. The three processing threads are each responsible for one hard disk: processing thread 0 for hard disk sdb, thread 1 for sdc, and thread 2 for sdd, so all users read the three hard disks evenly. A double buffer is also created for each user.
2) The random function srand() is seeded with the microsecond count of the current time, and a media file number and starting point are generated at random for each user in the three groups.
3) The thread counter is initialized to 0 and the three processing threads are created. After each processing thread is created the thread counter is incremented by 1; to prevent races, the thread information is locked and released again after the operation completes.
4) The three processing threads enter their processing flow.
5) The main process enters a waiting state. The condition it waits for is that the thread counter drops back to 0, i.e. all three processing threads have finished. From each processing thread's completion time, the main process first calculates the average transfer rate of each hard disk, then sums the transfer rates of the three hard disks to obtain the total disk transfer rate. The average elapsed time can also be calculated from the following formula:
data/t0 + data/t1 + data/t2 = 3*data/t  =>  t = 3*t0*t1*t2 / (t0*t1 + t0*t2 + t1*t2)
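In other words, t is the harmonic mean of the three per-thread times. As a worked example (values assumed for illustration): with t0 = 10 s, t1 = 12 s, and t2 = 15 s, t = 3*10*12*15 / (120 + 150 + 180) = 5400/450 = 12 s.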
Processing flow of a processing thread:
1) First receive the parameters passed by the main process, determine the hard disk this thread is responsible for, and set the path prefix variable accordingly.
2) Start this processing thread's timer by calling the Linux function gettimeofday() to obtain the current time.
3) Enter the bounded service loop. The iteration count is the total amount of data requested by each user divided by the amount sent per iteration; for example, with 120 MB of total data and 8 KB per send, 15360 iterations are needed.
4) Inside that loop, traverse all the users served by this processing thread. For each user, taking the random fill strategy for the idle buffer as an example, the following three cases are handled (a condensed sketch of this per-user loop follows the list):
1. Judge whether the user's working buffer is empty. If it is, switch the double buffers and set the idle buffer's fill random number (a specially generated random number that is decremented on every iteration; when it reaches 0, the idle buffer is filled). The range of this number is 0 to SERVE_COUNT-2, where SERVE_COUNT is the number of sends one single buffer can serve and the 2 subtracted is the reserved safety margin. For example, with a single buffer of 1 MB (so the double buffer is 2 MB) and 8 KB sent each time, SERVE_COUNT is 1MB/8KB = 128.
2. Judge whether the idle-buffer fill number is less than or equal to 0. If so, fill the idle buffer: read BUF_SIZE bytes from disk and place them in the idle buffer.
3. Send a small amount of data to the user over the network. The amount chosen here is 8 KB because, at a bit rate of 2 Mbps, that is the size of an average frame of video data.
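The condensed per-user loop below reuses the DoubleBuffer type and the BUF_SIZE/DATA_SIZE constants from the earlier sketches; fill_from_disk() and net_send() stand in for the real disk read and network send and are assumptions, not functions from the patent:

#include <stddef.h>
#include <stdlib.h>

#define SERVE_COUNT (BUF_SIZE / DATA_SIZE)       /* 1 MB / 8 KB = 128 */

void fill_from_disk(char *dst, size_t n);        /* assumed helper */
void net_send(const char *src, size_t n);        /* assumed helper */

typedef struct SimUser {
    DoubleBuffer buf;
    int fill_number;     /* random fill countdown, 0..SERVE_COUNT-2 */
    int idle_filled;     /* has the idle buffer been refilled yet?  */
} SimUser;

static void serve_user(SimUser *u)
{
    /* case 1: working buffer empty -> switch and draw a new countdown */
    if (u->buf.LeftSize == 0) {
        u->buf.BufferInUse = 1 - u->buf.BufferInUse;
        u->buf.SendPtr     = u->buf.BufferPtr[u->buf.BufferInUse];
        u->buf.LeftSize    = BUF_SIZE;
        u->fill_number     = rand() % (SERVE_COUNT - 1);  /* 0..126 */
        u->idle_filled     = 0;
    }

    /* case 2: countdown expired -> fill the idle buffer from disk */
    if (!u->idle_filled && u->fill_number-- <= 0) {
        fill_from_disk(u->buf.BufferPtr[1 - u->buf.BufferInUse], BUF_SIZE);
        u->idle_filled = 1;
    }

    /* case 3: send one 8 KB slice to the user */
    net_send(u->buf.SendPtr, DATA_SIZE);
    u->buf.SendPtr  += DATA_SIZE;
    u->buf.LeftSize -= DATA_SIZE;
}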
The double-buffer pre-reading and idle-buffer filling program above, together with the upper-layer streaming media monitoring and routing, constitutes the streaming media double-buffer pre-reading apparatus. In practice, the double-buffer pre-reading and idle-buffer filling program sits at the operating system layer and exists as a module. When the upper-layer streaming media router asks the lower layer to send data, control turns automatically to this program, which performs an aggregated pre-read, manages the data in double-buffer fashion, and then returns the data required by the upper-layer application. The data the upper-layer program needs in the short term is likewise supplied from the double buffers.
Embodiment 3
This embodiment provides an apparatus for transmitting data to a user based on double buffering and pre-reading; its concrete structure is shown in Fig. 5 and comprises the following modules:
Buffer management module 51, used to create multiple groups of double buffers and allocate one group of double buffers to each user, the group of double buffers comprising a working buffer and an idle buffer and being used to store the data pre-read for that user;
Thread management module 52, used to create a plurality of processing threads, each processing thread being responsible for the data reading of a designated disk and serving designated users;
Data processing module 53, used, for each user served by each processing thread, to transmit data to the user through the working buffer in the group of double buffers allocated to that user and, after the data buffered in the working buffer has been transmitted, to control the switch between the working buffer and the idle buffer.
Further, the apparatus may also comprise:
Idle-buffer filling module 54, used to set the idle-buffer fill strategy based on the principles of equalizing the data pre-read operations of each disk and/or performing disk data pre-reading while the overall system is idle, the fill strategy also needing to guarantee that the idle buffer is filled before the next switch between the working buffer and the idle buffer;
and, for each user served by each processing thread, to perform disk data pre-reading and data filling into that user's idle buffer according to the idle-buffer fill strategy.
Concretely, the idle-buffer filling module 54 is also used, when the idle-buffer fill strategy is the random fill strategy, for each user served by each processing thread, after the time T1 of the last switch between the working buffer and the idle buffer, to set for each user a safety time point T3 for idle-buffer filling before the next switch time between the working buffer and the idle buffer, and to select at random for each user an actual fill time T2 between T1 and T3;
and, as each processing thread periodically traverses the users it serves, when each user's actual fill time T2 arrives, to perform disk data pre-reading and data filling into that user's idle buffer.
Concretely, the idle-buffer filling module 54 is also used, when the idle-buffer fill strategy is the state-detection idle-buffer fill strategy, to obtain the current state of the overall system in real time after the time T1 of the last switch between the working buffer and the idle buffer, the current state being obtained from the overall system's current central processing unit occupancy and the average transfer rate of the total hard disk input/output, and, when the current state is idle, to perform idle-buffer filling for each user served by each processing thread;
when the current state of the overall system obtained in real time is busy, to stop the idle-buffer filling of the users each processing thread serves, set for each user a safety time point T3 for idle-buffer filling before the next switch time between the working buffer and the idle buffer, and select at random for each user an actual fill time T2 between T1 and T3;
and, as each processing thread periodically traverses the users it serves, when each user's actual fill time T2 arrives, to perform disk data pre-reading and data filling into that user's idle buffer.
Concretely, the idle-buffer filling module 54 is also used, when the idle-buffer fill strategy is the exponential-backoff fill strategy, to obtain the current state of the overall system in real time after the time T1 of the last switch between the working buffer and the idle buffer, the current state being obtained from the overall system's current central processing unit occupancy and the average transfer rate of the total hard disk input/output, and, when the current state is idle, to perform idle-buffer filling for each user served by each processing thread;
when the current state of the overall system obtained in real time is busy, to stop the idle-buffer filling of the users each processing thread serves, set for each user a safety time point T5 for idle-buffer filling before the next switch time between the working buffer and the idle buffer, and select for each user an actual fill time T2 between T1 and T5;
when a user's actual fill time T2 arrives, to judge the current state of the overall system again: if the current state is idle, to perform idle-buffer filling for that user; if the current state is busy, to select a new actual fill time T3 for that user between T2 and T5 according to the principle of exponential backoff in time;
and to repeat the above operation until, at that user's actual fill time point Tn, the current system state is idle, then performing the idle-buffer filling for the user at Tn; or until the user's actual fill time point Tn would exceed the safety time point T5, then performing the idle-buffer filling for the user at T5.
The concrete process by which the apparatus of this embodiment transmits data to a user is the same as in the method embodiments above and is not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the flows of the methods in the embodiments above can be accomplished by a computer program instructing the relevant hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the method embodiments above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
In summary, by allocating one group of double buffers to each user and storing in them the data pre-read for that user, the embodiments of the invention can significantly improve the throughput of existing disk storage without changing its capacity, and effectively eliminate the delay caused by using a single buffer, thereby improving the overall service capability of a streaming media system.
By setting the idle-buffer fill strategy on the principles of equalizing the data pre-read operations of each disk and/or performing disk data pre-reading while the overall system is idle, the embodiments of the invention equalize the read operations across the disks' I/O, achieving load balancing as a whole, and guarantee, as far as possible, that disk reading and idle-buffer filling take place while the system is idle.
For the case of multiple disks directly attached to a server, the embodiments of the invention assign the read/write operations of a designated hard disk to each processing thread, avoiding both the contention that mixed multithreaded processing might cause and the low disk utilization that single-threaded processing can cause.
The above are merely preferred embodiments of the present invention, and the protection scope of the present invention is not limited thereto; any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. The protection scope of the present invention shall therefore be determined by the scope of the claims.

Claims (10)

1. A method for transmitting data to a user based on double-buffer pre-reading, characterized in that it comprises:
creating multiple groups of double buffers and allocating one group of double buffers to each user, wherein said group of double buffers comprises a working buffer and an idle buffer and is used to store the data pre-read for that user;
creating a plurality of processing threads, each processing thread being responsible for the data reading of a designated disk and serving designated users;
each said processing thread, for each user it serves, transmitting data to the user through the working buffer in the group of double buffers allocated to that user and, after the data buffered in said working buffer has been transmitted, controlling the switch between said working buffer and the idle buffer.
2. The method for transmitting data to a user based on double-buffer pre-reading according to claim 1, characterized in that the method further comprises:
setting an idle-buffer fill strategy based on the principles of equalizing the data pre-read operations of each disk and/or performing disk data pre-reading while the overall system is idle, said idle-buffer fill strategy also needing to guarantee that the idle buffer is filled before the next switch between the working buffer and the idle buffer;
each said processing thread, for each user it serves, performing disk data pre-reading and data filling into each user's idle buffer according to said idle-buffer fill strategy.
3. The method for transmitting data to a user based on double-buffer pre-reading according to claim 1, characterized in that each said processing thread performing, for each user it serves, disk data pre-reading and data filling into each user's idle buffer according to said idle-buffer fill strategy comprises:
when said idle-buffer fill strategy is the random fill strategy, each said processing thread, for each user it serves, after the time T1 of the last switch between the working buffer and the idle buffer, setting for each user a safety time point T3 for idle-buffer filling before the next switch time between the working buffer and the idle buffer, and selecting at random for each user an actual fill time T2 between said T1 and said T3;
each said processing thread periodically traversing the users it serves and, when each user's said actual fill time T2 arrives, performing disk data pre-reading and data filling into each user's idle buffer.
4. The method for transmitting data to a user based on double-buffer pre-reading according to claim 1, characterized in that each said processing thread performing, for each user it serves, disk data pre-reading and data filling into each user's idle buffer according to said idle-buffer fill strategy comprises:
when said idle-buffer fill strategy is the state-detection idle-buffer fill strategy, after the time T1 of the last switch between the working buffer and the idle buffer, each said processing thread obtaining the current state of the overall system in real time, said current state being obtained from the overall system's current central processing unit occupancy and the average transfer rate of the total hard disk input/output; and, when said current state is idle, each said processing thread performing idle-buffer filling for each user it serves;
when the current state of the overall system obtained in real time by each said processing thread is busy, stopping the idle-buffer filling of the users it serves, setting for each user a safety time point T3 for idle-buffer filling before the next switch time between the working buffer and the idle buffer, and selecting at random for each user an actual fill time T2 between said T1 and said T3;
each said processing thread periodically traversing the users it serves and, when each user's said actual fill time T2 arrives, performing disk data pre-reading and data filling into each user's idle buffer.
5. The method for transmitting data to a user based on double-buffer pre-reading according to claim 1, characterized in that each said processing thread performing, for each user it serves, disk data pre-reading and data filling into each user's idle buffer according to said idle-buffer fill strategy comprises:
when said idle-buffer fill strategy is the exponential-backoff fill strategy, after the time T1 of the last switch between the working buffer and the idle buffer, each said processing thread obtaining the current state of the overall system in real time, said current state being obtained from the overall system's current central processing unit occupancy and the average transfer rate of the total hard disk input/output; and, when said current state is idle, each said processing thread performing idle-buffer filling for each user it serves;
when the current state of the overall system obtained in real time by each said processing thread is busy, stopping the idle-buffer filling of the users it serves, setting for each user a safety time point T5 for idle-buffer filling before the next switch time between the working buffer and the idle buffer, and selecting for each user an actual fill time T2 between said T1 and said T5;
when a certain user's said actual fill time T2 arrives, judging the current state of the overall system again: if the current state is idle, performing idle-buffer filling for said certain user; if the current state is busy, selecting a new actual fill time T3 for said certain user between said T2 and said T5 according to the principle of exponential backoff in time;
repeating the above operation until, at said certain user's actual fill time point Tn, the current system state is idle, then performing the idle-buffer filling for said certain user at said actual fill time point Tn; or until said certain user's actual fill time point Tn exceeds said safety time point T5, then performing the idle-buffer filling for said certain user at said safety time point T5.
6. An apparatus for transmitting data to a user based on double-buffer pre-reading, characterized in that it comprises:
a buffer management module, used to create multiple groups of double buffers and allocate one group of double buffers to each user, said group of double buffers comprising a working buffer and an idle buffer and being used to store the data pre-read for that user;
a thread management module, used to create a plurality of processing threads, each processing thread being responsible for the data reading of a designated disk and serving designated users;
a data processing module, used, for each user served by each said processing thread, to transmit data to the user through the working buffer in the group of double buffers allocated to that user and, after the data buffered in said working buffer has been transmitted, to control the switch between said working buffer and the idle buffer.
7. The apparatus for transmitting data to a user based on double-buffer pre-reading according to claim 6, characterized in that the apparatus further comprises:
an idle-buffer filling module, used to set an idle-buffer fill strategy based on the principles of equalizing the data pre-read operations of each disk and/or performing disk data pre-reading while the overall system is idle, said idle-buffer fill strategy also needing to guarantee that the idle buffer is filled before the next switch between the working buffer and the idle buffer;
and, for each user served by each said processing thread, to perform disk data pre-reading and data filling into each user's idle buffer according to said idle-buffer fill strategy.
8. The apparatus for transmitting data to a user based on double-buffer pre-reading according to claim 6, characterized in that:
said idle-buffer filling module is also used, when said idle-buffer fill strategy is the random fill strategy, for each user served by each said processing thread, after the time T1 of the last switch between the working buffer and the idle buffer, to set for each user a safety time point T3 for idle-buffer filling before the next switch time between the working buffer and the idle buffer, and to select at random for each user an actual fill time T2 between said T1 and said T3;
and, as each said processing thread periodically traverses the users it serves, when each user's said actual fill time T2 arrives, to perform disk data pre-reading and data filling into each user's idle buffer.
9. The apparatus for transmitting data to a user based on double-buffer pre-reading according to claim 6, characterized in that:
said idle-buffer filling module is also used, when said idle-buffer fill strategy is the state-detection idle-buffer fill strategy, to obtain the current state of the overall system in real time after the time T1 of the last switch between the working buffer and the idle buffer, said current state being obtained from the overall system's current central processing unit occupancy and the average transfer rate of the total hard disk input/output, and, when said current state is idle, to perform idle-buffer filling for each user served by each said processing thread;
when the current state of the overall system obtained in real time is busy, to stop the idle-buffer filling of the users each said processing thread serves, set for each user a safety time point T3 for idle-buffer filling before the next switch time between the working buffer and the idle buffer, and select at random for each user an actual fill time T2 between said T1 and said T3;
and, as each said processing thread periodically traverses the users it serves, when each user's said actual fill time T2 arrives, to perform disk data pre-reading and data filling into each user's idle buffer.
10. The apparatus for transmitting data to a user based on double-buffer pre-reading according to claim 6, characterized in that:
said idle-buffer filling module is also used, when said idle-buffer fill strategy is the exponential-backoff fill strategy, to obtain the current state of the overall system in real time after the time T1 of the last switch between the working buffer and the idle buffer, said current state being obtained from the overall system's current central processing unit occupancy and the average transfer rate of the total hard disk input/output, and, when said current state is idle, to perform idle-buffer filling for each user served by each said processing thread;
when the current state of the overall system obtained in real time is busy, to stop the idle-buffer filling of the users each said processing thread serves, set for each user a safety time point T5 for idle-buffer filling before the next switch time between the working buffer and the idle buffer, and select for each user an actual fill time T2 between said T1 and said T5;
when a certain user's said actual fill time T2 arrives, to judge the current state of the overall system again: if the current state is idle, to perform idle-buffer filling for said certain user; if the current state is busy, to select a new actual fill time T3 for said certain user between said T2 and said T5 according to the principle of exponential backoff in time;
and to repeat the above operation until, at said certain user's actual fill time point Tn, the current system state is idle, then performing the idle-buffer filling for said certain user at said actual fill time point Tn; or until said certain user's actual fill time point Tn exceeds said safety time point T5, then performing the idle-buffer filling for said certain user at said safety time point T5.
CN201110357612.2A 2011-11-11 2011-11-11 Method and device for transmitting data to user based on double-cache pre-reading Expired - Fee Related CN102426553B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110357612.2A CN102426553B (en) 2011-11-11 2011-11-11 Method and device for transmitting data to user based on double-cache pre-reading

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110357612.2A CN102426553B (en) 2011-11-11 2011-11-11 Method and device for transmitting data to user based on double-cache pre-reading

Publications (2)

Publication Number Publication Date
CN102426553A true CN102426553A (en) 2012-04-25
CN102426553B CN102426553B (en) 2014-05-28

Family

ID=45960541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110357612.2A Expired - Fee Related CN102426553B (en) 2011-11-11 2011-11-11 Method and device for transmitting data to user based on double-cache pre-reading

Country Status (1)

Country Link
CN (1) CN102426553B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1195135A (en) * 1997-03-28 1998-10-07 国际商业机器公司 Method and apparatus for decreasing thread switch latency in multithread processor
US6587937B1 (en) * 2000-03-31 2003-07-01 Rockwell Collins, Inc. Multiple virtual machine system with efficient cache memory design
US20060117218A1 (en) * 2004-11-12 2006-06-01 Nec Electronics Corporation Multi-processing system and multi-processing method
CN101060418A (en) * 2007-05-24 2007-10-24 上海清鹤数码科技有限公司 Special disk reading and writing system suitable for IPTV direct broadcast server with time shift
CN101446932A (en) * 2008-12-24 2009-06-03 北京中星微电子有限公司 Method and device for transmitting audio data
US20100268890A1 (en) * 2009-04-15 2010-10-21 International Business Machines Corporation Information handling system with immediate scheduling of load operations in a dual-bank cache with single dispatch into write/read data flow
US20100268887A1 (en) * 2009-04-15 2010-10-21 International Business Machines Corporation Information handling system with immediate scheduling of load operations in a dual-bank cache with dual dispatch into write/read data flow
CN102081509A (en) * 2009-11-30 2011-06-01 英业达股份有限公司 Method and device for reading RAID1 (Redundant Array of Inexpensive Disk 1) equipment

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577158A (en) * 2012-07-18 2014-02-12 阿里巴巴集团控股有限公司 Data processing method and device
CN103577158B (en) * 2012-07-18 2017-03-01 阿里巴巴集团控股有限公司 Data processing method and device
CN105579979A (en) * 2013-07-25 2016-05-11 格罗方德半导体公司 Three-dimensional processing system having multiple caches that can be partitioned, conjoined, and managed according to more than one set of rules and/or configurations
CN105579979B (en) * 2013-07-25 2019-08-30 格罗方德半导体公司 Three-dimensional process system with the multiple cachings that can divide according to more than one set of rule and/or configuration, combine and manage
CN106130791A (en) * 2016-08-12 2016-11-16 飞思达技术(北京)有限公司 Quality-of-service based buffer memory device service ability traversal test system and method
CN106951488A (en) * 2017-03-14 2017-07-14 海尔优家智能科技(北京)有限公司 A kind of log recording method and device
CN107908751A (en) * 2017-11-17 2018-04-13 赛凡信息科技(厦门)有限公司 A kind of optimization method of distributive catalogue of document system level quota
CN109918017A (en) * 2017-12-12 2019-06-21 北京机电工程研究所 Data dispatching method and device
CN108073706A (en) * 2017-12-20 2018-05-25 北京四方继保自动化股份有限公司 A kind of method of analogue system history library longitudinal data transverse directionization displaying
CN113419879A (en) * 2021-07-09 2021-09-21 中国电信股份有限公司 Message processing method, device, equipment and storage medium
CN113419879B (en) * 2021-07-09 2023-08-04 天翼云科技有限公司 Message processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN102426553B (en) 2014-05-28

Similar Documents

Publication Publication Date Title
CN102426553B (en) Method and device for transmitting data to user based on double-cache pre-reading
CN106201349B (en) A kind of method and apparatus handling read/write requests in physical host
US20170192823A1 (en) Network storage device using dynamic weights based on resource utilization
US20190354317A1 (en) Operation instruction scheduling method and apparatus for nand flash memory device
CN104067576B (en) For the system in transmission over networks simultaneous streaming
Chen et al. A scalable video-on-demand service for the provision of VCR-like functions
CA2362727C (en) Queuing architecture with multiple queues and method for statistical disk scheduling for video servers
CN100484094C (en) Method and system for actively managing central queue buffer allocation
CN108984280B (en) Method and device for managing off-chip memory and computer-readable storage medium
Chang et al. Effective memory use in a media server
CN102638402B (en) Method and device for filling data in streaming media double-buffering technology
CN108369530A (en) Control method, equipment and the system of reading and writing data order in non-volatile cache transfer bus framework
CN103761051A (en) Performance optimization method for multi-input/output stream concurrent writing based on continuous data
CN101080001B (en) Device for realizing balance of media content in network TV system and its method
CN103345451A (en) Data buffering method in multi-core processor
CN104123228B (en) A kind of data-storage system and its application method
US20090183166A1 (en) Algorithm to share physical processors to maximize processor cache usage and topologies
CN102053924A (en) Solid state memory with reduced number of partially filled pages
CN102882809B (en) Network speed-limiting method and device based on message buffering
CN104994135B (en) The method and device of SAN and NAS storage architectures is merged in storage system
CN109451008B (en) Multi-tenant bandwidth guarantee framework and cost optimization method under cloud platform
CN108768898A (en) A kind of method and its device of network-on-chip transmitting message
CN102223510A (en) Method and device for scheduling cache
JP4516395B2 (en) Memory management system with link list processor
CN100488165C (en) Stream scheduling method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140528

Termination date: 20191111

CF01 Termination of patent right due to non-payment of annual fee