CN102426553B - Method and device for transmitting data to user based on double-cache pre-reading - Google Patents

Method and device for transmitting data to user based on double-cache pre-reading Download PDF

Info

Publication number
CN102426553B
CN102426553B CN201110357612.2A
Authority
CN
China
Prior art keywords
user
free buffer
time
data
filling
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201110357612.2A
Other languages
Chinese (zh)
Other versions
CN102426553A (en)
Inventor
李俊
韩坤鹏
马书超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CN201110357612.2A priority Critical patent/CN102426553B/en
Publication of CN102426553A publication Critical patent/CN102426553A/en
Application granted granted Critical
Publication of CN102426553B publication Critical patent/CN102426553B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a method and device for transmitting data to a user based on double-cache pre-reading. The method mainly comprises the following steps: creating a plurality of groups of double caches and allocating one group of double caches to each user, wherein each group comprises a working cache and an idle cache and stores the data pre-read for that user; creating a plurality of processing threads, wherein each processing thread is responsible for reading data from a designated disk and serves designated users; and, for each user served by each processing thread, transmitting data to that user through the working cache of the double-cache group allocated to the user and, after the data buffered in the working cache has been transmitted, switching between the working cache and the idle cache. Embodiments of the invention improve the throughput of conventional disk storage and eliminate the delay caused by using a single cache.

Description

Method and apparatus for transmitting data to a user based on double-cache pre-reading
Technical field
The present invention relates to the field of communication technology, and more particularly to a method and apparatus for transmitting data to a user based on double-cache pre-reading.
Background art
Streaming media refers to media formats that are played over the Internet by means of streaming transmission. Streaming media is also called streaming video: typically a vendor's video server program packages video into streaming media packets and sends them over the network, and after the user's client decompresses these packets the program plays as it did before transmission. Streaming media applications are now very widespread and have become a focus of the Internet.
At present, the main factors limiting streaming media service performance include: the network storage bottleneck caused by the lagging development of hard disks; technical bottlenecks of streaming media on demand; and Internet bandwidth bottlenecks. The service performance referred to above mainly covers service speed and program quality.
At present, mainstream 7200 RPM hard disks on the market read at about 150 MB/s, and the IO (Input/Output) throughput of a disk array is generally around 400 MB/s. Although this is a large improvement over the past, hard disk development still lags far behind the growth of computer performance and the volume of digital information. The slow development of hard disks limits network storage capability and has become one of the main factors affecting streaming media service speed.
Summary of the invention
Embodiments of the present invention provide a method and apparatus for transmitting data to a user based on double-cache pre-reading, so as to improve the throughput of existing disk storage and eliminate the delay caused by using a single cache.
A method for transmitting data to a user based on double-cache pre-reading comprises:
creating a plurality of groups of double caches and allocating one group of double caches to each user, where each group of double caches comprises a working cache and an idle cache and is used to store the data pre-read for that user;
creating a plurality of processing threads, where each processing thread is responsible for reading data from a designated disk and serves designated users;
for each user served by each processing thread, transmitting data to that user through the working cache of the double-cache group allocated to the user and, after the data buffered in the working cache has been transmitted, switching between the working cache and the idle cache.
An apparatus for transmitting data to a user based on double-cache pre-reading comprises:
a cache management module, configured to create a plurality of groups of double caches and allocate one group of double caches to each user, where each group of double caches comprises a working cache and an idle cache and is used to store the data pre-read for that user;
a thread management module, configured to create a plurality of processing threads, where each processing thread is responsible for reading data from a designated disk and serves designated users;
a data processing module, configured, for each user served by each processing thread, to transmit data to that user through the working cache of the double-cache group allocated to the user and, after the data buffered in the working cache has been transmitted, to switch between the working cache and the idle cache.
By allocating one group of double caches to each user and using the double caches to store the data pre-read for that user, embodiments of the present invention can significantly improve the throughput of existing disk storage without changing its capacity, and effectively eliminate the delay caused by using a single cache, thereby improving the overall service capability of a streaming media system.
Brief description of the drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a processing flowchart of a method for transmitting data to a user based on double caches and pre-reading, provided by Embodiment One of the present invention;
Fig. 2 is a timeline diagram of a random idle-cache filling strategy provided by Embodiment One of the present invention;
Fig. 3 is a timeline diagram of an exponential backoff filling strategy provided by Embodiment One of the present invention;
Fig. 4 is a workflow diagram of a simulation program for double-cache pre-reading and idle-cache filling provided by Embodiment Two of the present invention;
Fig. 5 is a structural diagram of an apparatus for transmitting data to a user based on double caches and pre-reading, provided by Embodiment Three of the present invention.
Detailed description of the embodiments
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described below clearly and completely in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
To facilitate understanding of the embodiments of the present invention, several specific embodiments are further explained below with reference to the accompanying drawings; these embodiments do not limit the embodiments of the present invention.
Embodiment One
This embodiment provides a method for transmitting data to a user based on double caches and pre-reading; its processing flow, shown in Fig. 1, comprises the following steps:
Step 11: pre-create a plurality of groups of double caches in the system and allocate one group of double caches to each user, where each group of double caches comprises a working cache and an idle cache and is used to store the data pre-read for that user.
From the user's point of view, when playing streaming media the user watches in order in most cases; even when jump operations occur, playback over a short time span is still mostly sequential.
Given the hierarchy of computer storage, the fact that disks are best suited to large sequential reads and writes, the habit of mostly continuous playback, and the present situation of low overall hard disk read speed and low access efficiency, the embodiment of the present invention proposes an informed file pre-reading method: the content to be pre-read is selected in the storage order of the media file, and the system reads data that may be used in the future into the double caches allocated to each user in advance, thereby improving hard disk access efficiency and the quality of service for users.
If a single cache were used to store the pre-read data, the system would experience a delay every time the cached data ran out, waiting for the program to read the disk again and refill the cache. To use the pre-read data effectively, the embodiment of the present invention adopts a double-cache technique: when the cached data runs out, the system switches directly to the other cache and serves data from it, which eliminates the delay caused by using a single cache.
At initialization the system pre-creates a plurality of groups of double caches and links them into a work list and an idle list; when no user is being served, all double-cache groups belong to the idle list. When a user arrives, one double-cache group is taken from the idle list, allocated to that user for service, and added to the work list; when a user exits, the double caches allocated to that user are removed from the work list and returned to the idle list. Each group of double caches comprises a working cache and an idle cache.
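As a minimal sketch of this pool management (assuming simple singly linked lists; the names CachePool, DoubleCacheGroup, pool_acquire and the next field are illustrative assumptions, not taken from the patent):

    #include <stddef.h>

    typedef struct DoubleCacheGroup DoubleCacheGroup;
    struct DoubleCacheGroup {
        /* cache fields omitted; see the structure sketch below */
        DoubleCacheGroup *next;       /* link in the idle or work list */
    };

    typedef struct {
        DoubleCacheGroup *idle_list;  /* groups serving no user */
        DoubleCacheGroup *work_list;  /* groups currently serving users */
    } CachePool;

    /* When a user arrives: take one group from the idle list, add it to the work list. */
    DoubleCacheGroup *pool_acquire(CachePool *p) {
        DoubleCacheGroup *g = p->idle_list;
        if (g == NULL) return NULL;   /* no idle group left */
        p->idle_list = g->next;
        g->next = p->work_list;
        p->work_list = g;
        return g;
    }

When a user exits, the symmetric operation removes the group from the work list and pushes it back onto the idle list.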
A group of double caches consists of two dynamically allocated memory areas of equal size. The association the server establishes with a user's client program is called a session; each session stores not only the user's client address and port, the session state, the media file and other information, but also serves as the entry point to the double caches, which exist as a structure member of the session. When a new user session is created, malloc is called to dynamically allocate address space for the double caches. The members of the double-cache structure are as follows:
(The structure definition is shown as a figure in the original publication; a C sketch reconstructed from the member description below follows.)
As the structure of the double cache shows, it records only the total size of the double cache, the amount of data remaining, which single cache is currently serving, the start addresses of the two single caches, and the address of the data to be sent next. LeftSize and SendPtr are updated after every send (LeftSize -= DATA_SIZE and SendPtr += DATA_SIZE, where DATA_SIZE is the amount of data sent each time). When LeftSize drops to 0, the double caches are switched: BufferInUse is updated immediately (from 0 to 1, or from 1 to 0) and SendPtr is pointed at the start address of the single cache switched to, i.e. SendPtr = BufferPtr[BufferInUse].
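Based on the member description above, a minimal C sketch of the structure and its per-send update might look as follows (field types, the TotalSize name and the reset of LeftSize are assumptions; LeftSize, SendPtr, BufferInUse, BufferPtr and DATA_SIZE are the names used in the text):

    #include <stddef.h>

    #define DATA_SIZE (8 * 1024)   /* bytes per send; example value from Embodiment Two */

    typedef struct {
        size_t  TotalSize;         /* total size of the double cache (assumed name) */
        size_t  LeftSize;          /* data remaining in the cache currently in use */
        int     BufferInUse;       /* 0 or 1: which single cache is serving */
        char   *BufferPtr[2];      /* start addresses of the two single caches */
        char   *SendPtr;           /* address of the data to send next */
    } DoubleCache;

    /* Bookkeeping after each send; switch caches once the one in use is drained. */
    void after_send(DoubleCache *dc) {
        dc->LeftSize -= DATA_SIZE;
        dc->SendPtr  += DATA_SIZE;
        if (dc->LeftSize == 0) {
            dc->BufferInUse = 1 - dc->BufferInUse;          /* 0 -> 1 or 1 -> 0 */
            dc->SendPtr = dc->BufferPtr[dc->BufferInUse];   /* point at the other cache */
            /* LeftSize is restored when the refill of that cache completes (assumed) */
        }
    }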
Step 12: create a main thread and a plurality of processing threads. Each processing thread is responsible for periodically reading data from designated disk IO and periodically serving designated users; the main thread also judges in real time whether the current state of the system is busy or idle, based on the system's current CPU (Central Processing Unit) occupancy and the average transfer rate of the total hard disk IO.
First, according to a given policy, a main thread and a plurality of processing threads are created in the system. The main thread assigns each processing thread responsibility for periodically reading data from a part of the disk IO and for periodically serving designated users, for whom the thread performs both the data filling of the double caches and the data transmission. The main thread redirects any user session's data request to a processing thread, which performs the concrete disk reads and writes, the double-cache pre-reading and the sending of data to the user. For example, all users may be divided into N groups, with N processing threads each serving one group. The main thread also computes the average transfer rate of each hard disk's IO from each processing thread's reads, writes and transmissions, and then adds up the average transfer rates of all hard disk IO to obtain the average transfer rate of the total hard disk IO. The main thread then judges: if the system's current CPU occupancy and the average transfer rate of the total hard disk IO are both within low ranges, the current state of the system is idle; otherwise, the current state of the system is busy.
Step 13: each processing thread periodically traverses all the users it serves, performing data transmission and double-cache switching for each user, and fills all users' idle caches with data according to a predefined idle-cache filling strategy.
Each processing thread sets its timer according to the required bit rate. Specifically, if data is sent COUNT times per second and each send carries DATA_SIZE kb, the bit rate is COUNT*DATA_SIZE kbps; the required bit rate is reached by adjusting COUNT and DATA_SIZE, and conversely, given the bit rate, COUNT or DATA_SIZE can be derived. From this bit rate and the single cache size BUF_SIZE within the double cache, the length of time a single cache can serve can be computed.
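For example (the concrete numbers are illustrative assumptions): with DATA_SIZE = 8 KB per send and COUNT = 32 sends per second, the bit rate is 32 * 8 KB/s = 256 KB/s, i.e. about 2 Mbps; with BUF_SIZE = 1 MB, a single cache supports 1 MB / 8 KB = 128 sends, i.e. 128 / 32 = 4 seconds of service before a switch is needed.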
When the timer fires, the thread traverses all the users it serves. For each user it locates that user's in-use cache; when the in-use cache is not empty, it takes data from it, sends the data to the user over the network and updates the relevant bookkeeping; when the in-use cache is empty, it switches between the working cache and the idle cache, so that the former idle cache becomes the in-use cache and the drained cache becomes the idle cache.
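A minimal sketch of this per-tick traversal, using the DoubleCache structure sketched above (the Session type, send_to_user and the strategy hook maybe_fill_idle_cache are assumed, not from the patent):

    typedef struct {
        DoubleCache cache;
        int m;  /* countdown (in timer ticks) to idle-cache filling; see the strategies below */
        /* client address/port, session state, media file, ... */
    } Session;

    void send_to_user(Session *s, const char *buf, int n);  /* network send (assumed) */
    void maybe_fill_idle_cache(Session *s);                 /* filling strategy hook (assumed) */

    void on_timer_tick(Session *users, int n_users) {
        for (int i = 0; i < n_users; i++) {
            DoubleCache *dc = &users[i].cache;
            if (dc->LeftSize > 0) {
                send_to_user(&users[i], dc->SendPtr, DATA_SIZE); /* take data, send it */
                after_send(dc);            /* update bookkeeping; switch caches when drained */
            }
            maybe_fill_idle_cache(&users[i]);
        }
    }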
For each user's idle cache, data filling is performed according to a predefined idle-cache filling strategy. The strategy should guarantee that the idle cache is completely filled before the double-cache switching time arrives; it should also balance the read operations across the disks to achieve disk load balancing, and should ensure, as far as possible, that the idle-cache filling is performed while the system is idle.
The idle-cache filling strategies provided by the embodiment of the present invention mainly comprise: a random filling strategy, a state-detection filling strategy and an exponential backoff filling strategy. The three strategies are introduced in turn below.
1. Random filling strategy.
To cope with the disk pressure caused by simultaneous reads when the server serves a large number of users concurrently, the embodiment of the present invention proposes a random idle-cache filling strategy. The main idea is that, after a user's last double-cache switch, the idle cache is not filled immediately; instead a filling time is chosen at random, while guaranteeing that the filling of the idle cache completes before the next double-cache switch. The key feature of this strategy is randomization: macroscopically, disk read operations are spread evenly along the time axis.
On the timeline of the random idle-cache filling strategy shown in Fig. 2, time point T1 is the time of the last double-cache switch; T2 is the time at which the idle cache is filled; T3 is a safety time point reserved for the idle cache, whose purpose is to prevent the filling from failing to complete before the switching time T4 because the system is busy or the filling time T2 is too close to T4; and T4 is the time of the next double-cache switch. The switch action simply points the send pointer of the cache structure in the session node at the other cache and updates the relevant statistics.
The randomized filling of the idle cache is performed between T1 and T3. The concrete implementation: first compute the number of timer interrupts INTR_NUM spanned by the interval T3-T1; then, when the double-cache switch occurs at T1, generate a random number m for the idle cache of each user session, with 0 < m < INTR_NUM. Each time the timer fires and a user is traversed, that user's m is decremented by 1 in the interrupt handling function; when a user's m reaches 0, the idle-cache filling is performed for that user: a message is constructed for the user and passed to the read message queue, and the concrete disk read is performed by the read thread pool.
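A minimal sketch of this scheduling (rand() seeding and enqueue_read_request are assumptions; m and INTR_NUM are the names used in the text):

    #include <stdlib.h>

    void enqueue_read_request(Session *s);  /* hands the disk read to the read thread pool (assumed) */

    /* At the double-cache switch at T1: pick a random fill tick with 0 < m < INTR_NUM. */
    void schedule_random_fill(Session *s, int INTR_NUM) {
        s->m = 1 + rand() % (INTR_NUM - 1);
    }

    /* In the timer interrupt handler, for each traversed user: */
    void random_fill_tick(Session *s) {
        if (s->m > 0 && --s->m == 0)
            enqueue_read_request(s);  /* fill the idle cache for this user */
    }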
The effect of the randomization here is to prevent all user sessions from filling their idle caches at the same time, which avoids the high load of a large number of concurrent disk reads and achieves the goal of load balancing.
2. State-detection filling strategy.
Although the random filling strategy achieves macroscopic load balancing of disk I/O, it ignores the state of the disk I/O and the CPU at the moment of the double-cache switch, and may therefore miss the best time to fill the idle cache.
Combining the real-time state information of the disks and the CPU, the embodiment of the present invention proposes an improved state-detection idle-cache filling strategy. After the last double-cache switch, the processing thread first obtains the current state of the system from the main thread in real time; if the current state is idle, the idle caches of all users are filled immediately and simultaneously. The thread keeps obtaining the current system state from the main thread; if the state becomes busy, all idle-cache filling is stopped at once and processing falls back to the random filling strategy described above.
If the system state obtained from the main thread immediately after the double-cache switch is busy, processing proceeds directly under the random filling strategy described above.
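A compact sketch of this decision, simplified to a per-switch check (system_is_idle and fill_idle_cache are assumed helpers):

    int  system_is_idle(void);          /* state published by the main thread (assumed) */
    void fill_idle_cache(Session *s);   /* read BUF_SIZE from disk into the idle cache (assumed) */

    /* After a double-cache switch: fill at once while idle, else fall back to random fill. */
    void state_detect_fill(Session *users, int n_users, int INTR_NUM) {
        for (int i = 0; i < n_users; i++) {
            if (system_is_idle())
                fill_idle_cache(&users[i]);
            else
                schedule_random_fill(&users[i], INTR_NUM);
        }
    }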
3. Exponential backoff filling strategy.
After the last double-cache switch, the processing thread first obtains the current state of the system from the main thread in real time; if the current state is idle, the idle caches of all users are filled immediately and simultaneously. The thread keeps obtaining the current system state from the main thread; if the state becomes busy, all idle-cache filling is stopped at once.
When the current state of the whole system obtained by a processing thread in real time is busy, the idle-cache filling of the users it serves is stopped; for each user a safety time point T5 is set before the next switching time between the working cache and the idle cache, and an actual filling time T2 is selected for each user's idle cache between T1 and T5.
When a user's actual filling time T2 arrives, the current state of the whole system is checked again: if it is idle, the idle-cache filling is performed for that user; if it is busy, a new actual filling time T3 is selected for that user's idle cache between T2 and T5, following the principle of exponential backoff in time.
The above operations are repeated until the system state at some actual filling time point Tn is idle, in which case the idle-cache filling for that user is performed at Tn, or until the user's actual filling time point Tn exceeds the safety time point T5, in which case the idle-cache filling is performed at T5.
As shown in Fig. 3, T1 is the time of the last double-cache switch, T2~T4 are called backoff time points, T5 is the safety time point, and T6 is the time of the next double-cache switch. The periods before the safety time point T5 are called backoff periods; their lengths grow exponentially: T2-T1 = UNIT_TIME (a given small period), T3-T2 = 2*UNIT_TIME, T4-T3 = (2^2)*UNIT_TIME, and T5-T4 <= (2^3)*UNIT_TIME. Except for T1, each of T2~T4 is a random time point within its backoff period.
The process of the exponential backoff strategy: at time T1, when the double-cache switch completes, first judge whether the system is currently idle; if so, fill all users' idle caches at once; if not, enter the exponential backoff stage. In this stage, a filling time point is first chosen at random within the period T1~T2 for each user; when that filling time arrives, the system is checked for busyness again, and if it is busy the process is repeated in the remaining periods. In the last period (T4~T5 here), however, the idle-cache filling is performed before the safety time point T5 regardless of whether the system is detected to be busy.
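A minimal sketch of the backoff schedule (times are in timer ticks; wait_until_tick and the exact window bookkeeping are assumptions):

    #include <stdlib.h>

    void wait_until_tick(long t);  /* assumed scheduling helper */

    /* Pick the next fill attempt inside exponentially growing windows, clamped to T5. */
    long next_attempt(long prev, long unit_time, int stage, long t5) {
        long window = unit_time << stage;   /* UNIT_TIME * 2^stage */
        if (prev + window >= t5)
            return t5;                      /* last window: fill by T5 regardless */
        return prev + 1 + rand() % window;  /* random point inside the window */
    }

    void backoff_fill(Session *s, long t1, long unit_time, long t5) {
        long t = t1;
        for (int stage = 0; ; stage++) {
            t = next_attempt(t, unit_time, stage, t5);
            wait_until_tick(t);
            if (t >= t5 || system_is_idle()) {  /* forced at T5, or system now idle */
                fill_idle_cache(s);
                return;
            }
        }
    }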
The exponential backoff strategy is a development of the random filling and state-detection strategies: its choice of idle-cache filling time points is more reasonable, and it uses the I/O throughput of the disks and the capacity of the CPU better and more evenly. The strategy still has a defect, however: it queries the disk transfer rate and CPU occupancy many times, and if obtaining this information takes long, it may take too much time away from serving users.
Embodiment Two
The main purpose of the pre-reading simulator is to test the disk output capability that multi-user concurrency can produce at a particular cache size. It mainly implements multithreaded processing, the double-cache algorithm, random generation of media file names, and microsecond-level timing; the program is short and easy to understand.
Taking as an example a server with three hard disks, on which 1 host process and 3 processing threads are created, the workflow of the simulation program for double-cache pre-reading and idle-cache filling provided by this embodiment is shown in Fig. 4; the concrete processing is as follows:
1) The program divides the users into three equal groups: users whose numbers modulo 3 equal 0, 1 and 2 form the three groups, which are handled by threads 0, 1 and 2 respectively. The three processing threads are each responsible for one hard disk: processing thread 0 for hard disk sdb, thread 1 for hard disk sdc, and thread 2 for hard disk sdd. All users therefore read the three hard disks evenly. A double cache is created for each user.
2) The random function srand() is seeded with the microsecond count of the current time, and a media file number and a starting point are generated at random for each user in the three groups.
3) The thread counter is initialized to 0 and the three processing threads are created. After each processing thread is created, the thread counter is incremented by 1; to prevent races, the thread information is protected by a lock, which is released after the operation completes.
4) The three processing threads enter their processing flow.
5) The host process enters a waiting state. The condition it waits for is that the thread counter drops back to 0, i.e. that all three processing threads have finished. The host process then uses the completion time of each processing thread to compute the average transfer rate of each hard disk, and sums the transfer rates of the three hard disks to obtain the true total transfer rate of the disks. The mean service time can also be computed from the following formula:
data/t0 + data/t1 + data/t2 = 3*data/t  =>  t = 3*t0*t1*t2 / (t0*t1 + t0*t2 + t1*t2)
The processing flow of a processing thread:
1) First receive the parameters passed by the host process, determine which hard disk this thread is responsible for, and set up the correct path prefix variable.
2) Start this processing thread's timer. The Linux function gettimeofday() is called to obtain the current time.
3) Enter the bounded service loop. The number of iterations equals the total amount of data requested by each user divided by the amount of data sent each time; for example, with a total of 120 MB and 8 KB sent each time, 15360 iterations are needed.
4) Inside the loop above, traverse all users served by this processing thread and, for each user, taking the random idle-cache filling strategy as an example, handle the following three cases (a combined sketch follows the list):
① Judge whether the user's working cache is empty; if it is, perform the double-cache switch and set the idle cache's filling random number (a specially generated random number that is decremented by 1 on every loop iteration; the idle cache is filled when it reaches 0). The range of this number is 0 ~ SERVE_COUNT-2, where SERVE_COUNT is the number of sends a single cache slice can serve and the 2 subtracted is the reserved safety margin. For example, with a single cache of 1 MB (the double cache is then 2 MB) and 8 KB sent each time, SERVE_COUNT is 1MB/8KB = 128.
② Judge whether the idle cache's filling counter is less than or equal to 0; if so, fill the idle cache by reading BUF_SIZE of data from disk into it.
③ Send a small amount of data to the user over the network. The amount chosen here is 8 KB, because at a bit rate of 2 Mbps this is the size of an average frame of video data.
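A minimal sketch combining the three cases above (the reuse of the session field m as the filling counter and the helper names are assumptions consistent with the earlier sketches; BUF_SIZE is the single cache size):

    #include <stdlib.h>

    #define BUF_SIZE    (1024 * 1024)           /* 1 MB single cache (example value) */
    #define SERVE_COUNT (BUF_SIZE / DATA_SIZE)  /* sends per single cache: 1MB/8KB = 128 */

    void serve_one_user(Session *s) {
        DoubleCache *dc = &s->cache;
        if (dc->LeftSize == 0) {                        /* case 1: working cache empty */
            dc->BufferInUse = 1 - dc->BufferInUse;      /* double-cache switch */
            dc->SendPtr  = dc->BufferPtr[dc->BufferInUse];
            dc->LeftSize = BUF_SIZE;                    /* the refilled cache takes over (assumed) */
            s->m = rand() % (SERVE_COUNT - 1);          /* random value in 0 ~ SERVE_COUNT-2 */
        }
        if (s->m-- == 0)                                /* case 2: counter has run out */
            fill_idle_cache(s);                         /* read BUF_SIZE from disk */
        send_to_user(s, dc->SendPtr, DATA_SIZE);        /* case 3: send 8 KB to the user */
        dc->LeftSize -= DATA_SIZE;
        dc->SendPtr  += DATA_SIZE;
    }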
The double-cache pre-reading and idle-cache filling program above, together with the upper-layer streaming media monitoring and sending programs, forms the streaming media double-cache pre-reading system. The double-cache pre-reading and idle-cache filling program actually sits at the operating system layer and exists as a module. When the upper-layer streaming media sending program requests data from the bottom layer, the request is automatically directed to the double-cache pre-reading and idle-cache filling program, which performs an aggregated pre-read, manages the data in the two caches, and returns the data required by the upper-layer application. In the short term, all the data the upper-layer program needs is served from the double caches.
Embodiment Three
This embodiment provides an apparatus for transmitting data to a user based on double caches and pre-reading; its concrete structure, shown in Fig. 5, comprises the following modules:
a cache management module 51, configured to create a plurality of groups of double caches and allocate one group of double caches to each user, where each group of double caches comprises a working cache and an idle cache and is used to store the data pre-read for that user;
a thread management module 52, configured to create a plurality of processing threads, where each processing thread is responsible for reading data from a designated disk and serves designated users;
a data processing module 53, configured, for each user served by each processing thread, to transmit data to that user through the working cache of the double-cache group allocated to the user and, after the data buffered in the working cache has been transmitted, to switch between the working cache and the idle cache.
Further, the apparatus may also comprise:
an idle-cache filling module 54, configured to set an idle-cache filling strategy based on the principles of balancing the data pre-read operations of the disks and/or performing disk data pre-reads while the whole system is idle, where the idle-cache filling strategy must also guarantee that the filling of the idle cache completes before the next switch between the working cache and the idle cache;
and configured, for each user served by each processing thread, to perform the disk data pre-read and data filling of that user's idle cache according to the idle-cache filling strategy.
Specifically, the idle-cache filling module 54 is further configured, when the idle-cache filling strategy is the random filling strategy, for each user served by each processing thread: after the time T1 of the last switch between the working cache and the idle cache, to set for each user a safety time point T3 for filling the idle cache before the next switching time between the working cache and the idle cache, and to select at random, between T1 and T3, an actual filling time T2 for each user's idle cache;
each processing thread periodically traverses the users it serves and, when a user's actual filling time T2 arrives, the disk data pre-read and data filling are performed for that user's idle cache.
Specifically, the idle-cache filling module 54 is further configured, when the idle-cache filling strategy is the state-detection idle-cache filling strategy: after the time T1 of the last switch between the working cache and the idle cache, to obtain the current state of the whole system in real time, the current state being derived from the whole system's current central processing unit occupancy and the average transfer rate of the total hard disk input/output, and, when the current state is idle, to perform the idle-cache filling for each user served by each processing thread;
when the current state of the whole system obtained in real time is busy, to stop the idle-cache filling of each user served by each processing thread, to set for each user a safety time point T3 for filling the idle cache before the next switching time between the working cache and the idle cache, and to select at random, between T1 and T3, an actual filling time T2 for each user's idle cache;
each processing thread periodically traverses the users it serves and, when a user's actual filling time T2 arrives, the disk data pre-read and data filling are performed for that user's idle cache.
Specifically, the idle-cache filling module 54 is further configured, when the idle-cache filling strategy is the exponential backoff filling strategy: after the time T1 of the last switch between the working cache and the idle cache, to obtain the current state of the whole system in real time, the current state being derived from the whole system's current central processing unit occupancy and the average transfer rate of the total hard disk input/output, and, when the current state is idle, to perform the idle-cache filling for each user served by each processing thread;
when the current state of the whole system obtained in real time is busy, to stop the idle-cache filling of each user served by each processing thread, to set for each user a safety time point T5 for filling the idle cache before the next switching time between the working cache and the idle cache, and to select, between T1 and T5, an actual filling time T2 for each user's idle cache;
when a given user's actual filling time T2 arrives, to check the current state of the whole system again: if the current state is idle, to perform the idle-cache filling for that user; if the current state is busy, to select, following the principle of exponential backoff in time, a new actual filling time T3 for that user's idle cache between T2 and T5;
to repeat the above operations until the system state at that user's actual filling time point Tn is idle, in which case the idle-cache filling for that user is performed at Tn, or until that user's actual filling time point Tn exceeds the safety time point T5, in which case the idle-cache filling for that user is performed at T5.
The concrete process by which the apparatus of this embodiment transmits data to a user is the same as the concrete process in the method embodiments above and is not repeated here.
Those of ordinary skill in the art will understand that all or part of the flows of the embodiment methods above can be implemented by a computer program instructing the relevant hardware. The program can be stored on a computer-readable storage medium and, when executed, can include the flows of the method embodiments above. The storage medium can be a magnetic disk, an optical disc, a Read-Only Memory (ROM), a Random Access Memory (RAM), or the like.
In summary, by allocating one group of double caches to each user and using the double caches to store the data pre-read for that user, embodiments of the present invention can significantly improve the throughput of existing disk storage without changing its capacity, and effectively eliminate the delay caused by using a single cache, thereby improving the overall service capability of a streaming media system.
By setting the idle-cache filling strategy based on the principles of balancing the data pre-read operations of the disks and/or performing disk data pre-reads while the whole system is idle, embodiments of the present invention can balance the data read operations of each disk's IO, achieving overall load balancing, and can ensure as far as possible that the disk data reads for filling the idle cache are performed while the system state is idle.
For a server directly attached to multiple disks, embodiments of the present invention make each processing thread responsible for the read/write operations of a designated hard disk, avoiding both the contention that multithreaded mixed processing may cause and the low disk utilization of single-threaded processing.
The above are only preferred embodiments of the present invention, but the protection scope of the present invention is not limited to them; any variation or replacement that can readily occur to those skilled in the art within the technical scope disclosed by the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the protection scope of the claims.

Claims (8)

  1. A method for transmitting data to a user based on double-cache pre-reading, characterized by comprising:
    creating a plurality of groups of double caches and allocating one group of double caches to each user, wherein each group of double caches comprises a working cache and an idle cache and is used to store the data pre-read for that user;
    creating a plurality of processing threads, wherein each processing thread is responsible for reading data from a designated disk and serves designated users;
    for each user served by each processing thread, transmitting data to that user through the working cache of the double-cache group allocated to the user and, after the data buffered in the working cache has been transmitted, switching between the working cache and the idle cache;
    setting an idle-cache filling strategy based on the principles of balancing the data pre-read operations of the disks and/or performing disk data pre-reads while the whole system is idle, wherein the idle-cache filling strategy further guarantees that the filling of the idle cache completes before the next switch between the working cache and the idle cache, and wherein the idle-cache filling strategy comprises: a random filling strategy, a state-detection filling strategy and an exponential backoff filling strategy;
    for each user served by each processing thread, performing the disk data pre-read and data filling of that user's idle cache according to the idle-cache filling strategy.
  2. The method for transmitting data to a user based on double-cache pre-reading according to claim 1, characterized in that said performing, for each user served by each processing thread, the disk data pre-read and data filling of that user's idle cache according to the idle-cache filling strategy comprises:
    when the idle-cache filling strategy is the random filling strategy, for each user served by each processing thread, after the time T1 of the last switch between the working cache and the idle cache, setting for each user a safety time point T3 for filling the idle cache before the next switching time between the working cache and the idle cache, and selecting at random, between said T1 and said T3, an actual filling time T2 for each user's idle cache;
    each processing thread periodically traversing the users it serves and, when a user's actual filling time T2 arrives, performing the disk data pre-read and data filling for that user's idle cache.
  3. The method for transmitting data to a user based on double-cache pre-reading according to claim 1, characterized in that said performing, for each user served by each processing thread, the disk data pre-read and data filling of that user's idle cache according to the idle-cache filling strategy comprises:
    when the idle-cache filling strategy is the state-detection idle-cache filling strategy, after the time T1 of the last switch between the working cache and the idle cache, each processing thread obtaining the current state of the whole system in real time, said current state being derived from the whole system's current central processing unit occupancy and the average transfer rate of the total hard disk input/output, and, when said current state is idle, performing the idle-cache filling for each user it serves;
    when the current state of the whole system obtained by a processing thread in real time is busy, stopping the idle-cache filling of each user it serves, setting for each user a safety time point T3 for filling the idle cache before the next switching time between the working cache and the idle cache, and selecting at random, between said T1 and said T3, an actual filling time T2 for each user's idle cache;
    each processing thread periodically traversing the users it serves and, when a user's actual filling time T2 arrives, performing the disk data pre-read and data filling for that user's idle cache.
  4. The method for transmitting data to a user based on double-cache pre-reading according to claim 1, characterized in that said performing, for each user served by each processing thread, the disk data pre-read and data filling of that user's idle cache according to the idle-cache filling strategy comprises:
    when the idle-cache filling strategy is the exponential backoff filling strategy, after the time T1 of the last switch between the working cache and the idle cache, each processing thread obtaining the current state of the whole system in real time, said current state being derived from the whole system's current central processing unit occupancy and the average transfer rate of the total hard disk input/output, and, when said current state is idle, performing the idle-cache filling for each user it serves;
    when the current state of the whole system obtained by a processing thread in real time is busy, stopping the idle-cache filling of each user it serves, setting for each user a safety time point T5 for filling the idle cache before the next switching time between the working cache and the idle cache, and selecting, between said T1 and said T5, an actual filling time T2 for each user's idle cache;
    when a given user's actual filling time T2 arrives, checking the current state of the whole system again: if the current state is idle, performing the idle-cache filling for that user; if the current state is busy, selecting, according to the principle of exponential backoff in time, a new actual filling time T3 for that user's idle cache between said T2 and said T5;
    repeating the above operations until the current state of the system at that user's actual filling time point Tn is idle, in which case the idle-cache filling for that user is performed at said actual filling time point Tn, or until that user's actual filling time point Tn exceeds said safety time point T5, in which case the idle-cache filling for that user is performed at said safety time point T5.
  5. An apparatus for transmitting data to a user based on double-cache pre-reading, characterized by comprising:
    a cache management module, configured to create a plurality of groups of double caches and allocate one group of double caches to each user, wherein each group of double caches comprises a working cache and an idle cache and is used to store the data pre-read for that user;
    a thread management module, configured to create a plurality of processing threads, wherein each processing thread is responsible for reading data from a designated disk and serves designated users;
    a data processing module, configured, for each user served by each processing thread, to transmit data to that user through the working cache of the double-cache group allocated to the user and, after the data buffered in the working cache has been transmitted, to switch between the working cache and the idle cache;
    an idle-cache filling module, configured to set an idle-cache filling strategy based on the principles of balancing the data pre-read operations of the disks and/or performing disk data pre-reads while the whole system is idle, wherein the idle-cache filling strategy further guarantees that the filling of the idle cache completes before the next switch between the working cache and the idle cache, and wherein the idle-cache filling strategy comprises: a random filling strategy, a state-detection filling strategy and an exponential backoff filling strategy;
    and configured, for each user served by each processing thread, to perform the disk data pre-read and data filling of that user's idle cache according to the idle-cache filling strategy.
  6. The apparatus for transmitting data to a user based on double-cache pre-reading according to claim 5, characterized in that:
    the idle-cache filling module is further configured, when the idle-cache filling strategy is the random filling strategy, for each user served by each processing thread, after the time T1 of the last switch between the working cache and the idle cache, to set for each user a safety time point T3 for filling the idle cache before the next switching time between the working cache and the idle cache, and to select at random, between said T1 and said T3, an actual filling time T2 for each user's idle cache;
    each processing thread periodically traverses the users it serves and, when a user's actual filling time T2 arrives, the disk data pre-read and data filling are performed for that user's idle cache.
  7. The apparatus for transmitting data to a user based on double-cache pre-reading according to claim 5, characterized in that:
    the idle-cache filling module is further configured, when the idle-cache filling strategy is the state-detection idle-cache filling strategy, after the time T1 of the last switch between the working cache and the idle cache, to obtain the current state of the whole system in real time, said current state being derived from the whole system's current central processing unit occupancy and the average transfer rate of the total hard disk input/output, and, when said current state is idle, to perform the idle-cache filling for each user served by each processing thread;
    when the current state of the whole system obtained in real time is busy, to stop the idle-cache filling of each user served by each processing thread, to set for each user a safety time point T3 for filling the idle cache before the next switching time between the working cache and the idle cache, and to select at random, between said T1 and said T3, an actual filling time T2 for each user's idle cache;
    each processing thread periodically traverses the users it serves and, when a user's actual filling time T2 arrives, the disk data pre-read and data filling are performed for that user's idle cache.
  8. The apparatus for transmitting data to a user based on double-cache pre-reading according to claim 5, characterized in that:
    the idle-cache filling module is further configured, when the idle-cache filling strategy is the exponential backoff filling strategy, after the time T1 of the last switch between the working cache and the idle cache, to obtain the current state of the whole system in real time, said current state being derived from the whole system's current central processing unit occupancy and the average transfer rate of the total hard disk input/output, and, when said current state is idle, to perform the idle-cache filling for each user served by each processing thread;
    when the current state of the whole system obtained in real time is busy, to stop the idle-cache filling of each user served by each processing thread, to set for each user a safety time point T5 for filling the idle cache before the next switching time between the working cache and the idle cache, and to select, between said T1 and said T5, an actual filling time T2 for each user's idle cache;
    when a given user's actual filling time T2 arrives, to check the current state of the whole system again: if the current state is idle, to perform the idle-cache filling for that user; if the current state is busy, to select, according to the principle of exponential backoff in time, a new actual filling time T3 for that user's idle cache between said T2 and said T5;
    to repeat the above operations until the current state of the system at that user's actual filling time point Tn is idle, in which case the idle-cache filling for that user is performed at said actual filling time point Tn, or until that user's actual filling time point Tn exceeds said safety time point T5, in which case the idle-cache filling for that user is performed at said safety time point T5.
CN201110357612.2A 2011-11-11 2011-11-11 Method and device for transmitting data to user based on double-cache pre-reading Expired - Fee Related CN102426553B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110357612.2A CN102426553B (en) 2011-11-11 2011-11-11 Method and device for transmitting data to user based on double-cache pre-reading

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201110357612.2A CN102426553B (en) 2011-11-11 2011-11-11 Method and device for transmitting data to user based on double-cache pre-reading

Publications (2)

Publication Number Publication Date
CN102426553A CN102426553A (en) 2012-04-25
CN102426553B true CN102426553B (en) 2014-05-28

Family

ID=45960541

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110357612.2A Expired - Fee Related CN102426553B (en) 2011-11-11 2011-11-11 Method and device for transmitting data to user based on double-cache pre-reading

Country Status (1)

Country Link
CN (1) CN102426553B (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103577158B (en) * 2012-07-18 2017-03-01 阿里巴巴集团控股有限公司 Data processing method and device
US9336144B2 (en) * 2013-07-25 2016-05-10 Globalfoundries Inc. Three-dimensional processing system having multiple caches that can be partitioned, conjoined, and managed according to more than one set of rules and/or configurations
CN106130791B (en) * 2016-08-12 2022-11-04 飞思达技术(北京)有限公司 Cache equipment service capability traversal test system and method based on service quality
CN106951488B (en) * 2017-03-14 2021-03-12 海尔优家智能科技(北京)有限公司 Log recording method and device
CN107908751A (en) * 2017-11-17 2018-04-13 赛凡信息科技(厦门)有限公司 A kind of optimization method of distributive catalogue of document system level quota
CN109918017A (en) * 2017-12-12 2019-06-21 北京机电工程研究所 Data dispatching method and device
CN108073706B (en) * 2017-12-20 2022-02-22 北京四方继保自动化股份有限公司 Method for transversely displaying longitudinal data of simulation system historical library
CN113419879B (en) * 2021-07-09 2023-08-04 天翼云科技有限公司 Message processing method, device, equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1195135A (en) * 1997-03-28 1998-10-07 国际商业机器公司 Method and apparatus for decreasing thread switch latency in multithread processor
US6587937B1 (en) * 2000-03-31 2003-07-01 Rockwell Collins, Inc. Multiple virtual machine system with efficient cache memory design
US20060117218A1 (en) * 2004-11-12 2006-06-01 Nec Electronics Corporation Multi-processing system and multi-processing method
CN101060418A (en) * 2007-05-24 2007-10-24 上海清鹤数码科技有限公司 Special disk reading and writing system suitable for IPTV direct broadcast server with time shift
CN101446932A (en) * 2008-12-24 2009-06-03 北京中星微电子有限公司 Method and device for transmitting audio data
US20100268890A1 (en) * 2009-04-15 2010-10-21 International Business Machines Corporation Information handling system with immediate scheduling of load operations in a dual-bank cache with single dispatch into write/read data flow
CN102081509A (en) * 2009-11-30 2011-06-01 英业达股份有限公司 Method and device for reading RAID1 (Redundant Array of Inexpensive Disk 1) equipment

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8195880B2 (en) * 2009-04-15 2012-06-05 International Business Machines Corporation Information handling system with immediate scheduling of load operations in a dual-bank cache with dual dispatch into write/read data flow

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1195135A (en) * 1997-03-28 1998-10-07 国际商业机器公司 Method and apparatus for decreasing thread switch latency in multithread processor
US6587937B1 (en) * 2000-03-31 2003-07-01 Rockwell Collins, Inc. Multiple virtual machine system with efficient cache memory design
US20060117218A1 (en) * 2004-11-12 2006-06-01 Nec Electronics Corporation Multi-processing system and multi-processing method
CN101060418A (en) * 2007-05-24 2007-10-24 上海清鹤数码科技有限公司 Special disk reading and writing system suitable for IPTV direct broadcast server with time shift
CN101446932A (en) * 2008-12-24 2009-06-03 北京中星微电子有限公司 Method and device for transmitting audio data
US20100268890A1 (en) * 2009-04-15 2010-10-21 International Business Machines Corporation Information handling system with immediate scheduling of load operations in a dual-bank cache with single dispatch into write/read data flow
CN102081509A (en) * 2009-11-30 2011-06-01 英业达股份有限公司 Method and device for reading RAID1 (Redundant Array of Inexpensive Disk 1) equipment

Also Published As

Publication number Publication date
CN102426553A (en) 2012-04-25

Similar Documents

Publication Publication Date Title
CN102426553B (en) Method and device for transmitting data to user based on double-cache pre-reading
US9690705B1 (en) Systems and methods for processing data sets according to an instructed order
US9600337B2 (en) Congestion avoidance in network storage device using dynamic weights
US20190354317A1 (en) Operation instruction scheduling method and apparatus for nand flash memory device
CN104317766B (en) Method and system for improving serial port memory communication latency and reliability
CN109936604A (en) A kind of resource regulating method, device and system
JP2000515706A (en) A system that retrieves data from a stream server
CN102638402B (en) Method and device for filling data in streaming media double-buffering technology
CN106233269A (en) Fine granulation bandwidth supply in Memory Controller
JP2000505983A (en) Method and system for providing a data stream
CN108984280B (en) Method and device for managing off-chip memory and computer-readable storage medium
CN110022267A (en) Processing method of network data packets and device
CN103345451A (en) Data buffering method in multi-core processor
CN104123228B (en) A kind of data-storage system and its application method
CN107864391A (en) Video flowing caches distribution method and device
CN109451008B (en) Multi-tenant bandwidth guarantee framework and cost optimization method under cloud platform
CN102223510A (en) Method and device for scheduling cache
CN100547572C (en) Dynamically set up the method and system of direct memory access path
CN106027426A (en) Packet memory system, method and device for preventing underrun
CN105867848B (en) A kind of information processing method and hard disk mould group
CN104184685B (en) Data center resource distribution method, apparatus and system
CN105988725B (en) Magnetic disc i/o dispatching method and device
CN104247352B (en) A kind of accumulator system and its method for storage information unit
CN100488165C (en) Stream scheduling method
CN102063383B (en) Method for recording mapping relation between logical extents (LE) and physical extents (PE)

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20140528

Termination date: 20191111

CF01 Termination of patent right due to non-payment of annual fee