CN106161503A - File reading method in a distributed storage system and server side - Google Patents

File reading method in a distributed storage system and server side Download PDF

Info

Publication number
CN106161503A
CN106161503A CN201510142266.4A
Authority
CN
China
Prior art keywords
file data
read request
file
reading
thread
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510142266.4A
Other languages
Chinese (zh)
Inventor
韩盛中
李中军
江俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN201510142266.4A priority Critical patent/CN106161503A/en
Priority to PCT/CN2015/088998 priority patent/WO2016155238A1/en
Publication of CN106161503A publication Critical patent/CN106161503A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L65/40 Support for services or applications

Abstract

The present invention provides a file reading method in a distributed storage system, and a server side, applied in the communications field. First, the server side obtains a read request from a client through a read thread; then, the server side obtains the corresponding file data from the corresponding disk according to the read request; finally, the server side sends the obtained file data to the client through a pre-established return thread. Compared with the prior art, because the file is sent to the client through the pre-established return thread rather than returned by the read thread, the read thread no longer has to wait until the file data it has read is returned before it can be released to process the next read request; it can be released to process the next read request as early as possible. This improves the processing efficiency of read requests, further improves the efficiency of obtaining file data, saves processing time and enhances the user experience.

Description

File reading method in a distributed storage system and server side
Technical field
The present invention relates to the communications field, and specifically to a file reading method in a distributed storage system and a server side.
Background technology
Distributed file systems are applied ever more widely. Audio/video media files and picture files keep growing, and users demand ever higher download speeds. The disk, as a slow device, increasingly becomes the bottleneck for reading files. Although solid state drive (SSD) technology is developing quickly, SSDs still cannot replace traditional mechanical disks, whether because of capacity or price. Making the most efficient use of the disk's read performance, so as to satisfy the user's download speed to the greatest extent, has therefore become an urgent problem for distributed file systems. An existing distributed file system processes users' concurrent read requests with multiple synchronous threads. As shown in Fig. 1, the client sends a read request to a read thread, the read thread throws the request into the queue of the corresponding disk, the processing thread of each disk takes the request out of the queue and, after the read completes, returns it along the original path to the worker thread; only then can the worker thread return the read file to the user. In this way, each read thread must wait until the read file has been returned before it can be released to process the next read request, which lowers the processing efficiency of read requests and further lowers the efficiency of obtaining files.
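For illustration only (the patent describes this flow only in prose, and the names below are assumptions), the blocking behaviour of Fig. 1 can be sketched as follows: the read thread cannot pick up another request until the per-disk thread has finished and the data has been returned by the read thread itself.

```python
# A minimal sketch of the prior-art synchronous flow; all names are hypothetical.
import queue
import threading

disk_queue = queue.Queue()                 # read requests pending for one disk

def read_from_disk(offset, length):        # placeholder for the real disk read
    return b"\0" * length

def send_to_client(client, data):          # placeholder for the real network send
    pass

def disk_thread():
    while True:
        req, result, done = disk_queue.get()
        result["data"] = read_from_disk(req["offset"], req["length"])
        done.set()                         # wake the waiting read thread

def read_thread(incoming: queue.Queue):
    while True:
        req = incoming.get()               # next read request from a client
        result, done = {}, threading.Event()
        disk_queue.put((req, result, done))
        done.wait()                        # blocked here until the disk read finishes ...
        send_to_client(req["client"], result["data"])   # ... and only then free for the next request
```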
Summary of the invention
The main technical problem to be solved by the present invention is to provide a file reading method in a distributed storage system and a server side, so as to solve the existing problem that a read thread must wait until the file it has read is returned before it can be released to process the next read request, which makes the processing efficiency of read requests low.
To solve the above problem, the present invention provides a file reading method in a distributed storage system, comprising:
the server side obtains a read request from a client through a read thread;
the server side obtains the corresponding file data from the corresponding disk according to the read request;
the server side sends the obtained file data to the client through a pre-established return thread.
In an embodiment of the present invention, obtaining the corresponding file data from the corresponding disk according to the read request comprises:
reading the corresponding file data from the corresponding multiple disks respectively according to the read request, and storing the read file data into a data buffer area;
and sending the obtained file data to the client through the pre-established return thread comprises:
judging whether file data exists in the data buffer area and, if file data exists, immediately sending the file data to the client through the pre-established return thread.
In an embodiment of the present invention, the return thread comprises multiple sub return threads, each sub return thread corresponding to one data buffer area; judging whether file data exists in the data buffer area comprises: the server side queries, through a sub return thread and according to a preset rule, whether file data exists in its corresponding data buffer area; sending the file data to the client through the pre-established return thread comprises: sending the queried file data to the client through the corresponding sub return thread.
In an embodiment of the present invention, after the server side obtains the read request from the client through the read thread and before the corresponding file data is read from the corresponding disk according to the read request, the method comprises: storing the read request into a kernel asynchronous processing queue; the server side obtaining the corresponding file data from the corresponding disk according to the read request comprises: the server side takes the read request out of the kernel asynchronous processing queue according to a preset processing rule, and obtains the corresponding file data from the corresponding disk according to the taken-out read request.
In an embodiment of the present invention, the preset processing rule comprises:
taking out all read requests in the kernel asynchronous processing queue according to a predetermined period, merging read requests whose sector positions are within a certain predetermined value range, and simultaneously obtaining the multiple pieces of file data corresponding to the multiple read requests whose sector positions are within that range;
or
taking read requests out of the kernel asynchronous processing queue in the order of priority in which they were stored into it, and obtaining the file data corresponding to each read request.
To solve the above problem, the present invention also provides a server side, comprising a read request obtaining module, a file data obtaining module and a return thread module:
the read request obtaining module is configured to obtain a read request from a client through a read thread;
the file data obtaining module is configured to obtain the corresponding file data from the corresponding disk according to the read request;
the return thread module is configured to send the obtained file data to the client through a pre-established return thread.
In an embodiment of the present invention, the file data obtaining module comprises a file data obtaining submodule and a data cache submodule:
the data obtaining submodule is configured to read the corresponding file data from the corresponding multiple disks respectively according to the read request, and the data cache submodule is configured to store the file data read by the data obtaining submodule into a data buffer area;
the return thread module is further configured to judge whether file data exists in the data buffer area and, if file data exists, to immediately send the file data to the client through the pre-established return thread.
In an embodiment of the present invention, the return thread comprises multiple sub return threads, each sub return thread corresponding to one data buffer area; the data obtaining submodule is further configured such that the server side queries, through a sub return thread and according to a preset query rule, whether file data exists in its corresponding data buffer area; the return thread module is further configured to send the queried file data to the client through the corresponding sub return thread.
In an embodiment of the present invention, a kernel asynchronous processing queue is further included; the kernel asynchronous processing queue is configured to store the read request after the server side obtains it from the client through the read thread and before the corresponding file data is read from the corresponding disk according to the read request; the file data obtaining module further comprises a receiving submodule: the receiving submodule is configured to take the read request out of the kernel asynchronous processing queue according to a preset processing rule, and the data obtaining submodule is further configured to obtain the corresponding file data from the corresponding disk according to the taken-out read request.
In an embodiment of the present invention, the preset processing rule comprises:
taking out all read requests in the kernel asynchronous processing queue according to a predetermined period, merging read requests whose sector positions are within a certain predetermined value range, and simultaneously obtaining the multiple pieces of file data corresponding to the multiple read requests whose sector positions are within that range;
or
taking read requests out of the kernel asynchronous processing queue in the order of priority in which they were stored into it, and obtaining the file data corresponding to each read request.
The beneficial effects of the present invention are as follows:
The present invention provides a file reading method in a distributed storage system and a server side. First, the server side obtains a read request from a client through a read thread; then, the server side obtains the corresponding file data from the corresponding disk according to the read request; finally, the server side sends the obtained file data to the client through a pre-established return thread. Compared with the prior art, because the file is sent to the client through the pre-established return thread rather than returned by the read thread, the read thread no longer has to wait until the file data it has read is returned before it can be released to process the next read request; it can be released to process the next read request as early as possible. This improves the processing efficiency of read requests, further improves the efficiency of obtaining file data, saves processing time and enhances the user experience.
Brief description of the drawings
Fig. 1 is a schematic flow chart of file reading in an existing distributed storage system;
Fig. 2 is a schematic flow chart of the file reading method in a distributed storage system provided by Embodiment 1 of the present invention;
Fig. 3 is a first schematic structural diagram of the server side provided by Embodiment 2 of the present invention;
Fig. 4 is a second schematic structural diagram of the server side provided by Embodiment 2 of the present invention;
Fig. 5 is a third schematic structural diagram of the server side provided by Embodiment 2 of the present invention;
Fig. 6 is a schematic flow chart of the file reading method in a distributed storage system provided by Embodiment 3 of the present invention;
Fig. 7 is a schematic structural diagram of a server side used in the file reading method in a distributed storage system provided by Embodiment 3 of the present invention;
Fig. 8 is a schematic structural diagram of the asynchronous input/output module used in the file reading method in a distributed storage system provided by Embodiment 3 of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below through specific embodiments with reference to the accompanying drawings.
Embodiment 1:
The file reading method provided by this embodiment, as shown in Fig. 2, comprises the following steps:
Step S101: the server side obtains a read request from a client through a read thread;
In this step, the client obtains the user's read request. The read request includes read request parameters such as the file handle, the offset and the length. The client asks the metadata server for the copy location (server, disk) corresponding to the read request and sends the read request to the corresponding server side; that is, the server side obtains the read request from the client through the read thread. It is worth noting that the file data here may be the complete file data corresponding to one read request, or the partial file data corresponding to that read request. For example, a user wants to read file A, which is a video file whose data is distributed over disk 1, disk 2 and disk 3 of a certain video server. The server side on that video server then obtains the read request for file A from the client through the read thread.
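As a purely illustrative sketch (the field and function names are assumptions, not part of the patent), a read request carrying the parameters listed above could be routed to its copy location like this:

```python
# Hypothetical routing of a read request via the metadata server.
from dataclasses import dataclass

@dataclass
class ReadRequest:
    file_handle: int
    offset: int            # byte offset inside the file
    length: int            # number of bytes requested
    client: object = None  # handle used later to return the data

def submit_read(metadata_server, server_pool, request: ReadRequest):
    # the metadata server maps the request to its copy location:
    # which server and which disk hold the requested part of the file
    server_id, disk_id = metadata_server.locate(
        request.file_handle, request.offset, request.length)
    # the read request is then sent to the read thread of that server side
    server_pool[server_id].read_queue.put((disk_id, request))
```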
Step S102: the server side obtains the corresponding file data from the corresponding disk according to the read request;
In this step, the corresponding file data refers to the part of the file corresponding to the read request that is stored on the disk in question. Continuing the example of step S101, after receiving the read request for file A, the server side obtains the corresponding parts of file A from disk 1, disk 2 and disk 3 of the server storing file A, respectively.
Step S103: the server side sends the obtained file data to the client through a pre-established return thread.
In this step, the return thread can be understood as a thread, different from the read thread, that is used to return the obtained file data. Continuing the example of step S102, after the corresponding parts of file A have been obtained from disk 1, disk 2 and disk 3 respectively, these file data are sent to the client through the return thread, rather than in the existing way in which the read thread returns the data to the client along the original path after the read completes. In this way the read request can be released quickly and the next read request can be processed.
Further, because the speed of reading data from different disks may differ, and to avoid slow processing on one disk affecting multiple worker threads (worker threads here means all threads used to process read requests, including the read threads, the return threads and any other threads involved in processing a read request), each disk can process a given read request asynchronously. A concrete implementation can be as follows: obtaining the corresponding file data from the corresponding disk according to the read request in step S102 above can be reading the corresponding file data from the corresponding multiple disks respectively according to the read request and storing the read file data into a data buffer area; preferably, as soon as a piece of file data has been read it is immediately placed into the data buffer area. Sending the obtained file data to the client through the pre-established return thread in step S103 above then includes: judging whether file data exists in the data buffer area and, if so, immediately sending the file data to the client through the pre-established return thread. In this way, the disk that finishes processing its read request first can move on to the next read request and fetch the file data corresponding to it, which improves the processing efficiency of file reads and the concurrent throughput utilization of the disks.
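A sketch of this per-disk asynchronous variant, under assumed names (the patent gives no code): every disk worker drops what it has just read into the data buffer area immediately, so the disk that finishes first is never held back by the slowest one.

```python
# Hypothetical per-disk workers feeding a shared data buffer area.
import queue
import threading

data_buffer = queue.Queue()                    # the "data buffer area"

def read_from_disk(disk_id, offset, length):   # placeholder for the real disk read
    return b"\0" * length

def disk_worker(disk_id, disk_requests: queue.Queue):
    while True:
        req = disk_requests.get()
        data = read_from_disk(disk_id, req["offset"], req["length"])
        data_buffer.put((req, data))           # stored the moment the read completes
        # the worker is now free to fetch the file data for the next read request

# one worker per disk, each with its own request queue (three disks, as in the example)
disk_request_queues = {d: queue.Queue() for d in (1, 2, 3)}
for disk_id, q in disk_request_queues.items():
    threading.Thread(target=disk_worker, args=(disk_id, q), daemon=True).start()
```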
Further, in order to deliver file data to the client as soon as possible and to prevent the obtained file data from choking the threads, which would affect the data processing and throughput of the subsequent disks, the return thread can specifically be set up to include multiple sub return threads, each sub return thread corresponding to one data buffer area. Judging whether file data exists in the data buffer area can then be: the server side queries, through a sub return thread and according to a preset rule, whether file data exists in its corresponding data buffer area; and sending the file data to the client through the pre-established return thread can be: sending the queried file data to the client through the corresponding sub return thread. Preferably, the preset query rule here is a polling rule.
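A sketch of the sub return threads under the polling rule just mentioned (the thread count and names are assumptions): each sub return thread owns one data buffer area, keeps polling it, and sends anything it finds straight back to the client.

```python
# Hypothetical sub return threads, one per data buffer area.
import queue
import threading

def sub_return_thread(buffer: queue.Queue, send_to_client):
    while True:
        try:
            req, data = buffer.get(timeout=0.001)   # poll the buffer this thread owns
        except queue.Empty:
            continue                                # nothing buffered yet, keep polling
        send_to_client(req["client"], data)         # return the file data immediately

# the number of sub return threads (and buffers) is only illustrative
buffers = [queue.Queue() for _ in range(4)]
for buf in buffers:
    threading.Thread(target=sub_return_thread,
                     args=(buf, lambda client, data: None),   # placeholder sender
                     daemon=True).start()
```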
Further, in order to further improve the release and processing efficiency of read requests and to promote disk throughput to the greatest extent, a step may be added after step S101 and before step S102: after the server side obtains the read request from the client through the read thread, and before the corresponding file data is read from the corresponding disk according to the read request, the read request is stored into a kernel asynchronous processing queue. Step S102, in which the server side obtains the corresponding file data from the corresponding disk according to the read request, can then specifically be: the server side takes read requests out of the kernel asynchronous processing queue according to a preset processing rule and obtains the corresponding file data from the corresponding disk according to the taken-out read requests. Specifically, the preset processing rule includes: taking out all read requests in the kernel asynchronous processing queue according to a predetermined period, merging read requests whose sector positions are within a certain predetermined value range, and simultaneously obtaining the multiple pieces of file data corresponding to those read requests; or taking read requests out of the kernel asynchronous processing queue in the order of priority in which they were stored and obtaining the file data corresponding to each read request. Of course, other rules can also be set to improve the maximum throughput of the disk.
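The two preset processing rules can be pictured as follows (a sketch only; the merge window, the request fields and the helper functions are assumptions, and the patent leaves the concrete values open):

```python
# Hypothetical implementations of the two preset processing rules.
import queue

SECTOR_MERGE_RANGE = 128                   # assumed merge window, in sectors

def read_group_from_disk(group):           # placeholder: one merged pass over the disk
    return [b"" for _ in group]

def read_single_from_disk(req):            # placeholder: one individual disk read
    return b""

def merge_by_sector(requests):
    # group read requests whose sector positions lie within the merge window
    requests.sort(key=lambda r: r["sector"])
    groups, current = [], [requests[0]]
    for req in requests[1:]:
        if req["sector"] - current[-1]["sector"] <= SECTOR_MERGE_RANGE:
            current.append(req)            # close together on disk: fetch together
        else:
            groups.append(current)
            current = [req]
    groups.append(current)
    return groups

def process_periodically(async_queue: queue.Queue):
    # rule 1: drain the queue once per predetermined period, then merge by sector
    batch = []
    while not async_queue.empty():
        batch.append(async_queue.get())
    if batch:
        for group in merge_by_sector(batch):
            read_group_from_disk(group)    # the grouped requests are served together

def process_in_order(async_queue: queue.Queue):
    # rule 2: serve read requests in the (priority) order in which they were queued
    while True:
        req = async_queue.get()
        read_single_from_disk(req)
```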
Embodiment 2:
The server side provided by this embodiment, as shown in Fig. 3, includes a read request obtaining module, a file data obtaining module and a return thread module: the read request obtaining module is configured to obtain a read request from a client through a read thread; the file data obtaining module is configured to obtain the corresponding file data from the corresponding disk according to the read request; the return thread module is configured to send the obtained file data to the client through a pre-established return thread.
Further, in another server side provided by this embodiment, as shown in Fig. 4, the file data obtaining module includes a file data obtaining submodule and a data cache submodule: the data obtaining submodule is configured to read the corresponding file data from the corresponding multiple disks respectively according to the read request, and the data cache submodule is configured to store the file data read by the data obtaining submodule into a data buffer area; the return thread module is further configured to judge whether file data exists in the data buffer area and, if so, to immediately send the file data to the client through the pre-established return thread.
Further, the return thread includes multiple sub return threads, each sub return thread corresponding to one data buffer area; the data obtaining submodule is further configured such that the server side queries, through a sub return thread and according to a preset query rule, whether file data exists in its corresponding data buffer area; the return thread module is further configured to send the queried file data to the client through the corresponding sub return thread.
Further, another server side provided by this embodiment, as shown in Fig. 5, also includes a kernel asynchronous processing queue module; the kernel asynchronous processing queue module is configured to store the read request into the kernel asynchronous processing queue after the server side obtains it from the client through the read thread and before the corresponding file data is read from the corresponding disk according to the read request; the file data obtaining module further includes a receiving submodule: the receiving submodule is configured to take read requests out of the kernel asynchronous processing queue according to a preset processing rule, and the data obtaining submodule is further configured to obtain the corresponding file data from the corresponding disk according to the taken-out read request.
Specifically, the preset processing rule includes: taking out all read requests in the kernel asynchronous processing queue according to a predetermined period, merging read requests whose sector positions are within a certain predetermined value range, and simultaneously obtaining the multiple pieces of file data corresponding to those read requests; or taking read requests out of the kernel asynchronous processing queue in the order of priority in which they were stored and obtaining the file data corresponding to each read request. It should be noted that merging the read requests within the preset value range here means merging read requests whose sector positions are close to each other, which reduces unnecessary seek time.
Embodiment 3:
The file reading method provided by this embodiment, as shown in Fig. 6, comprises the following steps:
Step S301: the user (the user here means, from the point of view of the file system, the process that calls the file system interface) calls the file system's read data interface;
Step S302: the client program asks the metadata server, according to the submitted read request parameters (file handle, offset, length, etc.), for the corresponding copy location (server, disk), and sends the read request to the corresponding server side.
Step S303: according to the disk information obtained by the client from the metadata server, the server side puts the read request into the asynchronous processing queue of the corresponding disk.
Step S304: the AIO module of the kernel keeps scanning the read requests in the asynchronous queue; within a certain time window it merges read requests whose sectors are close together and executes them together, minimizing the time consumed by seeking. After the disk returns the data for a read request, the responded request data is put into the response queue.
Step S305: the poll thread keeps querying the AIO module of the corresponding disk; if there is no data in the AIO waiting area it continues polling until there is data, and then proceeds to S306.
Step S306: after the poll thread gets the data, it returns the data to the resident client program (TSR) and continues to poll the AIO module. The job of the poll thread is simply to keep scanning the waiting area of the AIO module and to send any data it finds to the resident client program.
Step S307: after the resident client program gets the data, it returns the data to the user.
Further, Fig. 7 briefly illustrates the composition of the server side; note that the structure shown here is only one possible server side, and server sides with other structures are of course also possible. The asynchronous input/output (AIO) module is one form of the file data obtaining module in Embodiment 2 above, and the poll thread is one form of the return thread. Specifically, the AIO receptor is one form of the receiving submodule, the AIO processor is one form of the data obtaining submodule, and the AIO waiting area is one form of the cache submodule. It is easy to see that each read thread only has to put the read request into the kernel asynchronous processing queue and is then immediately free to process the next request, rather than being suspended there all along. Meanwhile, the asynchronous input/output (AIO) module takes the requests placed into the asynchronous processing queue and processes them (Fig. 8 gives the details), sorting and aggregating them so as to exploit the disk throughput to the greatest extent. As shown in Fig. 8, the AIO module is mainly divided into three parts: the AIO receptor, the AIO processor and the AIO waiting area. The AIO receptor is mainly responsible for periodically taking read requests out of the AIO queue, for example once every 100 us, and passing these requests to the AIO processor. The AIO processor arranges these requests, merging read requests whose sector positions are close so as to reduce unnecessary seek time. In addition, the request data that the AIO has finished processing is put into the AIO waiting area, where it waits for the poll thread to pick it up.
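A sketch of the three-part AIO module of Fig. 8 (class and attribute names are assumptions, and the real module sits in the kernel rather than in user-space Python): the receptor drains the request queue on a short period, the processor orders requests with nearby sectors before reading them back to back, and the waiting area holds the finished data until a poll thread collects it.

```python
# Hypothetical user-space model of the AIO receptor / processor / waiting area.
import queue
import time

def read_from_disk(req):                       # placeholder for the real (merged) disk read
    return b""

class AIOModule:
    def __init__(self, request_queue: queue.Queue, period_s: float = 0.0001):
        self.request_queue = request_queue     # the kernel asynchronous processing queue
        self.waiting_area = queue.Queue()      # the AIO waiting area
        self.period_s = period_s               # e.g. roughly 100 us between scans, as in the text

    def receptor(self):
        while True:                            # periodically take read requests out of the queue
            time.sleep(self.period_s)
            batch = []
            while not self.request_queue.empty():
                batch.append(self.request_queue.get())
            if batch:
                self.processor(batch)

    def processor(self, batch):
        batch.sort(key=lambda r: r["sector"])  # requests with nearby sector positions become adjacent
        for req in batch:                      # served back to back, cutting seek time
            self.waiting_area.put((req, read_from_disk(req)))

def poll_thread(aio: AIOModule, send_to_client):
    while True:                                # keep scanning the AIO waiting area
        req, data = aio.waiting_area.get()
        send_to_client(req["client"], data)    # hand the data straight to the resident client program
```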
A person of ordinary skill in the art will appreciate that all or part of the steps in the above method can be completed by a program instructing the relevant hardware, and the program can be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk or an optical disc. Optionally, all or part of the steps of the above embodiments can also be implemented using one or more integrated circuits. Accordingly, each module/unit in the above embodiments can be implemented in the form of hardware or in the form of a software functional module. The present invention is not restricted to any particular combination of hardware and software.
The above embodiments are intended only to illustrate, and not to limit, the technical solution of the present invention, and the present invention has been described in detail with reference only to the preferred embodiments. Those skilled in the art will understand that the technical solution of the present invention may be modified or equivalently replaced without departing from the spirit and scope of the technical solution of the present invention, and all such modifications and replacements shall fall within the scope of the claims of the present invention.

Claims (10)

1. A file reading method in a distributed storage system, characterized by comprising:
a server side obtaining a read request from a client through a read thread;
the server side obtaining corresponding file data from a corresponding disk according to the read request;
the server side sending the obtained file data to the client through a pre-established return thread.
2. The file reading method in a distributed storage system as claimed in claim 1, characterized in that obtaining the corresponding file data from the corresponding disk according to the read request comprises:
reading the corresponding file data from the corresponding multiple disks respectively according to the read request, and storing the read file data into a data buffer area;
and sending the obtained file data to the client through the pre-established return thread comprises:
judging whether file data exists in the data buffer area and, if file data exists, immediately sending the file data to the client through the pre-established return thread.
3. The file reading method in a distributed storage system as claimed in claim 2, characterized in that the return thread comprises multiple sub return threads, each sub return thread corresponding to one data buffer area; judging whether file data exists in the data buffer area comprises: the server side querying, through a sub return thread and according to a preset rule, whether file data exists in its corresponding data buffer area; and sending the file data to the client through the pre-established return thread comprises: sending the queried file data to the client through the corresponding sub return thread.
4. The file reading method in a distributed storage system as claimed in any one of claims 1-3, characterized in that, after the server side obtains the read request from the client through the read thread and before the corresponding file data is read from the corresponding disk according to the read request, the method comprises: storing the read request into a kernel asynchronous processing queue; and the server side obtaining the corresponding file data from the corresponding disk according to the read request comprises: the server side taking the read request out of the kernel asynchronous processing queue according to a preset processing rule, and obtaining the corresponding file data from the corresponding disk according to the taken-out read request.
5. The file reading method in a distributed storage system as claimed in claim 4, characterized in that the preset processing rule comprises:
taking out all read requests in the kernel asynchronous processing queue according to a predetermined period, merging read requests whose sector positions are within a certain predetermined value range, and simultaneously obtaining the multiple pieces of file data corresponding to the multiple read requests whose sector positions are within the certain predetermined value range;
or
taking read requests out of the kernel asynchronous processing queue in the order of priority in which they were stored into it, and obtaining the file data corresponding to each read request.
6. A server side, characterized by comprising a read request obtaining module, a file data obtaining module and a return thread module:
the read request obtaining module being configured to obtain a read request from a client through a read thread;
the file data obtaining module being configured to obtain corresponding file data from a corresponding disk according to the read request;
the return thread module being configured to send the obtained file data to the client through a pre-established return thread.
7. The server side as claimed in claim 6, characterized in that the file data obtaining module comprises a file data obtaining submodule and a data cache submodule:
the data obtaining submodule being configured to read the corresponding file data from the corresponding multiple disks respectively according to the read request, and the data cache submodule being configured to store the file data read by the data obtaining submodule into a data buffer area;
the return thread module being further configured to judge whether file data exists in the data buffer area and, if file data exists, to immediately send the file data to the client through the pre-established return thread.
8. The server side as claimed in claim 7, characterized in that the return thread comprises multiple sub return threads, each sub return thread corresponding to one data buffer area; the data obtaining submodule being further configured such that the server side queries, through a sub return thread and according to a preset query rule, whether file data exists in its corresponding data buffer area; the return thread module being further configured to send the queried file data to the client through the corresponding sub return thread.
9. The server side as claimed in any one of claims 6-8, characterized by further comprising a kernel asynchronous processing queue; the kernel asynchronous processing queue being configured to store the read request after the server side obtains it from the client through the read thread and before the corresponding file data is read from the corresponding disk according to the read request; the file data obtaining module further comprising a receiving submodule: the receiving submodule being configured to take the read request out of the kernel asynchronous processing queue according to a preset processing rule, and the data obtaining submodule being further configured to obtain the corresponding file data from the corresponding disk according to the taken-out read request.
10. The server side as claimed in claim 9, characterized in that the preset processing rule comprises:
taking out all read requests in the kernel asynchronous processing queue according to a predetermined period, merging read requests whose sector positions are within a certain predetermined value range, and simultaneously obtaining the multiple pieces of file data corresponding to the multiple read requests whose sector positions are within the certain predetermined value range;
or
taking read requests out of the kernel asynchronous processing queue in the order of priority in which they were stored into it, and obtaining the file data corresponding to each read request.
CN201510142266.4A 2015-03-27 2015-03-27 File reading method in a distributed storage system and server side Pending CN106161503A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201510142266.4A CN106161503A (en) 2015-03-27 2015-03-27 File reading method in a distributed storage system and server side
PCT/CN2015/088998 WO2016155238A1 (en) 2015-03-27 2015-09-06 File reading method in distributed storage system, and server end

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510142266.4A CN106161503A (en) 2015-03-27 2015-03-27 File reading method in a distributed storage system and server side

Publications (1)

Publication Number Publication Date
CN106161503A true CN106161503A (en) 2016-11-23

Family

ID=57003866

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510142266.4A Pending CN106161503A (en) 2015-03-27 2015-03-27 File reading method in a distributed storage system and server side

Country Status (2)

Country Link
CN (1) CN106161503A (en)
WO (1) WO2016155238A1 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106790389A (en) * 2016-11-25 2017-05-31 中国石油天然气集团公司 The acquisition methods of seismic channel data, host node server and working node server
CN107704328A (en) * 2017-10-09 2018-02-16 郑州云海信息技术有限公司 Client accesses method, system, device and the storage medium of file system
CN108959519A (en) * 2018-06-28 2018-12-07 郑州云海信息技术有限公司 A kind of method, apparatus and computer readable storage medium reading data
CN108989392A (en) * 2018-06-21 2018-12-11 聚好看科技股份有限公司 A kind of server data caching method, device and server
CN111522787A (en) * 2019-02-01 2020-08-11 阿里巴巴集团控股有限公司 Data processing method and device of distributed system and storage medium
CN111881096A (en) * 2020-07-24 2020-11-03 北京浪潮数据技术有限公司 File reading method, device, equipment and storage medium
CN112711483A (en) * 2020-12-10 2021-04-27 广州广电运通金融电子股份有限公司 High-concurrency method, system and equipment for processing big data annotation service
CN113254415A (en) * 2021-05-19 2021-08-13 浪潮商用机器有限公司 Method and device for processing read request of distributed file system

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133297A (en) * 2017-04-26 2017-09-05 努比亚技术有限公司 Data interactive method, system and computer-readable recording medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070027912A1 (en) * 2005-07-19 2007-02-01 Microsoft Corporation Common concurrency runtime
CN102682012A (en) * 2011-03-14 2012-09-19 成都市华为赛门铁克科技有限公司 Method and device for reading and writing data in file system
CN102981773A (en) * 2011-09-02 2013-03-20 深圳市快播科技有限公司 Storage device access method and storage device access system and storage device access supervisor
CN103336672A (en) * 2013-06-28 2013-10-02 华为技术有限公司 Data reading method, device and computer equipment
CN103338156A (en) * 2013-06-17 2013-10-02 南京国电南自美卓控制系统有限公司 Thread pool based named pipe server concurrent communication method
CN103577158A (en) * 2012-07-18 2014-02-12 阿里巴巴集团控股有限公司 Data processing method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102722449B (en) * 2012-05-24 2015-01-21 中国科学院计算技术研究所 Key-Value local storage method and system based on solid state disk (SSD)
CN102981805B (en) * 2012-11-02 2015-11-18 浪潮(北京)电子信息产业有限公司 The response method of serialized software and system

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070027912A1 (en) * 2005-07-19 2007-02-01 Microsoft Corporation Common concurrency runtime
CN102682012A (en) * 2011-03-14 2012-09-19 成都市华为赛门铁克科技有限公司 Method and device for reading and writing data in file system
CN102981773A (en) * 2011-09-02 2013-03-20 深圳市快播科技有限公司 Storage device access method and storage device access system and storage device access supervisor
CN103577158A (en) * 2012-07-18 2014-02-12 阿里巴巴集团控股有限公司 Data processing method and device
CN103338156A (en) * 2013-06-17 2013-10-02 南京国电南自美卓控制系统有限公司 Thread pool based named pipe server concurrent communication method
CN103336672A (en) * 2013-06-28 2013-10-02 华为技术有限公司 Data reading method, device and computer equipment

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106790389A (en) * 2016-11-25 2017-05-31 中国石油天然气集团公司 The acquisition methods of seismic channel data, host node server and working node server
CN106790389B (en) * 2016-11-25 2020-09-08 中国石油天然气集团公司 Seismic channel data acquisition method, main node server and working node server
CN107704328A (en) * 2017-10-09 2018-02-16 郑州云海信息技术有限公司 Client accesses method, system, device and the storage medium of file system
CN108989392A (en) * 2018-06-21 2018-12-11 聚好看科技股份有限公司 A kind of server data caching method, device and server
CN108959519A (en) * 2018-06-28 2018-12-07 郑州云海信息技术有限公司 A kind of method, apparatus and computer readable storage medium reading data
CN111522787A (en) * 2019-02-01 2020-08-11 阿里巴巴集团控股有限公司 Data processing method and device of distributed system and storage medium
CN111522787B (en) * 2019-02-01 2023-04-07 阿里巴巴集团控股有限公司 Data processing method and device of distributed system and storage medium
CN111881096A (en) * 2020-07-24 2020-11-03 北京浪潮数据技术有限公司 File reading method, device, equipment and storage medium
CN111881096B (en) * 2020-07-24 2022-06-17 北京浪潮数据技术有限公司 File reading method, device, equipment and storage medium
CN112711483A (en) * 2020-12-10 2021-04-27 广州广电运通金融电子股份有限公司 High-concurrency method, system and equipment for processing big data annotation service
CN113254415A (en) * 2021-05-19 2021-08-13 浪潮商用机器有限公司 Method and device for processing read request of distributed file system
CN113254415B (en) * 2021-05-19 2022-11-04 浪潮商用机器有限公司 Method and device for processing read request of distributed file system

Also Published As

Publication number Publication date
WO2016155238A1 (en) 2016-10-06

Similar Documents

Publication Publication Date Title
CN106161503A (en) File reading method in a distributed storage system and server side
KR101086514B1 (en) Continuous media priority aware storage scheduler
US20130238582A1 (en) Method for operating file system and communication device
CN101290604A (en) Information processing apparatus and method, and program
CN108549525B (en) Data storage and access method and device, electronic equipment and storage medium
CA2573156A1 (en) Apparatus and method for supporting memory management in an offload of network protocol processing
US8082307B2 (en) Redistributing messages in a clustered messaging environment
US20130111159A1 (en) Digital Signal Processing Data Transfer
CN105159841B (en) A kind of internal memory migration method and device
CN111615692A (en) Data transfer method, calculation processing device, and storage medium
CN108228327B (en) Task processing method and device
US9514072B1 (en) Management of allocation for alias devices
US11010094B2 (en) Task management method and host for electronic storage device
CN110716691B (en) Scheduling method and device, flash memory device and system
EP2214103B1 (en) I/O controller and descriptor transfer method
JP2009508215A5 (en)
CN104049955A (en) Multistage cache consistency pipeline processing method and device
CN102402422A (en) Processor component and memory sharing method thereof
US10339064B2 (en) Hot cache line arbitration
CN103106164A (en) Highly efficient direct memory access (DMA) controller
US9817583B2 (en) Storage system and method for allocating virtual volumes based on access frequency
US10831561B2 (en) Method for changing allocation of data using synchronization token
CN1304954C (en) Memory device control system
CN107025064B (en) A kind of data access method of the high IOPS of low latency
RU2475817C1 (en) Apparatus for buffering data streams read from ram

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20161123

RJ01 Rejection of invention patent application after publication