CN103944958A - Wide area file system and implementation method - Google Patents

Wide area file system and implementation method

Info

Publication number
CN103944958A
Authority
CN
China
Prior art keywords
wide area
area file
module
data
file
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410095627.XA
Other languages
Chinese (zh)
Inventor
马留英
刘浏
许鲁
刘振军
闫鹏飞
蔡杰明
何文婷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin Zhongke Bluewhale Information Technology Co ltd
Institute of Computing Technology of CAS
Original Assignee
Tianjin Zhongke Bluewhale Information Technology Co ltd
Institute of Computing Technology of CAS
Application filed by Tianjin Zhongke Bluewhale Information Technology Co ltd, Institute of Computing Technology of CAS filed Critical Tianjin Zhongke Bluewhale Information Technology Co ltd
Priority to CN201410095627.XA priority Critical patent/CN103944958A/en
Publication of CN103944958A publication Critical patent/CN103944958A/en
Pending legal-status Critical Current


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a wide area file system and an implementation method in the technical field of file systems, providing a file system suitable for wide-area environments. In the system, an event-driven module receives an operation request, and a cache management module checks, by the node number of the requested file, whether the file already exists in the local cache. If it does, the file is served from the cache. If it does not, a node-number management module looks up the file's pathname in the server-side database by the node number, an event processing module then obtains the file and its meta-information according to the request and sends them to the cache management module, and the cache management module stores the file and its meta-information in the local cache. The wide area file system and implementation method do not depend on any particular underlying file system; they only require that the underlying file system can be shared. They also provide cache space management and improve the resource utilization of the system.

Description

Wide area file system and implementation method
Technical field
The invention belongs to the technical field of file systems, and in particular provides an implementation method for a file system suitable for wide-area environments.
Background technology
As information technology penetrates every sector of society, the total amount of data distributed around the globe is growing rapidly at a geometric rate. Against this background of explosive growth in global data volume, next-generation data centers, globalized enterprises, and cloud storage systems all require efficient and reliable shared access to data distributed across wide areas; efficient and reliable global storage is therefore particularly important.
Although large-scale cluster file systems such as GPFS and Lustre exist, and file systems such as GFS and HDFS are widely used in the Internet field, and all of them scale in access bandwidth and capacity and support petabyte-scale data access by large numbers of clients, these file systems were designed for local network environments. In a wide-area network environment, a file system design must address two key constraints. First, compared with a local environment, the low bandwidth, high latency, and network jitter of the wide-area environment challenge both the access performance and the availability of wide-area files. Second, heterogeneous local file systems are often already in use in wide-area data centers, so a wide-area file system must offer good compatibility with heterogeneous systems at low implementation cost.
Existing global storage systems designed for wide-area environments include the following. Amazon Dynamo is a highly available key-value storage system that exposes a simple primary key as its only interface; it has been applied successfully at Amazon and can be deployed on tens of thousands of nodes to provide service across data centers. Yahoo's data storage platform PNUTS is designed mainly for online storage services, chiefly the write access of large numbers of single records or small record sets; it supports a simple relational data model in which data objects are read and written as whole records keyed by an identifier, and it lets applications choose between consistency and performance in the wide-area environment, for example reading the latest version of a record, a specific version, or any version not older than an indicated version. Microsoft's cloud storage platform Windows Azure Platform provides an object-oriented storage model with three storage methods (Blob, Table, and Queue) to meet different application demands, and supports users accessing the storage over the Internet after logging into the system. All three of the above systems are storage services developed for data access by their vendors' internal business logic; they cannot provide file-sharing access services between heterogeneous data centers, and they do not support the POSIX file access interface standard.
Summary of the invention
The object of the present invention is to provide a wide area file system and an implementation method that support cross-region data access. The present invention follows a typical C/S (client/server) model, comprising a client and a server. After an export directory is configured at the server, the client sees, over the wide area, a directory view consistent with the server's export directory. The client uses the standard FUSE KERNEL (FUSE kernel module) and FUSE LIB (FUSE user-space library) to remain compatible with the standard POSIX API, supporting standard system calls (for example open/creat/mkdir/unlink/rmdir/read/write/opendir/readdir, etc.) for access to remote (that is, server-side) files and directories.
Specifically, the invention discloses a wide area file system in which the server comprises a node-number management module, a data transmission module, and an event processing module, and the client comprises an event-driven module, a data transmission module, and a cache management module.
The event-driven module receives an operation request. If the request is a read request, the cache management module checks, according to the node number and extent information of the requested wide area file, whether the file exists in the local cache. If it exists, the file is read from the local cache. If it does not, the node-number management module looks up the file's pathname in the server-side database by the node number, the event processing module obtains the file data according to the extent information, and the data is sent to the cache management module, which stores the data and its corresponding extent information in the local cache according to the cache update policy so that later reads are served locally. The server and the client exchange information through their respective data transmission modules.
The wide area file system further provides file creation: the event-driven module receives a create request; the node number of the new file's parent directory in the server-side database and the name of the new file are sent to the node-number management module; the node-number management module queries the server-side database by the parent directory node number to obtain the parent directory's pathname, locates the parent directory in the server's underlying local file system by that pathname, creates the new file under it with the given name, and obtains the meta-information of the newly created file; the node-number management module also allocates a node number for the newly created file; the event processing module then sends the allocated node number and the meta-information to the cache management module, which stores them in the local cache.
The wide area file system further provides writes: the event-driven module receives a write request for a wide area file, locates the file in the local cache by its node number, writes the data and the extent information of the request into the local cache, and marks the extent as "1" in the local cache, indicating that the written data exists only in the local cache and differs from the data in the server-side copy of the file.
The wide area file system further provides data write-back, with the following steps: after write-back is started, the file is located in the server's underlying local file system by its node number, the data to be written is written into the server-side copy of the file, and the extent is marked "2" in the local cache, indicating that the written data in the local cache is consistent with the server-side copy of the file.
The wide area file system further provides deletion: the event-driven module receives a delete request; the node number of the parent directory of the file to be deleted in the server-side database and the name of the file are sent to the node-number management module; the node-number management module locates the parent directory in the server's local file system by the parent directory node number, and the event processing module deletes the file with that name under that location; at the same time, the cache management module checks whether the file to be deleted is still referenced: if it is, the file is hidden in the local cache and deleted from the local cache once it is no longer referenced; if not, it is deleted from the local cache immediately.
In the wide area file system, the cache update policy includes: recording a reference count for the meta-information of each wide area file in the local cache; when the reference count drops to 0, the meta-information is replaced out of the local cache and stored in the database.
The cache update policy further includes: recording the system time at which the client obtained the meta-information from the server; when the meta-information in the local cache is accessed, comparing that recorded time with the current system time; if the difference is smaller than the time-out, the meta-information in the local cache and the data corresponding to it are valid and are obtained from the local cache; otherwise new meta-information is fetched from the server and its modification time is compared with the modification time of the cached meta-information: if they are identical the cached data is still valid, otherwise the cached data is invalid.
The cache update policy further includes: checking whether the used local cache space has reached a configured upper limit; when the upper limit is reached, an asynchronous cleaning thread is woken up, which first deletes the data caches of wide area files whose meta-information has already been replaced into the database; if the remaining cache space then reaches the configured lower limit, the thread goes to sleep; otherwise it deletes, in order of access time, the data caches of the least recently accessed wide area files until the remaining cache space reaches the lower limit.
Compared with the prior art, the invention has the following advantages:
The system architecture has good generality: it supports the standard POSIX interface; it does not depend on the underlying file system and is compatible with heterogeneous local file systems from different vendors, requiring only that the underlying file system can be shared. No tedious import step is needed: it suffices to configure an export directory at the server, and the client then sees a directory view identical to the server's. A persistent local cache is used, with the meta-information and data of server-side objects cached separately, which improves the access performance for remote hot data and reduces traffic over the wide-area network. The timeout-based cache validity decision is well suited to wide-area network conditions and application demands, so cached files remain continuously accessible even when the wide-area network is unstable. The cache space management function further improves the resource utilization of the system.
Brief description of the drawings
Fig. 1 is the system structure diagram;
Fig. 2 is the overall system flow chart;
Fig. 3 is the read-directory flow chart;
Fig. 4 is the create-file flow chart;
Fig. 5 is the delete-object flow chart;
Fig. 6 is the read-data flow chart;
Fig. 7 is the write-data flow chart.
The reference numerals are as follows:
Module 1 is the client, comprising modules 11, 12, and 13;
Module 2 is the server, comprising modules 11, 22, and 23;
Step 100 is the read-directory procedure, comprising steps 101 to 120;
Step 200 is the create-file procedure, comprising steps 201 to 218;
Step 300 is the delete-object procedure, comprising steps 301 to 320;
Step 400 is the read-data procedure, comprising steps 401 to 420;
Step 500 is the write-data procedure, comprising steps 501 to 519.
Detailed description of the embodiments
As shown in Fig. 1, the server 2 comprises a node-number management module 23, a data transmission module 11, and an event processing module 22. The client 1 comprises an event-driven module 12, a data transmission module 11, and a cache management module 13. The specific purpose of each module is as follows:
Node-number management module 23: this module is used to locate an operated object quickly. To ease the management of objects, module 23 generates for each object (an object being a file or a directory) a globally unique identifier that remains unchanged throughout the object's lifetime and serves as the unique identification of the object in communication with the client 1. The module provides node-number creation, query, and deletion operations. When an object is created, the module allocates a unique node number for it, generates a record corresponding to this node number that captures the mapping between the node number and the object's actual pathname in the server 2 database, and stores the record in the database; when an object is deleted, the record corresponding to its node number is deleted from the database. When the event processing module 22 receives an access request from the client 1, the node number in the request is used to find the corresponding record and obtain the object's pathname in the server 2 database, after which the corresponding operation is carried out.
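By way of illustration only, the following is a minimal Python sketch of how such a node-number management module could be realized with an SQLite table; the table name, column names, and helper methods are assumptions made for this example and are not taken from the embodiment.

```python
# Minimal sketch of a node-number management module: a globally unique node
# number is mapped to the object's pathname under the server's export directory.
import itertools
import sqlite3

class NodeNumberManager:
    def __init__(self, db_path=":memory:"):
        self.db = sqlite3.connect(db_path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS node_map ("
            " ino      INTEGER PRIMARY KEY,"   # node number, stable for the object's lifetime
            " pathname TEXT NOT NULL)"         # pathname of the object on the server
        )
        self._next = itertools.count(start=2)  # 1 could be reserved for the export root

    def create(self, pathname):
        """Allocate a node number for a newly created object and record the mapping."""
        ino = next(self._next)
        self.db.execute("INSERT INTO node_map (ino, pathname) VALUES (?, ?)", (ino, pathname))
        self.db.commit()
        return ino

    def lookup(self, ino):
        """Return the pathname recorded for a node number, or None if it was deleted."""
        row = self.db.execute("SELECT pathname FROM node_map WHERE ino = ?", (ino,)).fetchone()
        return row[0] if row else None

    def delete(self, ino):
        """Remove the record when the object is deleted on the server."""
        self.db.execute("DELETE FROM node_map WHERE ino = ?", (ino,))
        self.db.commit()
```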
Event processing module 22: this module handles the access requests of the client 1. After receiving a request from the client 1, it locates the requested object in the server 2 database through the node-number management module 23 according to the node number in the request, and then processes the request according to its type.
Data transmission module 11: this module exists in both the server 2 and the client 1 and is mainly used for the communication (data and protocol) between the client 1 and the server 2; it supports compression algorithms such as zlib for compressing protocol packets to improve transmission efficiency over the wide-area network.
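As an illustration of the protocol-packet compression mentioned above, the following sketch compresses a serialized request with Python's standard zlib module; the packet layout (a JSON header followed by a raw payload) is an assumption made for this example only.

```python
# Sketch of compressing a protocol packet before sending it over the wide-area link.
import json
import zlib

def pack_request(header: dict, payload: bytes = b"") -> bytes:
    """Serialize and compress one request or response packet."""
    raw = json.dumps(header).encode("utf-8") + b"\0" + payload
    return zlib.compress(raw, level=6)

def unpack_request(packet: bytes):
    """Decompress a packet and split it back into header and payload."""
    raw = zlib.decompress(packet)
    head, _, payload = raw.partition(b"\0")
    return json.loads(head.decode("utf-8")), payload

# Example: a read request identified by (ino, offset, size), as in the read protocol below.
pkt = pack_request({"op": "read", "ino": 7, "offset": 0, "size": 4096})
hdr, _ = unpack_request(pkt)
```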
Event-driven module 12: this module supports the relevant POSIX (Portable Operating System Interface) calls and receives system call requests such as file creation, deletion, read, and write.
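A possible shape of this module, sketched with the third-party fusepy binding to FUSE; fusepy is an assumption of this example (the embodiment only requires standard FUSE components), and the CacheManager calls are placeholders for the client-side modules described below.

```python
# Sketch of exposing POSIX calls through FUSE on the client (assumes: pip install fusepy).
import errno
from fuse import FUSE, FuseOSError, Operations

class WideAreaClientFS(Operations):
    def __init__(self, cache_manager):
        self.cache = cache_manager            # cache management module (13), placeholder object

    def getattr(self, path, fh=None):
        attrs = self.cache.get_attrs(path)    # local cache first, then the server
        if attrs is None:
            raise FuseOSError(errno.ENOENT)
        return attrs                          # dict with st_mode, st_size, st_mtime, ...

    def readdir(self, path, fh):
        return [".", ".."] + self.cache.list_dir(path)

    def read(self, path, size, offset, fh):
        return self.cache.read(path, offset, size)

    def write(self, path, data, offset, fh):
        return self.cache.write(path, data, offset)   # lands in the local cache, flushed later

# FUSE(WideAreaClientFS(cache_manager), "/mnt/wafs", foreground=True)
```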
Cache management module 13: this module manages meta-information and data through the cache update policy, as follows:
The meta-information of objects currently referenced by the system is cached in memory, where a reference count is maintained for it; when the reference count drops to 0, the object's meta-information is replaced out of memory and persisted to the database. This not only meets the current requirements for reliable storage, query, and retrieval of meta-information, but also lays a good foundation for future capacity expansion. For data, caching is performed at the granularity of extents (intervals): the extent information is cached as part of the meta-information, while the actual data content corresponding to each extent is cached in the local database. Cache validity is judged with a simple timeout policy: when meta-information is accessed and the cached copy has exceeded the configured time-out, it is considered expired and must be refreshed by communicating with the server. To improve resource utilization, module 13 also provides threshold-based management of the cache space: an upper limit and a lower limit of the cache space are set at system initialization and an asynchronous cleaner thread is created; during operation, once the cache capacity reaches the upper limit, the cleaner thread is woken up and removes part of the data cache so that the cache space stays within the configured thresholds.
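The following minimal sketch illustrates the cache structures just described: in-memory meta-information with a reference count, an extent-granularity data cache, a fetch timestamp for the timeout check, and upper and lower space thresholds. All names and default values are illustrative assumptions.

```python
# Sketch of the client-side cache structures managed by module 13.
import time

class CachedObject:
    def __init__(self, ino, meta):
        self.ino = ino
        self.meta = meta               # attributes fetched from the server (size, mtime, ...)
        self.refcount = 0              # at 0, the meta-information is persisted to the database
        self.fetched_at = time.time()  # used by the timeout-based validity check
        self.extents = {}              # (offset, length) -> state: 1 = local only, 2 = consistent
        self.last_access = time.time()

class CacheManager:
    def __init__(self, timeout=30.0, upper=1 << 30, lower=1 << 29):
        self.timeout = timeout         # meta-information time-out, in seconds
        self.upper = upper             # upper limit of the cache space, in bytes
        self.lower = lower             # lower limit the cleaner evicts down to
        self.objects = {}              # ino -> CachedObject
        self.used = 0                  # cache space currently occupied

    def meta_valid(self, obj):
        return time.time() - obj.fetched_at < self.timeout

    def get(self, ino):
        obj = self.objects.get(ino)
        if obj is not None:
            obj.refcount += 1
            obj.last_access = time.time()
        return obj

    def put(self, obj):
        obj.refcount -= 1
        if obj.refcount == 0:
            pass   # here the meta-information would be replaced out of memory into the database
```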
The cache update policy further includes:
A timeout policy for judging the validity of meta-information. The time-out of meta-information is set at system initialization, and each time meta-information is obtained from the remote server the current system time is recorded. When an object's meta-information is accessed, the current system time is compared with the time recorded at the last remote fetch: if the difference is smaller than the configured time-out, the meta-information is valid and can be obtained directly from the cache; otherwise the meta-information is considered expired and must be fetched from the remote server again.
The validity of data is judged as follows: if the corresponding meta-information is valid, the cached data is considered valid; if the meta-information has expired, new meta-information is first fetched from the remote server and its modification time is compared with the modification time of the cached meta-information: if they are identical the cached data is considered valid, otherwise the cached data is considered invalid.
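A compact sketch of the two validity rules above, reusing the CachedObject fields from the earlier cache sketch; fetch_meta() stands in for a round trip to the server and is an assumption of this example.

```python
# Sketch of the timeout rule for meta-information and the mtime rule for data.
import time

TIMEOUT = 30.0   # assumed time-out, set at system initialization

def data_is_valid(cached, fetch_meta):
    if time.time() - cached.fetched_at < TIMEOUT:
        return True                       # meta-information still valid: cached data trusted as-is
    fresh = fetch_meta(cached.ino)        # meta-information expired: refresh it from the server
    same = fresh["mtime"] == cached.meta["mtime"]
    cached.meta, cached.fetched_at = fresh, time.time()
    return same                           # unchanged modification time: cached data still valid
```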
The basic processing flow of the present invention is as follows:
As shown in Fig. 2, an application request reaches the virtual file system, which forwards the request through the kernel module to the user-space library; the user-space library triggers the event-driven module 12 of the client 1; the request is then processed by the internal protocol flow of the present invention, passing through the cache management module 13, the data transmission module 11, the event processing module 22, and the node-number management module 23; the result is returned to the user-space library, then through the kernel module to the virtual file system, and finally the virtual file system returns it to the application.
The processing flows of the typical protocols of the present invention (read directory, create, delete, getattr, read, and write) are introduced in turn below.
Read-directory protocol flow: reads the directory data in the requested data extent of the corresponding object. After the virtual file system receives a read-directory request, the request is converted into a triple <ino, offset, size> (where ino is the node number of the directory object to be read, offset is the offset of the read-directory request, and size is the number of bytes to read) and passed to the event-driven module of the client. [offset, size] constitutes the extent information of the read-directory request.
As shown in Fig. 3, the specific read-directory protocol flow is as follows:
Client 1 flow: in step 101, the event-driven module 12 receives the read-directory request from the user-space library. In step 102, the cache management module 13 receives the read-directory request passed on by the event-driven module 12. In step 103, the cache management module 13 queries the meta-information of the directory object by the node number in the request and checks whether the cache information for the requested extent exists in that meta-information; if it exists, step 104 reads the corresponding extent of directory data directly from the local database cache, otherwise the request must communicate with the server to obtain the data. In step 105, the event-driven module 12 receives the result of the cache management module 13; if the requested extent of directory data has been read, step 120 returns it to the user-space library and the read-directory flow ends; otherwise step 106 is executed, in which the event-driven module 12 passes the request to the data transmission module 11, and the data transmission module 11 sends the read-directory request to the server 2.
Server 2 flow: in step 107, the data transmission module 11 of the server 2 receives the read-directory request sent by the client 1 and parses out the triple <node number of the directory object, offset of the read-directory request, number of bytes to read>. In step 108, the data transmission module 11 passes the request to the event processing module 22; in step 109, the event processing module 22 passes it to the node-number management module 23. In step 110, the node-number management module 23 queries the database by the node number of the request, obtains the corresponding record, and obtains the object's pathname in the database; in step 111, the pathname information is returned to the node-number management module 23; in step 112, the node-number management module 23 passes the pathname information to the event processing module 22. In step 113, the directory in the underlying local file system is accessed according to the pathname, the directory data corresponding to the extent information is read, and the attribute information of each directory entry is obtained; the attribute information of each directory entry is obtained in one pass here to accelerate subsequent getattr requests and avoid repeated round trips between the client 1 and the server 2 within the time-out. In step 114, the obtained information is returned to the event processing module 22; in step 115, the event processing module 22 passes it to the data transmission module 11, which sends it to the client 1.
Client 1 flow: in step 116, the data transmission module 11 receives the information returned by the server 2; in step 117, the event-driven module 12 receives it from the data transmission module 11; in step 118, the event-driven module 12 passes it to the cache management module 13; in step 119, the cache management module 13 caches the directory data of the requested extent, the extent information, and the attribute information of each directory entry in the local database; in step 120, the event-driven module 12 returns the directory data of the requested extent to the user-space library, and the read-directory flow ends.
Create-file protocol flow: after the virtual file system receives a create-file request, the request is converted into a triple <pino, name, mode> (where pino is the node number of the parent directory of the object to be created, name is the name of the object to be created, and mode is the creation mode) and passed to the event-driven module 12 of the client 1.
As shown in Fig. 4, the specific create-file protocol flow is as follows:
Client 1 flow: in step 201, the event-driven module 12 of the client 1 receives the create request from the user-space library; in step 202, the event-driven module 12 passes the request to the data transmission module 11, which packs the create request and sends it to the server 2.
Server 2 flow: in step 203, the data transmission module 11 of the server 2 receives the create-file request sent by the client 1 and parses it to obtain the create request. In step 204, the server 2 passes the request to the event processing module 22, which queries the database by the node number of the parent directory in the request to obtain the parent directory's pathname and locates the parent directory in the underlying local file system. In step 205, the create operation is performed under the parent directory: the file is created with the given name and the attribute information of the newly created file is obtained. In step 206, the success of the creation is returned to the event processing module 22; in step 207, the event processing module 22 passes it to the node-number management module 23; in step 208, the node-number management module 23 allocates a unique node number for the newly created file, generates a record for it, and inserts the record into the database; in step 209, the success of the insertion is returned to the node-number management module 23; in step 210, the node-number management module 23 sends the node number of the newly created file to the event processing module 22; in step 211, the event processing module 22 passes the node number and attribute information of the created file to the data transmission module 11, which packs them and sends them to the client 1.
Client 1 flow: in step 212, the data transmission module 11 of the client 1 receives the packed information sent by the server 2; in step 213, the data transmission module 11 sends it to the event-driven module 12; in step 214, the event-driven module 12 sends it to the cache management module 13; in step 215, the cache management module 13 caches the meta-information of the newly created object according to the packed information; in step 216, the caching success is returned to the cache management module 13; in step 217, the cache management module 13 returns the success to the event-driven module 12; in step 218, the event-driven module 12 returns the creation success to the user-space library, and the create flow ends.
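For illustration, a hedged server-side sketch of the create handling described above: the parent directory is resolved through the node-number database, the file is created in the underlying local file system, and a node number is allocated and returned together with the new file's attributes. NodeNumberManager refers to the earlier sketch; all paths and names are examples.

```python
# Sketch of a server-side create handler (steps 204 to 211, compressed).
import os

def handle_create(node_mgr, parent_ino, name, mode=0o644):
    parent_path = node_mgr.lookup(parent_ino)        # pathname of the parent under the export directory
    if parent_path is None:
        raise FileNotFoundError(parent_ino)
    new_path = os.path.join(parent_path, name)
    fd = os.open(new_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY, mode)
    os.close(fd)                                     # create the file in the underlying local file system
    attrs = os.stat(new_path)                        # attribute information sent back to the client
    ino = node_mgr.create(new_path)                  # allocate the globally unique node number
    return ino, attrs
```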
Delete protocol flow: deletes the corresponding object. After the virtual file system receives a delete request, the request is converted into a pair <pino, name> (where pino is the node number of the parent directory of the object to be deleted and name is the name of the object to be deleted) and passed to the event-driven module 12 of the client 1.
As shown in Fig. 5, the specific delete protocol flow is as follows:
Client 1 flow: in step 301, the event-driven module 12 of the client 1 receives the delete request from the user-space library; in step 302, the event-driven module 12 sends the request to the cache management module 13; in step 303, the cache management module 13 locates the object to be deleted in the local cache according to the request and judges whether the object is still referenced: if it is, the delete request is converted into a rename request (the object is renamed to a hidden file whose name begins with "." under the root directory), otherwise the delete proceeds; in step 304, the request type is returned to the cache management module; in step 305, the event-driven module 12 receives the result of the cache management module 13; in step 306, the event-driven module 12 passes the request to the data transmission module 11, which packs it and sends it to the server 2.
Server 2 flow: in step 307, the data transmission module 11 of the server 2 receives the request sent by the client 1 and parses it; in step 308, the data transmission module 11 sends the request to the event processing module 22; in step 309, the event processing module 22 sends the information to the node-number management module 23; in step 310, the node-number management module 23 queries the database by the node number in the request, obtains the corresponding record and the parent directory's pathname, and splices it with the name of the object to be deleted to obtain the pathname of the operated object; in step 311, the pathname information is returned to the node-number management module 23; in step 312, the node-number management module 23 passes the location of the operated object to the event processing module 22; in step 313, the event processing module 22 performs the corresponding operation according to the request type and, after it succeeds, modifies the record of the requested object in the database to the deleted state; in step 314, the operation success is returned to the event processing module 22; in step 315, the event processing module 22 passes it to the data transmission module 11, which packs it and sends it to the client 1.
Client 1 flow: in step 316, the data transmission module 11 of the client 1 receives the packed information sent by the server 2; in step 317, the data transmission module 11 sends it to the event-driven module 12; in step 318, the event-driven module 12 sends it to the cache management module 13; in step 319, the cache management module 13 analyzes the information: if it is the return of a rename operation, the object to be deleted is set to the hidden state in the cache management module 13, meaning it can no longer be accessed, and the object is truly deleted once its reference count drops to 0; otherwise the cache information of the object is deleted directly. In step 320, the event-driven module 12 returns the operation success to the user-space library, and the delete flow ends.
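A minimal sketch of the client-side deletion rule above: if the cached object is still referenced, the request becomes a rename to a hidden name and the real deletion is deferred until the reference count drops to 0. The structures reuse the earlier cache sketch; the leading "." naming follows the text, and everything else is an assumption.

```python
# Sketch of converting a delete into a hide-and-rename when the object is still referenced.
def prepare_delete(cache, parent_ino, name, ino):
    obj = cache.objects.get(ino)
    if obj is not None and obj.refcount > 0:
        obj.hidden = True                                # no longer visible to new lookups
        return ("rename", parent_ino, name, "." + name)  # sent to the server instead of an unlink
    cache.objects.pop(ino, None)                         # drop the local cache entry directly
    return ("unlink", parent_ino, name)
```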
Getattr protocol flow: obtains the attribute information of an object. The process is as follows:
Client 1 processing flow: after the client 1 receives a getattr request, it judges whether the object's attribute information is in the local cache. If it is, it judges whether the cached attribute information is valid. The rule is: if the difference between the system time recorded when the attribute information was last obtained from the server 2 and the current system time is smaller than the time-out set at system initialization, the cached attribute information is valid and can be returned directly to the upper-layer getattr request, and the getattr processing ends; otherwise the cached attribute information has expired and must be fetched again from the server 2. If the attribute information is not in the local cache, no getattr request for this object has been made before, and the attribute information must be obtained from the server 2.
The following describes obtaining the object's attribute information from the server 2: the client 1 sends the getattr request to the server 2.
Server 2 processing flow: after the server 2 receives the getattr request sent by the client 1, it locates the requested object in the underlying local file system under its export directory, obtains its attribute information, and sends the attribute information to the client 1.
Client 1 processing flow: after the client 1 receives the reply, if this is the first getattr request for the object, it caches the attribute information, records the current system time for later validity judgments, returns to the upper-layer getattr request, and the getattr processing ends. If expired attribute information already exists in the cache, the modification time in the newly obtained attribute information is first compared with that in the expired attribute information: if they are identical, the object's data cache is still valid; otherwise the data cache has expired and is removed. This ensures that after the attributes are obtained, the cached information is consistent with the state of the server 2. The newly obtained attribute information is then written into the cache, the current system time is recorded, and finally the upper-layer getattr request is answered; the getattr processing ends.
Read-data protocol flow: reads the data of the corresponding object within a data extent. After the virtual file system receives a read request, the request is converted into a triple <ino, offset, size> (where ino is the node number of the object to be read, offset is the offset of the data to be read, and size is the number of bytes to read) and passed to the event-driven module 12 of the client 1; here [offset, size] is defined as the extent information of the read request.
As shown in Fig. 6, the specific read-data protocol flow is as follows:
Client 1 flow: in step 401, the event-driven module 12 receives the read request from the user-space library; in step 402, the cache management module 13 receives the read request sent by the event-driven module 12; in step 403, the cache management module 13 queries the meta-information of the object by the node number of the read request and judges whether the cache information for the requested extent exists in that meta-information; if it exists, step 404 reads the corresponding extent of data directly from the data cache of the local database, otherwise the read request must communicate with the server 2 to obtain the data; in step 405, the event-driven module 12 receives the result of the cache management module 13; if the requested extent of data has been read, the event-driven module 12 returns it to the user-space library and the read flow ends; otherwise step 406 is executed, in which the event-driven module 12 sends the request to the data transmission module 11, and the data transmission module 11 packs the read request and sends it to the server 2.
Server 2 flow: in step 407, the data transmission module 11 of the server 2 receives the read request sent by the client 1 and parses it; in step 408, the data transmission module 11 sends the read request to the event processing module 22; in step 409, the event processing module 22 sends it to the node-number management module 23; in step 410, the node-number management module 23 queries the database by the node number of the read request, obtains the corresponding record, and obtains the object's pathname in the database; in step 411, the pathname information is returned to the node-number management module 23; in step 412, the node-number management module 23 sends the pathname information to the event processing module 22; in step 413, the corresponding file in the underlying local file system is accessed by the pathname and the data is read according to the extent information of the read request; in step 414, the data read from the extent is returned to the event processing module 22; in step 415, the event processing module 22 passes the data to the data transmission module 11, which packs it and sends it to the client 1.
Client 1 flow: in step 416, the data transmission module 11 receives the information returned by the server 2; in step 417, the event-driven module 12 receives it from the data transmission module 11; in step 418, the event-driven module 12 sends the received information to the cache management module 13; in step 419, the cache management module 13 updates the data content and extent information of the read request into the local cache. When data is written into the database, the module checks whether the used cache space has reached the preset upper limit. As mentioned above, the system has an asynchronous cleaning thread that normally sleeps; when the cache space is detected to have exceeded the configured upper limit, the thread is woken up. It first deletes the data caches of objects that have already been replaced into the database; if the remaining cache space then reaches the configured lower limit, the thread goes back to sleep; otherwise it continues to delete, in order of access time, the data caches of objects still resident in memory, until the remaining cache space reaches the lower limit, after which the thread returns to sleep. To support this cache space management, in which the least recently accessed objects are deleted when space runs short, the access time of each file is recorded; to speed up locating files, the objects replaced into the database are sorted by access time in ascending order, while the objects in memory are ordered by an LRU (least recently used) policy, and during deletion the data caches of the objects with the smallest access times are removed first. After the above steps are finished, step 420 is executed: the event-driven module 12 returns the requested data content to the user-space library, and the read flow ends.
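The asynchronous cleaner just described can be sketched as follows, assuming the CacheManager fields from the earlier sketch: the thread sleeps until the used cache space crosses the upper limit, then evicts data caches in least-recently-accessed order until the lower limit is reached.

```python
# Sketch of the asynchronous cleaner thread with upper and lower watermarks.
import threading

class CacheCleaner(threading.Thread):
    def __init__(self, cache):
        super().__init__(daemon=True)
        self.cache = cache
        self.wakeup = threading.Event()

    def notify_if_full(self):
        """Called after each cache insertion; wakes the cleaner when the upper limit is crossed."""
        if self.cache.used >= self.cache.upper:
            self.wakeup.set()

    def run(self):
        while True:
            self.wakeup.wait()
            self.wakeup.clear()
            # Evict in LRU order (least recently accessed first) down to the lower limit.
            victims = sorted(self.cache.objects.values(), key=lambda o: o.last_access)
            for obj in victims:
                if self.cache.used <= self.cache.lower:
                    break
                freed = sum(length for (_, length) in obj.extents)
                obj.extents.clear()        # drop only the data cache; meta-information is kept
                self.cache.used -= freed
```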
Write protocol flow: writes the data in an extent to the corresponding object. After the virtual file system receives a write request, the request is converted into a quadruple <ino, buf, offset, size> (where ino is the node number of the object to be written, buf is the address holding the data to write, offset is the offset of the data within the object, and size is the number of bytes to write) and passed to the event-driven module 12 of the client 1; here [offset, size] is the extent information of the write request. A traditional Linux write is processed in two steps: the content is first written into memory and later synchronized to disk. Borrowing this idea so that write handling better suits the wide-area network, the write process of the present invention is also divided into two steps: first, the data is written into the local cache and the corresponding extent information is recorded, at which point the write can already be reported to the application as complete; this step is referred to as the write. Only when a write-back operation is triggered is the modification actually written to the server 2, and only then is the write request truly complete.
As shown in Fig. 7, the specific write protocol flow is as follows:
Client 1 flow: in step 501, the event-driven module 12 receives the write request from the user-space library; in step 502, the cache management module 13 receives the write request sent by the event-driven module 12; in step 503, the cache management module 13 queries the meta-information of the object to be written by the node number of the write request and locates its pathname in the underlying local file system; the client 1 locates the corresponding object in the cache management module 13 according to the write request, writes the data content and the corresponding extent information of the write request into the cache, marks the extent as "1", and caches the extent information as part of the meta-information; in step 504, the result of the write request is returned to the cache management module, and at this point the write flow of the wide area file system ends; in step 505, the event-driven module 12 receives the result of the cache management module 13; in step 519, the write success is returned to the user-space library, and the write process ends.
The write-back process is described below: once write-back is triggered, step 506 is executed, in which the event-driven module 12 passes the request to the data transmission module 11, and the data transmission module 11 packs the write request and sends it to the server 2.
Server 2 flow: in step 507, the data transmission module 11 of the server 2 receives the write request sent by the client 1 and parses it; in step 508, the data transmission module 11 sends the write request to the event processing module 22; in step 509, the event processing module 22 sends it to the node-number management module 23; in step 510, the node-number management module 23 queries the database by the node number of the write request, obtains the corresponding record, and obtains the object's pathname in the database; in step 511, the pathname information is returned to the node-number management module 23; in step 512, the node-number management module 23 sends the pathname information to the event processing module 22; in step 513, the corresponding file in the underlying local file system is accessed by the pathname and the data is written according to the information of the write request; in step 514, the write success is returned to the event processing module 22; in step 515, the event processing module 22 sends the information to the data transmission module 11, which packs it and sends it to the client 1.
Client 1 flow: in step 516, the data transmission module 11 receives the information returned by the server 2; in step 517, the event-driven module 12 receives it from the data transmission module 11; in step 518, the event-driven module 12 sends the received information to the cache management module 13, which marks the corresponding extent information of the written object as "2", indicating that the data in this extent is consistent with the data on the server 2; in step 519, the event-driven module 12 returns the success to the user-space library, and the write-back flow ends.
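Putting the two steps together, a hedged sketch of the write path: the write first lands in the local cache and the extent is marked "1"; the later write-back sends dirty extents to the server and re-marks them "2". send_to_server() is a placeholder for the data transmission module, and the structures reuse the earlier cache sketch.

```python
# Sketch of the two-step write: local caching first, write-back to the server later.
def cached_write(obj, offset, data):
    """Step one: the write lands only in the local cache."""
    if not hasattr(obj, "data"):
        obj.data = {}                       # extent -> bytes held in the local cache
    extent = (offset, len(data))
    obj.data[extent] = data
    obj.extents[extent] = 1                 # state 1: exists only in the local cache
    return len(data)                        # the application already sees the write as complete

def write_back(obj, send_to_server):
    """Step two: triggered later; flush dirty extents to the server-side copy."""
    for extent, state in list(obj.extents.items()):
        if state != 1:
            continue                        # only dirty extents need flushing
        send_to_server(obj.ino, extent, obj.data[extent])
        obj.extents[extent] = 2             # state 2: consistent with the server-side file
```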

Claims (9)

1. A wide area file system, characterized in that the server comprises a node-number management module, a data transmission module, and an event processing module, and the client comprises an event-driven module, a data transmission module, and a cache management module,
wherein the event-driven module receives an operation request; if the request is a read request, the cache management module checks, according to the node number and extent information of the requested wide area file, whether the file exists in the local cache; if it exists, the file is read from the local cache; if it does not, the node-number management module looks up the file's pathname in the server-side database by the node number, the event processing module obtains the file data according to the extent information and sends it to the cache management module, and the cache management module stores the data and its corresponding extent information in the local cache according to the cache update policy so that later reads are served locally; the server and the client exchange information through their respective data transmission modules.
2. The wide area file system of claim 1, characterized in that the event-driven module further receives a create request for a wide area file; the node number of the new file's parent directory in the server-side database and the name of the new file are sent to the node-number management module; the node-number management module queries the server-side database by the parent directory node number to obtain the parent directory's pathname, locates the parent directory in the server's underlying local file system by that pathname, creates the new file under it with the given name, and obtains the meta-information of the newly created file, while also allocating a node number for it; the event processing module sends the allocated node number and the meta-information to the cache management module, which stores them in the local cache.
3. The wide area file system of claim 1, characterized in that the event-driven module further receives a write request for a wide area file, locates the file in the local cache by its node number, writes the data and extent information of the request into the local cache, and marks the extent as "1" in the local cache, indicating that the written data exists only in the local cache and differs from the data in the server-side copy of the file.
4. The wide area file system of claim 3, characterized by further comprising data write-back, with the following steps: after write-back is started, the file is located in the server's underlying local file system by its node number, the data to be written is written into the server-side copy of the file, and the extent is marked "2" in the local cache, indicating that the written data in the local cache is consistent with the server-side copy of the file.
5. The wide area file system of claim 1, characterized in that the event-driven module further receives a delete request for a wide area file; the node number of the parent directory of the file to be deleted in the server-side database and the name of the file are sent to the node-number management module; the node-number management module locates the parent directory in the server's local file system by the parent directory node number, and the event processing module deletes the file with that name under that location; at the same time, the cache management module checks whether the file to be deleted is still referenced: if it is, the file is hidden in the local cache and deleted from the local cache once it is no longer referenced; if not, it is deleted from the local cache immediately.
6. The wide area file system of claim 1, characterized in that the cache update policy comprises: recording a reference count for the meta-information of each wide area file in the local cache; when the reference count drops to 0, the meta-information is replaced out of the local cache and stored in the database.
7. The wide area file system of claim 6, characterized in that the cache update policy further comprises: recording the system time at which the client obtained the meta-information from the server; when the meta-information in the local cache is accessed, comparing that recorded time with the current system time; if the difference is smaller than the time-out, the meta-information in the local cache and the data corresponding to it are valid and are obtained from the local cache; otherwise new meta-information is obtained from the server and its modification time is compared with the modification time of the cached meta-information: if they are identical the cached data is valid, otherwise the cached data is invalid.
8. The wide area file system of claim 6, characterized in that the cache update policy further comprises: checking whether the used local cache space has reached a configured upper limit; when the upper limit is reached, an asynchronous cleaning thread is woken up, which first deletes the data caches of wide area files whose meta-information has already been replaced into the database; if the remaining local cache space then reaches a configured lower limit, the thread goes to sleep; otherwise it deletes, in order of access time, the data caches of the least recently accessed wide area files in the local cache until the remaining cache space reaches the lower limit.
9. An implementation method of the wide area file system according to any one of claims 1 to 8.
CN201410095627.XA 2014-03-14 2014-03-14 Wide area file system and implementation method Pending CN103944958A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410095627.XA CN103944958A (en) 2014-03-14 2014-03-14 Wide area file system and implementation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410095627.XA CN103944958A (en) 2014-03-14 2014-03-14 Wide area file system and implementation method

Publications (1)

Publication Number Publication Date
CN103944958A true CN103944958A (en) 2014-07-23

Family

ID=51192439

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410095627.XA Pending CN103944958A (en) 2014-03-14 2014-03-14 Wide area file system and implementation method

Country Status (1)

Country Link
CN (1) CN103944958A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102143215A (en) * 2011-01-20 2011-08-03 中国人民解放军理工大学 Network-based PB level cloud storage system and processing method thereof
CN103139224A (en) * 2011-11-22 2013-06-05 腾讯科技(深圳)有限公司 Network file system and method for accessing network file system
CN102497428A (en) * 2011-12-13 2012-06-13 方正国际软件有限公司 Remote storage system and method for remote storage thereof

Cited By (36)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104301442A (en) * 2014-11-17 2015-01-21 浪潮电子信息产业股份有限公司 Method for achieving client of access object storage cluster based on fuse
CN104537026B (en) * 2014-12-22 2018-08-24 福建亿榕信息技术有限公司 Archives of paper quality document handling method based on local cache
CN104537026A (en) * 2014-12-22 2015-04-22 福建亿榕信息技术有限公司 Paper archive file processing method based on local cache
TWI576703B (en) * 2015-03-27 2017-04-01 宏碁股份有限公司 Electronic apparatus and method for temporarily storing data thereof
US9836468B2 (en) 2015-03-27 2017-12-05 Acer Incorporated Electronic apparatus and method for temporarily storing data thereof
CN104702700A (en) * 2015-03-30 2015-06-10 四川神琥科技有限公司 Mail extracting method
CN104821907A (en) * 2015-03-30 2015-08-05 四川神琥科技有限公司 Email processing method
CN104735152A (en) * 2015-03-30 2015-06-24 四川神琥科技有限公司 Mail reading method based on network
CN104821907B (en) * 2015-03-30 2018-01-30 四川神琥科技有限公司 A kind of E-mail processing method
CN106708833A (en) * 2015-08-03 2017-05-24 腾讯科技(深圳)有限公司 Position information-based data obtaining method and apparatus
US11144609B2 (en) 2015-08-03 2021-10-12 Tencent Technology (Shenzhen) Company Limited Method and apparatus for obtaining data based on location information
CN106708833B (en) * 2015-08-03 2020-04-07 腾讯科技(深圳)有限公司 Method and device for acquiring data based on position information
CN106446097B (en) * 2016-09-13 2020-02-07 苏州浪潮智能科技有限公司 File reading method and system
CN106446097A (en) * 2016-09-13 2017-02-22 郑州云海信息技术有限公司 File reading method and system
CN106709056A (en) * 2017-01-09 2017-05-24 郑州云海信息技术有限公司 Nfs mounted directory exporting method and device
CN106709056B (en) * 2017-01-09 2020-11-20 苏州浪潮智能科技有限公司 Nfs mount directory export method and device
CN108959291A (en) * 2017-05-19 2018-12-07 腾讯科技(深圳)有限公司 Querying method and relevant apparatus
CN108959291B (en) * 2017-05-19 2023-03-24 腾讯科技(深圳)有限公司 Query method and related device
CN109241021A (en) * 2018-09-04 2019-01-18 郑州云海信息技术有限公司 A kind of file polling method, apparatus, equipment and computer readable storage medium
CN109947719A (en) * 2019-03-21 2019-06-28 昆山九华电子设备厂 A method of it improving cluster and reads directory entry efficiency under catalogue
CN109947719B (en) * 2019-03-21 2022-10-11 昆山九华电子设备厂 Method for improving efficiency of cluster reading directory entries under directory
CN110096295B (en) * 2019-05-08 2023-08-08 吉旗(成都)科技有限公司 Multi-module mobile application thermal updating method and system based on reactivating
CN110096295A (en) * 2019-05-08 2019-08-06 吉旗(成都)科技有限公司 The hot update method and system of multimode mobile application based on ReactNative
US11868631B2 (en) 2019-07-12 2024-01-09 Huawei Technologies Co., Ltd. System startup method and related device
CN112214247A (en) * 2019-07-12 2021-01-12 华为技术有限公司 System starting method and related equipment
CN112214247B (en) * 2019-07-12 2022-05-17 华为技术有限公司 System starting method and related equipment
CN111125168A (en) * 2019-11-07 2020-05-08 网银在线(北京)科技有限公司 Data processing method and device, electronic equipment and storage medium
CN111125168B (en) * 2019-11-07 2023-11-03 网银在线(北京)科技有限公司 Data processing method and device, electronic equipment and storage medium
CN111522879B (en) * 2020-04-16 2023-09-29 北京雷石天地电子技术有限公司 Data distribution method based on cache and electronic equipment
CN111522879A (en) * 2020-04-16 2020-08-11 北京雷石天地电子技术有限公司 Data distribution method based on cache and electronic equipment
CN111752905A (en) * 2020-07-01 2020-10-09 浪潮云信息技术股份公司 Large file distributed cache system based on object storage
CN111752905B (en) * 2020-07-01 2024-04-09 浪潮云信息技术股份公司 Large file distributed cache system based on object storage
CN113076292B (en) * 2021-03-30 2023-03-14 山东英信计算机技术有限公司 File caching method, system, storage medium and equipment
CN113076292A (en) * 2021-03-30 2021-07-06 山东英信计算机技术有限公司 File caching method, system, storage medium and equipment
CN114579514A (en) * 2022-04-25 2022-06-03 阿里云计算有限公司 File processing method, device and equipment based on multiple computing nodes
CN114756509A (en) * 2022-05-19 2022-07-15 北京百度网讯科技有限公司 Operation method, system, device and storage medium of file system

Similar Documents

Publication Publication Date Title
CN103944958A (en) Wide area file system and implementation method
JP6371858B2 (en) Atomic writing for multiple extent operations
JP6371859B2 (en) Session management in distributed storage systems
JP6259532B2 (en) Namespace management in distributed storage systems
JP6322722B2 (en) File storage using variable stripe size
CN105324770B (en) Effectively read copy
CN102591970B (en) Distributed key-value query method and query engine system
KR101542707B1 (en) Distributed replica storage system with web services interface
CN104618482B (en) Access method, server, conventional memory device, the system of cloud data
US8463802B2 (en) Card-based management of discardable files
JP2017515212A (en) Scalable file storage service
CN112236758A (en) Cloud storage distributed file system
US20130218934A1 (en) Method for directory entries split and merge in distributed file system
CN104281506A (en) Data maintenance method and system for file system
CN102136003A (en) Large-scale distributed storage system
CN103237046A (en) Distributed file system supporting mixed cloud storage application and realization method thereof
EP3803618A1 (en) Distributed transactions in cloud storage with hierarchical namespace
CN106294870B (en) Object-based distribution cloud storage method
CN108108476A (en) The method of work of highly reliable distributed information log system
US20160179435A1 (en) Systems and methods for shadow migration progress estimation
CN103870202A (en) Distributed storage method and system of block device
CN103067461A (en) Metadata management system of document and metadata management method thereof
CN109120709A (en) A kind of caching method, device, equipment and medium
US10831719B2 (en) File consistency in shared storage using partial-edit files
CN106960011A (en) Metadata of distributed type file system management system and method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140723