GB2218833A - File system - Google Patents
- Publication number
- GB2218833A (application GB8909350A)
- Authority
- GB
- United Kingdom
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K17/00—Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
Abstract
An improved file system is disclosed with particular application in a UNIX operating environment. The file system allows for asynchronous I/O between a memory and a storage device. A number of improvements to a file system such as a UNIX file system, which lead to increased operating speed and efficiency, comprise separation of data 507 and other information 506 within the available file space, attempting to store all data from a single file in a contiguous region in the file system, use of an inode (file information node) indirect block 504 for addressing inodes, which allows for growth of the number of inode blocks without need to reconfigure the file system, and use of a bit map 505 to track free blocks in the system.
Description
FILE SYSTEM
BACKGROUND OF THE INVENTION
1. Field of the invention.
The present invention relates to methods for handling asynchronous (async) input/output (I/O) and file buffering in a computer system and in the preferred embodiment is specifically applicable to async I/O and file buffering in the UNIX(TM) Operating System. UNIX is a trademark of Bell Laboratories, a division of AT&T.
2. Prior Art.
The UNIX operating system, among many other operating systems, views files generally as a stream of bytes. However, internally files are typically distinguished and categorized into a plurality of file types. For example, in UNIX four file types are known: regular files, directories, special files and pipe files. A regular file typically comprises source or object code for a program, or text or other end-user data. A directory file contains a list of filenames in any particular directory along with data identifying information nodes (commonly referred to as INODES) for each file. The INODES contain administrative data for each file which will be discussed in more detail below. Special files represent devices associated with the computer system such as disk drives, terminals and printers. A pipe file may be thought of as a regular file with a first-in, first-out (FIFO) organization and is intended to be written to by one process executing on the computer system and to be read by another process, providing a communication means between the two processes.
An INODE exists for each file in a UNIX-based computer system and comprises information describing attributes of the file, security information or permissions, and other administrative data. Further, INODES comprise the physical disk address of the file they represent. Files in a UNIX system do not require contiguous storage space and disk space is not pre-allocated. Each INODE contains 39 bytes of address information. The 39 bytes are utilized to point either directly or indirectly to the physical addresses in which the file is stored.
The UNIX files are divided into a plurality of one block pieces and the blocks are placed in physical disk space at whatever free blocks are available. The size of a block in
UNIX System V is either 512 bytes or 1024 bytes. Thus, on a system using 1024 byte blocks, a file containing 8000 bytes would require eight blocks of physical disk space. The eight blocks would not necessarily be contiguous and would be individually addressed based on the address information in the INODE. Other UNIX implementations may utilize differing block sizes. Notably, the Berkeley System Distribution (BSD)
UNIX System utilizes 8K byte blocks. The BSD UNIX implementation increased block size to yield increased file system performance.
In both regular type and directory type files, the 39 bytes of address information in the INODE comprise 13 three-byte fields. The first ten of these fields (fields 0-9, bytes 0-29) contain addresses of the physical disk blocks containing file data. Thus, the previously mentioned 8000-byte file would be entirely addressed by the first eight of these fields.
The eleventh field (field 10, bytes 30-32) contains the address of a block containing direct block addresses. This is referred to as an indirect block address. An indirect block may comprise up to block size/3 direct block addresses.
The twelfth field (field 11, bytes 33-35) contains a double indirect block address, pointing to a block containing addresses of indirect blocks; each indirect block in turn contains addresses of physical disk blocks containing file data. The double indirect block may comprise up to block size/3 indirect block addresses.
Finally, the thirteenth field (field 12, bytes 36-38) is a triple indirect block address. It points to a block containing double indirect block addresses which in turn point to blocks containing indirect block addresses which in turn point to blocks of direct addresses which in turn point to blocks containing file data.
Such an approach to file addresses allows addressing of a file with a very large address space. For example, considering the case of a system utilizing 1024-byte blocks, the first ten fields may address a file of up to 10K bytes of data. It will be obvious that addressing a file utilizing direct addressing techniques is faster than addressing utilizing indirect techniques. Further, utilizing a single indirect technique is faster than double indirect and double indirect is faster than triple indirect.
In the example of a system having 1024-byte blocks, further assuming each block may be accessed by a four-byte integer, the indirect address block can hold up to 256 block addresses (1024/4=256), adding 256K to the file's address space. The double indirect block adds an additional 256 indirect blocks or 256 times 256 direct blocks for an additional 64 megabytes of address space. Finally, the triple indirect block adds another 256 double indirect blocks providing a file address space of over 16 gigabytes.
Known UNIX file systems further comprise a block of information about the file system such as its size, how many files may be stored, where free space may be available, etc.
This block is commonly referred to as the superblock and is typically the second block in the file system. The first block in the file system is known as the boot block. If the file system is one from which UNIX is being booted, the boot block is used, otherwise it is unused.
In UNIX System V file systems, the blocks following the superblock are called i-list blocks and comprise the inodes for the file system. The number of inodes is specified by the system administrator when the system is configured. An INODE is 64 bytes long so the number of i-list blocks depends on the number of INODES. In UNIX System V file systems, unlike regular file storage, the i-list is contiguous and always follows the superblock. Data blocks immediately follow the i-list and use the remaining blocks in the file system.
In the BSD file system, INODES are distributed over the disk instead of being contiguously located starting after the superblock. The INODES are located as close as possible to the data associated with them. This design is intended to yield increased speed and efficiency when accessing files.
As previously mentioned, the free block list is maintained in the superblock. Referring to Figure 1, the free block list is described in more detail. The free block list is maintained as a plurality of free block address blocks 101, 102 and 103. Each of the free block address blocks 101, 102 and 103 comprises a count field 105, a linked list pointer field 106 and, in most UNIX file systems, 49 free block pointers 107. The first free block address block 101 is actually maintained in the superblock and the remaining free block address blocks, such as blocks 102 and 103, are located in the data block space and allocated as required.
Each linked list pointer 106 points to the next free block address block in the linked list. The count field 105 maintains a count of the number of free block pointers available in the block. Each free block pointer 107 contains the address of a free block in the file system.
Referring to Figure 2, the overall organization of the UNIX System V file system 200 is illustrated. The boot block 201 is located at block 0, the superblock 202 is located at block 1, the i-list blocks 203 are located contiguously starting at block 2 and continuing for a number of blocks specified by the system administrator, and the data blocks 204 are located after the last i-list block.
It is desired to develop an improved file system which offers increased speed and efficiency in accessing and updating data in the file system and which allows growth of the number of files maintained in the system without requirement of reconfiguration of the system.
Referring to Figure 3, in known UNIX systems, process A 301 may make an I/O request, such as a read request. Process A makes such a read request by communicating the request to a disk process 303. Further, process A writes to a control block 302 information such as the memory location to write data to (or read data from in the case of an output operation) and the number of bytes to be transferred. The control block 302 further comprises status information written by the disk process and read by the requesting process indicating the status of the I/O (in progress, waiting, complete, etc.).
Once process A 301 initiates an I/O request to the disk process 303, process A 301 enters a sleep state. Process A 301 is said to be sleeping on an event, the event being completion of the requested I/O on the requested device such as disk 304. Upon completion of the I/O the disk process receives a completion interrupt, updates process A's control block 302 and wakes process A 301.
It will be obvious to one of ordinary skill that process A 301 must be maintained as an alive process during execution of the I/O in the described system. During execution of the I/O, process A 301 sleeps, which ensures that it may not execute further instructions which might cause it to terminate. If the process were allowed to execute asynchronously with the I/O and terminated as a result of such execution, a new process might be started and assigned to the same memory I/O area as owned by the terminated process. When the I/O completed, the disk process could then update the new process' control block and memory I/O area, leading to unexpected results.
Two types of I/O are supported in a UNIX operating system, buffered and unbuffered. Buffered I/O passes through a set of system buffers. Unbuffered I/O uses a direct memory access (DMA) facility of UNIX to directly transfer data to process A's I/O area in memory 305. Buffered I/O is typically used for slower access speed devices. Further, buffered I/O is commonly used where data is transferred a block at a time rather than in single byte units.
Referring to Figure 4, when buffered I/O is utilized, data is transferred from the I/O device such as disk to a system buffer cache 402. The system buffer cache 402 is utilized to maintain data as well as control and status information. After the completion of the I/O transfer from disk, the DMA facility is utilized to perform a memory to memory transfer putting the data and control and status information in the process' I/O area 403.
It is desired to develop an I/O system which allows for asynchronous I/O in a file system such as a UNIX file system.
It is further desired to develop an I/O system which minimizes memory to memory transfers of information and provides other improvements over the prior art which will be described with reference to the detailed description of the present invention and the accompanying drawings.
It is still further an object of all aspects of the present invention to provide compatibility for programs written for other UNIX implementations.
SUMMARY OF THE INVENTION
The present invention relates to the field of file systems and the preferred embodiment relates specifically to improvements to a UNIX-based file system.
The present invention discloses a method of organizing data in a UNIX-based file system such that accesses to data may normally be accomplished by accessing the data on disk or other storage device a track at a time. Regular file data is stored separately from other data in the system. Further, the present invention discloses a method for allocating space on a disk or other storage device such that data belonging to a single file is located on the disk as close together as possible. These and other aspects of the present invention lead to file system efficiencies and increased speed.
Further, the present invention discloses a method for allocating information nodes (inodes) such that the number of inodes and, thus, the number of files may increase during operation of the computer system and is not limited to a predetermined number established at the time the computer system is configured.
Further, the present invention discloses an improved method of accounting for free blocks in the file system.
Still further, the present invention discloses a method for implementing asynchronous I/O instructions in a file system such as the UNIX file system through use of a monitor process. The disk process reports completion of an I/O transaction to the monitor process instead of to the requesting process, allowing the requesting process to continue execution until requiring requested data.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a block diagram illustrating a prior art free list chain.
Figure 2 is a block diagram illustrating a prior art UNIX File System.
Figure 3 is a block diagram illustrating a prior art I/O method.
Figure 4 is a block diagram further illustrating the prior art I/O method.
Figure 5 is a block diagram illustrative of the file system of the present invention.
Figure 6 is a block diagram illustrative of the format of an inode indirect block address entry.
Figure 7 is a block diagram illustrative of a method of allocating file space in the present invention.
Figure 8 is a block diagram illustrating an I/O method of the present invention.
DETAILED DESCRIPTION OF THE PRESENT INVENTION
An improved file and I/O system is described. In the following description, numerous specific details are set forth such as field sizes, offsets, etc., in order to provide a thorough understanding of the present invention. It will be obvious, however, to one of ordinary skill in the art that the present invention may be practiced without these specific details. In other instances, well-known structures and techniques have not been described in detail in order not to unnecessarily obscure the present invention. Although the present invention is described with specific reference to implementation in a UNIX operating environment, it is obvious to one of ordinary skill that the described invention may be applicable to other file systems and will have specific applications in file systems which are compatible with UNIX.
Referring to Figure 5, a layout of the file system as utilized by the present invention is shown. The file system of the present invention organizes available file space as an array of logical cylinders 501. The first plurality of blocks in the file system of the present invention is either the boot block 502 or unused. The next contiguous plurality of blocks comprise the superblock 503. The superblock comprises a structure as defined in Table I.
TABLE I

typedef unsigned char *ucaddr_t;

typedef struct {
    ulong     magic;      /* Magic number for file system */
    daddr_t   size;       /* Size (in blocks) of the file system */
    ushort    tracksize;  /* Size (in blocks) of a logical track */
    ushort    cylsize;    /* Size (in tracks) of a log. cylinder */
    char      fname[6];   /* File system "name" */
    char      fpack[6];   /* File system "pack name" */
    ulong     flags;      /* File system flags - see below */
    time_t    utime;      /* Time of last superblock update */
    ushort    ilock;      /* Inode list modification lock */
    ushort    inoblks;    /* Inode blocks allocated */
    ulong     ifree;      /* Total free inodes */
    iblock_t  iblock;     /* Pointer to the iblock */
    ushort    mlock;      /* Bit map usage lock */
    ulong     bfree;      /* Total free blocks */
    ulong     tfree;      /* Total free tracks */
    ucaddr_t  tmap;       /* The free track bit map */
    ulong     tmapsize;   /* Pages used by track map */
    ucaddr_t  bmap;       /* The free block bit map */
    ulong     bmapsize;   /* Pages used by block map */
} filsys_t;
The size field gives the size of the file system in blocks. The tracksize field gives the number of blocks in a logical track. As one aspect of the present invention, file space is allocated in tracks. It has been found that allocating file space in relatively large units leads to increased processing efficiency. Allocating data in single track units appears to be optimal. Allocating in greater than single track units appears to suffer from some small rotational delays.
The cylsize field gives the number of tracks in a logical cylinder. In the case of both cylsize and tracksize, the number represents a logical size for allocation purposes and may or may not correspond with the physical disk cylinder and tracksize, respectively.
The iblock field is a pointer to the inode indirect block structure, discussed in more detail below. The ifree field is a count of the total number of free inodes in the system. The ilock field is a lock to guarantee exclusive access to the inode block, preventing inconsistencies and race conditions.
The bfree field is a count of the total number of free blocks in the system. The tfree field is a count of the total number of free tracks in the system. The tmap field is a pointer to a bit map of free tracks in the file system and the bmap field is a pointer to a bit map of free blocks in the file system. Both the bit map of free tracks and bit map of free blocks will be discussed in more detail below. The mlock field is a lock utilized to guarantee exclusive access to the free block bit map, preventing inconsistencies and race conditions.
As one inventive aspect of the present invention, unlike known UNIX file systems, the number of inodes is not fixed in the file system of the preferred embodiment when the file system is created. Instead, the preferred embodiment utilizes the first block following the superblock 503, called an inode indirect block address block or iblock 504 as an array with a structure as shown in Table II, below.
TABLE II

typedef struct {
    unsigned long addr:24;  /* Addr of data block */
    unsigned long free:8;   /* Count of free inodes */
} iblock_t;

The addr field is the address of a data block comprising inodes and the free field is a count of the number of free inodes in the data block. In the preferred embodiment a block size of 4K bytes is utilized and an inode requires 64 bytes.
Therefore, each data block may hold up to 64 inodes (4096/64=64). Each array entry in the iblock is 32 bits long (4 bytes), therefore, the iblock 504 may hold up to 1024 array entries (4096/4=1024) and a total of 64K inodes may exist in the file system (1024x64=65536).
With reference to Figure 6, the address of the particular data block containing a particular inode is found by indexing the array of iblock 504 with the 10 most significant bits 601 of the inode number 600. The offset in the data block is the 6 least significant bits 602 multiplied by the size (in chars) of an inode.
When the file system is first built, only the first entry in the iblock array is allocated to provide inodes for the root directory and the lost and found directory. Data blocks are allocated from the disk free space as necessary when additional inodes are required and appropriate entries are made in the iblock array. In the preferred embodiment, inode data blocks are not freed once allocated.
Referring again to Figure 5, disk free space is accounted for through use of free space bit maps 505 in the preferred embodiment instead of utilizing the linked list of free block address blocks as utilized by known UNIX file systems. The free block bit map comprises one bit for each block in the file system. In the preferred embodiment, a set bit (value 1) indicates the block is free and a clear bit (value 0) indicates the block is allocated. It is obvious to one of ordinary skill that these values may be reversed or more than a single bit may be used to represent each block.
The size of the free space bit map, in blocks, is:

    file system size (in blocks) / (8 x block size (in bytes))

where there are 8 bits per byte.
The present invention further discloses dividing the file space into an overhead area 506 and a data area 507.
The point of division 508 between the overhead area 506 and data area 507 is determined at the time the file system is initialized. The overhead area comprises directory information, inode indirect blocks, the superblock, blocks to backup pipe files, etc. The data area comprises regular files. The organization of the file space in this manner has been found to lead to system efficiencies and aids in allowing data to be stored in contiguous areas.
It has been found that attempting to keep data from a single file together in a contiguous region on disk leads to further efficiencies in the file system and helps to prevent space fragmentation.
The disk allocation routine of the preferred embodiment attempts to spread the beginning of the files throughout the available file space. With reference to Figure 7, the allocation routine may attempt to allocate space for a first file, File A 701 in the file space 700. The allocation process of the present invention requests allocation at a particular location for File A 701. The disk subsystem then provides file space as close to the requested space as possible.
A request may then be made to allocate file space for a second file, File B 705. The allocation routine requests allocation from the disk subsystem at a location which allows for contiguous growth of File A 701. A request may be made for allocation of a third file, File C 706. The allocation routine requests allocation from the disk subsystem at a location which allows for contiguous growth of both File A 701 and File B 705.
When a request is made for an additional extent for one of the existing files, such as File A 701, the allocation routine requests space from the disk subsystem contiguous with File A, such as extent 1 702 and extent 2 703.
It has been found that such an allocation method leads to files occupying contiguous blocks in the file space.
The present invention utilizes a buffer cache when accessing information in the file system. The buffer cache allows for mapping of up to one track of data from the file system. When mapping data into the buffer cache, the file system ensures that each block in the requested track belongs to the current file and maps to the buffer cache those tracks belonging to the current file. A process may make an asynchronous I/O request, as will be explained in more detail below, and the file system will provide data to the buffer cache from up to one track of the data file. The process may begin accessing the data as soon as the first block of the buffer cache is written.
As previously stated, the present invention provides for asynchronous I/O requests in a UNIX file system. By utilizing the methods of the present invention, a process may make an I/O request and continue execution concurrent with the file system processing the request. Execution of the process will be held up when the process attempts to access data which is not yet available from the I/O request.
Referring to Figure 8, a method utilized by the present invention to perform asynchronous I/O's is disclosed. The method will be discussed with specific reference to reading data from the file system, although it will be obvious to one of ordinary skill that the method is equally applicable to other types of I/O transactions.
In the preferred embodiment, process A 801 may make an I/O request by issuing an asynchronous read request. The asynchronous read request comprises parameters including a memory location and a number of bytes to be read. Process A 801 may then continue execution of instructions. The memory location and number of bytes are stored in control block 802.
The disk process 803 initiates the requested I/O on the requested device, such as disk 804, in a manner as described in the Prior Art Section. Upon completion of the I/O, the disk process receives a completion interrupt and updates process A's control block 802.
The present invention discloses use of a monitor process 810. The disk process notifies the monitor process 810 upon completion of the I/O instead of notifying process A 801 as known in the prior art. The monitor process 810 is then responsible for cleaning up the control block 802.
In the preferred embodiment, if process A 801 attempts to execute an instruction which causes it to terminate execution, process A 801 will not release the memory allocated to it for the pending I/O and its control block 802 until the monitor process 810 has notified process A 801 that the I/O has completed.
A counter is maintained in the preferred embodiment of the number of outstanding asynchronous I/O requests. The monitor process continues processing asynchronous I/O requests until the counter indicates by a zero value that all asynchronous I/Os have completed. Each time an asynchronous
I/O is requested by a process, the disk process 803 increments the counter. Each time the monitor process 810 completes cleaning-up the control block area for an asynchronous I/O transaction, the monitor process 810 decrements the counter. Thus, a counter value of zero indicates there are no outstanding asynchronous I/O requests.
The preferred embodiment of the present invention utilizes a computer system having a plurality of processing units. As such, it is possible that disk process 803 may be executing on one processor while the monitor process 810 is executing on a separate processor. The disk process 803 and monitor process 810 may therefore compete to access the counter. The present invention utilizes a semaphore mechanism to prevent access to the counter field by competing processes.
The preferred embodiment further discloses writing data directly into process A's I/O area in memory rather than utilizing the system buffer cache (402 of Figure 4) when performing a buffered I/O transaction. Such a method avoids the requirement of utilizing the computer system's DMA facility to perform a memory to memory transfer.
Thus, an improved file system which is especially suited for a UNIX operating environment is disclosed.
Claims (23)
1. In a file system having a storage means for storing a first plurality of information, an improvement in which said storage means comprises:
a first storage area for storing a second plurality of information, said second plurality of information comprising data about said file system;
a plurality of second storage areas for storing a third plurality of information, said third plurality of information comprising data about individual files in said file system;
a third storage area for storing address information, said address information comprising addresses for each of said plurality of second storage areas.
2. The improvement as recited by Claim 1, wherein said first storage area and said second storage area are contiguous.
3. In a file system having a storage means for storing a plurality of blocks of data, an improvement in which said storage means comprises:
a first storage area for storing a first plurality of information comprising data about said file system;
a second storage area, said second storage area comprising a plurality of data fields, each of said plurality of fields associated with one of said plurality of blocks of data, said data fields indicating whether each of said blocks of data are used.
4. The improvement as recited by Claim 3, wherein said plurality of fields comprises a bit map.
5. In a UNIX-based or compatible file system, said file system comprising a storage means for storing a first plurality of information, an improvement in which said storage means comprises:
a first storage area for storing file system overhead information;
a second storage area for storing file system data information.
6. The improvement as recited by Claim 5, wherein said first storage area comprises:
a third storage area for storing a second plurality of information, said second plurality of information comprising data about said file system;
a plurality of fourth storage areas for storing a third plurality of information, said third plurality of information comprising data about individual files in said file system;
a fifth storage area for storing address information, said address information comprising addresses for each of said plurality of fourth storage areas.
7. The improvement as recited by Claim 6, wherein said first storage area further comprises:
a sixth storage area for storing a plurality of data fields, each of said data fields indicative of whether a block in said file system is used.
8. The improvement as recited by Claim 7, wherein said third storage area, said fifth storage area and said sixth storage areas are contiguous.
9. In a UNIX-based or compatible file system, said file system comprising a storage means having a plurality of blocks of area for storing information, a method for allocating space in said storage means comprising the steps of:
allocating space for a first block of a first file at one of said plurality of blocks of area for storing information;
allocating space for a first block of a second file at one of said plurality of blocks of area for storing information, said space for said second file being allocated at a location to allow for contiguous growth of said first file.
10. The method as recited by Claim 9, wherein said storage means comprises a disk having a plurality of tracks for recording information and said method further comprises the step of retrieving data from said storage means.
11. The method as recited by Claim 10, wherein said step of retrieving data from said storage means further comprises, the steps of:
requesting an asynchronous data transfer;
said file system reading data from said storage means, said data residing on one of said plurality of tracks;
said file system further storing data in a memory means;
said data being made available to a requesting process as it is stored in said memory means.
12. In a file system having a storage means for storing information, a memory means having a first memory area for storing information received from said storage means and a second memory area for storing control information, a method of retrieving information from said storage means comprising the steps of:
a first process executing a first instruction, said first instruction requesting retrieval of information from said storage means;
a second process initiating retrieval of said information responsive to said first process executing said first instruction;
said second process notifying a third process upon completion of said retrieval of said information;
said first process executing a second instruction subsequent to executing said first instruction and prior to completion of said retrieval of information.
13. The method as recited by Claim 12, wherein said first process stores control information in said second memory area and said third process updates said second memory area upon completion of said retrieval of information.
14. The method as recited by Claim 13, wherein said second process increments a counter means when initiating said request for retrieval of information.
15. The method as recited by Claim 14, wherein said second process decrements said counter means when notified of said completion of retrieval of information.
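Claims 12 through 15 describe the asynchronous flow: a first process requests a retrieval, a second process performs it and increments a counter on initiation, a third process is notified on completion and decrements the counter, and the first process keeps executing instructions in the meantime. A rough sketch, using threads to stand in for the processes (all names are assumptions; this is not the patented code):

```python
# Sketch of the Claims 12-15 asynchronous read, with threads standing
# in for the claimed processes. Names and structure are assumptions.
import threading

in_flight = 0                  # count of retrievals in progress (Claims 14-15)
lock = threading.Lock()
done = threading.Event()
buffer = {}                    # memory area receiving retrieved data

def worker(block, fake_disk):
    """Second process: initiates the retrieval and bumps the counter."""
    global in_flight
    with lock:
        in_flight += 1         # Claim 14: increment on initiating retrieval
    buffer[block] = fake_disk[block]   # "read" the block into memory
    on_complete()              # notify the third process

def on_complete():
    """Third process: decrements the counter when notified of completion."""
    global in_flight
    with lock:
        in_flight -= 1         # Claim 15: decrement on completion
    done.set()

def async_read(block, fake_disk):
    # Claim 12: the requesting process returns immediately; the
    # retrieval proceeds concurrently, so the requester can execute
    # further instructions before the read completes.
    threading.Thread(target=worker, args=(block, fake_disk)).start()
```

The point of the counter is that the file system always knows how many transfers are outstanding, so it can decide when a buffer is safe to reuse or when all pending I/O has drained.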
16. A file system comprising:
a data storage means for storing information;
at least one processor means coupled with said data storage means for executing instructions, one of said processor means executing a first instruction of a first process, said first instruction requesting retrieval of information from said data storage means;
one of said processor means executing a second process, said second process initiating retrieval of information from said data storage means responsive to execution of said first instruction;
one of said processor means executing a third process, said second process notifying said third process upon completion of said retrieval of information;
one of said processor means executing a second instruction of said first process subsequent to execution of said first instruction and prior to completion of said retrieval of information.
17. The file system as recited by Claim 16, further comprising:
a memory means coupled with said data storage means for storage of information retrieved from said data storage means, said memory means comprising a counter, said second process incrementing said counter when initiating said retrieval of information, said third process decrementing said counter when notified of completion of said retrieval of information; and
a control means, said control means controlling access to said counter.
18. The file system as recited by Claim 17, wherein said control means comprises a semaphore.
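Claims 17 and 18 add a control means, realized as a semaphore, that governs access to the in-flight counter so the incrementing and decrementing processes cannot race. A binary semaphore used as that control means can be sketched as follows (class and attribute names are assumptions for illustration):

```python
# Sketch of a semaphore-guarded I/O counter per Claims 17-18.
# A binary semaphore (initial count 1) serves as the control means:
# only one process may touch the counter at a time.
import threading

class IOCounter:
    def __init__(self):
        self._sem = threading.Semaphore(1)  # binary semaphore as mutex
        self.value = 0

    def increment(self):
        # Second process: called when initiating a transfer.
        with self._sem:
            self.value += 1

    def decrement(self):
        # Third process: called when notified of completion.
        with self._sem:
            self.value -= 1
```

Guarding the counter this way keeps its value consistent even when many transfers start and complete concurrently.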
19. A file system comprising:
a data storage means for storing information;
a memory means coupled with said data storage means, said memory means having information to be stored on said data storage means;
at least one processor means coupled with said data storage means for executing instructions, one of said processor means executing a first instruction of a first process, said first instruction requesting a transfer of information from said memory means to said data storage means;
one of said processor means executing a second process, said second process initiating said transfer of information to said data storage means;
one of said processor means further executing a third process, said second process notifying said third process upon completion of said transfer of information;
one of said processor means executing a second instruction of said first process subsequent to executing said first instruction and prior to completion of said transfer of information.
20. The file system as recited by Claim 19, wherein said memory means further comprises a counter, said second process incrementing said counter when initiating said transfer of information, said third process decrementing said counter when notified of completion of said transfer of information; and
a control means, said control means controlling access to said counter.
21. The file system as recited by Claim 20, wherein said control means comprises a semaphore means.
22. An improved storage means in a file system substantially as hereinbefore described with reference to the accompanying drawings.
23. A method for allocating space in storage means of a UNIX-based or compatible file system substantially as hereinbefore described with reference to the accompanying drawings.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US19446088A | 1988-05-16 | 1988-05-16 |
Publications (2)
Publication Number | Publication Date |
---|---|
GB8909350D0 GB8909350D0 (en) | 1989-06-14 |
GB2218833A true GB2218833A (en) | 1989-11-22 |
Family
ID=22717683
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
GB8909350A Withdrawn GB2218833A (en) | 1988-05-16 | 1989-04-25 | File system |
Country Status (2)
Country | Link |
---|---|
KR (1) | KR890017640A (en) |
GB (1) | GB2218833A (en) |
History
- 1989-04-25: GB application GB8909350A published as GB2218833A (en), not active, withdrawn
- 1989-05-09: KR application KR1019890006186A published as KR890017640A (en), not active, application discontinued
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB1534290A (en) * | 1974-12-23 | 1978-11-29 | Honeywell Inf Systems | Computer system with data base instructions |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2312059B (en) * | 1996-04-12 | 2000-11-15 | Sony Uk Ltd | Data storage |
US6215748B1 (en) | 1996-04-12 | 2001-04-10 | Sony Corporation | Data storage with data masking array, apparatus and method therefor |
GB2312059A (en) * | 1996-04-12 | 1997-10-15 | Sony Uk Ltd | Data storage free space management |
EP0975167A2 (en) * | 1998-07-14 | 2000-01-26 | Alcatel | Method for using a server, server and control unit |
EP0975167A3 (en) * | 1998-07-14 | 2004-03-31 | Alcatel | Method for using a server, server and control unit |
US7930326B2 (en) | 2000-08-18 | 2011-04-19 | Network Appliance, Inc. | Space allocation in a write anywhere file system |
WO2002017057A2 (en) * | 2000-08-18 | 2002-02-28 | Network Appliance, Inc. | Improved space allocation in a write anywhere file system |
WO2002017057A3 (en) * | 2000-08-18 | 2003-03-20 | Network Appliance Inc | Improved space allocation in a write anywhere file system |
US7418465B1 (en) | 2000-08-18 | 2008-08-26 | Network Appliance, Inc. | File system block reservation manager |
US7822922B2 (en) | 2004-04-22 | 2010-10-26 | Apple Inc. | Accessing data storage systems without waiting for read errors |
US8321374B2 (en) | 2005-06-21 | 2012-11-27 | Apple Inc. | Peer-to-peer N-way syncing in decentralized environment |
US8495015B2 (en) | 2005-06-21 | 2013-07-23 | Apple Inc. | Peer-to-peer syncing in a decentralized environment |
US8635209B2 (en) | 2005-06-21 | 2014-01-21 | Apple Inc. | Peer-to-peer syncing in a decentralized environment |
US7797670B2 (en) | 2006-04-14 | 2010-09-14 | Apple Inc. | Mirrored file system |
US8868491B2 (en) | 2006-08-04 | 2014-10-21 | Apple Inc. | Method and system for using global equivalency sets to identify data during peer-to-peer synchronization |
US8250397B2 (en) | 2007-01-08 | 2012-08-21 | Apple Inc. | N-way synchronization of data |
Also Published As
Publication number | Publication date |
---|---|
GB8909350D0 (en) | 1989-06-14 |
KR890017640A (en) | 1989-12-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US5276840A (en) | Disk caching method for writing data from computer memory including a step of writing a plurality of physically adjacent blocks in a single I/O operation | |
KR940005775B1 (en) | Method of opening disk file | |
US5715455A (en) | Apparatus and method for storing file allocation table efficiently in memory | |
US5680570A (en) | Memory system with dynamically allocatable non-volatile storage capability | |
US5371885A (en) | High performance file system | |
US4536837A (en) | Improved disk file allocation and mapping system utilizing cylinder control blocks and file map having unbalanced tree structure | |
US5386524A (en) | System for accessing information in a data processing system | |
US4603380A (en) | DASD cache block staging | |
US5363487A (en) | Method and system for dynamic volume tracking in an installable file system | |
EP0375188B1 (en) | File system | |
US20030149836A1 (en) | Storage device and method for data sharing | |
US20040030846A1 (en) | Data storage system having meta bit maps for indicating whether data blocks are invalid in snapshot copies | |
US20040105332A1 (en) | Multi-volume extent based file system | |
JPH0578857B2 (en) | ||
EP1265152B1 (en) | Virtual file system for dynamically-generated web pages | |
JPS62165249A (en) | Automatic enlargement of segment size in page segmenting virtual memory data processing system | |
JP2004240985A (en) | Data storage device | |
US20080162863A1 (en) | Bucket based memory allocation | |
JPS59114658A (en) | Management of data memory space | |
CN112463753B (en) | Block chain data storage method, system, equipment and readable storage medium | |
US20080320052A1 (en) | Method and a computer program for inode allocation and De-Allocation | |
GB2218833A (en) | File system | |
US6286089B1 (en) | Coupling facility using dynamic address translation | |
US8918621B1 (en) | Block address isolation for file systems | |
US20060190689A1 (en) | Method of addressing data in a shared memory by means of an offset |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
732 | Registration of transactions, instruments or events in the register (sect. 32/1977) | ||
WAP | Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1) |