CA1266532A - Method to share copy on write segment for mapped files - Google Patents

Method to share copy on write segment for mapped files


Publication number
CA1266532A CA000523241A CA523241A
Prior art keywords
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
Other languages
French (fr)
Keith E. Duvall
Anthony D. Hooten
Larry K. Loucks
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US06/819,455 priority Critical patent/US4742450A/en
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Application granted granted Critical
Publication of CA1266532A publication Critical patent/CA1266532A/en
Anticipated expiration legal-status Critical



    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F12/00Accessing, addressing or allocating within memory systems or architectures
    • G06F12/02Addressing or allocation; Relocation
    • G06F12/08Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
    • G06F12/10Address translation
    • G06F12/1027Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
    • G06F12/1036Address translation using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB] for multiple virtual address spaces, e.g. segmentation
    • G06F2212/00Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
    • G06F2212/65Details of virtual memory and virtual address translation
    • G06F2212/656Address space sharing



A method for facilitating the interchange of data in a UNIX* file between two processes being run concurrently on two virtual machines in a page segmented virtual memory, virtual machine type data processing system. A Shared Copy_On_Write (SCOW) command is created for the UNIX type operating system which, when executed in response to a system call from one process, causes the specified UNIX file to be mapped to a unique segment of the virtual memory. A map node data structure is established for storing the ID of the unique segment and for maintaining a count of the number of users sharing the unique segment. A system call to the SCOW command by the second process involving the same UNIX file checks the map node data structure to see if the file is currently mapped in SCOW mode. Subsequent instructions in the application programs which are run concurrently on the virtual machines operate on the copy of the file in the unique segment, so that any data that is changed, i.e. written by one process, is available to be read by the second process.



Technical Field:
The invention relates in general to methods for controlling access to data stored in a virtual memory of a multi-user information handling system which is being run under a UNIX type operating system. The invention relates, in particular, to a method which permits a user to gain access to a file stored in a virtual memory segment in order to update it, even though another user has previously requested access to the same virtual memory segment of the file and is currently in the process of updating that segment.
Cross-Referenced Applications:
U.S. Patent No. 4,742,447, issued May 3, 1988, by Duvall et al., entitled "Method to Control I/O Access in a Multi-Tasking, Virtual Memory, Virtual Machine Type Data Processing System" is directed to a method for use in a multi-user paged segmented virtual memory data processing system in which a mapped file data structure is selectively created to permit all I/O operations to the secondary storage devices to be executed by simple load and store instructions under the control of the page fault handler.

Background Art:
The prior art discloses various multi-user virtual memory information handling systems. In general, a virtual memory system implies a system having a main memory that is relatively fast, but somewhat limited in capacity because of its cost, and a backing store device which is relatively slow, but rather large, since its cost of storage per bit is relatively inexpensive. Implicit also in a virtual memory system is a paging system which functions to control the transfer of data between the main memory and the backing store. In practice, the main memory is generally a semiconductor memory array, while the backing store is generally one or more disk drives or files, some of which may even allow the media to be replaced by an operator.

*UNIX is a trademark of AT&T.

The main memory has its own arrangement for defining real address storage locations, as does the disk storage subsystem. The system, therefore, employs a virtual address when requesting data from storage. The Virtual Memory Manager (VMM) has the responsibility to check that the data at the virtual address is in main memory and, if not, to transfer the data to main memory from the backing store. The specific manner in which the Virtual Memory Manager accomplishes the transfer varies significantly among the prior art systems, primarily because of the inherent characteristics of the specific hardware, including the conventions adopted for defining real addresses of the storage devices, and also because of the differences in the operating systems under which the hardware is being run.

The motivation for creating a virtual memory type system is based primarily on the realization that the cost of providing real memory of a size that would support either one complex program, or a number of smaller programs which could be run concurrently by one or more users, is prohibitive. Further, since generally there is no real reason for having the entire program resident in main memory, it is more cost effective to store the program data on less expensive disk file backing stores and "page" portions of the data and program into main memory as required. The paging process, when conducted by the Virtual Memory Manager, does not significantly impact overall system performance, since the main processor can switch to another task or process which has previously been paged into main memory.

The prior art virtual memory systems employ various operating systems, since an operating system is generally designed to take advantage of the architecture of the processing unit and a particular application or environment. Some operating systems, such as PC DOS for the family of IBM* Personal Computers (PCs) and compatibles, are designed primarily for a single-user environment.
On the other hand, the UNIX operating system is designed primarily for a multi-user environment. The use of the UNIX operating system has, for a number of technical and non-technical reasons, been somewhat restricted to particular systems. As a result, the number of application programs that are run under a UNIX operating system has, until recently, also been rather limited. Multi-user UNIX systems employing virtual memory have been even more limited.

*Registered Trademark

The manner in which UNIX implements System Calls, particularly to storage, is in many respects quite advantageous to system performance. In UNIX, the System Call is the interface between UNIX and an application program. A System Call by the application program requests the "kernel" portion of the UNIX operating system to perform one particular task or service on behalf of the operating system. The "kernel" portion of UNIX includes approximately 60 System Calls which are not changed between different hardware systems and are the standard interface to UNIX. Other programs in UNIX adapt the kernel to the particular hardware environment.

UNIX has a unique file system for managing data stored on the system's external storage devices, e.g., disk files. While UNIX allows a file to be accessed by many different concurrent users, if the file is to be updated, additional System Calls are required in order to insure that the updating occurs in a serial fashion. These additional System Calls function to lock portions of the file temporarily, reserving that area for the exclusive use of the calling program that is to do the updating. This requires involvement by the "kernel" in the locking and unlocking tasks and, hence, has an adverse effect on overall system performance. The prior art non-virtual UNIX systems do, nevertheless, permit the concurrent use of the same file by different users. The ability to share a portion of the same file among various users is advantageous for interprogram or interprocess communication, in that once the portion of the file is updated by one program, the data is immediately available to all the other programs or processes that are sharing that segment. The term "process," in UNIX terminology, means simply a program that is currently executing.

The memory management function of a typical UNIX operating system is a part of the UNIX kernel and generally is unique for each different Central Processing Unit. Some processing units require the total program to be in memory before any portion of the program can be run. Other CPUs can begin execution of a program while only a small portion is in active memory. The first memory management technique is referred to as "swapping," in that different processes or programs are run for a given period of time and then the entire program is "swapped" out for another program. The second technique is the Virtual Memory technique, which implies that provision must be made for the memory management function to handle page faults, so that defined portions or pages of the program can be brought into main memory as needed and returned to the back-up store when the pages are no longer required.

If the Virtual Memory Management function is left with the kernel of the UNIX operating system, the page fault mechanism will consume a considerable portion of the CPU operating time. As a result, prior art virtual memory systems generally prefer to establish a Virtual Memory Management function as a separate level of programming on a device whose primary function is memory management. The page fault mechanism is then a part of the memory manager, and the CPU is free from the time-consuming tasks of controlling the paging operation.

In the cross-referenced application (Docket '018), a virtual memory data processing system is disclosed in which virtual machines are established by a Virtual Resource Manager which provides each virtual machine with a large virtual memory. In that system, to avoid the potential conflicts that arise in some virtual memory systems between the operating system's requests for I/O disk storage operations and I/O disk storage operations controlled by the page fault handler, the responsibility for performing all I/O disk storage operations was assigned solely to the page fault handling mechanism. In addition, the normal UNIX interface to the application program by System Calls was supplemented by a mapped page technique. This latter technique permitted the application program to employ simple load and store type instructions to address memory, rather than tie up the system processor in executing UNIX System Calls to the disk storage. Any file stored in a defined segment of virtual memory could be mapped at the request of the application program, which, in effect, established a table of virtual addresses and assigned disk block addresses for each page of data that was in the defined segment of virtual memory assigned to that file. The table or map was stored in a separate "segment" of the virtual memory.

The "kernel" of the UNIX operating system was enhanced to provide a new System Call designated "SHMAT_MAP." The conventional UNIX operating system includes a variety of "SHMAT" System Calls, each with a slightly different function, such as 1) read only, 2) read/write, 3) copy_on_write, etc. The SHMAT_MAP command was also provided with the corresponding functions.

Since the system described in the cross-referenced application was designed to operate with applications previously written for a conventional UNIX operating system, all UNIX System Calls had to be supported. The support is transparent to the user, in that any conventional UNIX System Call from an application program to the UNIX kernel is effectively intercepted by the Memory Manager, which then assigns the task to the page fault mechanism. Thus, in that system, the SHMAT_MAP command further specified whether the file was to be mapped read/write (R/W), read only (RO), or copy_on_write (CW). The copy_on_write function in UNIX allows a file in system memory to be changed. When the CW file is paged out of real memory, it does not replace the permanent file. A separate System Call is required for the copy_on_write file, which is usually in a disk cache, to replace the permanent copy of the file in the secondary storage device. Two users who concurrently map a file read/write or read only share the same mapped segment. However, each user who requests to map the same file copy_on_write at the same time creates their own private copy_on_write segment. The term "segment" implies a section of the virtual address space. Each user is permitted to have only one CW segment for a given file at one time. The system of the cross-referenced application, therefore, is fully compatible with the prior art UNIX approach for shared files.

This aspect of the common design, however, perpetuates the problem which exists with UNIX files, in that the sharing of a mapped file CW segment by multiple users is prohibited. The capability of multiple users sharing the same mapped file copy_on_write segment is highly desirable, and a method of achieving that function in systems of the type described in the cross-referenced application is the subject of the present invention.
Summary of the Invention:
In accordance with the method of the present invention, an additional System Call flag is created for the "SHMAT" type System Calls. When this flag is specified by the user in combination with the System Call for a copy_on_write segment, a common copy_on_write segment is created for the mapped file.

The first user to request the shared copy_on_write segment for the file causes creation of a common mapped file copy_on_write segment. The segment ID for this segment would then be saved in a data structure, such as the inode data structure for the UNIX file, so that any future request for the shared copy_on_write segment for the mapped file causes the common copy_on_write segment to be used.

Also saved in the inode structure is a reference counter, used to indicate how many users currently have access to the shared segment (SCOW). Each request for the shared copy_on_write segment for the file causes the counter to be incremented, and each closing of the file descriptor by a user accessing the file referenced by the file descriptor via the copy_on_write segment causes the counter to be decremented. Every time the counter is decremented, a check is made to see if the counter has become zero, and if so, the shared copy_on_write segment is destroyed, so that a future request for a shared copy_on_write segment for the file causes a new shared copy_on_write segment to be created (and a new segment ID placed in the inode structure for the file).

All existing mapped file features continue to be supported, as described in the cross-referenced application: 1) whenever a file is mapped, there exists a read/write segment for the mapped file, so that read or write System Calls reference the file by the mapped file read/write segment; 2) the support of private copy_on_write segments is maintained, so that a user can still continue to request a private copy_on_write version of the file.

It is therefore an object of the present invention to provide an improved method for a number of data processing system users who are concurrently running separate UNIX processes in a page segmented virtual memory environment to share a copy of a file in the same segment of virtual memory.
A further object of the present invention is to provide an improved method for users in a virtual memory data processing system running a UNIX type operating system to concurrently share a file that has been designated copy_on_write by a SHMAT type UNIX System Call.

A further object of the present invention is to provide a new method for permitting users of a UNIX operating system to concurrently share a file that has been opened by a shared copy_on_write UNIX System Call by employing the same mapped copy_on_write segment of the virtual memory.

Objects and advantages other than those mentioned above will become apparent from the following description, when read in connection with the drawing.

Brief Description of the Drawing:

Fig. 1 is a schematic illustration of a virtual memory system in which the method of the present invention may be advantageously employed.

Fig. 2 illustrates the interrelationship of the Virtual Resource Manager shown in Fig. 1 to the data processing system and a virtual machine.

Fig. 3 illustrates the virtual storage model for the system shown in Fig. 1.

Fig. 4 illustrates, conceptually, the address translation function of the system shown in Fig. 1.

Fig. 5 illustrates the interrelationships of some of the data structures employed in the system of Fig. 1.

Fig. 6 illustrates the interrelationship of a number of data structures to the Virtual Resource Manager, the virtual memory, and real memory.

Fig. 7 is a flow chart illustrating the operation of mapping a file copy_on_write.

Fig. 8 is a flow chart illustrating the steps involved in completing the data structures shown in Fig. 6 by a map page range service.


Description of the Preferred Embodiment:

System Overview: Fig. 1 is a schematic illustration of a virtual memory system in which the method of the present invention is employed. As shown in Fig. 1, the system comprises a hardware section 10 and a software or programming section 11. Hardware section 10, as shown, comprises a processor function 12, a memory management function 13, a system memory function or RAM 14, a system bus 15, an Input/Output Channel Controller (IOCC) 16, and an Input/Output bus 21. The hardware section further includes a group of I/O devices attached to the I/O bus 21 through the IOCC 16, including a disk storage function 17, a display function 18, a co-processor function 19, and block 20, representing other I/O devices such as a keyboard or mouse-type device.

The program section of the system includes the application program 22 that is to be run on the system, a group of application development programs 23, or tools to assist in developing new applications, an operating system kernel 24, which, for example, may be an extension of the UNIX System V kernel, and a Virtual Resource Manager program 25, which functions to permit a number of virtual machines to be created, each of which is running a different operating system, but sharing the system resources. The system may operate, therefore, in a multi-tasking, multi-user environment, which is one of the main reasons for requiring a large virtual memory type storage system.

Fig. 2 illustrates the relationship of the Virtual Resource Manager 25 to the other components of the system. As shown in Fig. 2, a virtual machine includes one or more application programs such as 22a - 22c and at least one operating system 30. A virtual machine interface 31 is established between the virtual machine and the VRM 25. A hardware interface 32 is also established between the VRM 25 and the hardware section 10. The VRM 25 supports virtual memory. It can be assumed, for purposes of explanation, that the memory capabilities of the hardware shown in Fig. 1 include a 24 bit address space for system memory 14, which equates to a capacity of 16 megabytes for memory 14, and a 40 bit address space for virtual memory, which equates to 1 terabyte of memory. A paged segmentation technique is implemented for the Memory Management Unit 13, so that the total virtual address space is divided into 4,096 memory segments, with each memory segment occupying 256 megabytes.
Fig. 3 illustrates the virtual storage model. The processor 12 provides a 32 bit effective address which is specified, for example, by the application program. The high order 4 bits of the 32 bit address function to select 1 of 16 segment registers which are located in the Memory Management Unit (MMU) 13. Each segment register contains a 12 bit segment ID section, along with other special control-type bits. The 12 bit segment ID is concatenated with the remaining 28 bits of the initial effective address to provide the 40 bit virtual address for the system. The 40 bit virtual address is subsequently translated to a 24 bit real address, which is used to address the system memory 14.

The MMU 13 utilizes a Translation Look-aside Buffer (TLB) to contain translations of the most recently used virtual addresses. Hardware is used to automatically update TLB entries from main storage page tables as new virtual addresses are presented to the TLBs for translation. Fig. 4 illustrates, conceptually, the TLB reload function.

The 40 bit virtual addresses are loaded into the TLB by looking them up in an Inverted Page Table (IPT), as shown in Fig. 4. The table is "inverted" because it contains one entry for each real memory page, rather than one per virtual page. Thus, a fixed portion of real memory is required for the IPT, regardless of the number of processes or virtual segments supported. To translate an address, a hashing function is applied to the virtual page number (the high order part of the 40 bit virtual address, less the page offset) to obtain an index to the Hash Anchor Table (HAT). Each HAT entry points to a chain of IPT entries with the same hash value. A linear search of the hash chain yields the IPT entry and, thus, the real page number which corresponds to the original 40 bit virtual address. If no such entry is found, then the virtual page has not been mapped into the system, and a page fault interrupt is taken.

The function of the Page Fault Handler (PFH) is to assign real memory to the referenced virtual page and to perform the necessary I/O to transfer the requested data into the real memory. The system is, thus, a demand paging type system.

When real memory becomes full, the PFH is also responsible for selecting which page of data is paged out. The selection is done by a suitable algorithm, such as a clock page replacement algorithm, where pages are replaced based on when the page was last used or referenced. Pages are transferred out to disk storage.

The details of the other data structures employed by the system shown in Figs. 1 and 2 are set forth in the cross-referenced application, particularly U.S. Patent No. 4,742,447, issued May 3, 1988. Similarly, the data structures which were unique to the map file service function of that application are also employed in the method of the present invention. Reference should be made to Fig. 6, specifically to the map node data structures 70 and 71. These two structures are described in detail in the cross-referenced application. The copy_on_write segment field 74 and the copy_on_write map count field 75 are the two specific fields of the map node data structure employed in the method of the present invention to permit concurrent use of a copy_on_write segment.
Fig. 7 is a flow chart illustrating the mapping of a file copy_on_write by an application. The application initiates a process that issues an SHMAT COPY_ON_WRITE instruction, as indicated by block 100.
Block 101 determines if the file is currently mapped read/write by checking the inode data structure. If the file is currently mapped, the process is terminated at block 102, since protocol does not permit a file to be both mapped copy_on_write and read/write.
If the file is not currently mapped, block 103 tests to determine if the segment exists by checking the inode data structure. If the segment exists, block 104 tests the map node data structure 70 to determine if a copy_on_write segment exists; block 105 then increments the reference count field 75 in map node 70 by 1 and obtains the segment ID from the map node in block 106. Block 107 loads the segment register with the obtained ID, and block 108 tests if the file is currently mapped. Block 109 represents the map page range service function which is called from block 108 to map the file. If block 108 indicates the segment is mapped copy_on_write, the process ends at block 110. If block 103 indicates that the segment does not exist, block 111 creates the segment by issuing a call to the create segment service of the system. The test in block 104 is then made, and if a copy_on_write segment does not exist, a call to the create copy_on_write segment service in block 112 is made. The count in map node field 75 is incremented and the process flow continues, as previously described.

When the process issues a UNIX read System Call or load instruction in block 115, or a UNIX write System Call or store instruction in block 116, the operation performs a basic memory reference process, as indicated in block 117. Block 118 tests the Inverted Page Table to determine if the page is in system memory. If not, block 119 allocates a page frame in main memory. This requires an I/O operation in block 120, which halts the process until the page frame is allocated. If block 118 indicates the page is in memory, block 121 tests to see if a read (or load) operation is involved. If so, a request is placed in the I/O queue by block 122.

If a write or store operation is involved, block 123 prepares the page, and blocks 124 and 125 prepare the system to receive the copy_on_write page in a paging space allocation on the disk file for copy_on_write pages. These operations require I/O to the disk file and, therefore, they are queued by block 122.

Fig. 8 is a flow chart illustrating the steps performed by the map page range service in completing the map node data structure 70 and the mapped file data structure 71, shown in Fig. 6.

After a segment has been created, the file must be mapped into the segment. This is a dynamic operation, since the primary storage allocation is virtual, and the segment assignment is transient. As illustrated in Fig. 8, the inode structure 181 is read for the block address of each page to be allocated for the file. Each group of contiguously allocated blocks is summed, and the count recorded in the field adjacent to the starting block number entry in the map page range structure. Discontiguous blocks are reflected in discrete entries in the map page range structure. When the entire file inode structure has been scanned, the map page range SVC is issued and the external page table slot entries for the appropriate segment are updated with the block addresses for each page of the file.


While the invention has been shown and described with reference to a particular embodiment, it should be appreciated by those persons skilled in the art that changes and modifications may be made without departing from the spirit of the invention or the scope of the appended claims.

Claims (6)

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. A method for facilitating the interchange of data stored in a UNIX file between two UNIX processes being run concurrently on two virtual machines in a page segmented virtual memory virtual machine type data processing system having, (1) a main memory including, a first plurality of byte addressable storage locations each of which functions to store one byte of data, (2) a secondary storage device including, a second plurality of block addressable storage locations each of which functions to store at least one virtual page of data, (3) a virtual resource manager for creating at least first and second virtual machines having a UNIX type Operating System (UOS) program which includes, (a) conventional UNIX commands including commands for opening and creating new UNIX files, data transfer commands having parameters for specifying UNIX
file data to be transferred between said device and said main memory, a map instruction which functions to map a specified UNIX file stored in said device to virtual pages in another segment of said virtual memory so as to relate the newly assigned page addresses in said another segment to said corresponding block address in said device, (b) I/O subroutines which run when said transfer commands are executed, (c) means for storing said map instruction at a virtual address in a predetermined segment of said virtual memory, and (d) means for storing a UNIX offset pointer, (4) an application program which includes conventional Unix system calls to said commands, and (5) a memory manager program having, (a) Load and Store type of instructions employing a virtual address for transferring a page of data between said device and said main memory, (b) a page fault handling mechanism for resolving a page fault that occurs as a result of said application program executing one of said Load and Store instructions involving a virtual page which is not currently stored in said main memory, and (6) means for causing said data transfers defined in said system calls to be made under the control of said memory manager and said page fault handling mechanism, rather than said I/O subroutines of said UOS, including means for dynamically generating another said virtual page address within the address range of said another segment by translating said command parameters and said offset pointer for said specified file in response to each said data transfer command, said method facilitating said interchange of said data between said two processes being run concurrently by said first and second virtual machines involving one specified UNIX File, said method comprising the steps of:
(A) creating a shared copy-on-write (SCOW) command for said UOS which functions to cause a UNIX file specified thereby to be mapped to a unique segment by said map instruction, said SCOW command including a first field for storing an indication to distinguish said SCOW command from a conventional copy-on-write command, (B) executing a system call in a first application program being run by said first virtual machine to said SCOW command to cause said specified file to be mapped to said unique segment, (C) establishing a map node data structure with said UOS which includes the step of establishing a SCOW
segment ID field to store the segment ID of said unique segment, (D) storing said unique segment ID in said SCOW
segment ID field of said map node data structure in response to mapping said specified file, (E) executing a system call in a second application program to said SCOW command, (F) checking said map node data structure to determine if said specified file is currently mapped in a mode to be shared, and (G) running said first and second application programs concurrently whereby data in said specified file that is written by either application program is readily available to be read by the other application program.
2. The method recited in claim 1 in which said step of establishing said map node data structure further includes the step of establishing a count field for storing a value indicative of the number of virtual machines that currently have access to said unique segment.
3. The method recited in claim 2 further including the step of updating said value in said count field after said step of checking said map node data structure.
4. The method recited in claim 3 in which said step of updating said value includes the step of incrementing said count by one when another virtual machine starts sharing said unique segment and the step of decrementing said count when a virtual machine stops sharing said segment.
5. The method recited in claim 4 further including the step of destroying said unique segment in response to said step of decrementing said value to zero.
6. The method set forth in claim 5 in which said step of running further includes the steps of changing data stored in said unique segment in accordance with instructions being processed by said first virtual machine and reading said changed data in accordance with instructions being processed by said second virtual machine.
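Claims 2 through 5 describe the lifecycle bookkeeping around the shared segment: a count field in the map node records how many virtual machines are attached, the count is incremented when a sharer joins and decremented when one leaves, and the segment is destroyed when the count reaches zero. A minimal reference-counting sketch of that bookkeeping follows; the structure and function names (`map_node`, `scow_attach`, `scow_detach`) are invented for illustration and do not appear in the patent.

```c
/* Illustrative sketch of the map node bookkeeping in claims 2-5,
 * under assumed names; not the patent's actual data layout. */
#include <assert.h>
#include <stdbool.h>

struct map_node {
    int  scow_segment_id;   /* claim 1 (C)-(D): ID of the unique segment  */
    int  use_count;         /* claim 2: virtual machines currently sharing */
    bool segment_live;      /* cleared when the segment is destroyed       */
};

/* Claim 4: increment the count when another virtual machine
 * starts sharing the unique segment. */
static void scow_attach(struct map_node *mn) {
    mn->use_count++;
}

/* Claims 4-5: decrement when a sharer stops; destroy the segment
 * when the count reaches zero. */
static void scow_detach(struct map_node *mn) {
    if (--mn->use_count == 0)
        mn->segment_live = false;   /* stand-in for destroying the segment */
}
```

For example, two attaches followed by two detaches leave the count at zero and the segment destroyed, matching the claim 5 behavior.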
CA000523241A 1986-01-16 1986-11-18 Method to share copy on write segment for mapped files Expired - Fee Related CA1266532A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US06/819,455 US4742450A (en) 1986-01-16 1986-01-16 Method to share copy on write segment for mapped files
US819,455 1986-01-16

Publications (1)

Publication Number Publication Date
CA1266532A true CA1266532A (en) 1990-03-06



Family Applications (1)

Application Number Title Priority Date Filing Date
CA000523241A Expired - Fee Related CA1266532A (en) 1986-01-16 1986-11-18 Method to share copy on write segment for mapped files

Country Status (6)

Country Link
US (1) US4742450A (en)
EP (1) EP0238158B1 (en)
JP (1) JPS62165250A (en)
BR (1) BR8700152A (en)
CA (1) CA1266532A (en)
DE (2) DE3751645D1 (en)

Families Citing this family (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS61190638A (en) * 1985-02-20 1986-08-25 Hitachi Ltd File control system for virtual computer
JPS62159239A (en) * 1985-12-30 1987-07-15 Ibm Editing system for virtual machine
USRE36462E (en) * 1986-01-16 1999-12-21 International Business Machines Corporation Method to control paging subsystem processing in virtual memory data processing system during execution of critical code sections
US5202971A (en) * 1987-02-13 1993-04-13 International Business Machines Corporation System for file and record locking between nodes in a distributed data processing environment maintaining one copy of each file lock
US4926322A (en) * 1987-08-03 1990-05-15 Compaq Computer Corporation Software emulation of bank-switched memory using a virtual DOS monitor and paged memory management
US5239643A (en) * 1987-11-30 1993-08-24 International Business Machines Corporation Method for reducing disk I/O accesses in a multi-processor clustered type data processing system
JP2561696B2 (en) * 1988-03-31 1996-12-11 Mitsubishi Electric Corp Common area management method in a network system
JPH01303527A (en) * 1988-05-31 1989-12-07 Hitachi Ltd Control method for shared resources
JPH01302444A (en) * 1988-05-31 1989-12-06 Toshiba Corp Logical address cache control system
CA1323448C (en) * 1989-02-24 1993-10-19 Terrence C. Miller Method and apparatus for translucent file system
US5182805A (en) * 1989-07-14 1993-01-26 Ncr Corporation Method and system for determining copy-on-write condition
WO1991008537A1 (en) * 1989-11-30 1991-06-13 Storage Technology Corporation Data record copy apparatus for a virtual memory system
WO1991008536A1 (en) * 1989-11-30 1991-06-13 Storage Technology Corporation Data record move apparatus for a virtual memory system
US5218695A (en) * 1990-02-05 1993-06-08 Epoch Systems, Inc. File server system having high-speed write execution
US5293600A (en) * 1990-04-06 1994-03-08 International Business Machines Corporation Counter and flux bit locking for very fast shared serialization of shared data objects
US5276896A (en) * 1990-06-11 1994-01-04 Unisys Corporation Apparatus for implementing data communications between terminal devices and user programs
US5537652A (en) * 1990-08-20 1996-07-16 International Business Machines Corporation Data file directory system and method for writing data file directory information
US5247681A (en) * 1990-12-18 1993-09-21 International Business Machines Corporation Dynamic link libraries system and method
US5379391A (en) * 1991-03-01 1995-01-03 Storage Technology Corporation Method and apparatus to access data records in a cache memory by multiple virtual addresses
US5481701A (en) * 1991-09-13 1996-01-02 Salient Software, Inc. Method and apparatus for performing direct read of compressed data file
US5276878A (en) * 1992-10-07 1994-01-04 International Business Machines Corporation Method and system for task memory management in a multi-tasking data processing system
JPH06348584A (en) * 1993-06-01 1994-12-22 Internatl Business Mach Corp <Ibm> Data processing system
US5584042A (en) * 1993-06-01 1996-12-10 International Business Machines Corporation Dynamic I/O data address relocation facility
US7174352B2 (en) 1993-06-03 2007-02-06 Network Appliance, Inc. File system image transfer
US5963962A (en) * 1995-05-31 1999-10-05 Network Appliance, Inc. Write anywhere file-system layout
US5566326A (en) * 1993-09-28 1996-10-15 Bull Hn Information Systems Inc. Copy file mechanism for transferring files between a host system and an emulated file system
US5604490A (en) * 1994-09-09 1997-02-18 International Business Machines Corporation Method and system for providing a user access to multiple secured subsystems
US5875487A (en) * 1995-06-07 1999-02-23 International Business Machines Corporation System and method for providing efficient shared memory in a virtual memory system
US5940869A (en) * 1995-06-07 1999-08-17 International Business Machines Corporation System and method for providing shared memory using shared virtual segment identification in a computer system
US5805899A (en) * 1995-07-06 1998-09-08 Sun Microsystems, Inc. Method and apparatus for internal versioning of objects using a mapfile
US6353862B1 (en) * 1997-04-04 2002-03-05 Avid Technology, Inc. Video device manager for managing motion video output devices and supporting contexts and buffer adoption
US6516351B2 (en) * 1997-12-05 2003-02-04 Network Appliance, Inc. Enforcing uniform file-locking for diverse file-locking protocols
US6457130B2 (en) 1998-03-03 2002-09-24 Network Appliance, Inc. File access control in a multi-protocol file server
US6317844B1 (en) 1998-03-10 2001-11-13 Network Appliance, Inc. File server storage arrangement
US6604118B2 (en) 1998-07-31 2003-08-05 Network Appliance, Inc. File system image transfer
US6574591B1 (en) * 1998-07-31 2003-06-03 Network Appliance, Inc. File systems image transfer between dissimilar file systems
US6343984B1 (en) 1998-11-30 2002-02-05 Network Appliance, Inc. Laminar flow duct cooling system
US6728922B1 (en) 2000-08-18 2004-04-27 Network Appliance, Inc. Dynamic data space
US6636879B1 (en) * 2000-08-18 2003-10-21 Network Appliance, Inc. Space allocation in a write anywhere file system
US7072916B1 (en) 2000-08-18 2006-07-04 Network Appliance, Inc. Instant snapshot
US6668264B1 (en) 2001-04-03 2003-12-23 Network Appliance, Inc. Resynchronization of a target volume with a source volume
US7694302B1 (en) 2001-04-05 2010-04-06 Network Appliance, Inc. Symmetric multiprocessor synchronization using migrating scheduling domains
US7178137B1 (en) 2001-04-05 2007-02-13 Network Appliance, Inc. Automatic verification of scheduling domain consistency
US6857001B2 (en) * 2002-06-07 2005-02-15 Network Appliance, Inc. Multiple concurrent active file systems
US7502901B2 (en) * 2003-03-26 2009-03-10 Panasonic Corporation Memory replacement mechanism in semiconductor device
US7085909B2 (en) * 2003-04-29 2006-08-01 International Business Machines Corporation Method, system and computer program product for implementing copy-on-write of a file
US7373640B1 (en) 2003-07-31 2008-05-13 Network Appliance, Inc. Technique for dynamically restricting thread concurrency without rewriting thread code
US8171480B2 (en) * 2004-01-27 2012-05-01 Network Appliance, Inc. Method and apparatus for allocating shared resources to process domains according to current processor utilization in a shared resource processor
US7213103B2 (en) 2004-04-22 2007-05-01 Apple Inc. Accessing data storage systems without waiting for read errors
GB2420639A (en) * 2004-11-24 2006-05-31 Hewlett Packard Development Co Monitoring Copy on write (COW) faults to control zero-copy data transfer
US7689999B2 (en) * 2004-12-01 2010-03-30 Bea Systems, Inc. Sharing dynamically changing resources in software systems
US7334076B2 (en) 2005-03-08 2008-02-19 Microsoft Corporation Method and system for a guest physical address virtualization in a virtual machine environment
JP4494263B2 (en) * 2005-03-25 2010-06-30 Fujitsu Ltd Method for providing redundancy in a service system
US8495015B2 (en) 2005-06-21 2013-07-23 Apple Inc. Peer-to-peer syncing in a decentralized environment
US7523146B2 (en) 2005-06-21 2009-04-21 Apple Inc. Apparatus and method for peer-to-peer N-way synchronization in a decentralized environment
US7464237B2 (en) * 2005-10-27 2008-12-09 International Business Machines Corporation System and method for implementing a fast file synchronization in a data processing system
US7797670B2 (en) 2006-04-14 2010-09-14 Apple Inc. Mirrored file system
US7860826B2 (en) 2006-08-04 2010-12-28 Apple Inc. Method and system for using global equivalency sets to identify data during peer-to-peer synchronization
US7657769B2 (en) 2007-01-08 2010-02-02 Marcy M Scott N-way synchronization of data
JP5650508B2 (en) 2010-11-25 2015-01-07 日本クロージャー株式会社 Container lid
US9201678B2 (en) 2010-11-29 2015-12-01 International Business Machines Corporation Placing a virtual machine on a target hypervisor
US9053053B2 (en) * 2010-11-29 2015-06-09 International Business Machines Corporation Efficiently determining identical pieces of memory used by virtual machines
CN102736945B (en) 2011-03-31 2016-05-18 国际商业机器公司 Method and system for multiple instances of a running application program
US20160156631A1 (en) * 2013-01-29 2016-06-02 Kapaleeswaran VISWANATHAN Methods and systems for shared file storage
JP6377257B2 (en) * 2014-09-01 2018-08-22 Huawei Technologies Co., Ltd. File access method and device, and storage system
CN105580010B (en) 2014-09-01 2019-02-19 华为技术有限公司 Access the method, apparatus and storage system of file
US9880755B2 (en) 2015-02-25 2018-01-30 Western Digital Technologies, Inc. System and method for copy on write on an SSD

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4135240A (en) * 1973-07-09 1979-01-16 Bell Telephone Laboratories, Incorporated Protection of data file contents
US4435752A (en) * 1973-11-07 1984-03-06 Texas Instruments Incorporated Allocation of rotating memory device storage locations
US4625081A (en) * 1982-11-30 1986-11-25 Lotito Lawrence A Automated telephone voice service system
US4577274A (en) * 1983-07-11 1986-03-18 At&T Bell Laboratories Demand paging scheme for a multi-ATB shared memory processing system

Also Published As

Publication number Publication date
EP0238158B1 (en) 1995-12-27
BR8700152A (en) 1987-12-01
DE3751645T2 (en) 1996-07-04
EP0238158A3 (en) 1990-06-13
US4742450A (en) 1988-05-03
JPS62165250A (en) 1987-07-21
EP0238158A2 (en) 1987-09-23
DE3751645D1 (en) 1996-02-08
CA1266532A1 (en)

Similar Documents

Publication Publication Date Title
Liskov The design of the Venus operating system
US8639901B2 (en) Managing memory systems containing components with asymmetric characteristics
US5838968A (en) System and method for dynamic resource management across tasks in real-time operating systems
US4787031A (en) Computer with virtual machine mode and multiple protection rings
US5317705A (en) Apparatus and method for TLB purge reduction in a multi-level machine system
US5845331A (en) Memory system including guarded pointers
KR0132696B1 (en) Memory management method
US5845129A (en) Protection domains in a single address space
CA2152752C (en) Multiprocessor system for locally managing address translation table
US4777589A (en) Direct input/output in a virtual memory system
US9262334B2 (en) Seamless application access to hybrid main memory
US8239656B2 (en) System and method for identifying TLB entries associated with a physical address of a specified range
CA2275970C (en) Object and method for providing efficient multi-user access to shared operating system kernal code using instancing
US5819063A (en) Method and data processing system for emulating a program
JP2613001B2 (en) Virtual memory management system, translation lookaside buffer management method, and translation lookaside buffer purge overhead minimization method
US6326973B1 (en) Method and system for allocating AGP/GART memory from the local AGP memory controller in a highly parallel system architecture (HPSA)
US5075845A (en) Type management and control in an object oriented memory protection mechanism
US6055617A (en) Virtual address window for accessing physical memory in a computer system
US4868738A (en) Operating system independent virtual memory computer system
US5353411A (en) Operating system generation method
US5655146A (en) Coexecution processor isolation using an isolation process or having authority controls for accessing system main storage
US5075848A (en) Object lifetime control in an object-oriented memory protection mechanism
US5197148A (en) Method for maintaining data availability after component failure included denying access to others while completing by one of the microprocessor systems an atomic transaction changing a portion of the multiple copies of data
US7620766B1 (en) Transparent sharing of memory pages using content comparison
US5239647A (en) Data storage hierarchy with shared storage level

Legal Events

Date Code Title Description
MKLA Lapsed