PROVIDING CLIENT ACCESSIBLE NETWORK-BASED STORAGE
Background

This invention relates to client-server environments and, more particularly, to diskless clients.
As the average selling price of personal computers declines, computer manufacturers often look for ways to more efficiently serve the market. In a client-server network environment, many redundancies may exist which present opportunities for computer manufacturers.
One possible redundancy is the use by several clients on a network of the same operating system and/or application programs. The vast majority of clients in a small business or educational environment carry storage contents that are identical and redundant. In particular, the storage space used for the operating system (such as Microsoft Windows) and the office productivity applications can take up to one gigabyte of space on a modern client, and this content is duplicated among the various clients. The idea of a diskless client or diskless workstation is not new. However, with the advent of the 100 Mbit Ethernet network connection, a diskless client may become more practical. The server for these diskless clients may be designed to offer each client an amount of disk storage dedicated for that client. The space on the server would be the equivalent of a "C" drive (or main hard disk drive) for the client.
One way to optimize server storage for diskless clients is to divide the client's files into two categories: shared and private. Those files that are normally shared by all clients are loaded into a different directory structure than the private files. Microsoft used this implementation to support diskless clients in Windows 95. However, this implementation is problematic because most applications do not install and work properly when operating system files are loaded into more than one directory structure. In part because of this, Microsoft chose to drop the diskless support after Windows 95.
Other attempts to support diskless clients have been made. These methods tend to be highly complex and focus on using a higher-level file system protocol, such as the Network File System (NFS). However, particularly in low-end small business environments with very simple networks, such solutions may be too complex.
Thus, there is a continuing need to provide a diskless client-server environment that works seamlessly with operating system and other software.
Brief Description of the Drawings

Figure 1 is a block diagram of a system according to one embodiment of the invention;
Figure 2 is a flow diagram of the client request processor according to one embodiment of the invention;
Figure 3 is a block diagram of the drive apportionment software according to one embodiment of the invention;
Figure 4 is a detailed block diagram of the drive apportionment software according to one embodiment of the invention;
Figure 5 is a flow diagram of operation of the drive apportionment software according to one embodiment of the invention;
Figure 6 is a block diagram of the drive apportionment software using a cache according to one embodiment of the invention;
Figure 7 is a flow diagram of the specialized BIOS of the client according to one embodiment of the invention;
Figure 8 is a flow diagram showing operation of the client virtual boot record according to one embodiment of the invention;
Figure 9 is a flow diagram showing operation of the disk drive controller proxy according to one embodiment of the invention; and
Figure 10 is a block diagram of the system according to one embodiment of the invention.
Detailed Description

The following describes a system for providing network-based storage to clients on a network. For purposes of explanation, specific embodiments are set forth to provide a thorough understanding of the invention. However, it will be understood by one skilled in the art that the invention may be practiced without the particular details described herein. Further, although the embodiments are described in terms of hard disk drive storage media, the illustrated system may be applied to other non-volatile media, including, but not limited to, compact disk read-only memories (CD ROMs), optical storage media, tape drives, flash memories, and option ROMs. Moreover, well-known elements, devices, process steps, and the like, are not set forth in detail in order to avoid obscuring the invention.
In accordance with the several embodiments described herein, a server system on a network may support a plurality of diskless clients. Where possible, software on the server, and thus storage space, is shared between the diskless clients, providing an efficient network. Using specialized firmware on the client, the client is able to communicate with software on the server to redirect disk accesses. By providing a proxy for the disk drive controller on the client, operating system and other software operate as though the client has a local hard disk drive.
In Figure 1, a system 100 includes a server 50 and a client 30, both connected to a network 40. The server 50 and the client 30 are both processor- based systems. In one embodiment, the server 50 and client 30 are connected to the network 40 using a fast Ethernet Local Area Network (LAN) protocol, which supports data transfer rates of 100 megabits per second.
In one embodiment, the server 50 includes a plurality of software 60. For example, a client request processor 150 receives network packets indicating hard disk drive requests from the client 30. The client request processor 150 translates these requests such that other software 60 on the server 50 may process them. The client request processor 150 further translates results from operations performed on the server 50 such that they may be transmitted back to the client
30. The client request processor 150, according to one embodiment, is described in connection with Figure 2, below.
The server software 60 further includes drive apportionment software 200. The drive apportionment software 200 receives hard disk drive access requests from the client request processor 150. The drive apportionment software 200 services the requests based upon a particular mapping of the hard disk drive on the server 50. The drive apportionment software 200, according to several embodiments, is described in more detail in connection with Figures 3 through 6, below. The server 50 further includes one or more client virtual boot records 250.
The client virtual boot record or records 250, in one embodiment, may be downloaded to the client 30 following a request by the client 30 to boot. In some embodiments, the server 50 includes more than one client virtual boot record 250, such that the various clients 30 on the network 40 may be flexibly configured. The client virtual boot record 250, according to one embodiment, is discussed in connection with Figure 8, below.
In one embodiment, the server 50 further includes a client drive redirection driver 300. The client drive redirection driver 300, like the client virtual boot record 250, is downloaded to the client 30 during initialization. The drive redirection driver 300 receives requests to access a hard disk drive, which is not present on the client 30. Once loaded on the client 30, the drive redirection driver 300 transmits the requests over the network 40 to the server 50, such that the requests may be serviced on the hard disk drive of the server 50. The client drive redirection driver 300, according to one embodiment, is also described in connection with Figure 8, below.
In addition to the software 60 on the server 50, the client or clients 30 include both a specialized basic input/output system (BIOS) 350 and a disk drive controller proxy 400. In one embodiment, the specialized BIOS 350 is part of a memory 62, such as a read-only memory (ROM). The specialized BIOS 350 assists the client 30 during power on such that the client 30 is connected to the network 40, receives the attention of the server 50, receives the virtual boot
record 250 from the server 50, and receives the drive redirection driver 300 from the server 50. Together, these operations permit the diskless client 30 to nevertheless service disk drive requests. The specialized BIOS 350, according to one embodiment, is discussed in connection with Figure 7, below.

The disk drive controller proxy 400, in one embodiment, causes software running on the client 30 to believe that a disk drive controller is present on the client 30. Thus, software operating on the client 30 sends a disk drive request to the disk drive controller proxy 400, just as the software would send a request to a real disk drive controller. However, in some embodiments, the disk drive controller proxy 400 may operate differently than a disk drive controller. Instead, in one embodiment, the disk drive controller proxy 400 encapsulates the disk requests in packets such that they may be transmitted over the network 40, and ultimately serviced by the drive apportionment software 200 of the server 50.
The disk drive controller proxy 400 may also receive packets from the network 40 and return the drive results to the requesting software. In one embodiment, the disk drive controller proxy 400 is hardware-based, embedded in a multi-purpose chip, such as a southbridge controller chip. In a second embodiment, the disk drive controller proxy 400 is firmware, stored in the ROM 62 of the client 30. The disk drive controller proxy 400, according to one embodiment, is discussed in more detail in connection with Figure 9, below.
The client request processor 150 may receive disk requests from the client 30 encapsulated as network packets, as well as transmit responses to the disk operations back to the client 30. In Figure 2, the operation of the client request processor 150, according to one embodiment, begins by receiving a packet over the network 40 from the client 30 (block 152). In one embodiment, the packet is an Ethernet packet. However, the packet may be transmitted using other protocols.
From the packet, the client request processor 150 extracts a drive request from the encapsulated packet (block 154). The client request processor 150 then calls the drive apportionment software 200 (block 156). The drive apportionment software 200, in one embodiment, translates the drive request by the client 30
into a local storage request, that is, a request for the hard disk drive of the server 50.
After servicing the drive request, the client request processor 150 receives one or more results from the drive apportionment software 200 (block 158). Reversing the prior operations, the client request processor 150 encapsulates the drive results into packets (block 160), such that the packets may be transmitted over the network 40 (block 162). Thus, the operation of the client request processor 150, according to one embodiment, is complete.
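The flow of Figure 2 may be sketched as follows. This is an illustrative model only: the wire format (a one-byte opcode, a four-byte sector number, and a two-byte sector count) and all names are assumptions, not part of the described embodiment.

```python
import struct

# Assumed request header: opcode (1 byte), sector (4 bytes), count (2 bytes).
HEADER = ">BIH"

def handle_client_packet(packet: bytes, apportionment) -> bytes:
    """Model of the client request processor 150 (Figure 2)."""
    # Blocks 152/154: receive the packet and extract the drive request.
    op, sector, count = struct.unpack_from(HEADER, packet)
    # Block 156: call the drive apportionment software to service it.
    if op == 0:  # read request
        result = apportionment.read(sector, count)
    else:        # write request
        result = apportionment.write(sector, packet[struct.calcsize(HEADER):])
    # Blocks 160/162: encapsulate the result for transmission back.
    return struct.pack(HEADER, op, sector, count) + result
```

A real implementation would, of course, receive and send these packets over the network interface rather than as function arguments.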
When the drive apportionment software 200 is called by the client request processor 150, the request may be serviced in a number of ways. For example, the server 50 may allocate a distinct portion of its storage space to each requesting client 30 on the network 40. On a typical network, each client 30 may boot the same operating system or may use the same application software as another client on the network 40. Thus, according to one embodiment, the system 100 allocates server 50 storage with this common usage model in mind.
In Figure 3, according to one embodiment, the server 50 includes a nonvolatile storage, such as a hard disk drive 80, including a common storage region 82, as well as private storage regions 84 for each of the clients 30 on the network 40. In a second embodiment, the non-volatile storage is an optical disk. In Figure 3, the network 40 includes client 30a, client 30b, and client 30c.
Accordingly, the hard disk drive 80 includes client private storage 84a, client private storage 84b, and client private storage 84c.
Further, the server 50 includes a memory 20, which stores an information database, such as a table, which keeps track of which client is using which portion of the hard disk drive 80. In one embodiment, the memory 20 stores a bitmap 22, one for each client 30, to supply this information. For each client 30a, 30b, and 30c, the memory 20 includes distinct bitmaps: a client bitmap 22a, a client bitmap 22b, and a client bitmap 22c. These bitmaps 22 assist the drive apportionment software 200 to determine where on the hard disk drive 80 to service requests from the client 30.
Each bitmap 22 is comprised of a plurality of bits. In one embodiment, each bit represents a sector of disk storage available to each client 30, as between the common storage 82 and the private storage 84. Further, in one embodiment, the location of the bit 12 in the bitmap 22 represents the location of the sectors in the common storage 82.
In Figure 4, the client bitmap 22a, according to one embodiment, is located in the memory 20 of the server 50, and includes a plurality of bits 12. In one embodiment, the number of bits 12 in the client bitmap 22a represents the number of sectors available to the client 30a, while the number of bits 12 set to a "1" represents the number of private sectors available to the client 30a.
In Figure 4, the common storage 82 of the server 50 includes a plurality of sectors 86. Each bit 12 of the client bitmap 22a initially represents each sector 86 of the common storage 82. However, in one embodiment, each time a bit 12 is set in the client bitmap 22a, a copy of the sector 86 is copied from the common storage 82 to the client private storage 84a as a private sector 88.
In addition to identifying the number of sectors available to the client 30a, the bitmap 22a further may identify the location of the requested sector 86 on the hard disk drive 80. For example, in one embodiment, a "0" in the bitmap 22a for bit 12 in the third position directs the drive apportionment software 200 to retrieve the third sector from the common storage 82 of the hard disk drive 80.
By contrast, a "1" in the bitmap 22a for the ninth bit, bit 12a in Figure 4, indicates that the sector to retrieve was originally in the ninth position, sector 86a in Figure 4, of the common storage 82. However, the sector 86a has previously been copied to the client private storage 84a as private sector 88a. Because bit 12a is the first non-zero occurrence in the client bitmap 22a, the first private sector 88a is retrieved from the client private storage 84a. Thus, the "1" in bit 12a of the bitmap 22a directs the drive apportionment software 200 to retrieve the first private sector 88a from the private storage 84a on behalf of the requesting client 30a. Programmers of ordinary skill in the art recognize this as but one of several possible implementations of the client bitmap 22a.
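The bitmap lookup just described may be modeled as follows. This is a sketch of one possible implementation, assuming, as in the example above, that a private sector's index equals the ordinal of its set bit within the bitmap.

```python
def locate_sector(bitmap: list[int], logical_sector: int):
    """Map a client's logical sector to (region, index) per the bitmap
    scheme: a 0 bit means the sector at the same index in the common
    storage 82; a 1 bit means the Nth private sector 88, where N counts
    the set bits at or before this position in the bitmap 22."""
    if bitmap[logical_sector] == 0:
        return ("common", logical_sector)
    # Ordinal of this set bit gives the index into private storage.
    private_index = sum(bitmap[: logical_sector + 1]) - 1
    return ("private", private_index)
```

With a bitmap whose ninth bit is the first set bit, the ninth logical sector resolves to the first private sector, matching the example in the text.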
Thus, when all bits 12 in the bitmap 22a are set to "0", the drive apportionment software 200 retrieves all sector read requests from the common storage 82 of the hard disk drive 80. For some systems 100, the common storage area 82 may service the vast majority of accesses by the clients 30 on the network 40. For example, the loading of operating system and other application software typically involves only read operations. Accordingly, client 30a and client 30c may share identical operating system software located in the common storage area 82 of the hard disk drive 80.
Occasionally, the client 30 may perform write operations to its virtual storage. In one embodiment, when a sector write operation is requested by the client 30, the bit 12 corresponding to the requested sector 86 is set to a "1" (that is, if the bit 12 has not already been set). The relevant sector 86 is copied to the client private storage 84 as private sector 88, where the write operation may subsequently be performed. Once a write to a sector is made, all subsequent accesses to the sector are from the private storage 84 of the client 30.
In Figure 4, four bits of the client 30 bitmap 22a are set to a "1", 12a, 12b, 12c and 12d. Likewise, four sectors 86 of the common storage 82, 86a, 86b, 86c, and 86d, are copied to the client private storage 84a, as private sectors 88a, 88b, 88c, and 88d. Thus, in one embodiment, the number of bits 12 which are set to "1" in the client bitmap 22 represents the number of private sectors 88 located in the client private storage 84.
In Figure 5, the drive apportionment software 200, according to one embodiment, begins by receiving the hard disk drive 80 access request, such as from the client request processor 150, on behalf of the client 30 (block 202). In one embodiment, all read requests are initially serviced from the common storage 82 of the hard disk drive 80. However, once a sector 86 is copied as a private sector 88 and the private sector 88 is written to, subsequent read requests of the sector 86 are directed to the private storage 84 of the client 30 and the associated private sector 88. Accordingly, the drive apportionment software 200 determines whether a read or a write request is being made (diamond 204).
If a read request is made, the drive apportionment software 200, running on the server 50, accesses the bitmap 22 of the client 30 from the memory 20 of the server 50. The bit 12 for the requested sector is tested (diamond 206). If the relevant bit 12 is set, a prior write operation to the private sector 88 may be presumed. If the bit 12 is not set, the requested sector 86 is retrieved from the common storage 82 of the server 50 (block 208). The requested sector 86 is then sent to the client request processor 150 (block 210). From there, according to one embodiment, the client request processor 150 sends the result across the network 40 to the client 30.

If the relevant bit 12 of the client's bitmap 22 is set (diamond 206), then, although this is a read request, the private sector 88 is retrieved from the private storage 84 of the client 30 (block 212). The retrieved private sector 88 is then sent to the client request processor 150, where it is packetized and transmitted to the client 30 over the network 40.

To process write requests (diamond 204), the drive apportionment software 200 reads the bitmap 22 of the client 30 to determine if the relevant bit 12 is set (diamond 214). If so, the requested sector is already in the private storage 84 of the client 30 as private sector 88. Accordingly, the private sector 88 is identified (block 220), then a write operation is performed on the private sector 88 in the client private storage 84 (block 222).
If, however, the relevant bit 12 is not set in the client bitmap 22, the requested sector 86 is copied from the common storage 82 and stored as private sector 88 in the private storage 84 of the client 30 by the drive apportionment software 200 (block 216). The relevant bit 12 is then set in the client bitmap 22 (block 218). This ensures that subsequent retrievals are to the private sector 88 in the private storage 84 of the client 30, not from the sector 86 in the common storage 82. The requested private sector 88 is then written to in the private storage 84 of the hard disk drive 80 of the server 50 (block 222). Thus, the operation of the drive apportionment software 200, according to one embodiment, is complete.
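The Figure 5 flow amounts to a copy-on-write scheme, which may be sketched as follows. This model is illustrative only: storage is held in memory rather than on a hard disk drive, and, for simplicity, private sectors are keyed directly by logical sector number rather than by the ordinal scheme described earlier.

```python
class DriveApportionment:
    """Model of the drive apportionment software 200 (Figure 5):
    reads come from common storage until a sector is privatized
    by a write (copy-on-write)."""

    def __init__(self, common: list[bytes]):
        self.common = common                 # shared sectors 86
        self.private = {}                    # client private sectors 88
        self.bitmap = [0] * len(common)      # client bitmap 22

    def read(self, sector: int) -> bytes:
        if self.bitmap[sector]:              # diamond 206: bit set?
            return self.private[sector]      # block 212: private copy
        return self.common[sector]           # block 208: common copy

    def write(self, sector: int, data: bytes) -> None:
        if not self.bitmap[sector]:          # diamond 214: bit not set
            # Blocks 216/218: copy the sector to private storage,
            # then set the bit so later reads use the private copy.
            self.private[sector] = self.common[sector]
            self.bitmap[sector] = 1
        self.private[sector] = data          # block 222: perform write
```

Note that the common copy is never modified, so other clients sharing the common storage are unaffected by one client's writes.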
In some embodiments, much of the frequently used portions of the common storage 82 of the server 50 may be stored in a cache, such as a memory cache. In this way, read requests from the client 30 may be honored from an in-memory cache, rather than from the common storage 82 of the hard disk drive 80. In other embodiments, a processor cache is used to store commonly used portions of the common storage 82. The use of a cache may further speed operations such as booting or launching a productivity application by the client 30.
In Figure 6, the server 50 includes the common storage 82, just as in Figure 4. The server 50 further includes a cache bitmap 24, which, like the client bitmaps 22, may be located in the memory 20 of the server 50.
The memory 20 may further include a server cache 44. Like the client private storage 84 of Figure 4, the server cache 44 may be used to transfer the sectors 86 from the common storage 82 as new sectors 92, such as for sectors which are frequently used. Unlike the client private storage 84, which is located on the hard disk drive 80, the server cache 44 is located in the memory 20. Thus, in one embodiment, accesses to the sectors 92 in the server cache 44 may be serviced at a substantially higher speed than those to the hard disk drive 80 of the server 50. In Figure 6, the frequently used sectors, 86e through 86l, as indicated by the bits 12e through 12l in the cache bitmap 24, are accessible in the server cache 44 as sectors 92e through 92l.
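The cache-bitmap arrangement of Figure 6 can be modeled along the same lines. In this sketch (an illustration only, with the promotion policy left out), the cache bitmap records which common sectors have been copied into the in-memory cache.

```python
class CachedCommonStorage:
    """Model of Figure 6: frequently used common sectors 86 are
    copied into an in-memory server cache 44 as sectors 92; the
    cache bitmap 24 records which sectors the cache can serve."""

    def __init__(self, disk_sectors: list[bytes]):
        self.disk = disk_sectors                    # common storage 82
        self.cache_bitmap = [0] * len(disk_sectors) # cache bitmap 24
        self.cache = {}                             # server cache 44

    def promote(self, sector: int) -> None:
        # Copy a frequently used sector into the memory cache.
        self.cache[sector] = self.disk[sector]
        self.cache_bitmap[sector] = 1

    def read(self, sector: int) -> bytes:
        if self.cache_bitmap[sector]:
            return self.cache[sector]   # fast in-memory path
        return self.disk[sector]        # fall back to the hard disk
```

How sectors are chosen for promotion (access counting, boot-time prefetching, and so on) is a policy decision left open by the description.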
In another embodiment, the client bitmaps 22 as well as the cache bitmap 24 may identify blocks that differ in size from a single sector for movement to the client private storage 84 or the server cache 44. A hard disk drive sector is typically 512 bytes in length. However, modern file systems often deal with block sizes of seven sectors. The larger block size is used so that the file system runs more efficiently. So, for example, if a single sector is changed in the middle of a seven sector file system block, according to one embodiment, all seven sectors may be copied into the private storage 84 of the client 30 (or into the server cache 44). Accordingly, the bit 12 corresponding to the seven sector block may be set. In yet another embodiment, the client bitmaps 22 and the cache bitmap 24 may be reset such that all bits 12 in the bitmap 22 are cleared to "0". For a client
bitmap 22, this state may be identified as a default state of the client 30. The ability to reset the client 30 to a default state may be particularly useful in environments where the user of the client 30 changes. For example, in an educational setting, a student user may change on the client 30 every six months to a year.
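The block-granularity variant above reduces to a small alignment computation. A sketch, taking the seven-sector block size stated in the text as given:

```python
SECTORS_PER_BLOCK = 7  # block size per the embodiment described above

def block_range(sector: int) -> list[int]:
    """Sectors to copy when any one sector in a file system block
    changes: the whole seven-sector block is privatized together."""
    start = (sector // SECTORS_PER_BLOCK) * SECTORS_PER_BLOCK
    return list(range(start, start + SECTORS_PER_BLOCK))
```

A write to sector 10, for example, falls in the second block and so causes sectors 7 through 13 to be copied and the single bit for that block to be set.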
Looking back to Figure 1, the client request processor 150 and the drive apportionment software 200, in one embodiment, are not run until the client 30 establishes a connection to the server 50. The specialized BIOS 350 of the client 30 engages the server 50, such that the client 30 may receive the virtual boot record 250 as well as the one or more client drive redirection drivers 300 from the server 50. Additionally, the disk drive controller proxy 400 "fakes out" any application software executed by the client 30, such that hard disk drive operations may be successfully performed on the client 30 even though the client 30 itself has no hard disk drive.

In short, according to one embodiment, the specialized BIOS 350 of the client 30, soon after the client 30 powers on, retrieves the virtual boot record 250 and the client drive redirection driver 300 from the server 50. Once received, the virtual boot record 250 is executed on the client 30. During this time, any hard disk drive accesses which are attempted by the virtual boot record 250 are redirected by the drive redirection driver 300 such that the disk drive controller proxy 400 of the client 30 is instead accessed. Then, the disk drive controller proxy 400 performs disk operations by accessing the hard disk drive 80 of the server 50.
As illustrated in Figure 2, the packets comprising disk requests sent by the disk drive controller proxy 400 to the server 50 are handled by the client request processor 150. The disk drive controller proxy 400 operates from the perspective of the requesting boot record 250 (or any other application software), as though the drive operations are performed locally. Thus, the client 30 may support operating system and other software, some of which depend upon the presence of a hard disk drive, while, in fact, having no hard disk drive.
In Figure 7, the operation of the specialized BIOS 350, according to one embodiment, starts by connecting the client 30 to the network 40, for access to the server 50 (block 352). The specialized BIOS 350 next broadcasts a request for the server 50 (block 354). In one embodiment, the client 30 and the server 50 operate as part of a pre-boot execution environment (PXE). The PXE Specification (Version 2.1, September 20, 1999) is available from Intel Corporation, Santa Clara, California. The client 30 initiates the PXE protocol by broadcasting a particular command that identifies the request as coming from a client that implements the PXE protocol. After several intermediate steps, the server 50 sends the client 30 a list of appropriate boot servers. The client 30 then discovers a boot server of the type selected and receives the name of an executable file on the chosen boot server.
In one embodiment, the executable file received by the client 30 is the client virtual boot record 250 (block 356). Further, the specialized BIOS 350 retrieves the client drive redirection driver 300, also from the server 50 (block 358). Finally, the specialized BIOS 350 executes the virtual boot record 250 (block 360). For example, the specialized BIOS 350 may perform a BIOS function INT19h, in order to execute the virtual boot record 250, which is loaded in a client memory. Thus, the operation of the specialized BIOS 350, according to one embodiment, is complete.
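The boot sequence of Figure 7 may be summarized as follows. The `network` and `memory` interfaces here are entirely hypothetical stand-ins for the PXE discovery and download steps and for the BIOS load-and-execute step (INT 19h); they are not part of any specification.

```python
def specialized_bios_boot(network, memory):
    """Sketch of the specialized BIOS 350 flow (Figure 7), using
    assumed network/memory primitives."""
    network.connect()                             # block 352: join the LAN
    server = network.broadcast_for_boot_server()  # block 354: PXE discovery
    # Block 356: retrieve the client virtual boot record 250.
    vbr = network.download(server, "virtual_boot_record")
    # Block 358: retrieve the client drive redirection driver 300.
    driver = network.download(server, "drive_redirection_driver")
    memory.load(driver)
    memory.load(vbr)
    memory.execute(vbr)                           # block 360: e.g., INT 19h
```

Each step maps onto one of the numbered blocks of Figure 7; the intermediate PXE handshake steps are collapsed into the discovery call.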
Now that the client virtual boot record 250 is loaded in the client memory, the virtual boot record 250 may be executed. Note that when the boot record is located on the hard disk drive, it is typically loaded into memory and executed. Although retrieved from a remote hard disk drive, the virtual boot record 250 thus also operates in this manner.
In Figure 8, the operation of the client virtual boot record 250, according to one embodiment, begins by issuing a command to load an operating system core (block 252). In the prior art, the operating system is typically loaded by issuing INT13h functions for retrieving the operating system from the hard disk drive. Here, instead, the drive redirection driver 300, received from the server 50 by the specialized BIOS 350, sends a request to the disk drive controller proxy 400 (block
254). In other embodiments, instead of using a driver to intercept the INT13h functions, the specialized BIOS 350 may itself redirect all INT13h functions to the disk drive controller proxy 400.
To process the request, in one embodiment, the disk drive controller proxy 400 retrieves the operating system from the server 50, using a protocol as described in connection with Figure 9, below (block 256). The operating system core is then loaded into the client memory (block 258). Once loaded, the operating system is executed (block 260). For all subsequent disk drive requests (e.g., run time requests), the drive redirection driver 300 may service the requests (block 262). Thus, according to one embodiment, the operation of the client virtual boot record 250 is complete.
The disk drive controller proxy 400, invoked by the client virtual boot record 250, may look like a normal disk drive controller to software running on the client 30, in one embodiment. This means that the disk drive controller proxy 400 receives disk access requests. The operation of the disk drive controller proxy 400, according to one embodiment, is described in connection with Figure 9.
First, the disk drive controller proxy 400 receives a request from operating system or other software loaded on the client 30, to access a hard disk drive (block 402). The disk drive controller proxy 400 encapsulates this disk request into packets suitable for transmission across the network 40 (block 404).
In one embodiment, the client 30 employs a protocol for requesting sector accesses across the network 40 to the server 50. The protocol includes a private command and reply structure, including a command header section, a data content section, and an error handling section. Because the client request processor 150 is available for receiving the requests by the server 50, the commands from the disk drive controller proxy 400 may be simple, like "read sectors a, b, c, d" or "write sectors e, f, g, h".
Next, according to one embodiment, the disk drive controller proxy 400 sends the packet with the encapsulated disk request over the network 40 to the server 50 (block 406). The disk drive controller proxy 400 then waits for a response from the server 50.
At some point, the disk drive controller proxy 400 receives the response packet, including the disk result, from the client request processor 150 of the server 50 (block 408). The disk drive controller proxy 400 next de-encapsulates the drive result from the packet as drive result data (block 410). The result data may then be stored in the memory of the client 30 (block 412), such that the data may be retrieved by the requesting application. Thus, the operation of the disk drive controller proxy 400, according to one embodiment, is complete.
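The encapsulation and de-encapsulation steps of Figure 9 can be sketched as below. The wire layout (a command header of opcode, first sector, and sector count; a data section; and a one-byte status field in replies) is an assumption chosen to match the command header, data content, and error handling sections described above, not a published format.

```python
import struct

# Assumed command header: opcode (1 byte), sector (4 bytes), count (2 bytes).
HEADER = ">BIH"

def encapsulate_request(opcode: int, sector: int, count: int,
                        data: bytes = b"") -> bytes:
    """Blocks 402/404: wrap a drive request, e.g. "read sectors a..d",
    into a packet suitable for transmission across the network 40."""
    return struct.pack(HEADER, opcode, sector, count) + data

def decapsulate_reply(packet: bytes):
    """Blocks 408/410: split a reply packet into its error/status
    byte and the drive result data."""
    status = packet[0]
    return status, packet[1:]
```

The result data would then be stored in client memory for retrieval by the requesting application (block 412).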
Referring to Figure 10, the server 50 includes a processor 10, and the memory 20, including the cache 44, as described in connection with Figure 6, above. The processor 10 may comprise a microcontroller, an X86 microprocessor, an Advanced RISC Machine (ARM) microprocessor, or a Pentium-based microprocessor, to name a few. The memory 20 may comprise various types of random access memories, such as dynamic random access memories (DRAMs), synchronous DRAMs (SDRAMs), static RAMs (SRAMs), single in-line memory modules (SIMMs), or double in-line memory modules (DIMMs), to name a few.
Both the processor 10 and the memory 20 are connected via a system bus 14. In one embodiment, a bridge chip 16 is also coupled to the system bus 14, supporting a second bus 18, such as a peripheral component interconnect (PCI) bus. The PCI specification is available from the PCI Special Interest Group, Portland, Oregon 97214.
In one embodiment, the bridge chip 16 further supports the hard disk drive 80, including the software 60 as shown in connection with Figure 1. The PCI bus 18 may also support a network interface card (NIC) 42, for connecting the server 50 to the network 40.

The client 30 is also a processor-based system, including a processor 58, a memory 64, and the ROM 62, each coupled to a system bus 56. Like the processor 10 on the server 50, the processor 58 may include any of a variety of processors. Also like the server 50, the memory 64 may be any of a variety of random-access memories.
As in Figure 1, the ROM 62 includes the specialized BIOS 350. The ROM 62 may be a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable PROM (EEPROM), and so on.
Like the server 50, the client 30 includes a bridge chip 52 coupled between the system bus 56 and a PCI bus 48. The PCI bus 48 may also support a NIC card 46, for connection to the network 40. In one embodiment, the disk drive controller proxy 400, as in Figure 1, may be part of the multi-function bridge chip 52. In other embodiments, the disk drive controller proxy 400 may be a separate and distinct component, coupled to the system bus 56 or the PCI bus 48. The embodiment of Figure 10 thus represents one of many possible implementations of the illustrated system 100.
Thus, in some embodiments, a system for supporting diskless clients exploits the redundant needs of the clients on a network. Where possible, the client may share accesses to the hard disk drive of the server. The clients nevertheless may maintain private storage on the server, where needed. In some embodiments, the client includes a specialized BIOS for establishing connection to the server and downloading an operating system. In some embodiments, the client further includes a drive controller proxy for re-routing drive requests to the server. The server likewise includes specialized software, for servicing the remote requests, for allocating drive space to the various clients, and for downloading operating system and driver software to the client, in some embodiments.
While the present invention has been described with respect to a limited number of embodiments, those skilled in the art will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.