GB2413872A - An integrated network of processors - Google Patents

An integrated network of processors

Publication number
GB2413872A
GB2413872A (application GB0514859A)
Authority
GB
United Kingdom
Prior art keywords
network
processing unit
auxiliary
host
processing units
Prior art date
Legal status
Granted
Application number
GB0514859A
Other versions
GB2413872B (en)
GB0514859D0 (en)
Inventor
Gary D Hicok
Robert A Alfieri
Current Assignee
Nvidia Corp
Original Assignee
Nvidia Corp
Priority date
Filing date
Publication date
Priority claimed from US10/144,658 external-priority patent/US20030212735A1/en
Application filed by Nvidia Corp filed Critical Nvidia Corp
Publication of GB0514859D0 publication Critical patent/GB0514859D0/en
Publication of GB2413872A publication Critical patent/GB2413872A/en
Application granted granted Critical
Publication of GB2413872B publication Critical patent/GB2413872B/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/38Information transfer, e.g. on bus
    • G06F13/382Information transfer, e.g. on bus using universal interface adapter
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F13/00Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F13/10Program control for peripheral devices
    • G06F13/102Program control for peripheral devices where the programme performs an interfacing function, e.g. device driver
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00Digital computers in general; Data processing equipment in general
    • G06F15/16Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
    • G06F15/163Interprocessor communication
    • G06F15/173Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer And Data Communications (AREA)

Abstract

A network architecture that integrates the functions of an internet protocol (IP) router into a network processing unit (NPU) that resides in a host computer's chipset such that the host computer's resources are perceived as separate network appliances. The NPU appears logically separate from the host computer even though, in one embodiment, it is sharing the same chip. The host computer's resources include Auxiliary Processing Units (XPUs) such as media engines and Storage Processing Units (SPUs). Each XPU includes a first software driver which manages the host-side connection to the XPU and a second software driver which manages the remotely-accessed component of the XPU.

Description

Method and Apparatus For Providing An Integrated Network of Processors

The present invention relates to a novel network architecture. More specifically, the present invention integrates the functions of an internet protocol (IP) router into a network processing unit that resides in a host computer's chipset such that the host computer's resources are perceived as separate network appliances.
BACKGROUND OF THE DISCLOSURE
FIG. 1 illustrates traditional internal content sources and data pipes where the data routing function is performed by a host central processing unit (CPU) and its operating system (OS) 110. Namely, the host computer may comprise a number of storage devices 120, a plurality of media engines 130, and a plurality of other devices that are accessible via input/output ports 140, e.g., universal serial bus (USB) and the like. In turn, the host computer may access a network 150 via application programming interfaces (APIs) and a media access controller (MAC).
However, a significant drawback of this data routing architecture is that the host computer's resources or devices are only accessible with the involvement of the host CPU/OS. Typically, accessing the host resources from external computers is either prohibited or it is necessary to request access through the host computer using high-level protocols. If the host CPU/OS is overtaxed, substantial latency will exist where data flow may be stuck in the OS stacks.
Therefore, a need exists for a novel network architecture that allows a host computer's resources to be perceived as separate network appliances and are accessible without the interference of the host computer's CPU/OS.
SUMMARY OF THE INVENTION
The present invention is a novel network architecture. More specifically, the present invention integrates the functions of an internet protocol (IP) router into a network processing unit (NPU) that resides in a host computer's chipset such that the host computer's resources are perceived as separate network appliances. The NPU appears logically separate from the host computer even though, in one embodiment, it is sharing the same chip. A host computer's "chipset" is one or more integrated circuits coupled to a CPU that provide various interfaces (e.g., main memory, hard disks, floppy, USB, PCI, etc.), exemplified by Intel's Northbridge and Southbridge integrated circuits.
In operation, the host computer has a virtual port (i.e., host MAC) that is in communication with the network processing unit and communicates with the NPU as if it is an external network appliance using standard networking protocols. In one embodiment, the host computer communicates via the NPU with one or more auxiliary or dedicated processing units that are deployed to perform dedicated tasks. These auxiliary processing units can be part of the host or can be deployed separate from the host to meet different application requirements. For example, some of these auxiliary processing units include, but are not limited to, a graphics processing unit (GPU), an audio processing unit (APU), a video processing unit (VPU), a storage processing unit (SPU), and a physics processing unit (PPU). The present disclosure refers to these auxiliary processing units as XPUs, where the "X" is replaced to signify a particular function performed by the processing unit. Finally, the network processing unit itself is an XPU because it can, in addition to routing packets among XPUs, perform various processing accelerations on these packets, such as authentication, encryption, compression, TCP, IPSec/VPN/PPP encapsulation and so on.
One unique aspect of the present invention is that the XPUs have logically direct attachments to the NPU, which effectively serves as an integrated router, thereby allowing XPUs to be seen as separate network appliances.
Since these auxiliary processing units have first-class status in this logical network architecture, they are allowed to communicate with each other or with any external computer (e.g., via another NPU) directly using standard internet protocols such as IP, TCP, UDP and the like without the involvement of the host CPU/OS. Using this novel architecture, the NPU provides both local (or host) access and remote access acceleration in a distributed computing environment.
Furthermore, by virtualizing the remaining resources of the host computer, such as its physical memory, ROM, real-time clocks, interrupts, and the like, the present invention allows a single chipset to provide multiple, virtual host computers with each being attached to this NPU. Each of these virtual computers or virtual host may run its own copy of an identical or different operating system, and may communicate with other virtual computers and integrated networked appliances using standard networking protocols.
Effectively, the present invention embodies its own hardware-level operating system and graphical user interface (GUI) that reside below the standard host operating system and host computer definition, and allow the computer user to easily configure the network or to switch from one virtual computer to another without changing the standard definition of that host computer.
BRIEF DESCRIPTION OF THE DRAWINGS
The teachings of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:
FIG. 1 illustrates a block diagram of conventional internal content sources and data pipes;
FIG. 2 illustrates a block diagram of novel internal content sources and data pipes of the present invention;
FIG. 3 illustrates a block diagram where a network of host computers are in communication with each other via a plurality of network processing units;
FIG. 4 illustrates a block diagram where a host computer's resources are networked via a network processing unit of the present invention; and
FIG. 5 illustrates a block diagram of a network of virtual personal computers in communication with a network processing unit of the present invention.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.
DETAILED DESCRIPTION
FIG. 2 illustrates a block diagram of novel internal content sources and data pipes 200 of the present invention. Unlike FIG. 1, the present network architecture has a network processing unit 210 of the present invention at the center of the internal content sources and data pipes. The host CPU/OS 250 is no longer central to the data routing scheme. One advantage of this new architecture is that the NPU 210 provides both local or host access and remote access acceleration.
An operating system is any software platform for application programs; typical examples are Microsoft Windows, Unix, and Apple Macintosh OS. An operating system can be run on top of another operating system (an example of a virtual operating system) or another underlying software platform, possibly as an application program.
In operation, the host CPU/OS 250 has a virtual port (i.e., host MAC) that is in communication with the network processing unit 210 and communicates with the NPU as if it is an external network appliance using standard networking protocols, e.g., TCP/IP protocols. In one embodiment, the host computer communicates via the NPU with one or more auxiliary or dedicated processing units 220, 230 that are deployed to perform dedicated tasks. These auxiliary processing units can be part of the host or can be deployed separate from the host to meet different application requirements.
For example, some of these auxiliary processing units include, but are not limited to, a graphics processing unit (GPU), an audio processing unit (APU), a video processing unit (VPU), a physics processing unit (PPU) and a storage processing unit (SPU) 220. Some of these auxiliary processing units can be deployed as part of the media engines 230, whereas the SPU 220 is deployed with the storage devices of the host. Finally, the network processing unit itself is an XPU because it can, in addition to routing packets among XPUs, perform various processing accelerations on these packets, such as authentication, encryption, compression, TCP, IPSec/VPN/PPP encapsulation and so on.
In one embodiment, the NPU 210 is a network router appliance that resides inside the same "box" or chassis as the host computer 250, i.e., typically within the same chipset. The NPU serves to connect various other "XPUs" that perform dedicated functions such as: 1) Storage Processing Unit (SPU) is an auxiliary processing unit that implements a file system, where the file system can be accessed locally by the host or remotely via the NPU's connection to the outside world.
The SPU is a special XPU because it behaves as an endpoint for data storage. Streams can originate from an SPU file or terminate at an SPU file.
2) Audio Processing Unit (APU) is an auxiliary processing unit that implements audio effects on individual "voices" and mixes them down to a small number of channels. The APU also performs encapsulation/decapsulation of audio packets that are transmitted/received over the network via the NPU.
3) Video Processing Unit (VPU) is an auxiliary processing unit that is similar to the APU except that it operates on compressed video packets (e.g., MPEG-2 compressed), either compressing them or uncompressing them. The VPU also performs encapsulations into bitstreams or network video packets.
4) Graphics Processing Unit (GPU) is an auxiliary processing unit that takes graphics primitives and produces (partial) frame buffers. The GPU is a special XPU because it acts as an endpoint for rendered graphics primitives. Streams can terminate at a GPU frame buffer or originate as raw pixels from a frame buffer.
5) Physics Processing Unit (PPU) is an auxiliary processing unit that takes object positions, current velocity vectors, and force equations, and produces new positions, velocity vectors, and collision information.
6) Network Processing Unit (NPU) is itself an XPU because it can, in addition to routing packets among XPUs, perform various processing accelerations on these packets, such as authentication, encryption, compression, TCP, IPSec/VPN/PPP encapsulation and the like.
Some of the above XPUs have a number of commonalities with respect to their association with the host 250 and the NPU 210. First, an XPU can be accessed by the host CPU and O/S 250 directly as a local resource.
Namely, communication is effected by using direct local communication channels.
Second, an XPU can be placed on the network via the NPU and accessed remotely from other network nodes (as shown in FIG. 3 below). This indicates that an XPU is capable of processing information that is encapsulated in network packets.
Third, an XPU can be accessed as a "remote" node even from the local host. Namely, communication is effected via the NPU by using network protocols.
Fourth, an XPU is always in an "on" state (like most appliances) even when the host (CPU+O/S) is in the "off" state. This unique feature allows the XPUs to operate without the involvement of the host CPU/OS, e.g., extracting data from a disk drive of the host without the involvement of the host. More importantly, the host's resources are still available even though the CPU/OS may be in a dormant state, e.g., in a sleep mode.
Fifth, an XPU has at least two sets of processing queues, one for non-real-time packets and at least one for real-time packets. This duality of queues, combined with similar real-time queues in the NPU, allows the system of NPU and XPUs to guarantee latencies and bandwidth for real-time streams.
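The queue duality described above can be sketched as a scheduler that always drains real-time packets before bulk traffic. The class and method names below are illustrative, not from the patent; it is a minimal model of the idea, assuming strict priority between the two queues.

```python
from collections import deque

class XpuQueues:
    """Toy model of an XPU's dual queues: one for real-time packets,
    one for non-real-time (bulk) packets. Real-time traffic is always
    drained first, which is how latency can be bounded."""

    def __init__(self):
        self.realtime = deque()
        self.bulk = deque()

    def enqueue(self, packet, realtime=False):
        (self.realtime if realtime else self.bulk).append(packet)

    def dequeue(self):
        # Real-time packets pre-empt bulk traffic unconditionally.
        if self.realtime:
            return self.realtime.popleft()
        if self.bulk:
            return self.bulk.popleft()
        return None

q = XpuQueues()
q.enqueue("file-block-1")                  # non-real-time
q.enqueue("audio-frame-1", realtime=True)
q.enqueue("audio-frame-2", realtime=True)
order = [q.dequeue() for _ in range(3)]
print(order)  # real-time frames drain before the bulk block
```

A real implementation would also need per-stream bandwidth reservation; strict priority alone only bounds latency while real-time load stays below link capacity.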
Sixth, an XPU has two software (SW) drivers, one that manages the host-side connection to the XPU, and one that manages the remotely-accessed component of the XPU. In operation, the SW drivers communicate with the XPU using abstract command queues, called push buffers (PBs). Each driver has at least one PB going from the driver to the XPU and at least one PB going from the XPU to the driver. Push buffers are described in US Patent 6,092,124, which is herein incorporated by reference.
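The push-buffer arrangement above (at least one command queue in each direction per driver) can be modeled as a pair of one-way queues. The command tuples below are invented for illustration; the actual PB format is defined in the referenced patent, not here.

```python
from collections import deque

class PushBuffer:
    """A push buffer (PB) modeled as a one-way command queue between
    a driver and an XPU. Command encoding here is a placeholder."""

    def __init__(self):
        self._q = deque()

    def push(self, cmd):
        self._q.append(cmd)

    def pop(self):
        return self._q.popleft() if self._q else None

# One driver/XPU pair: a PB in each direction, as the text describes.
driver_to_xpu = PushBuffer()
xpu_to_driver = PushBuffer()

driver_to_xpu.push(("encrypt", b"payload"))   # driver issues a command

cmd = driver_to_xpu.pop()                     # XPU consumes it...
xpu_to_driver.push(("done", cmd[0]))          # ...and reports completion

completion = xpu_to_driver.pop()
print(completion)  # ('done', 'encrypt')
```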
Seventh, an XPU can also be accessed on the host side directly by a user-level application. Namely, this involves lazy-pinning of user-space buffers by the O/S. Lazy-pinning means to lock the virtual-to-physical address translations of memory pages on demand, i.e., when the translations are needed by the particular XPU. When the translations are no longer needed, they can be unlocked, allowing the operating system to page out those pages.
The virtual-to-physical mappings of these buffers are passed to the XPU. A separate pair of PBs are linked into the user's address space and the O/S driver coordinates context switches with the XPU.
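Lazy-pinning as described above amounts to locking a translation on first use and releasing it when the XPU is done. The sketch below is a toy model under that assumption; the addresses, page size, and class name are all made up for illustration.

```python
class LazyPinner:
    """Toy model of lazy-pinning: a virtual-to-physical translation is
    locked (pinned) only when the XPU first touches the page, and
    unpinning lets the O/S page the memory out again."""

    PAGE = 0x1000

    def __init__(self):
        self.pinned = {}        # vaddr -> paddr, held only while locked
        self._next_phys = 0x10000

    def translate(self, vaddr):
        if vaddr not in self.pinned:        # pin on first demand
            self.pinned[vaddr] = self._next_phys
            self._next_phys += self.PAGE
        return self.pinned[vaddr]

    def unpin(self, vaddr):
        # Once unlocked, the O/S is free to page this memory out.
        self.pinned.pop(vaddr, None)

mmu = LazyPinner()
p1 = mmu.translate(0xA000)      # pinned on demand
p2 = mmu.translate(0xA000)      # already pinned: same mapping returned
mmu.unpin(0xA000)
print(p1 == p2, len(mmu.pinned))  # True 0
```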
Although the present invention discloses the use of a network processing unit 210 to perform routing functions without the involvement of the CPU/OS, the CPU/OS 250 nevertheless still has an alternate direct communication channel 255 with its resources, e.g., storage devices. This provides the host CPU/OS with the option of communicating with its resources or media engines via the NPU or directly via local access channels 255 or 257.
In fact, although the CPU/OS is not involved with the general routing function, in one embodiment of the present invention, exception routing issues are resolved by the host CPU/OS. For example, if the NPU receives a packet that it is unable to process, the NPU will forward the packet to the host CPU/OS for resolution. This limited use of the CPU/OS serves to accelerate host processing, while retaining the option to more judiciously use the processing power of the host CPU/OS to resolve difficult issues.
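The fast-path/exception split just described can be sketched as a lookup with a host fallback. The table layout and port names below are assumptions for illustration only.

```python
def npu_route(packet, routing_table, host_queue):
    """Sketch of exception routing: packets the NPU can handle are
    forwarded directly (no CPU/OS involvement); anything it cannot
    process is punted to the host CPU/OS for resolution."""
    port = routing_table.get(packet["dst"])
    if port is not None:
        return port                 # fast path, handled by the NPU
    host_queue.append(packet)       # exception path: host resolves it
    return "host"

table = {"10.0.0.2": "xpu0", "10.0.0.3": "mac1"}   # hypothetical routes
exceptions = []
fast = npu_route({"dst": "10.0.0.2"}, table, exceptions)
slow = npu_route({"dst": "10.0.0.99"}, table, exceptions)
print(fast, slow, len(exceptions))  # xpu0 host 1
```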
Additionally, the host resources may also be accessed via the NPU without the involvement of the host CPU/OS 250 via input/output communication channel 240, e.g., via a USB. For example, the present architecture can virtualize the remaining resources of the host computer 250, such as its physical memory, read only memory (ROM), real-time clocks, interrupts, and so on, thereby allowing a single chipset to provide multiple virtual hosts with each host being attached to the NPU 210.
One unique aspect of the present invention is that the XPUs have logically direct attachments to the NPU that effectively serves as an integrated router, thereby allowing XPUs to be seen as separate network appliances.
Since these auxiliary processing units have first-class status in this logical network architecture, they are allowed to communicate with each other or with any external computer (e.g., via another NPU) directly using standard internet protocols such as IP, TCP, UDP and the like without the involvement of the host CPU/OS. Using this novel architecture, the NPU provides both local (or host) access and remote access acceleration in a distributed computing environment.
FIG. 3 illustrates a block diagram where a network of host computers 300a-n are in communication with each other via a plurality of network processing units 310a-n. This unique configuration provides both host access and remote access acceleration. The accelerated functions can be best understood by viewing the present invention in terms of packetized streams.
It is best to view this system of NPU and XPUs in the context of streams of packetized data that flow within this system. There are various types of streams that are allowed by the system. In this discussion, the term "host" means the combination of host CPU and memory in the context of the O/S kernel or a user-level process. The term "node" refers to a remote networked host or device that is attached to the NPU via a wired or wireless connection to a MAC that is directly connected to the NPU (e.g., as shown in FIG. 4 below).
A host-to-XPU stream is a stream that flows directly from the host 350a to the XPU 330a. This is a typical scenario for a dedicated XPU (e.g., a dedicated GPU via communication path 357). The stream does not traverse through the NPU 310a.
An XPU-to-host stream is a stream that flows directly from the XPU to the go host. One example is a local file being read from the SPU 320a via path 355.
The stream does not traverse through the NPU 310a.
A host-to-XPU-to-host stream is a stream that flows from host 350a to an XPU 330a for processing then back to the host 350a. One example is where the host forwards voice data directly to the APU for processing of voices into final mix buffers that are subsequently returned to the host via path 357. The stream does not traverse through the NPU 310a.
A host-to-NPU-to-XPU stream is a networked stream that flows from the host 350a via NPU 310a to an XPU 330a or 320a. The three parties transfer packetized data using standard networking protocols, e.g., TCP/IP.
An XPU-to-NPU-to-host is a networked stream that flows from an XPU 330a or 320a via the NPU 310a to the host 350a. The three parties transfer packetized data using standard networking protocols, e.g., TCP/IP.
A host-to-NPU-to-XPU-to-NPU-to-host is a networked stream that is the combination of the previous two streams. The three parties transfer packetized data using standard networking protocols, e.g., TCP/IP.
A host-to-NPU-to-node is a networked stream that flows from the host 350a via the NPU 310a to a remote node (e.g., NPU 310b). This allows a local host 350a to communicate and access XPUs 330b of another host via a second NPU 310b.
A Node-to-NPU-to-Host is a reverse networked stream where the stream flows from a remote node (e.g., NPU 310b) via the NPU 310a to the host 350a.
This allows a remote NPU 310b to communicate with a local host 350a via a local NPU 310a.
A node-to-NPU-to-XPU is a networked stream that flows from a remote node 350b via the NPU 310a to an XPU 330a where it terminates. This allows a remote NPU 310b to communicate with a local XPU 330a via a local NPU 310a.
An XPU-to-NPU-to-Node is a networked stream that flows from an XPU 330a where it originates to a remote node (e.g., NPU 310b) via local NPU 310a.
A Node0-to-NPU-to-XPU-to-NPU-to-Node1 is a combination of the previous two streams. It should be noted that Node0 and Node1 may be the same or different. For example, Node0 is 310a; NPU is 310b; XPU is 330b; NPU is 310b; and Node1 is 310n. Alternatively, Node0 is 310a; NPU is 310b; XPU is 330b; NPU is 310b; and Node1 is 310a.
A {Host, Node0, XPU0}-to-NPU-to-XPU1-to-NPU-to-XPU2-to-NPU-to-{Host, Node1, XPU3} is a stream that originates from the host, a remote node, or an XPU, passes through the NPU to another XPU for some processing, then passes through the NPU to another XPU for some additional processing, then terminates at the host, another remote node, or another XPU. It should be clear that the present architecture of a network of integrated processing units provides a powerful and flexible distributed processing environment, where both host access and remote access acceleration are greatly enhanced.
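The stream taxonomy above can be summarized as ordered hop lists: a stream is "networked" exactly when its path traverses the NPU, and local otherwise. The dictionary below is our representation, not the patent's; it covers a subset of the named stream types.

```python
# Each stream type is the ordered list of hops it visits.
STREAMS = {
    "host-to-XPU":             ["host", "XPU"],
    "XPU-to-host":             ["XPU", "host"],
    "host-to-XPU-to-host":     ["host", "XPU", "host"],
    "host-to-NPU-to-XPU":      ["host", "NPU", "XPU"],
    "node-to-NPU-to-XPU":      ["node", "NPU", "XPU"],
    "host-to-NPU-to-node":     ["host", "NPU", "node"],
}

def is_networked(path):
    # A stream uses standard networking protocols iff it crosses the NPU.
    return "NPU" in path

local = sorted(name for name, p in STREAMS.items() if not is_networked(p))
print(local)  # the three stream types that bypass the NPU entirely
```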
Under the present architecture, numerous advantages are achieved.
First, it is beneficial to tightly integrate other computers and network appliances into the same chipset. Second, it is very advantageous to offload a host computer's I/O functions into a distributed network of intelligent processors, where traditional latencies associated with an overtaxed CPU/OS are resolved.
Third, it is advantageous to provide these auxiliary I/O processors with first-class network-appliance status within the chipset (optionally illustrated in FIG. 2 with dashed lines) without changing the definition of the host computer. Fourth, it is advantageous to allow these auxiliary I/O processors to be shared among the host computer, external computers, and internal and external network appliances. Fifth, it is advantageous to allow the remaining resources of the host computer to be virtualized so that multiple virtual copies of the host computer may be embodied in the same chipset, while sharing the network of intelligent auxiliary I/O processors. Finally, it is advantageous to use a hardware-level operating system and graphical user interface (GUI) that allow the user to configure the network and seamlessly switch among virtual copies of the host computer or virtual host.
In one embodiment of the present invention, real-time media streaming is implemented using the above described network of integrated processing units.
Specifically, media streaming typically involves multiple software layers. Thus, latencies can be unpredictable, particularly when the software runs on a general-purpose computer. More importantly, media streaming typically has a severe adverse impact on other applications running on the host computer.
However, by attaching media devices such as an APU or GPU to an NPU+SPU combination, it is now possible to minimize and guarantee latencies as well as offload the main host CPU. For example, referring to FIG. 3, control requests may arrive from a remote recipient 350b (typically attached wirelessly).
These control requests may include play, stop, rewind, forward, pause, select title, and so on. Once the stream is set up, the raw data can be streamed directly from a disk managed by the SPU 320a through the NPU 310a to the destination client. Alternatively, the data may get preprocessed by the GPU 330a or APU 330a prior to being sent out via the NPU 310a. One important aspect again is that real-time media streaming can take place without host CPU 350a involvement. Dedicated queuing throughout the system will guarantee latencies and bandwidth.
This media streaming embodiment clearly demonstrates the power and flexibility of the present invention. One practical implementation of this real-time media streaming embodiment is within the home environment, where a centralized multimedia host server or computer has a large storage device that contains a library of stored media streams or it may simply be connected to a DVD player, a "PVR" (personal video recorder) or "DVR" (digital video recorder). If there are other client devices throughout the home, it is efficient to use the above network architecture to implement real-time media streaming, where a media stream from a storage device of the host computer can be transmitted to another host computer or a television set in a different part of the home. Thus, the real-time media streaming is implemented without the involvement of the host computer and with guaranteed latencies and bandwidth.
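The control-request handling described in this embodiment (play, stop, pause, select title, and so on) can be sketched as a small session state machine. In the architecture described, these requests would reach the NPU from a remote client while the data itself streams SPU to NPU to client without the host CPU; the command names and session fields below are illustrative assumptions.

```python
def handle_controls(requests, catalog):
    """Toy session handler for streaming control requests.
    `requests` is a list of (command, argument) pairs; `catalog` is
    the set of titles the SPU-managed storage can serve."""
    session = {"state": "idle", "title": None}
    for cmd, arg in requests:
        if cmd == "select_title" and arg in catalog:
            session["title"] = arg
        elif cmd == "play" and session["title"] is not None:
            session["state"] = "playing"
        elif cmd == "pause" and session["state"] == "playing":
            session["state"] = "paused"
        elif cmd == "stop":
            session["state"] = "idle"
    return session

session = handle_controls(
    [("select_title", "movie.mpg"), ("play", None), ("pause", None)],
    catalog={"movie.mpg"},
)
print(session)  # {'state': 'paused', 'title': 'movie.mpg'}
```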
FIG. 4 illustrates a block diagram where a host computer's resources are networked via a network processing unit 410 of the present invention.
Specifically, a host 450 communicates with the NPU 410 via a MAC 415 (i.e., a host MAC). In turn, a plurality of XPUs and other host resources 430a are connected to the NPU via a plurality of MACs 425 that interface with a MAC Interface (MI) (not shown) of the NPU. One example of an NPU is disclosed in US patent application entitled "A Method And Apparatus For Performing Network Processing Functions" with attorney docket NVDA/P000413.
FIG. 5 illustrates a block diagram of a network of virtual personal computers or virtual hosts that are in communication with a network processing unit 520 of the present invention. More specifically, FIG. 5 illustrates a network of virtual personal computers (VPCs) in a single system (or a single chassis) 500, where the system may be a single personal computer, a set top box, a video game console or the like.
In operation, FIG. 5 illustrates a plurality of virtual hosts 510a-e, which may comprise a plurality of different operating systems (e.g., Microsoft Corporation's Windows (two separate copies 510a and 510b), and Linux 510c), a raw video game application 510d or other raw applications 510e, where the virtual hosts treat the storage processing unit 530 as a remote file server having a physical storage 540. In essence, one can perceive FIG. 5 as illustrating a "network of VPCs in a box".
In one embodiment, the NPU 520 manages multiple IP addresses inside the system for each VPC. For example, the NPU 520 may be assigned a public IP address, whereas each of the VPCs is assigned a private IP address, e.g., in accordance with Dynamic Host Configuration Protocol (DHCP). Thus, each of the VPCs can communicate with each other and the SPU using standard networking protocols. Standard network protocols include, but are not limited to: TCP; TCP/IP; UDP; NFS; HTTP; SMTP; POP; FTP; NNTP; CGI; DHCP; and ARP (to name only a few that are known in the art).
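The addressing scheme above (public address on the NPU, DHCP-style private addresses for each VPC) can be sketched as a simple lease assignment. The subnet, ordering, and VPC names below are assumptions for illustration; a real DHCP server would also handle lease expiry and renewal.

```python
import ipaddress

def assign_addresses(vpcs, private_net="192.168.0.0/24"):
    """Hand each virtual PC a private address from `private_net`,
    DHCP-style, while the NPU takes the first host address and
    (separately) holds the single public address."""
    pool = ipaddress.ip_network(private_net).hosts()
    npu = str(next(pool))                              # NPU gets .1
    leases = {vpc: str(next(pool)) for vpc in vpcs}    # VPCs get the rest
    return npu, leases

npu, leases = assign_addresses(["windows-a", "windows-b", "linux", "game"])
print(npu, leases["windows-a"])  # 192.168.0.1 192.168.0.2
```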
It should be understood that the XPUs of the present invention can be implemented as one or more physical devices that are coupled to the host CPU through a communication channel. Alternatively, the XPUs can be represented and provided by one or more software applications (or even a combination of software and hardware, e.g., using application specific integrated circuits (ASIC)), where the software is loaded from a storage medium (e.g., a ROM, a magnetic or optical drive or diskette) and operated in the memory of the computer. As such, the XPUs (including associated methods and data structures) of the present invention can be stored and provided on a computer readable medium, e.g., ROM or RAM memory, magnetic or optical drive or diskette and the like. Alternatively, the XPUs can be represented by Field Programmable Gate Arrays (FPGA) having control bits.
Although various embodiments which incorporate the teachings of the present invention have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings. In the claims, elements of method claims are listed in a particular order, but no order for practicing the invention is implied, even if elements of the claims are numerically or alphabetically enumerated.

Claims (16)

1. A network of processing units, said network comprising: a network processing unit (NPU); and at least one host, wherein said at least one host comprises a central processing unit (CPU) and a plurality of auxiliary processing units (XPU), each of the XPUs including first and second software drivers, the first driver managing connection of the XPU to the host, the second driver managing connection of the XPU to another of the XPUs, wherein said central processing unit comprises a host operating system and wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit and the second software driver of each of the XPUs.
2. A network as claimed in claim 1, wherein each of said plurality of auxiliary processing units employ a network protocol to communicate with said network processing unit.
3. A network as claimed in claim 2, wherein each of said plurality of auxiliary processing units communicates with said network processing unit via a media access controller.
4. A network as claimed in claim 1, wherein each of the XPUs is coupled to the network by the NPU, enabling the XPU to process network packets of information.
5. A network as claimed in claim 1, wherein each of said plurality of auxiliary processing units is perceived as a separate network appliance.
6. A network as claimed in claim 1, wherein each of the XPUs is in an on state even when the host is in an off state, allowing each of the XPUs to operate without involvement of the host.
7. A network as claimed in claim 1, wherein each of the XPUs includes a push buffer associated with each of the software drivers to enable the connections of the XPU with the host and with another XPU.
8. A network as claimed in claim 7, wherein each of the XPUs receives virtual-to-physical mappings of the push buffers under control of the host operating system.
9. A network as claimed in claim 8, wherein address translations of the virtual-to-physical mappings are locked until needed by the XPU.
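Claims 7 to 9 describe a push buffer whose virtual-to-physical mappings are supplied under host-OS control and whose address translations remain locked until the XPU needs them. A minimal sketch of that idea, under the assumption that "locked until needed" means pinned until first demand (all names are hypothetical):

```python
# Hypothetical sketch of claims 7-9: a push buffer whose
# virtual-to-physical translations are installed by the host OS and
# held locked (pinned) until the XPU actually demands a page.

class PushBuffer:
    def __init__(self, mappings):
        # mappings: virtual page -> physical page, provided by the host OS.
        self._mappings = dict(mappings)
        self._locked = set(mappings)   # every translation starts locked
        self._commands = []

    def push(self, command):
        self._commands.append(command)

    def translate(self, vpage):
        # The XPU demands a translation: it is released on first use.
        self._locked.discard(vpage)
        return self._mappings[vpage]

pb = PushBuffer({0x1000: 0x8000, 0x2000: 0x9000})
pb.push("DRAW")
phys = pb.translate(0x1000)
print(hex(phys))             # 0x8000
print(0x2000 in pb._locked)  # True: an untouched mapping stays locked
```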
10. A network as claimed in claim 1, wherein each of the XPUs includes a first queue for non-real time packets and a second queue for real-time packets, at least some of the XPUs communicating with each other via the first or the second queue.
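The two-queue scheme of claim 10 separates real-time from non-real-time packets so that latency-sensitive traffic is never stuck behind bulk transfers. A toy scheduler showing the intent (the strict-priority drain order is an assumption; the claim itself fixes no policy):

```python
# Hypothetical sketch of claim 10: each XPU keeps one queue for
# non-real-time packets and one for real-time packets, draining the
# real-time queue first.

from collections import deque

class XpuQueues:
    def __init__(self):
        self.realtime = deque()
        self.bulk = deque()

    def enqueue(self, packet, realtime=False):
        (self.realtime if realtime else self.bulk).append(packet)

    def next_packet(self):
        # Real-time packets (e.g. audio or video samples) always win.
        if self.realtime:
            return self.realtime.popleft()
        if self.bulk:
            return self.bulk.popleft()
        return None

q = XpuQueues()
q.enqueue("file-block")
q.enqueue("audio-sample", realtime=True)
print(q.next_packet())  # audio-sample
print(q.next_packet())  # file-block
```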
11. A network as claimed in claim 1, wherein said at least one host comprises a plurality of virtual hosts, where at least two of said plurality of virtual hosts are loaded with a separate operating system.
12. A network as claimed in claim 11, wherein each of said plurality of virtual hosts is capable of accessing said plurality of auxiliary processing units via said network processing unit.
13. A network as claimed in claim 1, wherein said plurality of auxiliary processing units comprise one or more of an auxiliary storage processing unit, an auxiliary audio processing unit, an auxiliary graphics processing unit, and an auxiliary video processing unit.
14. A network as claimed in claim 1, wherein said network processing unit is implemented on a chipset.
15. A network as claimed in claim 14, wherein at least one of said plurality of auxiliary processing units is implemented on the chipset.
16. A network of processing units and host resources, said network comprising: a first network processing unit; a first host comprising a first central processing unit including a first host operating system and a plurality of first host resources including at least one auxiliary processing unit (XPU); a second network processing unit; and a second host comprising a second central processing unit including a second host operating system and a plurality of second host resources including at least one auxiliary processing unit (XPU), each of the XPUs associated with the first or second host including first and second software drivers, the first driver managing connection of the XPU to the first or second host, the second driver managing connection of the XPU to another of the XPUs, wherein each of said XPUs is accessible via said first and second network processing units by bypassing said first and second host operating systems and utilizing the second software driver of each of the XPUs.
17. A network as claimed in claim 16, wherein each of said plurality of second host resources is accessible via said first and second network processing units by bypassing said second host operating system.
18. A network as claimed in claim 16, wherein one of said plurality of first host resources forwards a media stream in real time to said second host operating system.
19. A network as claimed in claim 16, wherein each of the XPUs includes a push buffer associated with each of the software drivers to enable the connections of the XPU with the host and with another XPU.
20. A network as claimed in claim 19, wherein each of the XPUs receives virtual-to-physical mappings of the push buffers under control of the host operating system.
21. A network as claimed in claim 20, wherein address translations of the virtual-to-physical mappings are locked until needed by the XPU.
22. A network as claimed in claim 16, wherein each of the XPUs includes a first queue for non-real time packets and a second queue for real-time packets, at least some of the XPUs communicating with each other via the first or the second queue.
23. A method for operating a network, the network including: a first network processing unit; and a first host comprising a first central processing unit loaded with a first host operating system and a plurality of first host resources including a plurality of auxiliary processing units (XPUs), each of the XPUs including first and second software drivers, the first driver managing connection of the XPU to the host, the second driver managing connection of the XPU to another of the XPUs, the method including: accessing each of said XPUs either directly utilizing said first software driver or via said first and second network processing units by bypassing said first and second host operating systems utilizing the second software driver of each of the XPUs.
24. A method as claimed in claim 23, including directly accessing each of the XPUs from the host by locking virtual-to-physical address translations of memory pages until demanded by the accessed XPU.
25. A method as claimed in claim 23, further comprising: forwarding a media stream in real time from one of said plurality of first host resources to said second host operating system.
26. A method as claimed in claim 23, wherein exception routing issues between the XPUs are resolved by either the first or second host.
27. A method as claimed in claim 23, wherein the resources of the first host or the second host can be accessed by either of the first or second network processing units.
28. A method as claimed in claim 23 including providing a logically direct attachment of each of the XPUs to each of the first and second network processing units.
29. A method as claimed in claim 23 including transferring packets from the first host to one of the XPUs for processing and then returning the processed packets to the host.
30. A method as claimed in claim 23 including transferring packets of data from one of the XPUs via one of the network processing units to a remote node.
31. A method as claimed in claim 23, wherein each of the XPUs includes a push buffer associated with each of the software drivers, the connections of each of the XPUs with the host and with another XPU being enabled through the push buffer.
32. A method as claimed in claim 31, wherein each of the XPUs receives virtual-to-physical mappings of the push buffers under control of the host operating system.
33. A method as claimed in claim 32, wherein address translations of the virtual-to-physical mappings are locked until needed by the XPU.
34. A method as claimed in claim 23, wherein each of the XPUs includes a first queue for non-real time packets and a second queue for real-time packets, the XPUs communicating with each other via the first or the second queue.
35. A distributed network of processing units, said network comprising: a network processing unit; and at least one host, wherein said at least one host comprises a central processing unit and a plurality of auxiliary processing units, wherein said central processing unit is loaded with a host operating system and wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit.
36. A network as claimed in claim 35, wherein said plurality of auxiliary processing units employ a network protocol to communicate with said network processing unit.
37. A network as claimed in claim 36, wherein each of said plurality of auxiliary processing units communicates with said network processing unit via a media access controller.
38. A network as claimed in claim 36 or 37, wherein said network protocol is Transmission Control Protocol/Internet Protocol.
39. A network as claimed in claim 36 or 37, wherein said network protocol is User Datagram Protocol (UDP).
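Claims 36 to 39 name standard transports for the XPU-to-NPU link. That link can be mimicked on loopback with ordinary UDP sockets; this is a toy stand-in only, since the patent's path is an on-chip connection, not a loopback socket:

```python
# Toy stand-in for claims 36-39: an "XPU" sends a datagram to an "NPU"
# over UDP on loopback, showing that the transport is the standard,
# unmodified User Datagram Protocol.

import socket

npu = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
npu.bind(("127.0.0.1", 0))      # NPU listens on an ephemeral port
npu_addr = npu.getsockname()

xpu = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
xpu.sendto(b"xpu->npu: stream chunk", npu_addr)

data, sender = npu.recvfrom(1024)
print(data)  # b'xpu->npu: stream chunk'
xpu.close()
npu.close()
```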
40. A network as claimed in claim 35, wherein each of said plurality of auxiliary processing units is perceived as a separate network appliance.
41. A network as claimed in claim 35, wherein one of said plurality of auxiliary processing units is an auxiliary storage processing unit.
42. A network as claimed in claim 35, wherein one of said plurality of auxiliary processing units is an auxiliary audio processing unit.
43. A network as claimed in claim 35, wherein one of said plurality of auxiliary processing units is an auxiliary graphics processing unit.
44. A network as claimed in claim 35, wherein one of said plurality of auxiliary processing units is an auxiliary video processing unit.
45. A network as claimed in claim 35, wherein one of said plurality of auxiliary processing units is an auxiliary physics processing unit.
46. A network as claimed in claim 35, wherein said at least one host is a virtual host.
47. A network as claimed in claim 35, wherein said at least one host comprises a plurality of virtual hosts, where at least two of said plurality of virtual hosts are loaded with a separate operating system.
48. A network as claimed in claim 47, wherein each of said plurality of virtual hosts is capable of accessing said plurality of auxiliary processing units via said network processing unit.
49. A network as claimed in claim 35, wherein said plurality of auxiliary processing units comprise an auxiliary storage processing unit, an auxiliary audio processing unit, an auxiliary graphics processing unit, and an auxiliary video processing unit.
50. A network as claimed in claim 35, wherein said network processing unit is implemented on a chipset.
51. A network as claimed in claim 50, wherein at least one of said plurality of auxiliary processing units is implemented on a chipset.
52. A distributed network of processing units and host resources, said network comprising: a first network processing unit; a first host comprising a first central processing unit loaded with a first host operating system and a plurality of first host resources; a second network processing unit; and a second host comprising a second central processing unit loaded with a second host operating system and a plurality of second host resources, wherein each of said plurality of first host resources is accessible via said first and second network processing units by bypassing said first host operating system.
53. A network as claimed in claim 52, wherein each of said plurality of second host resources is accessible via said first and second network processing units by bypassing said second host operating system.
54. A network as claimed in claim 52, wherein one of said plurality of first host resources forwards a media stream in real time to said second host operating system.
55. A network as claimed in claim 52, wherein said plurality of first host resources comprise at least one auxiliary processing unit.
56. A network as claimed in claim 55, wherein said at least one auxiliary processing unit is an auxiliary storage processing unit.
57. A network as claimed in claim 55, wherein said at least one auxiliary processing unit is an auxiliary audio processing unit.
58. A network as claimed in claim 55, wherein said at least one auxiliary processing unit is an auxiliary graphics processing unit.
59. A network as claimed in claim 55, wherein said at least one auxiliary processing unit is an auxiliary video processing unit.
60. A network as claimed in claim 55, wherein said at least one auxiliary processing unit is an auxiliary physics processing unit.
61. A method for providing a distributed network of processing units and host resources, said method comprising: a) providing a first network processing unit; b) providing a first host comprising a first central processing unit loaded with a first host operating system and a plurality of first host resources; c) providing a second network processing unit; and d) providing a second host comprising a second central processing unit loaded with a second host operating system and a plurality of second host resources, wherein each of said plurality of first host resources is accessible via said first and second network processing units by bypassing said first host operating system.
62. A method as claimed in claim 61, wherein each of said plurality of second host resources is accessible via said first and second network processing units by bypassing said second host operating system.
63. A method as claimed in claim 61, further comprising: e) forwarding a media stream in real time from one of said plurality of first host resources to said second host operating system.
64. A method as claimed in claim 61, wherein said plurality of first host resources comprise at least one auxiliary processing unit.
65. A method as claimed in claim 64, wherein said at least one auxiliary processing unit is an auxiliary storage processing unit.
66. A method as claimed in claim 64, wherein said at least one auxiliary processing unit is an auxiliary audio processing unit.
67. A method as claimed in claim 64, wherein said at least one auxiliary processing unit is an auxiliary graphics processing unit.
68. A method as claimed in claim 64, wherein said at least one auxiliary processing unit is an auxiliary video processing unit.
69. A method as claimed in claim 64, wherein said at least one auxiliary processing unit is an auxiliary physics processing unit.
70. A method for providing a distributed network of processing units, said method comprising: a) providing a network processing unit; and b) providing at least one host, wherein said at least one host comprises a central processing unit and a plurality of auxiliary processing units, wherein said central processing unit is loaded with a host operating system and wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit.
71. A method as claimed in claim 70, wherein said plurality of auxiliary processing units employ a network protocol to communicate with said network processing unit.
72. A method as claimed in claim 71, wherein each of said plurality of auxiliary processing units communicates with said network processing unit via a media access controller (MAC).
73. A method as claimed in claim 71, wherein said network protocol is Transmission Control Protocol/Internet Protocol (TCP/IP).
74. A method as claimed in claim 71, wherein said network protocol is User Datagram Protocol.
75. A method as claimed in claim 70, wherein each of said plurality of auxiliary processing units is perceived as a separate network appliance.
76. A method as claimed in claim 70, wherein one of said plurality of auxiliary processing units is an auxiliary storage processing unit.
77. A method as claimed in claim 70, wherein one of said plurality of auxiliary processing units is an auxiliary audio processing unit.
78. A method as claimed in claim 70, wherein one of said plurality of auxiliary processing units is an auxiliary graphics processing unit.
79. A method as claimed in claim 70, wherein one of said plurality of auxiliary processing units is an auxiliary video processing unit.
80. A method as claimed in claim 70, wherein one of said plurality of auxiliary processing units is an auxiliary physics processing unit.
81. A method as claimed in claim 70, wherein said at least one host is a virtual host.
82. A method as claimed in claim 70, wherein said at least one host comprises a plurality of virtual hosts, where at least two of said plurality of virtual hosts are loaded with a separate operating system.
83. A method as claimed in claim 82, wherein each of said plurality of virtual hosts is capable of accessing said plurality of auxiliary processing units via said network processing unit.
84. A method as claimed in claim 70, wherein said plurality of auxiliary processing units comprise an auxiliary storage processing unit, an auxiliary audio processing unit, an auxiliary graphics processing unit, and an auxiliary video processing unit.
85. A method for providing a distributed network of processing units, said method comprising: a) providing a network processing unit; b) providing at least one host, wherein said at least one host comprises a central processing unit loaded with a host operating system; and c) providing a plurality of auxiliary processing units, wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit.
86. A method as claimed in claim 85, wherein said plurality of auxiliary processing units employ a network protocol to communicate with said network processing unit.
87. A method as claimed in claim 86, wherein each of said plurality of auxiliary processing units communicates with said network processing unit via a media access controller (MAC).
88. A method as claimed in claim 86, wherein said network protocol is Transmission Control Protocol/Internet Protocol (TCP/IP).
89. A method as claimed in claim 86, wherein said network protocol is User Datagram Protocol (UDP).
90. A method as claimed in claim 85, wherein each of said plurality of auxiliary processing units is perceived as a separate network appliance.
91. A method as claimed in claim 85, wherein one of said plurality of auxiliary processing units is an auxiliary storage processing unit.
92. A method as claimed in claim 85, wherein one of said plurality of auxiliary processing units is an auxiliary audio processing unit.
93. A method as claimed in claim 85, wherein one of said plurality of auxiliary processing units is an auxiliary graphics processing unit.
94. A method as claimed in claim 85, wherein one of said plurality of auxiliary processing units is an auxiliary video processing unit.
95. A method as claimed in claim 85, wherein one of said plurality of auxiliary processing units is an auxiliary physics processing unit.
96. A method as claimed in claim 85, wherein said at least one host is a virtual host.
97. A method as claimed in claim 85, wherein said at least one host comprises a plurality of virtual hosts, where at least two of said plurality of virtual hosts are loaded with a separate operating system.
98. A method as claimed in claim 97, wherein each of said plurality of virtual hosts is capable of accessing said plurality of auxiliary processing units via said network processing unit.
99. A distributed network of processing units, said network comprising: a network processing unit; at least one host, wherein said at least one host comprises a central processing unit loaded with a host operating system; and a plurality of auxiliary processing units, wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit.
100. The network of claim 99, wherein said plurality of auxiliary processing units employ a network protocol to communicate with said network processing unit.
101. The network of claim 100, wherein each of said plurality of auxiliary processing units communicates with said network processing unit via a media access controller.
102. The network of claim 100, wherein said network protocol is Transmission Control Protocol/Internet Protocol.
103. The network of claim 100, wherein said network protocol is User Datagram Protocol.
104. The network of claim 99, wherein each of said plurality of auxiliary processing units is perceived as a separate network appliance.
105. The network of claim 99, wherein one of said plurality of auxiliary processing units is an auxiliary storage processing unit.
106. The network of claim 99, wherein one of said plurality of auxiliary processing units is an auxiliary audio processing unit.
107. The network of claim 99, wherein one of said plurality of auxiliary processing units is an auxiliary graphics processing unit.
108. The network of claim 99, wherein one of said plurality of auxiliary processing units is an auxiliary video processing unit.
109. The network of claim 99, wherein one of said plurality of auxiliary processing units is an auxiliary physics processing unit.
110. The network of claim 99, wherein said at least one host is a virtual host.
111. The network of claim 99, wherein said at least one host comprises a plurality of virtual hosts, where at least two of said plurality of virtual hosts are loaded with a separate operating system.
112. The network of claim 111, wherein each of said plurality of virtual hosts is capable of accessing said plurality of auxiliary processing units via said network processing unit.
113. The network of claim 99, wherein said network processing unit is implemented on a chipset.
114. The network of claim 113, wherein at least one of said plurality of auxiliary processing units is implemented on a chipset.
115. Method for providing a distributed network of processing units and host resources, said method comprising: a) providing a network processing unit; and b) providing at least one host, wherein said at least one host comprises a central processing unit loaded with a host operating system and a plurality of host resources, wherein each of said plurality of host resources is accessible directly by said central processing unit and via said network processing unit.
116. The method of claim 115, wherein one of said plurality of host resources is a storage device.
117. The method of claim 115, wherein one of said plurality of host resources is a read only memory (ROM).
118. The method of claim 115, wherein one of said plurality of host resources is a random access memory (RAM).
119. A distributed network of processing units and host resources, said network comprising: a network processing unit; and at least one host, wherein said at least one host comprises a central processing unit loaded with a host operating system and a plurality of host resources, wherein each of said plurality of host resources is accessible directly by said central processing unit and via said network processing unit.
120. The network of claim 119, wherein one of said plurality of host resources is a storage device.
121. The network of claim 119, wherein one of said plurality of host resources is a read only memory (ROM).
122. The network of claim 119, wherein one of said plurality of host resources is a random access memory.
123. Method for providing a distributed network of processing units, said method comprising: a) providing a network processing unit; and b) providing at least one host, wherein said at least one host comprises a central processing unit and at least one auxiliary processing unit, wherein said central processing unit is loaded with a host operating system and wherein said at least one auxiliary processing unit bypasses said host operating system and communicates directly with said network processing unit.
124. The method of claim 123, wherein said at least one auxiliary processing unit employs a network protocol to communicate with said network processing unit.
125. The method of claim 124, wherein said at least one auxiliary processing unit communicates with said network processing unit via a media access controller (MAC).
126. A distributed network of processing units, said network comprising: a network processing unit; and at least one host, wherein said at least one host comprises a central processing unit and at least one auxiliary processing unit, wherein said central processing unit is loaded with a host operating system and wherein said at least one auxiliary processing unit bypasses said host operating system and communicates directly with said network processing unit.
127. The network of claim 126, wherein said at least one auxiliary processing unit employs a network protocol to communicate with said network processing unit.
128. The network of claim 127, wherein said at least one auxiliary processing unit communicates with said network processing unit via a media access controller (MAC).
129. Method for providing a distributed network of processing units for interacting with at least one host that comprises a central processing unit (CPU), wherein said central processing unit is loaded with a host operating system, said method comprising: a) providing a network processing unit; and b) providing at least one auxiliary processing unit, wherein said network processing unit and said at least one auxiliary processing unit bypass said host operating system and communicate directly with each other.
130. The method of claim 129, wherein said at least one auxiliary processing unit comprises two auxiliary processing units that bypass said host operating system and communicate directly with each other through said network processing unit.
131. The method of claim 129, wherein said at least one auxiliary processing unit employs a network protocol to communicate with said network processing unit.
132. The method of claim 131, wherein said at least one auxiliary processing unit communicates with said network processing unit via a media access controller (MAC).
133. The method of claim 131, wherein said network protocol is Transmission Control Protocol/Internet Protocol (TCP/IP).
134. The method of claim 131, wherein said network protocol is User Datagram Protocol (UDP).
135. The method of claim 129, wherein said at least one auxiliary processing unit is perceived as a separate network appliance.
136. The method of claim 129, wherein said at least one auxiliary processing unit is an auxiliary storage processing unit.
137. The method of claim 129, wherein said at least one auxiliary processing unit is an auxiliary audio processing unit.
138. The method of claim 129, wherein said at least one auxiliary processing unit is an auxiliary graphics processing unit.
139. The method of claim 129, wherein said at least one auxiliary processing unit is an auxiliary video processing unit.
140. The method of claim 129, wherein said at least one auxiliary processing unit is an auxiliary physics processing unit.
141. The method of claim 129, wherein said at least one host is a virtual host.
142. The method of claim 129, wherein said at least one host comprises a plurality of virtual hosts, where at least two of said plurality of virtual hosts are loaded with a separate operating system.
143. The method of claim 142, wherein each of said plurality of virtual hosts is capable of accessing said at least one auxiliary processing unit via said network processing unit.
144. The method of claim 129, wherein said at least one auxiliary processing unit comprises an auxiliary storage processing unit, an auxiliary audio processing unit, an auxiliary graphics processing unit, and an auxiliary video processing unit.
145. A distributed network of processing units for interacting with at least one host that comprises a central processing unit, wherein said central processing unit is loaded with a host operating system, said network comprising: a network processing unit; and at least one auxiliary processing unit, wherein said network processing unit and said at least one auxiliary processing unit bypass said host operating system and communicate directly with each other.
146. The network of claim 145, wherein said at least one auxiliary processing unit comprises two auxiliary processing units that bypass said host operating system and communicate directly with each other through said network processing unit.
147. The network of claim 145, wherein said at least one auxiliary processing unit employs a network protocol to communicate with said network processing unit.
148. The network of claim 147, wherein said at least one auxiliary processing unit communicates with said network processing unit via a media access controller (MAC).
149. The network of claim 147, wherein said network protocol is Transmission Control Protocol/Internet Protocol (TCP/IP).
150. The network of claim 147, wherein said network protocol is User Datagram Protocol (UDP).
151. The network of claim 145, wherein said at least one auxiliary processing unit is perceived as a separate network appliance.
152. The network of claim 145, wherein said at least one auxiliary processing unit is an auxiliary storage processing unit.
153. The network of claim 145, wherein said at least one auxiliary processing unit is an auxiliary audio processing unit.
154. The network of claim 145, wherein said at least one auxiliary processing unit is an auxiliary graphics processing unit.
155. The network of claim 145, wherein said at least one auxiliary processing unit is an auxiliary video processing unit.
156. The network of claim 145, wherein said at least one auxiliary processing unit is an auxiliary physics processing unit.
157. The network of claim 145, wherein said at least one host is a virtual host.
158. The network of claim 145, wherein said at least one host comprises a plurality of virtual hosts, where at least two of said plurality of virtual hosts are loaded with a separate operating system.
159. The network of claim 158, wherein each of said plurality of virtual hosts is capable of accessing said at least one auxiliary processing unit via said network processing unit.
160. The network of claim 145, wherein said at least one auxiliary processing unit comprises an auxiliary storage processing unit, an auxiliary audio processing unit, an auxiliary graphics processing unit, and an auxiliary video processing unit.
161. The network of claim 145, wherein said network processing unit is implemented on a chipset.
162. The network of claim 161, wherein said at least one auxiliary processing unit is implemented on the chipset.
163. Method for providing a distributed network of processing units, said method comprising: a) providing a network processing unit; and b) providing at least one host, wherein said at least one host comprises a central processing unit (CPU) and a plurality of auxiliary processing units, wherein said central processing unit is loaded with a host operating system and wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit, wherein said plurality of auxiliary processing units employ Transmission Control Protocol/Internet Protocol (TCP/IP) to communicate with said network processing unit.
164. Method for providing a distributed network of processing units, said method comprising: a) providing a network processing unit; and b) providing at least one host, wherein said at least one host comprises a central processing unit (CPU) and a plurality of auxiliary processing units, wherein said central processing unit is loaded with a host operating system and wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit, wherein said plurality of auxiliary processing units employ User Datagram Protocol (UDP) to communicate with said network processing unit.
165. A distributed network of processing units, said network comprising: a network processing unit; and at least one host, wherein said at least one host comprises a central processing unit and a plurality of auxiliary processing units, wherein said central processing unit is loaded with a host operating system and wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit, wherein said plurality of auxiliary processing units employ Transmission Control Protocol/Internet Protocol (TCP/IP) to communicate with said network processing unit.
166. A distributed network of processing units, said network comprising: a network processing unit; and at least one host, wherein said at least one host comprises a central processing unit and a plurality of auxiliary processing units, wherein said central processing unit is loaded with a host operating system and wherein said plurality of auxiliary processing units bypass said host operating system and communicate directly with each other via said network processing unit, wherein said plurality of auxiliary processing units employ User Datagram Protocol to communicate with said network processing unit.
GB0514859A 2002-05-13 2003-05-12 Method and apparatus for providing an integrated network of processors Expired - Fee Related GB2413872B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US10/144,658 US20030212735A1 (en) 2002-05-13 2002-05-13 Method and apparatus for providing an integrated network of processors
GB0425574A GB2405244B (en) 2002-05-13 2003-05-12 Method and apparatus for providing an integrated network of processors

Publications (3)

Publication Number Publication Date
GB0514859D0 GB0514859D0 (en) 2005-08-24
GB2413872A true GB2413872A (en) 2005-11-09
GB2413872B GB2413872B (en) 2006-03-01

Family

ID=35115774

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0514859A Expired - Fee Related GB2413872B (en) 2002-05-13 2003-05-12 Method and apparatus for providing an integrated network of processors

Country Status (1)

Country Link
GB (1) GB2413872B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5974496A (en) * 1997-01-02 1999-10-26 Ncr Corporation System for transferring diverse data objects between a mass storage device and a network via an internal bus on a network card
WO2001065799A1 (en) * 2000-02-29 2001-09-07 Partec Ag Method for controlling the communication of individual computers in a multicomputer system
US6343086B1 (en) * 1996-09-09 2002-01-29 Natural Microsystems Corporation Global packet-switched computer network telephony server
EP1193940A2 (en) * 1994-03-21 2002-04-03 Avid Technology, Inc. Apparatus and computer-implemented process for providing real-time multimedia data transport in a distributed computing system
EP1284561A2 (en) * 2001-08-14 2003-02-19 Siemens Aktiengesellschaft Method and apparatus for controlling data packets

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1193940A2 (en) * 1994-03-21 2002-04-03 Avid Technology, Inc. Apparatus and computer-implemented process for providing real-time multimedia data transport in a distributed computing system
US6343086B1 (en) * 1996-09-09 2002-01-29 Natural Microsystems Corporation Global packet-switched computer network telephony server
US5974496A (en) * 1997-01-02 1999-10-26 Ncr Corporation System for transferring diverse data objects between a mass storage device and a network via an internal bus on a network card
WO2001065799A1 (en) * 2000-02-29 2001-09-07 Partec Ag Method for controlling the communication of individual computers in a multicomputer system
US20030041177A1 (en) * 2000-02-29 2003-02-27 Thomas Warschko Method for controlling the communication of individual computers in a multicomputer system
EP1284561A2 (en) * 2001-08-14 2003-02-19 Siemens Aktiengesellschaft Method and apparatus for controlling data packets
US20030035431A1 (en) * 2001-08-14 2003-02-20 Siemens Aktiengesellschaft Method and arrangement for controlling data packets

Also Published As

Publication number Publication date
GB2413872B (en) 2006-03-01
GB0514859D0 (en) 2005-08-24

Similar Documents

Publication Publication Date Title
US8051126B2 (en) Method and apparatus for providing an integrated network of processors
US7120653B2 (en) Method and apparatus for providing an integrated file system
EP1570361B1 (en) Method and apparatus for performing network processing functions
US7924868B1 (en) Internet protocol (IP) router residing in a processor chipset
US8094670B1 (en) Method and apparatus for performing network processing functions
US8103785B2 (en) Network acceleration techniques
US8156230B2 (en) Offload stack for network, block and file input and output
US8671152B2 (en) Network processor system and network protocol processing method
US8041875B1 (en) Resource virtualization switch
US7613767B2 (en) Resolving a distributed topology to stream data
US8713180B2 (en) Zero-copy network and file offload for web and application servers
US7983266B2 (en) Generalized serialization queue framework for protocol processing
US7188250B1 (en) Method and apparatus for performing network processing functions
US6920484B2 (en) Method and apparatus for providing an integrated virtual disk subsystem
JP2008541605A (en) High speed data processing / communication method and apparatus for embedded system
JP2005085284A (en) Multiple offload of network condition object supporting failover event
US20080189392A1 (en) Computer system with lan-based i/o
GB2413872A (en) An integrated network of processors
Lim et al. A single-chip storage LSI for home networks
Crowley et al. Network acceleration techniques

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 20160512