US20180314670A1 - Peripheral component - Google Patents
Peripheral component
- Publication number
- US20180314670A1 (U.S. application Ser. No. 16/027,163)
- Authority
- US
- United States
- Prior art keywords
- bus
- component
- bridge
- communication
- data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/42—Bus transfer protocol, e.g. handshake; Synchronisation
- G06F13/4282—Bus transfer protocol, e.g. handshake; Synchronisation on a serial bus, e.g. I2C bus, SPI bus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/14—Handling requests for interconnection or transfer
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/14—Digital output to display device ; Cooperation and interconnection of the display device with other functional units
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/60—Memory management
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/36—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of a graphic pattern, e.g. using an all-points-addressable [APA] memory
- G09G5/363—Graphics controllers
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2213/00—Indexing scheme relating to interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F2213/0026—PCI express
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2330/00—Aspects of power supply; Aspects of display protection and defect management
- G09G2330/02—Details of power systems and of start or stop of display operation
- G09G2330/021—Power management, e.g. power saving
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2360/00—Aspects of the architecture of display systems
- G09G2360/06—Use of more than one graphics processor to process data before displaying to one or more screens
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- G09G5/003—Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
- G09G5/006—Details of the interface to the display terminal
Definitions
- the protocol that determines data routing is communicated in such a way as to make the architecture appear the same as the architecture of FIG. 1 .
- the bridge in 305 b must appear on link 307 to the bridge in 305 a as an upstream port, whereas the corresponding attach point on the bridge in 305 a must appear on link 309 to root 302 as a downstream port.
- the embedded bridge must be able to see its outgoing link as a return path for all requests it receives on its incoming link, even though the physical routing of the two links is different. This is achieved by setting the state of a Chain Mode configuration strap for each GPU. If the strap is set to zero, the bridge assumes both transmit and receive links are to an upstream port, either a root complex or a bridge device. If the strap is set to one, the bridge assumes a daisy-chain configuration.
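The Chain Mode strap described above can be sketched as a small configuration function. This is an illustrative model, not the patent's register interface: the dictionary keys and port labels are assumptions; it only captures the rule that strap = 0 means both links face an upstream port, while strap = 1 means the daisy-chain arrangement where the outgoing link is the return path for requests arriving on the incoming link.

```python
# Sketch of the Chain Mode configuration strap (names are assumptions).
def configure_bridge(chain_mode_strap):
    if chain_mode_strap == 0:
        # Conventional point-to-point: TX and RX together form one
        # bidirectional link to a single upstream device (root or bridge).
        return {"rx_link": "upstream", "tx_link": "upstream"}
    # Daisy-chain: requests arrive on one physical link and their
    # responses leave on the other, even though the routing differs.
    return {"rx_link": "incoming", "tx_link": "return_path"}

conventional = configure_bridge(0)
chained = configure_bridge(1)
```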
- the peer to peer bridging function of the root is a two-step process according to which GPU1 305 b writes data to the system memory 303 , or buffer. Then as a separate operation GPU0 305 a reads the data back via the bus root 302 .
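A minimal sketch of this two-step transfer, with a Python dict standing in for system memory 303 (the buffer address, function names, and notification mechanism are assumptions, not from the patent):

```python
# Two-pass peer-to-peer transfer: GPU1 writes to a buffer in system
# memory, then GPU0 reads it back through the root as a separate operation.
system_memory = {}

def gpu1_write(buf_addr, data):
    system_memory[buf_addr] = data      # pass 1: GPU1 -> system memory 303
    return buf_addr                     # GPU0 is then told where to look

def gpu0_read(buf_addr):
    return system_memory[buf_addr]      # pass 2: GPU0 reads via the root

addr = gpu1_write(0x1000, b"frame-data")
result = gpu0_read(addr)
```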
- the bus root 302 responds to requests normally, as if the internal bridge were an external bridge (as in FIG. 1 ).
- the bridge of GPU0 305 a is configured to be active, while the bridge of GPU1 305 b is configured to appear as a wire, and simply pass data through. This allows the bus root 302 to see buses 309 and 311 as a normal peripheral interconnect bus.
- this bridge sends the data to pass through the bridge of GPU1 305 b and return to the bus root 302 as if the data came directly from GPU0 305 a.
- FIG. 5 is a block diagram of a system 400 in which one of the multiple bus endpoints includes an internal bridge.
- System 400 includes a bus root 402 , and an EP0 406 a that includes a bridge 405 a .
- EP0 406 a is coupled to the root 402 through the bridge 405 a via a bus 409 , and also to EP1 b 406 b through the bridge 405 a via a bus 407 .
- Each of endpoints EP0 406 a and EP1 406 b includes respective local memories 408 a and 408 b.
- FIG. 6 is a block diagram of a system 500 including more than two bus endpoints, each including an internal bridge.
- System 500 includes a bus root 502 , and an EP0 506 a that includes a bridge 505 a and a local memory 508 a .
- System 500 further includes an EP1 506 b that includes a bridge 505 b and a local memory 508 b , and an EP2 506 c that includes a bridge 505 c and a local memory 508 c.
- EP0 506 a is coupled to the root 502 through the bridge 505 a via a bus 509 , and also to EP1 506 b through the bridge 505 b via a bus 507 a .
- EP1 506 b is coupled to EP2 506 c through the bridge 505 c via a bus 507 b .
- Other embodiments include additional endpoints that are added into the ring configuration.
- the system includes more than two endpoints 506 , but the rightmost endpoint does not include an internal bridge.
- the flow of data is counterclockwise, as opposed to clockwise as shown in the figures.
- the bus root 302 may perform write operations by sending requests on bus 309 .
- a standard addressing scheme indicates to the bridge to send the request to the bus I/F. If the request is for GPU1 305 b , the bridge routes the request to bus 307 . So in an embodiment, the respective internal bridges of GPU0 305 a and GPU1 305 b are programmed differently.
- FIG. 7 is a block diagram illustrating the division of bus address ranges and the view of memory space from the perspective of various components.
- 602 is a view of memory from the perspective of the bus root, or Host processor 302 .
- 604 is a view of memory from the perspective of the GPU0 305 a internal bridge.
- 606 is a view of memory from the perspective of the GPU1 305 b internal bridge.
- the bus address range is divided into ranges for GPU0 305 a , GPU1 305 b , and system 302 memory spaces.
- the GPU0 305 a bridge is set up so that incoming requests to the GPU0 305 a range are routed to its own local memory.
- Incoming requests from the root or from GPU0 305 a itself to GPU1 305 b or system 302 ranges are routed to the output port of GPU0 305 a .
- the GPU1 305 b bridge is set up slightly differently so that incoming requests to the GPU1 305 b range are routed to its own local memory. Requests from GPU0 305 a or from GPU1 305 b itself to root or GPU0 305 a ranges are routed to the output port of GPU1 305 b.
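The two bridge setups just described can be modeled as a small routing function, following the memory views of FIG. 7: each bridge claims its own GPU's address range for local memory and forwards everything else to its output port. The concrete address ranges here are illustrative assumptions.

```python
# Address-based request routing in the two internal bridges (sketch).
RANGES = {                       # bus address range -> owner
    "GPU0": range(0x0000, 0x4000),
    "GPU1": range(0x4000, 0x8000),
    "system": range(0x8000, 0x10000),
}

def bridge_route(own_gpu, addr):
    """Return where this GPU's internal bridge sends a request."""
    if addr in RANGES[own_gpu]:
        return "local_memory"    # incoming request to our own range
    return "output_port"         # forward around the loop (GPU or root)
```

Both bridges run the same rule; they differ only in which range they treat as their own, which is why the two bridges are programmed differently.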
- the host sees the bus topology as being like the topology of FIG. 1 .
- GPU1 305 b can make its own request to the host processor 302 through its own bridge and it will pass through to the host processor 302 .
- When the host processor 302 is returning a request, it goes through the bridge of GPU0 305 a , which has logic for determining where requests and data are to be routed.
- Write operations from GPU1 305 b to GPU0 305 a occur in two passes.
- GPU1 305 b sends data to a memory location in the system memory 303 .
- GPU0 305 a reads the data after it learns that the data is in the system memory 303 .
- Completion messages for read data requests and other split-transaction operations must travel along the wires in the same direction as the requests. Therefore in addition to the address-based request routing described above, device-based routing must be set up in a similar manner. For example, the internal bridge of GPU0 305 a recognizes that the path for both requests and completion messages is via bus 307 .
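Device-based completion routing can be sketched alongside the address-based rule above. The table and requester IDs are illustrative assumptions; the point is only that a completion is routed by who requested it, not by address, and that from GPU0's bridge the path for completions is the same outgoing bus 307 used for requests.

```python
# Device-based routing of completion messages, as seen by GPU0's
# internal bridge (sketch; IDs and table layout are assumptions).
COMPLETION_ROUTE = {
    "root": "output_port",   # completions for the root continue via bus 307
    "GPU1": "output_port",   # likewise for GPU1's requests
    "GPU0": "internal",      # our own clients' completions stay inside
}

def route_completion(requester_id):
    return COMPLETION_ROUTE[requester_id]
```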
- An embodiment includes power management to improve power usage in lightly loaded usage cases. For example in a usage case with little graphics processing, the logic of GPU1 305 b is powered off and the bridging function in GPU1 305 b is reduced to a simple passthrough function from input port to output port. Furthermore, the function of GPU0 305 a is reduced to not process transfers routed from the input port to the output port. In an embodiment, there is a separate power supply for the bridging function in GPU1 305 b . Software detects the conditions under which to power down. Embodiments include a separate power regulator and/or separate internal power sources for bridges that are to be powered down separately from the rest of the logic on the device.
- system 300 is practical in a system that includes multiple slots for add-in circuit boards.
- system 300 is a soldered system, such as on a mobile device.
- Buses 307 , 309 and 311 can be PCIe® buses or any other similar peripheral interconnect bus.
- Any circuits described herein could be implemented through the control of manufacturing processes and maskworks, which would then be used to manufacture the relevant circuitry.
- Such manufacturing process control and maskwork generation are known to those of ordinary skill in the art and include the storage of computer instructions on computer readable media including, for example, Verilog, VHDL or instructions in other hardware description languages.
- programmable logic devices (PLDs)
- field programmable gate arrays (FPGAs)
- programmable array logic (PAL)
- application specific integrated circuits (ASICs)
- microcontrollers with memory such as electronically erasable programmable read only memory (EEPROM), Flash memory, etc.
- embedded microprocessors, firmware, software, etc.
- aspects of the embodiments may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types.
- metal-oxide semiconductor field-effect transistor (MOSFET) technologies
- complementary metal-oxide semiconductor (CMOS)
- emitter-coupled logic (ECL)
- polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures)
- mixed analog and digital, etc.
- processor as used in the specification and claims includes a processor core or a portion of a processor. Further, although one or more GPUs and one or more CPUs are usually referred to separately herein, in embodiments both a GPU and a CPU are included in a single integrated circuit package or on a single monolithic die. Therefore a single device performs the claimed method in such embodiments.
- the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number, respectively. Additionally, the words “herein,” “hereunder,” “above,” “below,” and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. When the word “or” is used in reference to a list of two or more items, that word covers all of the following interpretations of the word, any of the items in the list, all of the items in the list, and any combination of the items in the list.
- some or all of the hardware and software capability described herein may exist in a printer, a camera, television, a digital versatile disc (DVD) player, a DVR or PVR, a handheld device, a mobile telephone or some other device.
- Such computer readable media may store instructions that are to be executed by a computing device (e.g., personal computer, personal digital assistant, PVR, mobile device or the like) or may be instructions (such as, for example, Verilog or a hardware description language) that when executed are designed to create a device (GPU, ASIC, or the like) or software application that when operated performs aspects described above.
- the claimed invention may be embodied in computer code (e.g., HDL, Verilog, etc.) that is created, stored, synthesized, and used to generate GDSII data (or its equivalent). An ASIC may then be manufactured based on this data.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Information Transfer Systems (AREA)
- Bus Control (AREA)
- Multi Processors (AREA)
Abstract
Description
- This application is a continuation of U.S. patent application Ser. No. 15/374,739 filed on Dec. 9, 2016, which is a continuation of U.S. patent application Ser. No. 13/764,775 filed on Feb. 11, 2013, which is a continuation of U.S. application Ser. No. 12/340,510 filed on Dec. 19, 2008, now U.S. Pat. No. 8,373,709, which is a continuation in part of U.S. patent application Ser. No. 12/245,686 filed on Oct. 3, 2008, now U.S. Pat. No. 8,892,804, each of which is incorporated by reference as if fully set forth herein.
- The invention is in the field of data transfer in computer and other digital systems.
- As computer and other digital systems become more complex and more capable, methods and hardware to enhance the transfer of data between system components or elements continually evolve. Data to be transferred include signals representing data, commands, or any other signals. Speed and efficiency of data transfer are particularly critical in systems that run very data-intensive applications, such as graphics applications. In typical systems, graphics processing capability is provided as a part of the central processing unit (CPU) capability, or provided by a separate special purpose processor such as a graphics processing unit (GPU) that communicates with the CPU and assists in processing graphics data for applications such as video games, etc. One or more GPUs may be included in a system. In conventional multi-GPU systems, a bridged host interface (for example a PCI express (PCIe®) bus) must share bandwidth between peer to peer traffic and host traffic. Traffic consists primarily of memory data transfers but may often include commands.
FIG. 1 is a block diagram of a prior art system 100 that includes a root 102. A typical root 102 is a computer chipset, including a central processing unit (CPU), a host bridge 104, and two endpoints EP0 106 a and EP1 106 b. Endpoints are bus endpoints and can be various peripheral components, for example special purpose processors such as graphics processing units (GPUs). The root 102 is coupled to the bridge 104 by one or more buses to communicate with peripheral components. Some peripheral component endpoints (such as GPUs) require a relatively large amount of bandwidth on the bus because of the large amount of data involved in their functions. It would be desirable to provide an architecture that reduced the number of components and yet provided efficient data transfer between components. For example, the cost of bridge integrated circuits (ICs) is relatively high. In addition, the size of a typical bridge IC is comparable to the size of a graphics processing unit (GPU), which requires additional printed circuit board area and could add to layer counts. Bridge ICs also require additional surrounding components for power, straps, clock and possibly read only memory (ROM).
FIG. 1 is a block diagram of a prior art processing system with peripheral components.

FIG. 2 is a block diagram of portions of a multi-processor system with a multiplexed peripheral component bus, according to an embodiment.

FIG. 3 is a block diagram of portions of a processing system with peripheral components, according to an embodiment.

FIG. 4 is a more detailed block diagram of a processing system with peripheral components, according to an embodiment.

FIG. 5 is a block diagram of an embodiment in which one bus endpoint includes an internal bridge.

FIG. 6 is a block diagram of an embodiment that includes more than two bus endpoints, each including an internal bridge.

FIG. 7 is a block diagram illustrating views of memory space from the perspectives of various components in a system, according to an embodiment.

Embodiments of a multi-processor architecture and method are described herein. Embodiments provide alternatives to the use of an external bridge integrated circuit (IC) architecture. For example, an embodiment multiplexes a peripheral bus such that multiple processors can use one peripheral interface slot without requiring an external bridge IC. Other embodiments include a system with multiple bus endpoints coupled to a bus root via a host bus bridge that is internal to at least one bus endpoint. In addition, the bus endpoints are directly coupled to each other. Embodiments are usable with known bus protocols.
FIG. 2 is a block diagram of portions of a multi-processor system 700 with a multiplexed peripheral component bus, according to an embodiment. In this example system, there are two GPUs, a master GPU 702A and a slave GPU 702B. Each GPU 702 has 16 peripheral component interconnect express (PCIe®) transmit (TX) lanes and 16 PCIe® receive (RX) lanes. Each of GPUs 702 includes a respective data link layer 706 and a respective physical layer (PHY) 704. Eight of the TX/RX lanes of GPU 702A are connected to half of the TX/RX lanes of a X16 PCIe® connector, or slot 708. Eight of the TX/RX lanes of GPU 702B are connected to the remaining TX/RX lanes of the X16 PCIe® connector or slot 708. The remaining TX/RX lanes of each of GPU 702A and GPU 702B are connected to each other, providing a direct, high-speed connection between the GPUs 702.

The PCIe® x16 slot 708 (which normally goes to one GPU) is split into two parts. Half of the slot is connected to GPU 702A and the other half is connected to GPU 702B. Each GPU 702 basically echoes the other half of the data back to the other GPU 702. That is, data received by either GPU is forwarded to the other. Each GPU 702 sees all of the data received on the PCIe® bus, and internally each GPU 702 decides whether it is supposed to answer the request or command. Each GPU 702 then responds appropriately, or does nothing. Some data or commands, such as “Reset,” are applicable to all of the GPUs 702.

From the system level point of view, or from the view of the peripheral bus, there is only one PCIe® load (device) on the PCIe® bus. Either GPU 702A or GPU 702B is accessed based on address. For example, for Address Domain Access, master GPU 702A can be assigned to one half of the address domain and slave GPU 702B can be assigned to the other half. The system can operate in a Master/Slave mode or in Single/Multi GPU modes, and the modes can be identified by straps.

Various data paths are identified by reference numbers. A reference clock (REF CLK) path is indicated by 711. An 8-lane RX-2 path is indicated by 709. An 8-lane RX-1 path is indicated by 713. An 8-lane TX-1 path is indicated by 715. Control signals 710 are non-PCIe® signals such as straps. The PHY 704 in each GPU 702 echoes the data to the proper lane or channel. Lane connections can be made in order, which helps to optimize silicon design and/or to support PCIe® slots with fewer than 16 lanes. Two GPUs are shown as an example of a system, but the architecture is scalable to n GPUs. In addition, GPUs 702 are one example of a peripheral component that can be coupled as described. Any other peripheral components that normally communicate with a peripheral component bus in a system could be similarly coupled.
FIG. 3 is a block diagram of portions of aprocessing system 200 with peripheral components, according to an embodiment.System 200 includes abus root 202 that is similar to thebus root 102 ofFIG. 1 . Thebus root 202 in an embodiment is a chipset including a CPU and system memory. Theroot 202 is coupled via abus 209 to anendpoint EP0 206 a that includes aninternal bridge 205 a. Thebus 209 in an embodiment is a PCI express (PCIe®) bus, but embodiments are not so limited.EP0 206 a is coupled to anotherendpoint EP1 206 b.EP1 206 b includes aninternal bridge 205 b.EP0 205 a and EP1 205B are through their respective bridges via abus 207.EP1 206 b is coupled through itsbridge 205 b to theroot 202 via abus 211. Each ofendpoints EP0 206 a andEP1 206 b includes respectivelocal memories 208 a and 208 b. From the perspective of theroot - In an embodiment,
EP0 206a and EP1 206b are identical. As further explained below, in various embodiments, bridge 205b is not necessary, but is included for the purpose of having one version of an endpoint, such as one version of a GPU, rather than manufacturing two different versions. Note that EP0 may be used standalone by directly connecting it to the root 202 via the buses. - The inclusion of a bridge 205 eliminates the need for an external bridge such as
bridge 104 of FIG. 1 when both EP0 and EP1 are present. In contrast to the "Y" or "T" formation of FIG. 1, system 200 moves data in a loop (in this case in a clockwise direction). The left endpoint EP0 can send data directly to the right endpoint EP1. The return path from EP1 to EP0 is through the root 202. As such, the root has the ability to reflect a packet of data coming in from EP1 back out to EP0. In other words, the architecture provides the appearance of a peer-to-peer transaction on the same pair of wires as is used for endpoint-to-root transactions. -
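The loop topology above can be sketched as a tiny next-hop table. This is a sketch with assumed node names, not from the disclosure; it makes the asymmetric return path visible:

```python
# Clockwise ring of FIG. 3: root -> EP0 -> EP1 -> root. EP0 reaches EP1
# directly, but EP1's path back to EP0 runs through the root, which
# reflects the packet back out. Node names are illustrative.

NEXT_HOP = {"root": "EP0", "EP0": "EP1", "EP1": "root"}

def path(src, dst):
    """List the hops a packet traverses from src to dst around the ring."""
    hops, node = [], src
    while node != dst:
        node = NEXT_HOP[node]
        hops.append(node)
    return hops
```

A forward transfer from EP0 to EP1 takes one hop, while the reverse transfer traverses the root first, which is the reflection behavior described above.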
EP0 206a and EP1 206b are also configurable to operate in the traditional configuration. That is, EP0 206a and EP1 206b are each configurable to communicate directly with the root 202 via the buses. -
FIG. 4 is a more detailed block diagram of a processing system with peripheral components, according to an embodiment. System 300 is similar to system 200, but additional details are shown. System 300 includes a bus root 302 coupled to a system memory 303. The bus root 302 is further coupled to an endpoint 305a via a bus 309. For purposes of illustrating a particular embodiment, the endpoints are GPUs. GPU0 305a includes multiple clients. Clients include logic, such as shader units and decoder units, for performing tasks. The clients are coupled to an internal bridge through bus interface (I/F) logic, which controls all of the read operations and write operations performed by the GPU. -
GPU0 305a is coupled to a GPU1 305b via a bus 307 from the internal bridge of GPU0 305a to the internal bridge of GPU1 305b. In an embodiment, GPU1 305b is identical to GPU0 305a and includes multiple clients, an internal bridge and I/F logic. Each GPU typically connects to a dedicated local memory unit, often implemented as GDDR DRAM. GPU1 305b is coupled to the bus root 302 via a bus 311. In one embodiment, as the arrows indicate, data and other messages such as read requests and completions flow in a clockwise loop from the bus root 302 to GPU0 305a to GPU1 305b. - In other embodiments, one of the GPUs 305 does not include a bridge. In yet other embodiments, data flows counterclockwise rather than clockwise.
- In one embodiment, the protocol that determines data routing is communicated in such a way as to make the architecture appear the same as the architecture of FIG. 1. In particular, the bridge in 305b must appear on link 307 to the bridge in 305a as an upstream port, whereas the corresponding attach point on the bridge in 305a must appear on link 309 to the root 302 as a downstream port. Furthermore, the embedded bridge must be able to see its outgoing link as a return path for all requests it receives on its incoming link, even though the physical routing of the two links is different. This is achieved by setting the state of a Chain Mode configuration strap for each GPU. If the strap is set to zero, the bridge assumes both transmit and receive links are to an upstream port, either a root complex or a bridge device. If the strap is set to one, the bridge assumes a daisy-chain configuration. - In another embodiment, the peer-to-peer bridging function of the root is a two-step process according to which
GPU1 305b writes data to the system memory 303, or a buffer. Then, as a separate operation, GPU0 305a reads the data back via the bus root 302. - The
bus root 302 responds to requests normally, as if the internal bridge were an external bridge (as in FIG. 1). In an embodiment, the bridge of GPU0 305a is configured to be active, while the bridge of GPU1 305b is configured to appear as a wire, and simply pass data through. This allows the bus root 302 to see the buses as in FIG. 1. When data is sent from GPU0 305a, this bridge sends the data to pass through the bridge of GPU1 305b and return to the bus root 302 as if the data came directly from GPU0 305a. -
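The two-step peer-to-peer bridging through the root described above can be sketched as follows. This is a hedged sketch: the buffer representation and the notification mechanism are assumptions, not details from the disclosure:

```python
# Hypothetical sketch of the two-pass transfer: GPU1 first writes into a
# buffer in system memory via the bus root; GPU0 later reads it back
# through the root as a separate operation.

system_memory = {}  # stands in for the buffer in system memory 303

def gpu1_write(addr, data):
    # Pass 1: GPU1's write request travels through its bridge to the
    # bus root, which commits the data to system memory.
    system_memory[addr] = data

def gpu0_read(addr):
    # Pass 2: once GPU0 learns the data has landed (e.g. via a flag or
    # interrupt -- mechanism assumed), it reads it back via the root.
    return system_memory[addr]
```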
FIG. 5 is a block diagram of a system 400 in which one of the multiple bus endpoints includes an internal bridge. System 400 includes a bus root 402, and an EP0 406a that includes a bridge 405a. EP0 406a is coupled to the root 402 through the bridge 405a via a bus 409, and also to EP1 406b through the bridge 405a via a bus 407. Each of endpoints EP0 406a and EP1 406b includes respective local memories. -
FIG. 6 is a block diagram of a system 500 including more than two bus endpoints, each including an internal bridge. System 500 includes a bus root 502, and an EP0 506a that includes a bridge 505a and a local memory 508a. System 500 further includes an EP1 506b that includes a bridge 505b and a local memory 508b, and an EP2 506c that includes a bridge 505c and a local memory 508c. -
EP0 506a is coupled to the root 502 through the bridge 505a via a bus 509, and also to EP1 506b through the bridge 505b via a bus 507a. EP1 506b is coupled to EP2 506c through the bridge 505c via a bus 507b. Other embodiments include additional endpoints that are added into the ring configuration. In other embodiments, the system includes more than two endpoints 506, but the rightmost endpoint does not include an internal bridge. In yet other embodiments the flow of data is counterclockwise as opposed to clockwise, as shown in the figures. - Referring again to
FIG. 4, there are two logical ports on the internal bridge according to an embodiment. One port is "on" in the bridge of GPU0 305a, and one port is "off" in the bridge of GPU1 305b. The bus root 302 may perform write operations by sending requests on bus 309. A standard addressing scheme indicates to the bridge to send the request to the bus I/F. If the request is for GPU1 305b, the bridge routes the request to bus 307. So in an embodiment, the respective internal bridges of GPU0 305a and GPU1 305b are programmed differently. -
FIG. 7 is a block diagram illustrating the division of bus address ranges and the view of memory space from the perspective of various components. With reference also to FIG. 4, 602 is a view of memory from the perspective of the bus root, or host processor 302; 604 is a view of memory from the perspective of the GPU0 305a internal bridge; 606 is a view of memory from the perspective of the GPU1 305b internal bridge. The bus address range is divided into ranges for GPU0 305a, GPU1 305b, and system 302 memory spaces. The GPU0 305a bridge is set up so that incoming requests to the GPU0 305a range are routed to its own local memory. Incoming requests from the root or from GPU0 305a itself to the GPU1 305b or system 302 ranges are routed to the output port of GPU0 305a. The GPU1 305b bridge is set up slightly differently, so that incoming requests to the GPU1 305b range are routed to its own local memory. Requests from GPU0 305a or from GPU1 305b itself to the root or GPU0 305a ranges are routed to the output port of GPU1 305b. - The host sees the bus topology as being like the topology of
FIG. 1. -
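The per-bridge routing rules of FIG. 7 can be summarized in a short sketch. The concrete address boundaries and all names here are assumptions for illustration only; the disclosure does not specify them:

```python
# Hypothetical bus-address map: each bridge claims its own range for
# local memory and forwards requests for any other range to its output
# port on the ring, as described for FIG. 7.

RANGES = {
    "GPU0":   (0x0000_0000, 0x4000_0000),   # assumed boundaries
    "GPU1":   (0x4000_0000, 0x8000_0000),
    "SYSTEM": (0x8000_0000, 0x1_0000_0000),
}

def owner(addr):
    """Find which range a bus address falls into."""
    for name, (lo, hi) in RANGES.items():
        if lo <= addr < hi:
            return name
    raise ValueError("address outside decoded ranges")

def route(bridge, addr):
    """Decide where a bridge ('GPU0' or 'GPU1') sends an incoming request."""
    if owner(addr) == bridge:
        return "local memory"   # request targets this GPU's own range
    return "output port"        # forward along the loop toward the owner
```

Note that the same rule, applied at each bridge, yields the "slightly different" setups the text describes: each bridge keeps only its own range and pushes everything else onward around the loop.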
GPU1 305b can make its own request to the host processor 302 through its own bridge and it will pass through to the host processor 302. When the host processor 302 is returning a request, it goes through the bridge of GPU0 305a, which has logic for determining where requests and data are to be routed. - Write operations from
GPU1 305 b toGPU0 305 a can be performed in two passes.GPU1 305 b sends data to a memory location in thesystem memory 303. Then separately,GPU0 305 a reads the data after it learns that the data is in thesystem memory 303. - Completion messages for read data requests and other split-transaction operations must travel along the wires in the same direction as the requests. Therefore in addition to the address-based request routing described above, device-based routing must be set up in a similar manner. For example, the internal bridge of
GPU0 305a recognizes that the path for both requests and completion messages is via bus 307. - An embodiment includes power management to improve power usage in lightly loaded usage cases. For example, in a usage case with little graphics processing, the logic of
GPU1 305b is powered off and the bridging function in GPU1 305b is reduced to a simple passthrough function from input port to output port. Furthermore, the function of GPU0 305a is reduced so as not to process transfers routed from the input port to the output port. In an embodiment, there is a separate power supply for the bridging function in GPU1 305b. Software detects the conditions under which to power down. Embodiments include a separate power regulator and/or separate internal power sources for bridges that are to be powered down separately from the rest of the logic on the device. - Even in embodiments that do not include the power management described above, system board area is conserved because an external bridge (as in
FIG. 1) is not required. The board area and power required for the external bridge and its pins are conserved. On the other hand, it is not required that each of the GPUs have its own internal bridge. In another embodiment, GPU1 305b does not have an internal bridge, as described with reference to FIG. 5. - The architecture of
system 300 is practical in a system that includes multiple slots for add-in circuit boards. Alternatively, system 300 is a soldered system, such as on a mobile device. -
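The lightly loaded power mode described a few paragraphs above can be sketched as follows. The flag name and the load-detection policy are assumptions for illustration:

```python
# Hypothetical sketch of the power-managed bridge in GPU1: when software
# detects a lightly loaded case, GPU1's clients are powered off and its
# bridge degenerates to a wire from input port to output port.

def gpu1_bridge(packet, low_power):
    if low_power:
        return packet               # pure passthrough: input -> output
    return {"routed": packet}       # placeholder for full bridging logic

def should_power_down(gpu_load):
    # Software policy (assumed threshold): power down under light load.
    return gpu_load < 0.05
```

Keeping the passthrough path on its own power supply, as the text notes, is what lets the rest of GPU1 be switched off without breaking the ring.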
Buses - Any circuits described herein could be implemented through the control of manufacturing processes and maskworks, which would then be used to manufacture the relevant circuitry. Such manufacturing process control and maskwork generation are known to those of ordinary skill in the art and include the storage of computer instructions on computer-readable media including, for example, Verilog, VHDL or instructions in other hardware description languages.
- Aspects of the embodiments described above may be implemented as functionality programmed into any of a variety of circuitry, including but not limited to programmable logic devices (PLDs), such as field programmable gate arrays (FPGAs), programmable array logic (PAL) devices, electrically programmable logic and memory devices, and standard cell-based devices, as well as application specific integrated circuits (ASICs) and fully custom integrated circuits. Some other possibilities for implementing aspects of the embodiments include microcontrollers with memory (such as electronically erasable programmable read only memory (EEPROM), Flash memory, etc.), embedded microprocessors, firmware, software, etc. Furthermore, aspects of the embodiments may be embodied in microprocessors having software-based circuit emulation, discrete logic (sequential and combinatorial), custom devices, fuzzy (neural) logic, quantum devices, and hybrids of any of the above device types. Of course the underlying device technologies may be provided in a variety of component types, e.g., metal-oxide semiconductor field-effect transistor (MOSFET) technologies such as complementary metal-oxide semiconductor (CMOS), bipolar technologies such as emitter-coupled logic (ECL), polymer technologies (e.g., silicon-conjugated polymer and metal-conjugated polymer-metal structures), mixed analog and digital, etc.
- The term “processor” as used in the specification and claims includes a processor core or a portion of a processor. Further, although one or more GPUs and one or more CPUs are usually referred to separately herein, in embodiments both a GPU and a CPU are included in a single integrated circuit package or on a single monolithic die. Therefore a single device performs the claimed method in such embodiments.
- Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise," "comprising," and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of "including, but not limited to." Words using the singular or plural number also include the plural or singular number, respectively. Additionally, the words "herein," "hereunder," "above," "below," and words of similar import, when used in this application, refer to this application as a whole and not to any particular portions of this application. When the word "or" is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.
- The above description of illustrated embodiments of the method and system is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the method and system are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention, as those skilled in the relevant art will recognize. The teachings of the disclosure provided herein can be applied to other systems, not only for systems including graphics processing or video processing, as described above. The various operations described may be performed in a very wide variety of architectures and distributed differently than described. In addition, though many configurations are described herein, none are intended to be limiting or exclusive.
- In other embodiments, some or all of the hardware and software capability described herein may exist in a printer, a camera, television, a digital versatile disc (DVD) player, a DVR or PVR, a handheld device, a mobile telephone or some other device. The elements and acts of the various embodiments described above can be combined to provide further embodiments. These and other changes can be made to the method and system in light of the above detailed description.
- In general, in the following claims, the terms used should not be construed to limit the method and system to the specific embodiments disclosed in the specification and the claims, but should be construed to include any processing systems and methods that operate under the claims. Accordingly, the method and system is not limited by the disclosure, but instead the scope of the method and system is to be determined entirely by the claims.
- While certain aspects of the method and system are presented below in certain claim forms, the inventors contemplate the various aspects of the method and system in any number of claim forms. For example, while only one aspect of the method and system may be recited as embodied in computer-readable medium, other aspects may likewise be embodied in computer-readable medium. Such computer readable media may store instructions that are to be executed by a computing device (e.g., personal computer, personal digital assistant, PVR, mobile device or the like) or may be instructions (such as, for example, Verilog or a hardware description language) that when executed are designed to create a device (GPU, ASIC, or the like) or software application that when operated performs aspects described above. The claimed invention may be embodied in computer code (e.g., HDL, Verilog, etc.) that is created, stored, synthesized, and used to generate GDSII data (or its equivalent). An ASIC may then be manufactured based on this data.
- Accordingly, the inventors reserve the right to add additional claims after filing the application to pursue such additional claim forms for other aspects of the method and system.
Claims (14)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/027,163 US20180314670A1 (en) | 2008-10-03 | 2018-07-03 | Peripheral component |
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/245,686 US8892804B2 (en) | 2008-10-03 | 2008-10-03 | Internal BUS bridge architecture and method in multi-processor systems |
US12/340,510 US8373709B2 (en) | 2008-10-03 | 2008-12-19 | Multi-processor architecture and method |
US13/764,775 US20130147815A1 (en) | 2008-10-03 | 2013-02-11 | Multi-processor architecture and method |
US15/374,739 US10467178B2 (en) | 2008-10-03 | 2016-12-09 | Peripheral component |
US16/027,163 US20180314670A1 (en) | 2008-10-03 | 2018-07-03 | Peripheral component |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/374,739 Continuation US10467178B2 (en) | 2008-10-03 | 2016-12-09 | Peripheral component |
Publications (1)
Publication Number | Publication Date |
---|---|
US20180314670A1 true US20180314670A1 (en) | 2018-11-01 |
Family
ID=41426838
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/340,510 Active US8373709B2 (en) | 2008-10-03 | 2008-12-19 | Multi-processor architecture and method |
US13/764,775 Abandoned US20130147815A1 (en) | 2008-10-03 | 2013-02-11 | Multi-processor architecture and method |
US15/374,739 Active US10467178B2 (en) | 2008-10-03 | 2016-12-09 | Peripheral component |
US16/027,163 Abandoned US20180314670A1 (en) | 2008-10-03 | 2018-07-03 | Peripheral component |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/340,510 Active US8373709B2 (en) | 2008-10-03 | 2008-12-19 | Multi-processor architecture and method |
US13/764,775 Abandoned US20130147815A1 (en) | 2008-10-03 | 2013-02-11 | Multi-processor architecture and method |
US15/374,739 Active US10467178B2 (en) | 2008-10-03 | 2016-12-09 | Peripheral component |
Country Status (6)
Country | Link |
---|---|
US (4) | US8373709B2 (en) |
EP (1) | EP2342626B1 (en) |
JP (1) | JP2012504835A (en) |
KR (1) | KR101533761B1 (en) |
CN (2) | CN102227709B (en) |
WO (1) | WO2010040144A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10795851B2 (en) | 2018-04-18 | 2020-10-06 | Fujitsu Client Computing Limited | Relay device and information processing system |
US20210250285A1 (en) * | 2020-02-11 | 2021-08-12 | Fungible, Inc. | Scaled-out transport as connection proxy for device-to-device communications |
Families Citing this family (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8892804B2 (en) | 2008-10-03 | 2014-11-18 | Advanced Micro Devices, Inc. | Internal BUS bridge architecture and method in multi-processor systems |
US8373709B2 (en) * | 2008-10-03 | 2013-02-12 | Ati Technologies Ulc | Multi-processor architecture and method |
US8751720B2 (en) | 2010-11-08 | 2014-06-10 | Moon J. Kim | Computationally-networked unified data bus |
CN102810085A (en) * | 2011-06-03 | 2012-12-05 | 鸿富锦精密工业(深圳)有限公司 | PCI-E expansion system and method |
US10817043B2 (en) * | 2011-07-26 | 2020-10-27 | Nvidia Corporation | System and method for entering and exiting sleep mode in a graphics subsystem |
CN102931546A (en) * | 2011-08-10 | 2013-02-13 | 鸿富锦精密工业(深圳)有限公司 | Connector assembly |
CN103105895A (en) * | 2011-11-15 | 2013-05-15 | 辉达公司 | Computer system and display cards thereof and method for processing graphs of computer system |
CN103631549A (en) * | 2012-08-22 | 2014-03-12 | 慧荣科技股份有限公司 | Picture processing device and external connection picture device |
US8996781B2 (en) * | 2012-11-06 | 2015-03-31 | OCZ Storage Solutions Inc. | Integrated storage/processing devices, systems and methods for performing big data analytics |
US20140149528A1 (en) * | 2012-11-29 | 2014-05-29 | Nvidia Corporation | Mpi communication of gpu buffers |
WO2015016843A1 (en) * | 2013-07-30 | 2015-02-05 | Hewlett-Packard Development Company, L.P. | Connector for a computing assembly |
US9582904B2 (en) | 2013-11-11 | 2017-02-28 | Amazon Technologies, Inc. | Image composition based on remote object data |
US9805479B2 (en) | 2013-11-11 | 2017-10-31 | Amazon Technologies, Inc. | Session idle optimization for streaming server |
US9578074B2 (en) | 2013-11-11 | 2017-02-21 | Amazon Technologies, Inc. | Adaptive content transmission |
US9641592B2 (en) | 2013-11-11 | 2017-05-02 | Amazon Technologies, Inc. | Location of actor resources |
US9634942B2 (en) | 2013-11-11 | 2017-04-25 | Amazon Technologies, Inc. | Adaptive scene complexity based on service quality |
US9596280B2 (en) | 2013-11-11 | 2017-03-14 | Amazon Technologies, Inc. | Multiple stream content presentation |
US9604139B2 (en) | 2013-11-11 | 2017-03-28 | Amazon Technologies, Inc. | Service for generating graphics object data |
US10261570B2 (en) * | 2013-11-27 | 2019-04-16 | Intel Corporation | Managing graphics power consumption and performance |
US10535322B2 (en) | 2015-07-24 | 2020-01-14 | Hewlett Packard Enterprise Development Lp | Enabling compression of a video output |
US10311013B2 (en) * | 2017-07-14 | 2019-06-04 | Facebook, Inc. | High-speed inter-processor communications |
CN107562674B (en) * | 2017-08-28 | 2020-03-20 | 上海集成电路研发中心有限公司 | Bus protocol asynchronous logic circuit implementation device embedded into processor |
JP6579255B1 (en) | 2018-12-28 | 2019-09-25 | 富士通クライアントコンピューティング株式会社 | Information processing system and relay device |
JP6573046B1 (en) | 2019-06-05 | 2019-09-11 | 富士通クライアントコンピューティング株式会社 | Information processing apparatus, information processing system, and information processing program |
US20230394204A1 (en) * | 2022-06-07 | 2023-12-07 | Dell Products L.P. | Lcs orchestrator device/expansion device secondary circuit board system |
CN115981853A (en) * | 2022-12-23 | 2023-04-18 | 摩尔线程智能科技(北京)有限责任公司 | GPU (graphics processing Unit) interconnection architecture, method for realizing GPU interconnection architecture and computing equipment |
Citations (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5835738A (en) * | 1994-06-20 | 1998-11-10 | International Business Machines Corporation | Address space architecture for multiple bus computer systems |
US5913045A (en) * | 1995-12-20 | 1999-06-15 | Intel Corporation | Programmable PCI interrupt routing mechanism |
US6173374B1 (en) * | 1998-02-11 | 2001-01-09 | Lsi Logic Corporation | System and method for peer-to-peer accelerated I/O shipping between host bus adapters in clustered computer network |
US6317813B1 (en) * | 1999-05-18 | 2001-11-13 | Silicon Integrated Systems Corp. | Method for arbitrating multiple memory access requests in a unified memory architecture via a non unified memory controller |
US20020027557A1 (en) * | 1998-10-23 | 2002-03-07 | Joseph M. Jeddeloh | Method for providing graphics controller embedded in a core logic unit |
US6473086B1 (en) * | 1999-12-09 | 2002-10-29 | Ati International Srl | Method and apparatus for graphics processing using parallel graphics processors |
US6560688B1 (en) * | 1998-10-01 | 2003-05-06 | Advanced Micro Devices, Inc. | System and method for improving accelerated graphics port systems |
US20030188076A1 (en) * | 2002-03-29 | 2003-10-02 | International Business Machines | Opaque memory region for I/O adapter transparent bridge |
US20030188073A1 (en) * | 2002-04-01 | 2003-10-02 | Zatorski Richard A. | System and method for controlling multiple devices via general purpose input/output (GPIO) hardware |
US20040064621A1 (en) * | 2000-06-30 | 2004-04-01 | Dougherty Michael J. | Powering a notebook across a USB interface |
US20040160449A1 (en) * | 2003-02-18 | 2004-08-19 | Microsoft Corporation | Video memory management |
US20040257369A1 (en) * | 2003-06-17 | 2004-12-23 | Bill Fang | Integrated video and graphics blender |
US20050050282A1 (en) * | 2003-09-02 | 2005-03-03 | Vantalon Nicolas P. | Memory reallocation and sharing in electronic systems |
US20050060490A1 (en) * | 2003-09-02 | 2005-03-17 | Wei-Chi Lu | Apparatus for multiple host access to storage medium |
US6874042B2 (en) * | 2003-03-11 | 2005-03-29 | Dell Products L.P. | System and method for using a switch to route peripheral and graphics data on an interconnect |
US20050193171A1 (en) * | 2004-02-26 | 2005-09-01 | Bacchus Reza M. | Computer system cache controller and methods of operation of a cache controller |
US20050197977A1 (en) * | 2003-12-09 | 2005-09-08 | Microsoft Corporation | Optimizing performance of a graphics processing unit for efficient execution of general matrix operations |
US7009618B1 (en) * | 2001-07-13 | 2006-03-07 | Advanced Micro Devices, Inc. | Integrated I/O Remapping mechanism |
US20060230210A1 (en) * | 2005-03-31 | 2006-10-12 | Intel Corporation | Method and apparatus for memory interface |
US20060282604A1 (en) * | 2005-05-27 | 2006-12-14 | Ati Technologies, Inc. | Methods and apparatus for processing graphics data using multiple processing circuits |
US7206883B2 (en) * | 2003-10-15 | 2007-04-17 | Via Technologies, Inc. | Interruption control system and method |
US20070245046A1 (en) * | 2006-03-27 | 2007-10-18 | Ati Technologies, Inc. | Graphics-processing system and method of broadcasting write requests to multiple graphics devices |
US20080055321A1 (en) * | 2006-08-31 | 2008-03-06 | Ati Technologies Inc. | Parallel physics simulation and graphics processing |
US20080267256A1 (en) * | 2007-04-27 | 2008-10-30 | Kabushiki Kaisha Toshiba | Information processing apparatus and control method of processor circuit |
US20090167771A1 (en) * | 2007-12-28 | 2009-07-02 | Itay Franko | Methods and apparatuses for Configuring and operating graphics processing units |
US20090235048A1 (en) * | 2006-01-16 | 2009-09-17 | Sony Corporation | Information processing apparatus, signal transmission method, and bridge |
US7626418B1 (en) * | 2007-05-14 | 2009-12-01 | Xilinx, Inc. | Configurable interface |
US8555099B2 (en) * | 2006-05-30 | 2013-10-08 | Ati Technologies Ulc | Device having multiple graphics subsystems and reduced power consumption mode, software and methods |
Family Cites Families (77)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5712664A (en) * | 1993-10-14 | 1998-01-27 | Alliance Semiconductor Corporation | Shared memory graphics accelerator system |
US6359624B1 (en) * | 1996-02-02 | 2002-03-19 | Kabushiki Kaisha Toshiba | Apparatus having graphic processor for high speed performance |
US5999183A (en) * | 1997-07-10 | 1999-12-07 | Silicon Engineering, Inc. | Apparatus for creating a scalable graphics system with efficient memory and bandwidth usage |
JP2000222590A (en) * | 1999-01-27 | 2000-08-11 | Nec Corp | Method and device for processing image |
US6662257B1 (en) * | 2000-05-26 | 2003-12-09 | Ati International Srl | Multiple device bridge apparatus and method thereof |
US6587905B1 (en) * | 2000-06-29 | 2003-07-01 | International Business Machines Corporation | Dynamic data bus allocation |
US6606614B1 (en) * | 2000-08-24 | 2003-08-12 | Silicon Recognition, Inc. | Neural network integrated circuit with fewer pins |
US6802021B1 (en) * | 2001-01-23 | 2004-10-05 | Adaptec, Inc. | Intelligent load balancing for a multi-path storage system |
US7340555B2 (en) * | 2001-09-28 | 2008-03-04 | Dot Hill Systems Corporation | RAID system for performing efficient mirrored posted-write operations |
US20030158886A1 (en) * | 2001-10-09 | 2003-08-21 | Walls Jeffrey J. | System and method for configuring a plurality of computers that collectively render a display |
US6700580B2 (en) * | 2002-03-01 | 2004-03-02 | Hewlett-Packard Development Company, L.P. | System and method utilizing multiple pipelines to render graphical data |
US6567880B1 (en) * | 2002-03-28 | 2003-05-20 | Compaq Information Technologies Group, L.P. | Computer bridge interfaces for accelerated graphics port and peripheral component interconnect devices |
US7068278B1 (en) * | 2003-04-17 | 2006-06-27 | Nvidia Corporation | Synchronized graphics processing units |
US7093033B2 (en) * | 2003-05-20 | 2006-08-15 | Intel Corporation | Integrated circuit capable of communicating using different communication protocols |
US7119808B2 (en) * | 2003-07-15 | 2006-10-10 | Alienware Labs Corp. | Multiple parallel processor computer graphics system |
TWI284275B (en) * | 2003-07-25 | 2007-07-21 | Via Tech Inc | Graphic display architecture and control chip set therein |
US6956579B1 (en) * | 2003-08-18 | 2005-10-18 | Nvidia Corporation | Private addressing in a multi-processor graphics processing system |
US7171499B2 (en) * | 2003-10-10 | 2007-01-30 | Advanced Micro Devices, Inc. | Processor surrogate for use in multiprocessor systems and multiprocessor system using same |
US7782325B2 (en) * | 2003-10-22 | 2010-08-24 | Alienware Labs Corporation | Motherboard for supporting multiple graphics cards |
CN1890660A (en) * | 2003-11-19 | 2007-01-03 | 路西德信息技术有限公司 | Method and system for multiple 3-d graphic pipeline over a PC bus |
US7119810B2 (en) * | 2003-12-05 | 2006-10-10 | Siemens Medical Solutions Usa, Inc. | Graphics processing unit for simulation or medical diagnostic imaging |
US7289125B2 (en) * | 2004-02-27 | 2007-10-30 | Nvidia Corporation | Graphics device clustering with PCI-express |
US7424564B2 (en) * | 2004-03-23 | 2008-09-09 | Qlogic, Corporation | PCI—express slot for coupling plural devices to a host system |
US7246190B2 (en) * | 2004-04-21 | 2007-07-17 | Hewlett-Packard Development Company, L.P. | Method and apparatus for bringing bus lanes in a computer system using a jumper board |
US6985152B2 (en) * | 2004-04-23 | 2006-01-10 | Nvidia Corporation | Point-to-point bus bridging without a bridge controller |
US7663633B1 (en) * | 2004-06-25 | 2010-02-16 | Nvidia Corporation | Multiple GPU graphics system for implementing cooperative graphics instruction execution |
US7062594B1 (en) * | 2004-06-30 | 2006-06-13 | Emc Corporation | Root complex connection system |
US7721118B1 (en) * | 2004-09-27 | 2010-05-18 | Nvidia Corporation | Optimizing power and performance for multi-processor graphics processing |
TWM264547U (en) * | 2004-11-08 | 2005-05-11 | Asustek Comp Inc | Main board |
TWI274255B (en) * | 2004-11-08 | 2007-02-21 | Asustek Comp Inc | Motherboard |
US7598958B1 (en) * | 2004-11-17 | 2009-10-06 | Nvidia Corporation | Multi-chip graphics processing unit apparatus, system, and method |
US7576745B1 (en) * | 2004-11-17 | 2009-08-18 | Nvidia Corporation | Connecting graphics adapters |
US7633505B1 (en) * | 2004-11-17 | 2009-12-15 | Nvidia Corporation | Apparatus, system, and method for joint processing in graphics processing units |
US7477256B1 (en) * | 2004-11-17 | 2009-01-13 | Nvidia Corporation | Connecting graphics adapters for scalable performance |
US8066515B2 (en) * | 2004-11-17 | 2011-11-29 | Nvidia Corporation | Multiple graphics adapter connection systems |
US7451259B2 (en) | 2004-12-06 | 2008-11-11 | Nvidia Corporation | Method and apparatus for providing peer-to-peer data transfer within a computing environment |
US7275123B2 (en) * | 2004-12-06 | 2007-09-25 | Nvidia Corporation | Method and apparatus for providing peer-to-peer data transfer within a computing environment |
US7545380B1 (en) | 2004-12-16 | 2009-06-09 | Nvidia Corporation | Sequencing of displayed images for alternate frame rendering in a multi-processor graphics system |
US7372465B1 (en) * | 2004-12-17 | 2008-05-13 | Nvidia Corporation | Scalable graphics processing for remote display |
US7383412B1 (en) * | 2005-02-28 | 2008-06-03 | Nvidia Corporation | On-demand memory synchronization for peripheral systems with multiple parallel processors |
US7616207B1 (en) * | 2005-04-25 | 2009-11-10 | Nvidia Corporation | Graphics processing system including at least three bus devices |
US7793029B1 (en) * | 2005-05-17 | 2010-09-07 | Nvidia Corporation | Translation device apparatus for configuring printed circuit board connectors |
US7539801B2 (en) | 2005-05-27 | 2009-05-26 | Ati Technologies Ulc | Computing device with flexibly configurable expansion slots, and method of operation |
US7613346B2 (en) | 2005-05-27 | 2009-11-03 | Ati Technologies, Inc. | Compositing in multiple video processing unit (VPU) systems |
US7649537B2 (en) * | 2005-05-27 | 2010-01-19 | Ati Technologies, Inc. | Dynamic load balancing in multiple video processing unit (VPU) systems |
US7663635B2 (en) * | 2005-05-27 | 2010-02-16 | Ati Technologies, Inc. | Multiple video processor unit (VPU) memory mapping |
US8054314B2 (en) * | 2005-05-27 | 2011-11-08 | Ati Technologies, Inc. | Applying non-homogeneous properties to multiple video processing units (VPUs) |
JP2007008679A (en) * | 2005-06-30 | 2007-01-18 | Toshiba Corp | Paper delivering device |
US20070016711A1 (en) | 2005-07-13 | 2007-01-18 | Jet Way Information Co., Ltd. | Interfacing structure for multiple graphic |
US20070038794A1 (en) * | 2005-08-10 | 2007-02-15 | Purcell Brian T | Method and system for allocating a bus |
US7629978B1 (en) * | 2005-10-31 | 2009-12-08 | Nvidia Corporation | Multichip rendering with state control |
US7525548B2 (en) * | 2005-11-04 | 2009-04-28 | Nvidia Corporation | Video processing with multiple graphical processing units |
US8294731B2 (en) | 2005-11-15 | 2012-10-23 | Advanced Micro Devices, Inc. | Buffer management in vector graphics hardware |
US8412872B1 (en) * | 2005-12-12 | 2013-04-02 | Nvidia Corporation | Configurable GPU and method for graphics processing using a configurable GPU |
US7325086B2 (en) * | 2005-12-15 | 2008-01-29 | Via Technologies, Inc. | Method and system for multiple GPU support |
US7340557B2 (en) * | 2005-12-15 | 2008-03-04 | Via Technologies, Inc. | Switching method and system for multiple GPU support |
US7623131B1 (en) * | 2005-12-16 | 2009-11-24 | Nvidia Corporation | Graphics processing systems with multiple processors connected in a ring topology |
US7461195B1 (en) * | 2006-03-17 | 2008-12-02 | Qlogic, Corporation | Method and system for dynamically adjusting data transfer rates in PCI-express devices |
TW200737034A (en) * | 2006-03-23 | 2007-10-01 | Micro Star Int Co Ltd | Connector module of graphic card and the device of motherboard thereof |
US8130227B2 (en) | 2006-05-12 | 2012-03-06 | Nvidia Corporation | Distributed antialiasing in a multiprocessor graphics system |
US7535433B2 (en) * | 2006-05-18 | 2009-05-19 | Nvidia Corporation | Dynamic multiple display configuration |
US7480757B2 (en) * | 2006-05-24 | 2009-01-20 | International Business Machines Corporation | Method for dynamically allocating lanes to a plurality of PCI Express connectors |
JP4439491B2 (en) * | 2006-05-24 | 2010-03-24 | 株式会社ソニー・コンピュータエンタテインメント | Multi-graphics processor system, graphics processor and data transfer method |
US8103993B2 (en) * | 2006-05-24 | 2012-01-24 | International Business Machines Corporation | Structure for dynamically allocating lanes to a plurality of PCI express connectors |
US7500041B2 (en) * | 2006-06-15 | 2009-03-03 | Nvidia Corporation | Graphics processing unit for cost effective high performance graphics system with two or more graphics processing units |
US7562174B2 (en) * | 2006-06-15 | 2009-07-14 | Nvidia Corporation | Motherboard having hard-wired private bus between graphics cards |
US7412554B2 (en) * | 2006-06-15 | 2008-08-12 | Nvidia Corporation | Bus interface controller for cost-effective high performance graphics system with two or more graphics processing units |
US7619629B1 (en) * | 2006-06-15 | 2009-11-17 | Nvidia Corporation | Method and system for utilizing memory interface bandwidth to connect multiple graphics processing units |
US7616206B1 (en) * | 2006-06-16 | 2009-11-10 | Nvidia Corporation | Efficient multi-chip GPU |
JP4421593B2 (en) * | 2006-11-09 | 2010-02-24 | 株式会社ソニー・コンピュータエンタテインメント | Multiprocessor system, control method thereof, program, and information storage medium |
US20080200043A1 (en) * | 2007-02-15 | 2008-08-21 | Tennrich International Corp. | Dual display card connection means |
EP2132231B1 (en) | 2007-03-07 | 2011-05-04 | BENEO-Orafti S.A. | Natural rubber latex preservation |
US8161209B2 (en) * | 2008-03-31 | 2012-04-17 | Advanced Micro Devices, Inc. | Peer-to-peer special purpose processor architecture and method |
CN101639930B (en) * | 2008-08-01 | 2012-07-04 | 辉达公司 | Method and system for processing graphical data by a series of graphical processors |
US8892804B2 (en) * | 2008-10-03 | 2014-11-18 | Advanced Micro Devices, Inc. | Internal BUS bridge architecture and method in multi-processor systems |
US8373709B2 (en) * | 2008-10-03 | 2013-02-12 | Ati Technologies Ulc | Multi-processor architecture and method |
KR20110088538A (en) * | 2008-10-30 | 2011-08-03 | 엘에스아이 코포레이션 | Storage controller data redistribution |
- 2008
  - 2008-12-19 US US12/340,510 patent/US8373709B2/en active Active
- 2009
  - 2009-10-05 CN CN200980147694.9A patent/CN102227709B/en active Active
  - 2009-10-05 CN CN201510187635.1A patent/CN105005542B/en active Active
  - 2009-10-05 WO PCT/US2009/059594 patent/WO2010040144A1/en active Application Filing
  - 2009-10-05 JP JP2011530294A patent/JP2012504835A/en active Pending
  - 2009-10-05 KR KR1020117010206A patent/KR101533761B1/en active IP Right Grant
  - 2009-10-05 EP EP09737270.0A patent/EP2342626B1/en active Active
- 2013
  - 2013-02-11 US US13/764,775 patent/US20130147815A1/en not_active Abandoned
- 2016
  - 2016-12-09 US US15/374,739 patent/US10467178B2/en active Active
- 2018
  - 2018-07-03 US US16/027,163 patent/US20180314670A1/en not_active Abandoned
Patent Citations (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5835738A (en) * | 1994-06-20 | 1998-11-10 | International Business Machines Corporation | Address space architecture for multiple bus computer systems |
US5913045A (en) * | 1995-12-20 | 1999-06-15 | Intel Corporation | Programmable PCI interrupt routing mechanism |
US6173374B1 (en) * | 1998-02-11 | 2001-01-09 | Lsi Logic Corporation | System and method for peer-to-peer accelerated I/O shipping between host bus adapters in clustered computer network |
US6560688B1 (en) * | 1998-10-01 | 2003-05-06 | Advanced Micro Devices, Inc. | System and method for improving accelerated graphics port systems |
US20020027557A1 (en) * | 1998-10-23 | 2002-03-07 | Joseph M. Jeddeloh | Method for providing graphics controller embedded in a core logic unit |
US6317813B1 (en) * | 1999-05-18 | 2001-11-13 | Silicon Integrated Systems Corp. | Method for arbitrating multiple memory access requests in a unified memory architecture via a non unified memory controller |
US6473086B1 (en) * | 1999-12-09 | 2002-10-29 | Ati International Srl | Method and apparatus for graphics processing using parallel graphics processors |
US20040064621A1 (en) * | 2000-06-30 | 2004-04-01 | Dougherty Michael J. | Powering a notebook across a USB interface |
US7009618B1 (en) * | 2001-07-13 | 2006-03-07 | Advanced Micro Devices, Inc. | Integrated I/O Remapping mechanism |
US20030188076A1 (en) * | 2002-03-29 | 2003-10-02 | International Business Machines | Opaque memory region for I/O adapter transparent bridge |
US20030188073A1 (en) * | 2002-04-01 | 2003-10-02 | Zatorski Richard A. | System and method for controlling multiple devices via general purpose input/output (GPIO) hardware |
US20040160449A1 (en) * | 2003-02-18 | 2004-08-19 | Microsoft Corporation | Video memory management |
US6874042B2 (en) * | 2003-03-11 | 2005-03-29 | Dell Products L.P. | System and method for using a switch to route peripheral and graphics data on an interconnect |
US20040257369A1 (en) * | 2003-06-17 | 2004-12-23 | Bill Fang | Integrated video and graphics blender |
US20050050282A1 (en) * | 2003-09-02 | 2005-03-03 | Vantalon Nicolas P. | Memory reallocation and sharing in electronic systems |
US20050060490A1 (en) * | 2003-09-02 | 2005-03-17 | Wei-Chi Lu | Apparatus for multiple host access to storage medium |
US7206883B2 (en) * | 2003-10-15 | 2007-04-17 | Via Technologies, Inc. | Interruption control system and method |
US20050197977A1 (en) * | 2003-12-09 | 2005-09-08 | Microsoft Corporation | Optimizing performance of a graphics processing unit for efficient execution of general matrix operations |
US20050193171A1 (en) * | 2004-02-26 | 2005-09-01 | Bacchus Reza M. | Computer system cache controller and methods of operation of a cache controller |
US20060230210A1 (en) * | 2005-03-31 | 2006-10-12 | Intel Corporation | Method and apparatus for memory interface |
US20060282604A1 (en) * | 2005-05-27 | 2006-12-14 | Ati Technologies, Inc. | Methods and apparatus for processing graphics data using multiple processing circuits |
US20090235048A1 (en) * | 2006-01-16 | 2009-09-17 | Sony Corporation | Information processing apparatus, signal transmission method, and bridge |
US20070245046A1 (en) * | 2006-03-27 | 2007-10-18 | Ati Technologies, Inc. | Graphics-processing system and method of broadcasting write requests to multiple graphics devices |
US8555099B2 (en) * | 2006-05-30 | 2013-10-08 | Ati Technologies Ulc | Device having multiple graphics subsystems and reduced power consumption mode, software and methods |
US20080055321A1 (en) * | 2006-08-31 | 2008-03-06 | Ati Technologies Inc. | Parallel physics simulation and graphics processing |
US20080267256A1 (en) * | 2007-04-27 | 2008-10-30 | Kabushiki Kaisha Toshiba | Information processing apparatus and control method of processor circuit |
US7626418B1 (en) * | 2007-05-14 | 2009-12-01 | Xilinx, Inc. | Configurable interface |
US20090167771A1 (en) * | 2007-12-28 | 2009-07-02 | Itay Franko | Methods and apparatuses for Configuring and operating graphics processing units |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10795851B2 (en) | 2018-04-18 | 2020-10-06 | Fujitsu Client Computing Limited | Relay device and information processing system |
US20210250285A1 (en) * | 2020-02-11 | 2021-08-12 | Fungible, Inc. | Scaled-out transport as connection proxy for device-to-device communications |
US11637773B2 (en) * | 2020-02-11 | 2023-04-25 | Fungible, Inc. | Scaled-out transport as connection proxy for device-to-device communications |
US20230224247A1 (en) * | 2020-02-11 | 2023-07-13 | Fungible, Inc. | Scaled-out transport as connection proxy for device-to-device communications |
Also Published As
Publication number | Publication date |
---|---|
CN105005542B (en) | 2019-01-15 |
EP2342626A1 (en) | 2011-07-13 |
US8373709B2 (en) | 2013-02-12 |
WO2010040144A1 (en) | 2010-04-08 |
EP2342626B1 (en) | 2015-07-08 |
CN102227709B (en) | 2015-05-20 |
US20130147815A1 (en) | 2013-06-13 |
US20170235700A1 (en) | 2017-08-17 |
JP2012504835A (en) | 2012-02-23 |
KR101533761B1 (en) | 2015-07-03 |
CN105005542A (en) | 2015-10-28 |
CN102227709A (en) | 2011-10-26 |
KR20110067149A (en) | 2011-06-21 |
US20100088453A1 (en) | 2010-04-08 |
US10467178B2 (en) | 2019-11-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10467178B2 (en) | Peripheral component | |
US9977756B2 (en) | Internal bus architecture and method in multi-processor systems | |
US8161209B2 (en) | Peer-to-peer special purpose processor architecture and method | |
US7340557B2 (en) | Switching method and system for multiple GPU support | |
US7325086B2 (en) | Method and system for multiple GPU support | |
US8380943B2 (en) | Variable-width memory module and buffer | |
US6950910B2 (en) | Mobile wireless communication device architectures and methods therefor | |
KR100826740B1 (en) | Multi-graphics processor system, graphics processor and rendering method | |
US8417838B2 (en) | System and method for configurable digital communication | |
US20090300245A1 (en) | Providing a peripheral component interconnect (PCI)-compatible transaction level protocol for a system on a chip (SoC) | |
US10282341B2 (en) | Method, apparatus and system for configuring a protocol stack of an integrated circuit chip | |
CN103890745A (en) | Integrating intellectual property (Ip) blocks into a processor | |
US20130194881A1 (en) | Area-efficient multi-modal signaling interface | |
US20070233930A1 (en) | System and method of resizing PCI Express bus widths on-demand | |
KR20140078161A (en) | PCI express switch and computer system using the same | |
US6791554B1 (en) | I/O node for a computer system including an integrated graphics engine | |
US6857033B1 (en) | I/O node for a computer system including an integrated graphics engine and an integrated I/O hub | |
KR100773932B1 (en) | A data alignment chip for camera link board |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
 | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
 | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |