US20220103489A1 - Re-purposing byte enables as clock enables for power savings - Google Patents
- Publication number
- US20220103489A1 (U.S. application Ser. No. 17/548,398)
- Authority
- US
- United States
- Prior art keywords
- packet
- partition
- given partition
- enable signal
- destination
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4004—Coupling between buses
- G06F13/4022—Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/325—Power saving in peripheral device
- G06F1/3253—Power saving in bus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/325—Power saving in peripheral device
- G06F1/3275—Power saving in memory, e.g. RAM, cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F1/00—Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
- G06F1/26—Power supply means, e.g. regulation thereof
- G06F1/32—Means for saving power
- G06F1/3203—Power management, i.e. event-based initiation of a power-saving mode
- G06F1/3234—Power saving characterised by the action undertaken
- G06F1/3287—Power saving characterised by the action undertaken by switching off individual functional units in the computer system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4004—Coupling between buses
- G06F13/4009—Coupling between buses with data restructuring
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F13/00—Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
- G06F13/38—Information transfer, e.g. on bus
- G06F13/40—Bus structure
- G06F13/4004—Coupling between buses
- G06F13/4009—Coupling between buses with data restructuring
- G06F13/4018—Coupling between buses with data restructuring with data-width conversion
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/42—Centralised routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/25—Routing or path finding in a switch fabric
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/30—Peripheral units, e.g. input or output ports
- H04L49/3072—Packet splitting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L49/00—Packet switching elements
- H04L49/35—Switches specially adapted for specific applications
- H04L49/356—Switches specially adapted for specific applications for storage area networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L69/00—Network arrangements, protocols or services independent of the application payload and not provided for in the other groups of this subclass
- H04L69/22—Parsing or analysis of headers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04Q—SELECTING
- H04Q3/00—Selecting arrangements
- H04Q3/64—Distributing or queueing
- H04Q3/68—Grouping or interlacing selector groups or stages
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- a variety of computing devices utilize heterogeneous integration, which integrates multiple types of processing units for providing system functionality.
- the multiple functions include audio/video (A/V) data processing, other high data parallel applications for the medicine and business fields, processing instructions of a general-purpose instruction set architecture (ISA), digital, analog, mixed-signal and radio-frequency (RF) functions, and so forth.
- Some computing devices include three-dimensional integrated circuits (3D ICs) that utilize die-stacking technology as well as silicon interposers, through silicon vias (TSVs) and other mechanisms to vertically stack and electrically connect two or more dies in a system-in-package (SiP).
- each of these processing units is a source in the computing system capable of generating read requests and write requests for data.
- each of the sources is also capable of being a targeted destination for requests.
- the data access requests and corresponding data, coherency probes, interrupts and other communication messages generated by sources for targeted destinations are typically transferred through a communication fabric (or fabric).
- the fabric reduces latency by having a relatively high number of physical wires available for transporting packets between sources and destinations. The data transport of packets across the wires of the fabric and the toggling of nodes within storage elements, queues, control logic and so on in the fabric increase power consumption for the computing system.
- FIG. 1 is a block diagram of one embodiment of a computing system.
- FIG. 2 is a block diagram of one embodiment of a computing system.
- FIG. 3 is a flow diagram of one embodiment of a method for efficient data transfer in a computing system by a routing component.
- FIG. 4 is a flow diagram of one embodiment of a method for efficient data transfer in a computing system by a source.
- FIG. 5 is a flow diagram of one embodiment of a method for identifying packet types for efficient data transfer in a computing system.
- FIG. 6 is a flow diagram of one embodiment of a method for efficient data transfer in a computing system by a destination.
- FIG. 7 is a block diagram of one embodiment of a computing system.
- a computing system includes one or more clients for processing applications.
- the clients include a general-purpose central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a multimedia engine, an input/output (I/O) device and so forth.
- Each of the clients is capable of generating data access requests.
- the clients are referred to as “sources” when the clients generate and send packets, which include data access requests, payload data, probe requests, coherency commands or other communication to send to a targeted destination.
- the clients and system memory are referred to as “destinations” when the clients and system memory are targets of packets generated by sources.
- the sources send packets to destinations through a communication fabric (or fabric).
- interconnections in the fabric are bus architectures, crossbar-based architectures, network-on-chip (NoC) communication subsystems, communication channels between dies, router switches with arbitration logic, repeaters, silicon interposers used to stack chips side-by-side, through silicon vias (TSVs) used to vertically stack special-purpose dies on top of processor dies, and so on.
- sources divide payload data into partitions such as byte, word, double-word, and so on.
- partition enable signals for the partitions of payload data are generated.
- the partition enable signals in field 156 are byte enable signals.
- the sources generate the partition enable signals (or enable signals) to indicate which partitions include valid data of the payload data.
- each of the partitions includes valid data for a read response packet. Therefore, the source asserts each one of the enable signals corresponding to the multiple partitions of the payload data.
- each of the partitions also includes valid data for a full write data packet and for a cache victim packet used to send previously cached data to system memory.
- the source negates one or more enable signals to indicate which partitions include invalid data of the payload data. For example, one or more of the partitions include invalid data for a partial write data packet.
- the source negates an enable signal for a particular partition when the source determines a type of the packet indicates the particular partition should have an associated asserted enable signal in the packet, but the source also determines the particular partition includes a particular data pattern. For example, a particular partition of read response payload data includes the particular pattern. Although a read response data packet typically has each one of the multiple enable signals asserted, the source negates the enable signal for the particular partition. Rather than transport the particular data pattern throughout the fabric, the negated enable signal indicates that the particular partition should store the particular pattern without the particular pattern actually being transported through the fabric.
- the destination receives the read response data packet, and the destination determines both the type of the packet and that the particular partition has an associated negated enable signal in the packet.
- the type of the read response data packet indicates the particular partition should have an associated asserted enable signal in the packet. Therefore, the destination interprets the negated enable signal as indicating the particular partition should include the particular data pattern. Accordingly, the destination inserts the particular data pattern for the particular partition when storing the read response payload data.
- one or more routing components receive and send the packet within the fabric.
- examples of the routing component are router switches, repeater blocks and so forth.
- the routing component receives the packet and determines one or more partitions of payload data have associated negated enable signals. Accordingly, the routing component disables a storage element of the routing component assigned to store data of partitions of the payload data associated with the negated enable signals. Later, when the routing component sends the packet to a next routing component or to the destination, the routing component sends the negated enable signal and a previous value stored in each storage element assigned to store data of partitions of the payload data associated with the negated enable signals. Since the clock signal was disabled, these storage elements did not load new values.
- clock gating logic in the routing component uses the partition enable signals directly as a conditional clock gating control signal.
- source 110 sends a packet 150 to the destination 140 through a routing component 120 .
- the source 110 is a client in the computing system 100 such as a general-purpose central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a multimedia engine, an input/output (I/O) device and so forth.
- the destination 140 is system memory or one of the examples of a client in the computing system 100 .
- Examples of the routing component 120 are a repeater block, a network switch, or other component of a communication fabric.
- the computing system 100 includes any number of each of these blocks in other implementations.
- the blocks 110 - 140 of the computing system 100 are individual dies on an integrated circuit (IC), such as a system-on-a-chip (SOC).
- the blocks 110 - 140 are individual dies in a system-in-package (SiP) or a multi-chip module (MCM).
- Other blocks are not shown for ease of illustration such as a power controller or a power management unit, clock generating sources, link interfaces for communication with any other processing nodes, and memory controllers for interfacing with system memory.
- the source 110 generates and sends packets.
- the source 110 and the destination 140 use packets for communicating data access requests, payload data, probe requests, coherency commands, and so forth.
- the packet generation logic 112 (or logic 112 ) generates the packet 150 .
- Packet 150 includes multiple fields 152 - 158 . Although the fields 152 - 158 are shown in a particular contiguous order, in other embodiments, the packet 150 uses another storage arrangement. In other embodiments, packet 150 includes one or more other fields not shown.
- the header 152 stores one or more of commands, source and destination identifiers, process and thread identifiers, timestamps, parity and checksum and other data integrity information, priority levels and/or quality of service parameters, and so forth. In other embodiments, one or more of these fields are separated from the header 152 and located elsewhere in the packet 150 . Examples of other fields not shown in the packet 150 are a virtual channel identifier, a response type for response packets, an indication of a transaction offset used when a read response for a large read request is divided into multiple data packets, an indication for response packets that represents whether the response packet includes a single response or multiple responses, and an indication of a number of credits for data packets, request packets and response packets. Other examples of fields stored in packet 150 are possible and contemplated in other embodiments.
- the address 154 stores an indication of a target address associated with the command in the header 152 .
- the field 156 stores enable signals associated with the partitions of the payload data stored in field 158 .
- the source 110 divides payload data into partitions. Examples of the partition size are a byte, a word (4 bytes), a dual-word (8 bytes), and so on.
- the partition enable logic 114 (or logic 114 ) generates the partition enable signals stored in field 156 . When the partition size is a byte, the partition enable signals in field 156 are byte enable signals. In some embodiments, the logic 114 generates the partition enable signals (or enable signals) to indicate which partitions include valid data of the payload data stored in field 158 .
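As a rough data-model sketch, the fields 152 - 158 of packet 150 might be represented as follows. The field names and Python types here are illustrative assumptions; the patent does not fix an encoding, width, or field ordering.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """Illustrative sketch of packet 150 (fields 152-158).

    All names and types are assumptions for illustration only; the
    patent allows other storage arrangements and additional fields."""
    header: dict    # field 152: command, source/destination IDs, etc.
    address: int    # field 154: target address for the command
    enables: list   # field 156: one enable signal per payload partition
    payload: list   # field 158: payload data partitions (e.g. bytes)
```

One natural invariant of this model is that field 156 carries exactly one enable per partition in field 158.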
- each of the partitions includes valid data for a read response packet.
- the command and/or the packet type is stored in the header 152 . Therefore, the logic 114 asserts each one of the enable signals in the field 156 corresponding to the multiple partitions of the payload data in field 158 .
- each of the partitions includes valid data for a full write data packet and for a cache victim packet used to send previously cached data to system memory.
- the logic 114 negates one or more enable signals in field 156 to indicate which partitions include invalid data of the payload data in field 158 .
- one or more of the partitions include invalid data for a partial write data packet.
- a signal is considered to be “asserted” when the signal has a value used to enable logic and turn on transistors so that they conduct current.
- for an n-type metal oxide semiconductor (NMOS) transistor, an asserted value is a Boolean logic high value or a Boolean logic high level: when an NMOS transistor receives a Boolean logic high level on its gate terminal, the NMOS transistor is enabled, or otherwise turned on, and is capable of conducting current.
- for a p-type MOS (PMOS) transistor, an asserted value is a Boolean logic low level: when a PMOS transistor receives a Boolean logic low level on its gate terminal, the PMOS transistor is enabled, or otherwise turned on, and the PMOS transistor is capable of conducting current.
- a signal is considered to be “negated” when the signal has a value used to disable logic and turn off transistors.
- the logic 114 negates an enable signal in the field 156 for a particular partition in the field 158 when the logic 114 determines a type of the packet indicates the particular partition should have an associated asserted enable signal in the packet, but the logic 114 also determines the particular partition includes a particular data pattern. For example, a particular partition of read response payload data includes the particular pattern. One example of the particular pattern is all zeroes in the particular partition. When the partition size is a byte, the particular partition includes eight zeroes. Other data patterns are possible and contemplated. Although a read response data packet typically has each one of the multiple enable signals asserted, the logic 114 negates the enable signal in field 156 for the particular partition in field 158 . Rather than transport the particular data pattern through the routing component 120 , the negated enable signal indicates that the routing component 120 should transport the packet 150 without the actual value of the particular partition.
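The enable-negation step performed by logic 114 can be sketched behaviorally in Python. This is a model of the described behavior, not the patented circuit, and it assumes byte-sized partitions, all zeroes as the particular pattern, and illustrative packet-type names (`read_response`, `full_write`, `cache_victim`).

```python
def generate_byte_enables(payload: bytes, packet_type: str) -> list:
    """Behavioral sketch of partition-enable generation (logic 114),
    assuming byte partitions and 0x00 as the special pattern.

    For packet types whose partitions are all nominally valid, a byte
    enable is re-purposed: negating it signals 'this byte is all
    zeroes' rather than 'this byte is invalid', so the zero byte need
    not be transported through the fabric."""
    dense_types = {"read_response", "full_write", "cache_victim"}
    if packet_type in dense_types:
        # Negate the enable for any byte matching the pattern (0x00).
        return [b != 0x00 for b in payload]
    # Sparse types (e.g. partial writes) keep caller-defined enable
    # semantics; conservatively assert every enable here.
    return [True] * len(payload)
```

For example, a read response payload of `b"\x12\x00\x34\x00"` yields negated enables for its two zero bytes, even though a read response nominally asserts every enable.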
- the interface 122 of the routing component 120 receives the packet 150 .
- the interface 122 includes the storage elements 134 for receiving and storing the packet 150 .
- the interface 122 includes impedance matching circuitry when the distance from the source 110 is appreciable.
- the interface 122 includes wires to transfer the received packet to the storage elements 134 .
- the packet 124 generally represents a packet received by the routing component 120 such as packet 150 . Therefore, packet 124 includes the same fields described earlier for packet 150 .
- the routing component 120 includes the clock gating logic 132 (or logic 132 ) for enabling and disabling the clock signal 130 to one or more of the storage elements 134 .
- the storage elements 134 include one or more of registers, flip-flop circuits, content addressable memory (CAM), random access memory (RAM), and so forth.
- the logic 132 uses partition enable signals 126 from the packet 124 to conditionally enable and disable the clock signal 130 to one or more of the storage elements 134 . For example, the logic 132 disables the clock signal 130 for each one of the storage elements 134 assigned to store data of partitions of the payload data associated with negated enable signals of the partition enable signals 126 . Therefore, the logic 132 uses the partition enable signals 126 as clock enable signals. When the partition size is a byte, the logic 132 uses byte enable signals 126 as clock enable signals.
- the interface 136 sends the packet information stored in the storage elements 134 .
- the interface 136 includes simply wires, one or more logic gate buffers, or other circuitry for transmitting data.
- the interface 136 sends the negated enable signals of the partition enable signals 126 and a previous value stored in each storage element assigned to store data of partitions of the payload data associated with the negated enable signals. Since the logic 132 disabled the clock signal 130 for particular ones of the storage elements 134, these particular storage elements did not load new values. The previous values from an earlier clock cycle are still stored in these particular storage elements and the interface 136 sends these previous values to the destination 140.
- the original values sent by the source 110 of the particular partitions of the payload data associated with negated partition enable signals of the signals 126 are not stored or transported by the routing component 120 . Additionally, these storage elements do not consume power associated with loading new values, and their conditional clock signals do not toggle. In contrast, other ones of the storage elements 134 store the header 152 , address 154 and the partition enable signals 156 of the packet 124 . These storage elements receive a version of the clock signal 130 , which is not qualified by the partition enable signals 126 .
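The load-and-forward behavior of the clock-gated storage elements 134 can be modeled behaviorally as follows. The class name and structure are assumptions: each partition's register loads a new value only when its partition enable, re-purposed as a clock enable by logic 132, is asserted; otherwise it retains and forwards its stale previous value.

```python
class GatedRegisterStage:
    """Behavioral sketch of payload storage elements with per-partition
    clock gating (logic 132). A negated enable gates the clock, so the
    register keeps its previous value; that stale value is forwarded
    along with the negated enable for the destination to interpret."""
    def __init__(self, num_partitions: int):
        self.regs = [0x00] * num_partitions  # previous cycle's contents

    def clock(self, payload, enables):
        for i, en in enumerate(enables):
            if en:
                self.regs[i] = payload[i]   # clock enabled: load new value
            # else: clock gated; the flop holds its old value and no
            # power is spent toggling it or its conditional clock
        return list(self.regs), list(enables)

stage = GatedRegisterStage(4)
stage.clock([0xAA, 0xBB, 0xCC, 0xDD], [True, True, True, True])
out, en = stage.clock([0x11, 0x00, 0x22, 0x00], [True, False, True, False])
# Gated partitions 1 and 3 still hold their previous values (0xBB, 0xDD),
# and the negated enables travel onward with the packet.
```

Note this is a functional model only; the real savings come from the flip-flops' clock pins not toggling, which a Python sketch cannot capture.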
- the destination 140 receives the packet information, and the destination 140 determines both the type of the packet and whether a particular partition of the payload data has an associated negated enable signal in the packet.
- the destination 140 determines that one or more of the partitions require insertion of a particular data pattern.
- when the packet type is a full size write packet, a read response packet, or a cache victim packet, typically each of the partition enable signals is asserted. Therefore, the destination 140 interprets a negated enable signal as indicating the particular partition should include the particular data pattern. Accordingly, the payload data assembler 142 (or assembler 142 ) inserts the particular data pattern for the particular partition when storing the read response payload data.
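The destination-side reconstruction by assembler 142 might be sketched as below. The function name is hypothetical, and the sketch again assumes byte partitions, an all-zeroes pattern, and illustrative packet-type names.

```python
def assemble_payload(partitions, enables, packet_type, pattern=0x00):
    """Behavioral sketch of payload data assembler 142.

    For dense packet types (every enable nominally asserted), a negated
    enable means 'insert the special pattern here'; for sparse types it
    keeps its original meaning of invalid data."""
    dense_types = {"read_response", "full_write", "cache_victim"}
    out = []
    for data, en in zip(partitions, enables):
        if en:
            out.append(data)          # valid partition, transported as-is
        elif packet_type in dense_types:
            out.append(pattern)       # reconstruct the elided zero byte
        else:
            out.append(None)          # genuinely invalid (partial write)
    return out
```

Applied to the stale values forwarded by a routing component, this restores the zero bytes the source elided without those bytes ever toggling fabric storage elements.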
- each of the source 110 , the destination 140 , the routing component 120 , and the logic 112 , 114 and 132 and the assembler 142 is implemented with one of hardware circuitry, software, or a combination of hardware and software.
- the routing component includes multiple stages of storage elements in addition to multiple types of queues for storing packets based on packet type.
- the routing component 120 uses multiple queues for storing read response data, write data, read access requests, write access requests, probe requests, and so forth.
- the routing component 120 uses arbitration logic for determining an order for sending packets to the destination 140 via the interface 136 .
- referring to FIG. 2, a generalized block diagram of one embodiment of a computing system 200 is shown. Circuitry and logic previously described is numbered identically. Although the storage elements 234 of the routing component 120 are shown as flip-flop circuits, other storage elements are possible and contemplated.
- the packet 224 generally represents a packet received by the routing component 120 such as packet 150 . Therefore, packet 224 includes the same fields described earlier for packet 150 .
- the clock gating logic 232 uses partition enable signals from the packet 224 to conditionally enable and disable the clock signal 130 to one or more of the storage elements 234 .
- the logic 232 disables the clock signal 130 for each one of the storage elements 234 assigned to store data of partitions of the payload data associated with negated enable signals of the partition enable signals.
- the logic 232 uses byte enable signals as clock enable signals.
- the packet 224 includes N partitions with N being a positive, non-zero integer.
- Each of the partitions includes M bits with M being a positive, non-zero integer.
- examples of the partition size are a byte, a word, a dual-word, and so on.
- the logic 232 uses the Partition N Enable signal to conditionally enable the clock signal 130 for associated ones of the storage elements 234 .
- when the partition size is a byte, the Partition N Enable is a byte enable signal for the M bits of the Partition N.
- the logic 232 uses the Partition N Enable signal as a clock enable signal for the payload data from Partition N, Bit 0 to Partition N, Bit M.
- referring to FIG. 3, one embodiment of a method 300 for efficient data transfer in a computing system by a routing component is shown.
- the steps in this embodiment are shown in sequential order.
- one or more of the elements described are performed concurrently, in a different order than shown, or are omitted entirely.
- Other additional elements are also performed as desired. Any of the various systems or apparatuses described herein are configured to implement methods 300 and 400 - 600 .
- Sources generate packets and send the packets to destinations through a communication fabric of a computing system.
- the communication fabric includes one or more routing components.
- An interface of a particular routing component receives a packet (block 302 ).
- Control logic of the routing component is implemented by hardware circuitry, software, or a combination of hardware and software.
- the control logic analyzes the received packet. For example, the control logic decodes a command in the header. If the packet is a data packet storing payload data, then the control logic inspects partition enable signals corresponding to the partitions of the payload data.
- if no partitions have negated enable signals (“no” branch of the conditional block 304 ), then the control logic maintains a clock signal for storage elements assigned to store data of the partitions (block 306 ). However, if any of the partitions have a negated enable signal (“yes” branch of the conditional block 304 ), then the control logic disables a clock signal for storage elements assigned to store data of these partitions (block 308 ). Therefore, the control logic uses the partition enable signals as clock enable signals. Afterward, the control logic conveys the packet to a destination via an interconnect such as the communication fabric (block 310 ).
- a source of one or more sources in a computing system generates packets in a computing system (block 402 ).
- sources are a general-purpose central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a multimedia engine, an input/output (I/O) device and so forth.
- the source determines a type of the packet (block 404 ). Examples of packet types are read request packets, read response packets, full size write request packets, partial size write request packets, write data packets, cache victim packets, probe request packets, coherency command packets and so forth.
- sources divide write requests into a write control packet and a write data packet. The source inserts a write command into the write control packet and inserts write data in a separate write data packet corresponding to the write command. In an embodiment, sources insert a read request command in a read control packet. Later, the destination receives the read control packet and inserts a read response command in a read response packet. The destination also inserts response data in a separate read data packet.
- a partial size write request packet is one example of a packet type with sparse asserted enable signals for partitions of payload data. For example, when a source desires to update two words (8 bytes) of a 64-byte cache line, the partial size write request packet includes eight asserted partition enable signals for the eight bytes to be updated. The other 56 partition enable signals are negated. In another example, the source desires to update all of the 64-byte cache line except the last word (4 bytes). Therefore, the partial size write request packet includes 60 asserted partition enable signals for the sixty bytes to be updated. The remaining 4 partition enable signals are negated. For other packet types, such as read response packets, cache victim packets and full size write packets, each of the partition enable signals is asserted. There are no negated partition enable signals for these packet types.
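The enable counts in these two examples can be reproduced with a small helper, assuming a 64-byte cache line, byte partitions, and a contiguous update region; the function name is hypothetical.

```python
CACHE_LINE_BYTES = 64

def partial_write_enables(offset: int, nbytes: int) -> list:
    """Sketch of byte-enable generation for a partial size write:
    assert exactly the enables of the bytes being updated."""
    return [offset <= i < offset + nbytes for i in range(CACHE_LINE_BYTES)]

# Updating two words (8 bytes): 8 asserted enables, 56 negated.
en_two_words = partial_write_enables(offset=16, nbytes=8)
# Updating all but the last word (4 bytes): 60 asserted, 4 negated.
en_most = partial_write_enables(offset=0, nbytes=60)
```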
- the source determines whether any partitions contain a particular data pattern.
- One example of the particular data pattern is all zeroes in the partition. Other examples of the particular data pattern are possible and contemplated.
- If the source determines no partition contains the particular data pattern (“no” branch of the conditional block 410 ), then control flow of method 400 moves to block 408 where the source maintains the values for the partition enable signals. If the source determines any partition contains the particular data pattern (“yes” branch of the conditional block 410 ), then the source negates the enable signals for the partitions containing the particular data pattern (block 412 ). In some embodiments, the source asserts an indication in the packet header specifying that the packet has a packet type associated with no sparse asserted enable signals for partitions of the payload data.
- the destination uses the indication, rather than decode the packet command in the header, to determine whether the packet type is associated with no sparse asserted enable signals for partitions of the payload data.
- the source transmits the packet to a destination via an interconnect such as a communication fabric (block 414 ).
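Blocks 410 through 412 of method 400 amount to a per-partition comparison against the particular data pattern. A minimal sketch, assuming byte-sized partitions and the all-zeroes pattern; the function and constant names are illustrative, not the disclosed implementation.

```python
ZERO_BYTE = 0x00  # one example of the "particular data pattern": all zeroes

def negate_zero_partitions(partitions, enables):
    """For partitions whose enable is asserted but whose data matches the
    particular pattern, negate the enable so the pattern need not be
    transported through the fabric (blocks 410 and 412 of method 400)."""
    out = []
    for data, en in zip(partitions, enables):
        if en and data == ZERO_BYTE:
            out.append(0)   # negate: destination will re-insert the pattern
        else:
            out.append(en)
    return out

# A read response normally asserts every enable; zero-valued bytes get negated.
payload = [0x11, 0x00, 0x00, 0xAB]
enables = [1, 1, 1, 1]
assert negate_zero_partitions(payload, enables) == [1, 0, 0, 1]
```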
- Control logic receives a packet.
- the control logic is located within a source or a destination of a computing system.
- the control logic is implemented by hardware circuitry, software or a combination of hardware and software.
- the control logic inspects the packet (block 502 ).
- the control logic analyzes the header of the packet to determine the packet type.
- If the control logic determines that the packet type is a read response type (“yes” branch of the conditional block 504 ), or a full size write request type (“yes” branch of the conditional block 506 ), or a cache victim type (“yes” branch of the conditional block 508 ), then the control logic determines the packet does not include sparse enable signals for partitions of the packet (block 510 ). Otherwise, the control logic determines the packet does include sparse enable signals for partitions of the packet (block 512 ). These results are used later by the control logic for determining whether to update the partition enable signals as described earlier in method 400 .
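The decision chain of conditional blocks 504 through 508 reduces to a membership test on the packet type. A sketch under the assumption that packet types are plain strings; the names below are illustrative, not the protocol's actual encoding.

```python
# Packet types whose partition enables are always fully asserted
# (blocks 504-508): negated enables on these types can safely be
# re-purposed to flag the particular data pattern.
DENSE_TYPES = {"read_response", "full_write", "cache_victim"}

def has_sparse_enables(packet_type: str) -> bool:
    """Return True when the packet type may legitimately carry sparse
    (partially negated) partition enable signals, e.g. a partial write."""
    return packet_type not in DENSE_TYPES

assert not has_sparse_enables("read_response")
assert has_sparse_enables("partial_write")
```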
- a destination of one or more destinations in a computing system receives packets (block 602 ).
- examples of destinations are system memory, a general-purpose central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a multimedia engine, an input/output (I/O) device and so forth.
- the destination determines a type of the packet (block 604 ).
- examples of packet types are read request packets, read response packets, full size write request packets, partial size write request packets, write data packets, cache victim packets, probe request packets, coherency command packets and so forth.
- If the packet type indicates sparse asserted enable signals for partitions of payload data (“yes” branch of the conditional block 606 ), then the destination maintains the received data for the partitions of the packet (block 608 ). However, if the packet type does not indicate sparse asserted enable signals for partitions of payload data (“no” branch of the conditional block 606 ), then the destination determines whether any partitions have an associated enable signal that is negated. If there are no negated partition enable signals for the payload data (“no” branch of the conditional block 610 ), then control flow of method 600 moves to block 608 where the destination maintains the received data for the partitions of the packet.
- If the destination determines there are any negated partition enable signals for the payload data (“yes” branch of the conditional block 610 ), then the destination replaces these partitions with a particular data pattern and asserts the corresponding enable signals (block 612 ).
- One example of the particular data pattern is all zeroes in the partition. Other examples of the particular data pattern are possible and contemplated.
- the destination processes the packet with the data in its valid partitions (block 614 ). For example, the destination performs a write operation for partitions with asserted partition enable signals.
- the data of these partitions updates the data stored at memory locations at the destination pointed to by an address stored in the packet.
- the particular data pattern is used although this particular data pattern was not transported by the communication fabric between the source and the destination.
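Block 612 of method 600 is the inverse of the source-side negation: for packet types that never legitimately carry sparse enables, a negated enable is reinterpreted as "insert the particular data pattern here". A sketch, assuming byte partitions and the all-zeroes pattern; the received values for negated partitions are whatever stale values the fabric delivered, since the pattern itself was never transported.

```python
ZERO_BYTE = 0x00  # the particular data pattern re-inserted at the destination

def restore_zero_partitions(partitions, enables, packet_is_dense):
    """For dense packet types (no legitimate sparse enables), a negated
    enable means the partition holds the particular pattern: insert the
    pattern and re-assert the enable (block 612 of method 600)."""
    if not packet_is_dense:
        return list(partitions), list(enables)  # sparse type: keep as-is
    data, ens = [], []
    for d, en in zip(partitions, enables):
        if en:
            data.append(d)          # valid partition: keep received data
        else:
            data.append(ZERO_BYTE)  # pattern never crossed the fabric
        ens.append(1)
    return data, ens

# Round trip: the payload is recovered even though the stale values
# 0x99 (not zeroes) were what actually arrived for the gated partitions.
data, ens = restore_zero_partitions([0x11, 0x99, 0x99, 0xAB], [1, 0, 0, 1], True)
assert data == [0x11, 0x00, 0x00, 0xAB] and ens == [1, 1, 1, 1]
```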
- the computing system 700 includes communication fabric 710 between memory controller 770 and clients 790 .
- Memory controller 770 is used for interfacing with memory 780 .
- the communication fabric 710 (or fabric 710 ) includes multiple types of blocks for routing control and data packets.
- fabric 710 includes multiple routing components 720 , 762 , 764 and 766 in addition to routing component 750 and routing network 760 .
- Each of the sources and destinations and fabric 710 in computing system 700 supports a particular interconnect protocol.
- Packets transported in fabric 710 include the same fields described earlier for packet 150 (of FIG. 1 ).
- One or more of the blocks in the fabric 710 include clock gating logic that uses partition enable signals as clock enable signals for disabling storage elements used to store partitions of payload data.
- routing component 720 is shown to include clock gating logic 742 , which receives partition enable signals 744 .
- the partition enable signals 744 are byte enable signals.
- the clock gating logic 742 reduces power consumption for the computing system 700 by disabling clock signals.
- the clock gating logic 742 has the equivalent functionality of clock gating logic 132 (of FIG. 1 ) and clock gating logic 232 (of FIG. 2 ).
- clients 790 are individual dies on an integrated circuit (IC), such as a system-on-a-chip (SOC). In other embodiments, clients 790 are individual dies in a system-in-package (SiP) or a multi-chip module (MCM). In yet other embodiments, clients 790 are individual dies or chips on a printed circuit board. In various embodiments, clients 790 are used in a smartphone, a tablet computer, a gaming console, a smartwatch, a desktop computer and so forth. Each of the clients 792 , 794 and 796 is a functional block or unit, a processor core or a processor.
- the computing system 700 includes a general-purpose central processing unit (CPU) 792 , a highly parallel data architecture processor such as a graphics processing unit (GPU) 794 , and a multimedia engine 796 .
- other examples of clients are possible, such as a display unit, one or more input/output (I/O) peripheral devices, and one or more hubs used for interfacing to a multimedia player, a display unit and others.
- the hubs are clients in computing system 700 .
- Memory controller 770 includes queues for storing requests and responses. Additionally, memory controller 770 includes control logic for grouping requests to be sent to memory 780 , sending the requests based on timing specifications of the memory 780 and supporting any burst modes. Memory controller 770 also includes status and control registers for storing control parameters. In various embodiments, each of routing component 720 and memory controller 770 reorders received memory access requests for efficient out-of-order servicing. The reordering is based on one or more of a priority level, a quality of service (QoS) parameter, an age of a packet for a memory access request, and so forth. Although a single memory controller 770 is shown, in other embodiments, computing system 700 includes multiple memory controllers, each supporting one or more memory channels.
- memory 780 includes row buffers for storing the contents of a row of dynamic random access memory (DRAM) being accessed.
- an access of the memory 780 includes a first activation or an opening stage followed by a stage that copies the contents of an entire row into a corresponding row buffer. Afterward, there is a read or write column access in addition to updating related status information.
- memory 780 includes multiple banks. Each one of the banks includes a respective row buffer. The accessed row is identified by an address, such as a DRAM page address, in the received memory access request from one of the clients 790 .
- the row buffer stores a page of data. In some embodiments, a page is 4 kilobytes (KB) of contiguous storage of data. However, other page sizes are possible and contemplated.
- memory 780 includes multiple three-dimensional (3D) memory dies stacked on one another.
- Die-stacking technology is a fabrication process that enables the physical stacking of multiple separate pieces of silicon (integrated chips) together in a same package with high-bandwidth and low-latency interconnects.
- the dies are stacked side by side on a silicon interposer, or vertically, directly on top of one another.
- One configuration for the SiP is to stack one or more memory chips next to and/or on top of a processing unit.
- an up-to-date (most recent) copy of data is brought from the memory 780 into one or more levels of a cache memory subsystem of one of the clients 790 .
- the client updates the copy of the data and now contains the up-to-date (most recent) copy of the data.
- the client does not modify the data retrieved from memory 780 , but uses it to process instructions of one or more applications and update other data.
- the client fills its cache memory subsystem with other data as it processes instructions of other applications and evicts the particular data stored at the specified memory address.
- the copy of the data is returned from the corresponding one of the clients 790 to the memory 780 by a write access request to update the stored copy in the memory 780 .
- fabric 710 transfers data back and forth between clients 790 and between memory 780 and clients 790 .
- Routing components 762 , 764 and 766 support communication protocols with clients 792 , 794 and 796 , respectively.
- each one of routing components 720 , 750 , 762 , 764 and 766 communicates with a single client as shown.
- one or more of routing components 720 , 750 , 762 , 764 and 766 communicates with multiple clients and tracks packets with a client identifier.
- routing components 720 , 750 , 762 , 764 and 766 include at least queues for storing request packets and response packets, selection logic for arbitrating between received packets before sending packets to network 760 and logic for building packets, decoding packets and supporting a communication protocol with the routing network 760 .
- routing components 720 , 750 , 762 , 764 and 766 have updated mappings between address spaces and memory channels.
- routing components 720 , 750 , 762 , 764 and 766 and memory controller 770 include hardware circuitry and/or software for implementing algorithms to provide their desired functionality.
- fabric 710 includes control logic, status and control registers and other storage elements for queuing requests and responses, storing control parameters, following one or more communication and network protocols, and efficiently routing traffic between sources and destinations on one or more buses.
- routing network 760 utilizes multiple switches in a point-to-point (P2P) ring topology.
- routing network 760 utilizes network switches with programmable routing tables in a cluster topology.
- routing network 760 utilizes a combination of topologies.
- arbitration unit 730 includes read queue 732 , write queue 736 and selection logic 740 . Although two queues are shown, in various embodiments, arbitration unit 730 includes any number of queues for storing memory access responses. Selection logic 740 selects between selected read responses 734 and selected write responses 738 to send as selected responses 742 to a respective one of clients 790 via fabric 710 . In one embodiment, arbitration unit 730 receives memory access responses from memory controller 770 via interface 722 . In some embodiments, arbitration unit 730 stores received read responses in read queue 732 and stores received write responses in write queue 736 . In other embodiments, the received read responses and received write responses are stored in a same queue.
- arbitration unit 730 reorders the received memory access responses for efficient out-of-order servicing. Reordering is based on one or more of a priority level, a quality of service (QoS) parameter, an age of a packet for a memory access request, and so forth.
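The reordering criteria listed above (priority level, QoS parameter, packet age) compose naturally into a single sort key. A sketch assuming simple numeric fields; the field names and the exact precedence order are illustrative assumptions, not from the disclosure.

```python
def reorder_responses(responses):
    """Order memory access responses for out-of-order servicing:
    higher priority first, then higher QoS, then older (larger age) first.
    Each response is a dict with illustrative numeric fields."""
    return sorted(responses,
                  key=lambda r: (-r["priority"], -r["qos"], -r["age"]))

queue = [
    {"id": 0, "priority": 1, "qos": 2, "age": 5},
    {"id": 1, "priority": 3, "qos": 1, "age": 2},
    {"id": 2, "priority": 3, "qos": 1, "age": 9},
]
# Equal priority and QoS: the older response (id 2) is serviced first.
assert [r["id"] for r in reorder_responses(queue)] == [2, 1, 0]
```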
- the reordering algorithm is used by logic (not shown) within or positioned next to read queue 732 and write queue 736 as well as selection logic 740 .
- the arbitration unit 730 includes programmable control registers and/or control logic to adapt algorithms used for selection and reordering of responses based on the characteristics of fabric 710 .
- each of the interfaces 722 and 724 includes storage elements for storing received packets.
- the clock gating logic 742 receives partition enable signals 744 of data packets storing payload data divided into partitions.
- the clock gating logic 742 reduces power consumption for the computing system 700 by disabling clock signals to the storage elements in at least interfaces 722 and 724 .
- the computing system 700 processes applications with reduced power consumption.
- program instructions of a software application are used to implement the methods and/or mechanisms previously described.
- the program instructions describe the behavior of hardware in a high-level programming language, such as C.
- alternatively, the program instructions describe the behavior of hardware in a hardware design language (HDL).
- the program instructions are stored on a non-transitory computer readable storage medium. Numerous types of storage media are available.
- the storage medium is accessible by a computing system during use to provide the program instructions and accompanying data to the computing system for program execution.
- the computing system includes at least one or more memories and one or more processors that execute program instructions.
Abstract
Systems, apparatuses, and methods for efficient data transfer in a computing system are disclosed. A source generates packets to send across a communication fabric (or fabric) to a destination. The source generates partition enable signals for the partitions of payload data. The source negates an enable signal for a particular partition when the source determines the packet type indicates the particular partition should have an associated asserted enable signal in the packet, but the source also determines the particular partition includes a particular data pattern. Routing components of the fabric disable clock signals to storage elements assigned to store the particular partition. The destination inserts the particular data pattern for the particular partition in the payload data.
Description
- This application is a continuation of U.S. patent application Ser. No. 16/725,901, entitled “RE-PURPOSING BYTE ENABLES AS CLOCK ENABLES FOR POWER SAVINGS”, filed Dec. 23, 2019, the entirety of which is incorporated herein by reference.
- A variety of computing devices utilize heterogeneous integration, which integrates multiple types of processing units for providing system functionality. The multiple functions include audio/video (A/V) data processing, other high data parallel applications for the medicine and business fields, processing instructions of a general-purpose instruction set architecture (ISA), digital, analog, mixed-signal and radio-frequency (RF) functions, and so forth. A variety of choices exist for system packaging to integrate the multiple types of processing units. In some computing devices, a system-on-a-chip (SoC) is used, whereas, in other computing devices, smaller and higher-yielding chips are packaged as large chips in multi-chip modules (MCMs). Some computing devices include three-dimensional integrated circuits (3D ICs) that utilize die-stacking technology as well as silicon interposers, through silicon vias (TSVs) and other mechanisms to vertically stack and electrically connect two or more dies in a system-in-package (SiP).
- In addition to input/output devices, each of these processing units is a source in the computing system capable of generating read requests and write requests for data. In addition to system memory, each of the sources is also capable of being a targeted destination for requests. Regardless of the chosen system packaging, the data access requests and corresponding data, coherency probes, interrupts and other communication messages generated by sources for targeted destinations are typically transferred through a communication fabric (or fabric). The fabric reduces latency by having a relatively high number of physical wires available for transporting packets between sources and destinations. The data transport of packets across the wires of the fabric and the toggling of nodes within storage elements, queues, control logic and so on in the fabric increases power consumption for the computing system.
- The power consumption of modern integrated circuits has become an increasing design issue with each generation of semiconductor chips. As power consumption increases, more costly cooling systems, such as larger fans and heat sinks, must be utilized in order to remove excess heat and prevent circuit failure. However, cooling systems increase system costs. The circuit power dissipation constraint is not only an issue for portable computers and mobile communication devices, but also for desktop computers and servers utilizing high-performance microprocessors.
- In view of the above, methods for efficient data transfer in a computing system are desired.
- The advantages of the methods and mechanisms described herein may be better understood by referring to the following description in conjunction with the accompanying drawings, in which:
-
FIG. 1 is a block diagram of one embodiment of a computing system. -
FIG. 2 is a block diagram of one embodiment of a computing system. -
FIG. 3 is a flow diagram of one embodiment of a method for efficient data transfer in a computing system by a routing component. -
FIG. 4 is a flow diagram of one embodiment of a method for efficient data transfer in a computing system by a source. -
FIG. 5 is a flow diagram of one embodiment of a method for identifying packet types for efficient data transfer in a computing system. -
FIG. 6 is a flow diagram of one embodiment of a method for efficient data transfer in a computing system by a destination. -
FIG. 7 is a block diagram of one embodiment of a computing system.
- While the invention is susceptible to various modifications and alternative forms, specific embodiments are shown by way of example in the drawings and are herein described in detail. It should be understood, however, that drawings and detailed description thereto are not intended to limit the invention to the particular form disclosed, but on the contrary, the invention is to cover all modifications, equivalents and alternatives falling within the scope of the present invention as defined by the appended claims.
- In the following description, numerous specific details are set forth to provide a thorough understanding of the methods and mechanisms presented herein. However, one having ordinary skill in the art should recognize that the various embodiments may be practiced without these specific details. In some instances, well-known structures, components, signals, computer program instructions, and techniques have not been shown in detail to avoid obscuring the approaches described herein. It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements.
- Various systems, apparatuses, methods, and computer-readable mediums for efficient data transfer in a computing system are disclosed. In various embodiments, a computing system includes one or more clients for processing applications. Examples of the clients include a general-purpose central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a multimedia engine, an input/output (I/O) device and so forth. Each of the clients is capable of generating data access requests. The clients are referred to as “sources” when the clients generate and send packets, which include data access requests, payload data, probe requests, coherency commands or other communication to send to a targeted destination. The clients and system memory are referred to as “destinations” when the clients and system memory are targets of packets generated by sources.
- The sources send packets to destinations through a communication fabric (or fabric). Examples of interconnections in the fabric are bus architectures, crossbar-based architectures, network-on-chip (NoC) communication subsystems, communication channels between dies, router switches with arbitration logic, repeaters, silicon interposers used to stack chips side-by-side, through silicon vias (TSVs) used to vertically stack special-purpose dies on top of processor dies, and so on.
- In some embodiments, sources divide payload data into partitions such as byte, word, double-word, and so on. In addition, in various embodiments partition enable signals for the partitions of payload data are generated. For example, the partition enable signals in field 156 are byte enable signals. In some embodiments, the sources generate the partition enable signals (or enable signals) to indicate which partitions include valid data of the payload data. For example, each of the partitions includes valid data for a read response packet. Therefore, the source asserts each one of the enable signals corresponding to the multiple partitions of the payload data. Similarly, each of the partitions includes valid data for a full write data packet and a cache victim packet used to send previously cached data to system memory. In other embodiments, the source negates one or more enable signals to indicate which partitions include invalid data of the payload data. For example, one or more of the partitions include invalid data for a partial write data packet.
- In other embodiments, the source negates an enable signal for a particular partition when the source determines a type of the packet indicates the particular partition should have an associated asserted enable signal in the packet, but the source also determines the particular partition includes a particular data pattern. For example, a particular partition of read response payload data includes the particular pattern. Although a read response data packet typically has each one of the multiple enable signals asserted, the source negates the enable signal for the particular partition. Rather than transport the particular data pattern throughout the fabric, the negated enable signal indicates that the particular partition should store the particular pattern without the particular pattern actually being transported through the fabric.
- The destination receives the read response data packet, and the destination determines both the type of the packet and that the particular partition has an associated negated enable signal in the packet. In this example, the type of the read response data packet indicates the particular partition should have an associated asserted enable signal in the packet. Therefore, the destination interprets the negated enable signal as indicating the particular partition should include the particular data pattern. Accordingly, the destination inserts the particular data pattern for the particular partition when storing the read response payload data.
- While transporting a packet between a source and a destination, one or more routing components receive and send the packet within the fabric. Examples of the routing component are router switches, repeater blocks and so forth. In various embodiments, the routing component receives the packet and determines one or more partitions of payload data have associated negated enable signals. Accordingly, the routing component disables a storage element of the routing component assigned to store data of partitions of the payload data associated with the negated enable signals. Later, when the routing component sends the packet to a next routing component or to the destination, the routing component sends the negated enable signal and a previous value stored in each storage element assigned to store data of partitions of the payload data associated with the negated enable signals. Since the clock signal was disabled, these storage elements did not load new values. The previous values are still stored in these storage elements. Additionally, these storage elements do not consume power associated with loading new values, and conditional clock signals do not toggle. In some embodiments, clock gating logic in the routing component uses the partition enable signals directly as a conditional clock gating control signal.
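The routing-component behavior just described can be modeled as a register bank whose per-partition clock is qualified by the partition enable: a negated enable means the element retains its previous value and consumes no load power. A minimal Python model for illustration only, not RTL and not the disclosed circuit; the class name and the load counter are assumptions.

```python
class GatedPartitionRegisters:
    """Model of per-partition storage elements: each partition's register
    only loads on a clock edge when its partition enable (used as a clock
    enable) is asserted; otherwise it retains its previous value."""
    def __init__(self, num_partitions, reset_value=0xFF):
        self.regs = [reset_value] * num_partitions
        self.loads = 0  # count of register loads, a proxy for dynamic power

    def clock_edge(self, payload, enables):
        for i, (data, en) in enumerate(zip(payload, enables)):
            if en:                 # enable asserted: clock reaches the register
                self.regs[i] = data
                self.loads += 1
            # enable negated: clock gated off, previous value retained

bank = GatedPartitionRegisters(4)
bank.clock_edge([0x11, 0x22, 0x33, 0x44], [1, 0, 0, 1])
assert bank.regs == [0x11, 0xFF, 0xFF, 0x44]  # gated partitions kept old data
assert bank.loads == 2                        # only two registers loaded
```

The stale values forwarded downstream are harmless: the destination re-inserts the particular data pattern for every negated enable, as in method 600.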
- Referring to
FIG. 1, a generalized block diagram of one embodiment of a computing system 100 is shown. As shown, source 110 sends a packet 150 to the destination 140 through a routing component 120. The source 110 is a client in the computing system 100 such as a general-purpose central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a multimedia engine, an input/output (I/O) device and so forth. The destination 140 is system memory or one of the examples of a client in the computing system 100. Examples of the routing component 120 are a repeater block, a network switch, or other components of a communication fabric. Although a single source 110, a single routing component 120 and a single destination 140 are shown, the computing system 100 includes any number of each of these blocks in other implementations. In some embodiments, the blocks 110-140 of the computing system 100 are individual dies on an integrated circuit (IC), such as a system-on-a-chip (SOC). In other embodiments, the blocks 110-140 are individual dies in a system-in-package (SiP) or a multi-chip module (MCM). Other blocks are not shown for ease of illustration such as a power controller or a power management unit, clock generating sources, link interfaces for communication with any other processing nodes, and memory controllers for interfacing with system memory. - The
source 110 generates and sends packets. The source 110 and the destination 140 use packets for communicating data access requests, payload data, probe requests, coherency commands, and so forth. The packet generation logic 112 (or logic 112) generates the packet 150. Packet 150 includes multiple fields 152-158. Although the fields 152-158 are shown in a particular contiguous order, in other embodiments, the packet 150 uses another storage arrangement. In other embodiments, packet 150 includes one or more other fields not shown. - The
header 152 stores one or more of commands, source and destination identifiers, process and thread identifiers, timestamps, parity and checksum and other data integrity information, priority levels and/or quality of service parameters, and so forth. In other embodiments, one or more of these fields are separated from the header 152 and located elsewhere in the packet 150. Examples of other fields not shown in the packet 150 are a virtual channel identifier, a response type for response packets, an indication of a transaction offset used when a read response for a large read request is divided into multiple data packets, an indication for response packets that represents whether the response packet includes a single response or multiple responses, and an indication of a number of credits for data packets, request packets and response packets. Other examples of fields stored in packet 150 are possible and contemplated in other embodiments. - The
address 154 stores an indication of a target address associated with the command in the header 152. The field 156 stores enable signals associated with the partitions of the payload data stored in field 158. For data packets, the source 110 divides payload data into partitions. Examples of the partition size are a byte, a word (4 bytes), a dual-word (8 bytes), and so on. The partition enable logic 114 (or logic 114) generates the partition enable signals stored in field 156. When the partition size is a byte, the partition enable signals in field 156 are byte enable signals. In some embodiments, the logic 114 generates the partition enable signals (or enable signals) to indicate which partitions include valid data of the payload data stored in field 158. For example, each of the partitions includes valid data for a read response packet. The command and/or the packet type is stored in the header 152. Therefore, the logic 114 asserts each one of the enable signals in the field 156 corresponding to the multiple partitions of the payload data in field 158. Similarly, each of the partitions includes valid data for a full write data packet and a cache victim packet used to send previously cached data to system memory. - In other embodiments, the
logic 114 negates one or more enable signals in field 156 to indicate which partitions include invalid data of the payload data in field 158. For example, one or more of the partitions include invalid data for a partial write data packet. As used herein, a signal is considered to be "asserted" when the signal has a value used to enable logic and turn on transistors to cause the transistor to conduct current. For some logic, an asserted value is a Boolean logic high value or a Boolean logic high level. For example, when an n-type metal oxide semiconductor (NMOS) transistor receives a Boolean logic high level on its gate terminal, the NMOS transistor is enabled, or otherwise turned on. Accordingly, the NMOS transistor is capable of conducting current. For other logic, an asserted value is a Boolean logic low level. When a p-type MOS (PMOS) transistor receives a Boolean logic low level on its gate terminal, the PMOS transistor is enabled, or otherwise turned on, and the PMOS transistor is capable of conducting current. In contrast, a signal is considered to be "negated" when the signal has a value used to disable logic and turn off transistors. - In some embodiments, the
logic 114 negates an enable signal in the field 156 for a particular partition in the field 158 when the logic 114 determines a type of the packet indicates the particular partition should have an associated asserted enable signal in the packet, but the logic 114 also determines the particular partition includes a particular data pattern. For example, a particular partition of read response payload data includes the particular pattern. One example of the particular pattern is all zeroes in the particular partition. When the partition size is a byte, the particular partition includes eight zeroes. Other data patterns are possible and contemplated. Although a read response data packet typically has each one of the multiple enable signals asserted, the logic 114 negates the enable signal in field 156 for the particular partition in field 158. Rather than transport the particular data pattern through the routing component 120, the negated enable signal indicates that the routing component 120 should transport the packet 150 without the actual value of the particular partition. - In various embodiments, the
interface 122 of the routing component 120 receives the packet 150. In some embodiments, the interface 122 includes the storage elements 134 for receiving and storing the packet 150. In other embodiments, the interface 122 includes impedance matching circuitry when the distance from the source 110 is appreciable. In yet other embodiments, the interface 122 includes wires to transfer the received packet to the storage elements 134. The packet 124 generally represents a packet received by the routing component 120 such as packet 150. Therefore, packet 124 includes the same fields described earlier for packet 150. The routing component 120 includes the clock gating logic 132 (or logic 132) for enabling and disabling the clock signal 130 to one or more of the storage elements 134. The storage elements 134 include one or more of registers, flip-flop circuits, content addressable memory (CAM), random access memory (RAM), and so forth. The logic 132 uses partition enable signals 126 from the packet 124 to conditionally enable and disable the clock signal 130 to one or more of the storage elements 134. For example, the logic 132 disables the clock signal 130 for each one of the storage elements 134 assigned to store data of partitions of the payload data associated with negated enable signals of the partition enable signals 126. Therefore, the logic 132 uses the partition enable signals 126 as clock enable signals. When the partition size is a byte, the logic 132 uses byte enable signals 126 as clock enable signals. - Later, when the
routing component 120 sends the packet 124 to a next routing component (not shown) or to the destination 140, the interface 136 sends the packet information stored in the storage elements 134. The interface 136 includes simple wires, one or more logic gate buffers, or other circuitry for transmitting data. The interface 136 sends the negated enable signals of the partition enable signals 126 and a previous value stored in each storage element assigned to store data of partitions of the payload data associated with the negated enable signals. Since the logic 132 disabled the clock signal 130 for particular ones of the storage elements 134, these particular storage elements did not load new values. The previous values from an earlier clock cycle are still stored in these particular storage elements, and the interface 136 sends these previous values to the destination 140. The original values sent by the source 110 of the particular partitions of the payload data associated with negated partition enable signals of the signals 126 are not stored or transported by the routing component 120. Additionally, these storage elements do not consume power associated with loading new values, and their conditional clock signals do not toggle. In contrast, other ones of the storage elements 134 store the header 152, address 154 and the partition enable signals 156 of the packet 124. These storage elements receive a version of the clock signal 130, which is not qualified by the partition enable signals 126. - The
destination 140 receives the packet information, and the destination 140 determines both the type of the packet and whether a particular partition of the payload data has an associated negated enable signal in the packet. When the packet type indicates each of the partitions of the payload data should have an associated asserted enable signal in the packet, but one or more of the partition enable signals are negated, the destination 140 determines that one or more of the partitions require insertion of a particular data pattern. As described earlier, when the packet type is a full size write packet, a read response packet, or a cache victim packet, typically, each of the partition enable signals is asserted. Therefore, the destination 140 interprets a negated enable signal as indicating the particular partition should include the particular data pattern. Accordingly, the payload data assembler 142 (or assembler 142) inserts the particular data pattern for the particular partition when storing the read response payload data. - It is noted that each of the
source 110, the destination 140, the routing component 120, and the assembler 142 is implemented with one of hardware circuitry, software, or a combination of hardware and software. Although not shown, in other embodiments, the routing component includes multiple stages of storage elements in addition to multiple types of queues for storing packets based on packet type. For example, the routing component 120 uses multiple queues for storing read response data, write data, read access requests, write access requests, probe requests, and so forth. Additionally, in some embodiments, the routing component 120 uses arbitration logic for determining an order for sending packets to the destination 140 via the interface 136. - Turning now to
FIG. 2, a generalized block diagram of one embodiment of a computing system 200 is shown. Circuitry and logic previously described is numbered identically. Although the storage elements 234 of the routing component 120 are shown as flip-flop circuits, other storage elements are possible and contemplated. The packet 224 generally represents a packet received by the routing component 120 such as packet 150. Therefore, packet 224 includes the same fields described earlier for packet 150. The clock gating logic 232 (or logic 232) uses partition enable signals from the packet 224 to conditionally enable and disable the clock signal 130 to one or more of the storage elements 234. For example, the logic 232 disables the clock signal 130 for each one of the storage elements 234 assigned to store data of partitions of the payload data associated with negated enable signals of the partition enable signals. When the partition size is a byte, the logic 232 uses byte enable signals as clock enable signals. - As shown, the
packet 224 includes N partitions, with N being a positive, non-zero integer. Each of the partitions includes M bits, with M being a positive, non-zero integer. As described earlier, examples of the partition size are a byte, a word, a dual-word, and so on. For each of the M bits of the Partition N, the logic 232 uses the Partition N Enable signal to conditionally enable the clock signal 130 for associated ones of the storage elements 234. When the partition size is a byte, the Partition N Enable is a byte enable signal for the M bits of the Partition N. The logic 232 uses the Partition N Enable signal as a clock enable signal for the payload data from Partition N, Bit 0 to Partition N, Bit M. - Referring now to
FIG. 3, one embodiment of a method 300 for efficient data transfer in a computing system by a routing component is shown. For purposes of discussion, the steps in this embodiment (as well as in FIGS. 4-6) are shown in sequential order. However, it is noted that in various embodiments of the described methods, one or more of the elements described are performed concurrently, in a different order than shown, or are omitted entirely. Other additional elements are also performed as desired. Any of the various systems or apparatuses described herein are configured to implement methods 300 and 400-600. - Sources generate packets and send the packets to destinations through a communication fabric of a computing system. The communication fabric includes one or more routing components. An interface of a particular routing component receives a packet (block 302). Control logic of the routing component is implemented by hardware circuitry, software, or a combination of hardware and software. The control logic analyzes the received packet. For example, the control logic decodes a command in the header. If the packet is a data packet storing payload data, then the control logic inspects partition enable signals corresponding to the partitions of the payload data.
- If no partitions have negated enable signals (“no” branch of the conditional block 304), then the control logic maintains a clock signal for storage elements assigned to store data of the partitions (block 306). However, if any of the partitions have a negated enable signal (“yes” branch of the conditional block 304), then the control logic disables a clock signal for storage elements assigned to store data of these partitions (block 308). Therefore, the control logic uses the partition enable signals as clock enable signals. Afterward, the control logic conveys the packet to a destination via an interconnect such as the communication fabric (block 310).
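The behavior of blocks 304-310 can be modeled in software. The sketch below is an illustrative model only, not the disclosed hardware: each partition enable acts as a clock enable, so storage elements whose enable is negated keep their previous value, and the forwarded payload therefore mixes stale bytes with newly latched ones. All names are this sketch's own.

```python
# Illustrative software model of method 300: a routing-component stage
# that treats each partition enable signal as a clock enable. A negated
# enable gates the clock off for that partition's storage elements
# (block 308), so they retain a previous value; an asserted enable keeps
# the clock running and latches the new data (block 306).

class RoutingStage:
    def __init__(self, num_partitions: int):
        # previous values from an earlier clock cycle
        self.stored = bytearray(num_partitions)

    def clock(self, payload: bytes, enables: list) -> bytes:
        for i, en in enumerate(enables):
            if en:
                self.stored[i] = payload[i]  # clock enabled: load new value
            # else: clock gated off -- the storage element keeps its old value
        return bytes(self.stored)            # block 310: convey downstream

stage = RoutingStage(4)
stage.clock(b"\x11\x22\x33\x44", [True] * 4)
out = stage.clock(b"\x00\xaa\x00\xbb", [False, True, False, True])
print(out.hex())  # -> 11aa33bb: stale bytes remain where enables were negated
```

Note that the bytes in the disabled positions never toggle their storage, which is the power saving the patent describes; the downstream destination is responsible for reconstructing the elided pattern.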
- Referring now to
FIG. 4, one embodiment of a method 400 for efficient data transfer in a computing system by a source is shown. A source of one or more sources in a computing system generates packets (block 402). Examples of sources are a general-purpose central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a multimedia engine, an input/output (I/O) device, and so forth. - The source determines a type of the packet (block 404). Examples of packet types are read request packets, read response packets, full size write request packets, partial size write request packets, write data packets, cache victim packets, probe request packets, coherency command packets, and so forth. In some embodiments, sources divide write requests into a write control packet and a write data packet. The source inserts a write command into the write control packet and inserts write data in a separate write data packet corresponding to the write command. In an embodiment, sources insert a read request command in a read control packet. Later, the destination receives the read control packet and inserts a read response command in a read response packet. The destination also inserts response data in a separate read data packet.
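The packet layout discussed throughout (a header such as header 152 carrying the command, an address such as address 154, one enable per partition as in field 156, and the payload partitions of field 158) can be sketched as a simple record. This is an illustrative data structure only, not a defined wire format; the field names and type strings are this sketch's assumptions.

```python
# Hypothetical record mirroring the packet fields described for packet 150:
# command (decoded from the header), address, one enable signal per
# partition, and the partitioned payload. Byte-sized partitions assumed.

from dataclasses import dataclass
from typing import List

@dataclass
class Packet:
    command: str            # e.g. "partial_write", "read_response" (illustrative)
    address: int
    enables: List[bool]     # one partition enable signal per payload byte
    payload: bytes

    def __post_init__(self):
        # in this sketch, one enable per byte-sized partition
        assert len(self.enables) == len(self.payload)

pkt = Packet("partial_write", 0x1000, [True, False], b"\xab\x00")
print(pkt.command, hex(pkt.address), pkt.enables)
# -> partial_write 0x1000 [True, False]
```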
- A partial size write request packet is one example of a packet type with sparse asserted enable signals for partitions of payload data. For example, when a source desires to update two words (8 bytes) of a 64-byte cache line, the partial size write request packet includes eight asserted partition enable signals for the eight bytes to be updated. The other 56 partition enable signals are negated. In another example, the source desires to update all of the 64-byte cache line except the last word (4 bytes). Therefore, the partial size write request packet includes 60 asserted partition enable signals for the sixty bytes to be updated. The remaining 4 partition enable signals are negated. For other packet types, such as read response packets, cache victim packets and full size write packets, each of the partition enable signals is asserted. There are no negated partition enable signals for these packet types.
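The two partial-write cases above (8 asserted and 56 negated enables, then 60 asserted and 4 negated) can be reproduced with a short calculation. The helper below is illustrative; it builds the byte enables for a contiguous update of a 64-byte cache line.

```python
# Worked example of the enable counts described above: byte-sized
# partitions over a 64-byte cache line, with a contiguous update region.

LINE_BYTES = 64

def partial_write_enables(offset: int, length: int) -> list:
    """Byte enables for updating `length` bytes starting at `offset`."""
    return [offset <= i < offset + length for i in range(LINE_BYTES)]

two_words = partial_write_enables(0, 8)       # update two words (8 bytes)
print(sum(two_words), LINE_BYTES - sum(two_words))        # -> 8 56

all_but_last = partial_write_enables(0, 60)   # all but the last word (4 bytes)
print(sum(all_but_last), LINE_BYTES - sum(all_but_last))  # -> 60 4
```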
- If the packet type indicates sparse asserted enable signals for partitions of payload data (“yes” branch of the conditional block 406), then the source maintains the values for the partition enable signals (block 408). However, if the packet type does not indicate sparse asserted enable signals for partitions of payload data (“no” branch of the conditional block 406), then the source determines whether any partitions contain a particular data pattern. One example of the particular data pattern is all zeroes in the partition. Other examples of the particular data pattern are possible and contemplated.
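The data-pattern check described above can be sketched as follows, assuming byte-sized partitions and the all-zero example pattern. This is a minimal illustration, not the patented circuit: it reports which partitions contain the pattern, and the enables after negating those partitions while leaving the others untouched.

```python
# Minimal sketch of the source-side pattern check: find partitions that
# hold the particular data pattern (all zeroes here, one byte per
# partition) and negate their enable signals so the pattern need not be
# transported through the fabric.

def negate_pattern_partitions(payload: bytes, enables: list):
    hits = [i for i in range(len(payload)) if payload[i] == 0x00]
    new_enables = [en and payload[i] != 0x00 for i, en in enumerate(enables)]
    return hits, new_enables

hits, ens = negate_pattern_partitions(b"\x00\x7f\x00", [True, True, True])
print(hits, ens)  # -> [0, 2] [False, True, False]
```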
- If the source determines no partition contains the particular data pattern (“no” branch of the conditional block 410), then control flow of
method 400 moves to block 408 where the source maintains the values for the partition enable signals. If the source determines any partition contains the particular data pattern (“yes” branch of the conditional block 410), then the source negates the enable signals for the partitions containing the particular data pattern (block 412). In some embodiments, the source asserts an indication in the packet header specifying that the packet has a packet type associated with no sparse asserted enable signals for partitions of the payload data. At a later time, the destination uses the indication, rather than decode the packet command in the header, to determine whether the packet type is associated with no sparse asserted enable signals for partitions of the payload data. The source transmits the packet to a destination via an interconnect such as a communication fabric (block 414). - Turning to
FIG. 5, one embodiment of a method 500 for identifying packet types for efficient data transfer in a computing system is shown. Control logic receives a packet. The control logic is located within a source or a destination of a computing system. The control logic is implemented by hardware circuitry, software, or a combination of hardware and software. The control logic inspects the packet (block 502). The control logic analyzes the header of the packet to determine the packet type. If the control logic determines that the packet type is a read response type (“yes” branch of the conditional block 504), or a full size write request type (“yes” branch of the conditional block 506), or a cache victim type (“yes” branch of the conditional block 508), then the control logic determines the packet does not include sparse enable signals for partitions of the packet (block 510). Otherwise, the control logic determines the packet does include sparse enable signals for partitions of the packet (block 512). These results are used later by the control logic for determining whether to update the partition enable signals as described earlier in method 400. - Turning to
FIG. 6, one embodiment of a method 600 for identifying packet types for efficient data transfer in a computing system by a destination is shown. A destination of one or more destinations in a computing system receives packets (block 602). Examples of destinations are system memory, a general-purpose central processing unit (CPU), a graphics processing unit (GPU), an accelerated processing unit (APU), a multimedia engine, an input/output (I/O) device, and so forth. The destination determines a type of the packet (block 604). As described earlier, examples of packet types are read request packets, read response packets, full size write request packets, partial size write request packets, write data packets, cache victim packets, probe request packets, coherency command packets, and so forth. - If the packet type indicates sparse asserted enable signals for partitions of payload data (“yes” branch of the conditional block 606), then the destination maintains the received data for the partitions of the packet (block 608). However, if the packet type does not indicate sparse asserted enable signals for partitions of payload data (“no” branch of the conditional block 606), then the destination determines whether any partitions have an associated enable signal that is negated. If there are no negated partition enable signals for the payload data (“no” branch of the conditional block 610), then control flow of
method 600 moves to block 608 where the destination maintains the received data for the partitions of the packet. - If the destination determines there are any negated partition enable signals for the payload data (“yes” branch of the conditional block 610), then the destination replaces these partitions with a particular data pattern and asserts the corresponding enable signal (block 612). One example of the particular data pattern is all zeroes in the partition. Other examples of the particular data pattern are possible and contemplated. The destination processes the packet with the data in its valid partitions (block 614). For example, the destination performs a write operation for partitions with asserted partition enable signals. The data of these partitions update the data stored at memory locations at the destination pointed to by an address stored in the packet. In some embodiments, the particular data pattern is used although this particular data pattern was not transported by the communication fabric between the source and the destination.
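The destination-side flow of methods 500 and 600 can be sketched together: the three packet types named in method 500 are treated as having no sparse enables, and for those types any negated enable means "re-insert the particular data pattern and re-assert the enable" before processing the packet. The type names and the all-zero pattern below are illustrative choices, not a defined encoding.

```python
# Illustrative sketch combining method 500 (packet-type classification)
# and method 600 (destination-side reassembly). For non-sparse packet
# types, a negated enable marks a partition whose pattern was elided in
# transit; the destination restores the pattern and asserts the enable.

NON_SPARSE_TYPES = {"read_response", "full_size_write", "cache_victim"}
PATTERN = 0x00  # one example of the particular data pattern

def receive(ptype: str, payload: bytes, enables: list):
    data, ens = bytearray(payload), list(enables)
    if ptype in NON_SPARSE_TYPES:          # method 500 classification
        for i, en in enumerate(ens):
            if not en:                     # block 612: restore the pattern
                data[i] = PATTERN
                ens[i] = True
    return bytes(data), ens                # block 614: process valid partitions

print(receive("read_response", b"\x5a\x99", [True, False]))
# -> (b'Z\x00', [True, True])
print(receive("partial_write", b"\x5a\x99", [True, False]))
# -> (b'Z\x99', [True, False])
```

For sparse packet types such as a partial write, the negated enables simply mark invalid partitions, so the received data and enables pass through unchanged.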
- Turning now to
FIG. 7, a generalized block diagram of one embodiment of a computing system 700 is shown. The computing system 700 includes communication fabric 710 between memory controller 770 and clients 790. Memory controller 770 is used for interfacing with memory 780. Although three clients 792-796 are shown in clients 790, computing system 700 includes any number of clients. The communication fabric 710 (or fabric 710) includes multiple types of blocks for routing control and data packets. For example, fabric 710 includes multiple routing components, such as routing components 720 and 750, and routing network 760. Each of the sources and destinations and fabric 710 in computing system 700 supports a particular interconnect protocol. Packets transported in fabric 710 include the same fields described earlier for packet 150 (of FIG. 1). One or more of the blocks in the fabric 710 include clock gating logic that uses partition enable signals as clock enable signals for disabling storage elements used to store partitions of payload data. For example, routing component 720 is shown to include clock gating logic 742, which receives partition enable signals 744. When the partition size is a byte, the partition enable signals 744 are byte enable signals. The clock gating logic 742 reduces power consumption for the computing system 700 by disabling clock signals. In various embodiments, the clock gating logic 742 has the equivalent functionality of clock gating logic 132 (of FIG. 1) and clock gating logic 232 (of FIG. 2). - In some embodiments,
clients 790 are individual dies on an integrated circuit (IC), such as a system-on-a-chip (SOC). In other embodiments, clients 790 are individual dies in a system-in-package (SiP) or a multi-chip module (MCM). In yet other embodiments, clients 790 are individual dies or chips on a printed circuit board. In various embodiments, clients 790 are used in a smartphone, a tablet computer, a gaming console, a smartwatch, a desktop computer, and so forth. The clients 790 in computing system 700 include a general-purpose central processing unit (CPU) 792, a highly parallel data architecture processor such as a graphics processing unit (GPU) 794, and a multimedia engine 796. As described earlier, other examples of clients are possible, such as a display unit, one or more input/output (I/O) peripheral devices, and one or more hubs used for interfacing to a multimedia player, a display unit, and other devices. In such cases, the hubs are clients in computing system 700. -
Memory controller 770 includes queues for storing requests and responses. Additionally, memory controller 770 includes control logic for grouping requests to be sent to memory 780, sending the requests based on timing specifications of the memory 780, and supporting any burst modes. Memory controller 770 also includes status and control registers for storing control parameters. In various embodiments, each of routing component 720 and memory controller 770 reorders received memory access requests for efficient out-of-order servicing. The reordering is based on one or more of a priority level, a quality of service (QoS) parameter, an age of a packet for a memory access request, and so forth. Although a single memory controller 770 is shown, in other embodiments, computing system 700 includes multiple memory controllers, each supporting one or more memory channels. - In various embodiments,
memory 780 includes row buffers for storing the contents of a row of dynamic random access memory (DRAM) being accessed. In an embodiment, an access of the memory 780 includes a first activation or an opening stage followed by a stage that copies the contents of an entire row into a corresponding row buffer. Afterward, there is a read or write column access in addition to updating related status information. In some embodiments, memory 780 includes multiple banks. Each one of the banks includes a respective row buffer. The accessed row is identified by an address, such as a DRAM page address, in the received memory access request from one of the clients 790. In various embodiments, the row buffer stores a page of data. In some embodiments, a page is 4 kilobytes (KB) of contiguous storage of data. However, other page sizes are possible and contemplated. - In an embodiment,
memory 780 includes multiple three-dimensional (3D) memory dies stacked on one another. Die-stacking technology is a fabrication process that enables the physical stacking of multiple separate pieces of silicon (integrated chips) together in a same package with high-bandwidth and low-latency interconnects. In some embodiments, the dies are stacked side by side on a silicon interposer, or vertically, directly on top of each other. One configuration for the SiP is to stack one or more memory chips next to and/or on top of a processing unit. - In various embodiments, an up-to-date (most recent) copy of data is brought from the
memory 780 into one or more levels of a cache memory subsystem of one of the clients 790. Based on the instructions being processed by the client, the client updates the copy of the data and then holds the up-to-date (most recent) copy of the data. Alternatively, the client does not modify the data retrieved from memory 780, but uses it to process instructions of one or more applications and to update other data. At a later time, the client fills its cache memory subsystem with other data as it processes instructions of other applications and evicts the particular data stored at the specified memory address. The copy of the data is returned from the corresponding one of the clients 790 to the memory 780 by a write access request to update the stored copy in the memory 780. - In various embodiments,
fabric 710 transfers data back and forth between clients 790 and between memory 780 and clients 790. Routing components 720 and 750 transfer packets between clients 790, memory controller 770, and the components of routing network 760. In an embodiment, routing components 720 and 750 and memory controller 770 include hardware circuitry and/or software for implementing algorithms to provide their desired functionality. - In various embodiments,
fabric 710 includes control logic, status and control registers, and other storage elements for queuing requests and responses, storing control parameters, following one or more communication and network protocols, and efficiently routing traffic between sources and destinations on one or more buses. In an embodiment, routing network 760 utilizes multiple switches in a point-to-point (P2P) ring topology. In other embodiments, routing network 760 utilizes network switches with programmable routing tables in a cluster topology. In yet other embodiments, routing network 760 utilizes a combination of topologies. - As shown,
arbitration unit 730 includes read queue 732, write queue 736, and selection logic 740. Although two queues are shown, in various embodiments, arbitration unit 730 includes any number of queues for storing memory access responses. Selection logic 740 selects between selected read responses 734 and selected write responses 738 to send as selected responses 742 to a respective one of clients 790 via fabric 710. In one embodiment, arbitration unit 730 receives memory access responses from memory controller 770 via interface 722. In some embodiments, arbitration unit 730 stores received read responses in read queue 732 and stores received write responses in write queue 736. In other embodiments, the received read responses and received write responses are stored in a same queue. In some embodiments, arbitration unit 730 reorders the received memory access responses for efficient out-of-order servicing. Reordering is based on one or more of a priority level, a quality of service (QoS) parameter, an age of a packet for a memory access request, and so forth. The reordering algorithm is used by logic (not shown) within or positioned next to read queue 732 and write queue 736, as well as selection logic 740. - In various embodiments, the
arbitration unit 730 includes programmable control registers and/or control logic to adapt algorithms used for selection and reordering of responses based on the characteristics of fabric 710. In some embodiments, each of the interfaces 722 and 724 includes clock gating logic 742, which receives partition enable signals 744 of data packets storing payload data divided into partitions. The clock gating logic 742 reduces power consumption for the computing system 700 by disabling clock signals to the storage elements in at least interfaces 722 and 724. When multiple components in the routing network 760 and the routing components 720 and 750 use such clock gating logic, computing system 700 processes applications with reduced power consumption. - In various embodiments, program instructions of a software application are used to implement the methods and/or mechanisms previously described. The program instructions describe the behavior of hardware in a high-level programming language, such as C. Alternatively, a hardware description language (HDL) is used, such as Verilog. The program instructions are stored on a non-transitory computer readable storage medium. Numerous types of storage media are available. The storage medium is accessible by a computing system during use to provide the program instructions and accompanying data to the computing system for program execution. The computing system includes at least one or more memories and one or more processors that execute program instructions.
- It should be emphasized that the above-described embodiments are only non-limiting examples of implementations. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.
Claims (20)
1. A computing system, comprising:
a source comprising circuitry configured to generate a packet comprising:
a plurality of partitions of a data payload; and
a plurality of enable signals, each associated with a partition of the plurality of partitions;
a destination; and
a routing component coupled to each of the source and the destination;
wherein the routing component comprises circuitry configured to:
receive the packet from the source; and
disable a clock signal for each storage element of the routing component configured to store data of a given partition of the plurality of partitions, in response to determining the given partition has an associated enable signal in the packet that is negated.
2. The computing system as recited in claim 1 , wherein the routing component is further configured to convey to the destination:
the negated enable signal; and
a previous value stored in each storage element assigned to store data of the given partition.
3. The computing system as recited in claim 1 , wherein the routing component is further configured to enable a clock signal for each storage element of the routing component assigned to store data of the given partition, in response to:
determining the given partition has an associated asserted enable signal in the packet.
4. The computing system as recited in claim 1 , wherein the source is further configured to negate an enable signal for the given partition, in response to:
determining a type of the packet indicates the given partition has an associated asserted enable signal in the packet; and
determining the given partition comprises a given data pattern.
5. The computing system as recited in claim 4 , wherein the type of the packet indicating the given partition has an associated asserted enable signal in the packet comprises a response type of packet.
6. The computing system as recited in claim 1 , wherein the destination is further configured to:
receive the packet from the routing component; and
insert the given data pattern in the given partition of the packet, in response to:
determining a type of the packet indicates the given partition has an associated asserted enable signal in the packet; and
determining the given partition has an associated negated enable signal in the packet.
7. The computing system as recited in claim 1 , wherein the routing component further comprises one of a switch and a repeater of a communication fabric between the source and the destination.
8. The computing system as recited in claim 1 , wherein the source comprises one or more of a central processing unit, a graphics processing unit and a multimedia engine.
9. A method, comprising:
generating, by a source, a packet comprising:
a plurality of partitions of a data payload; and
a plurality of enable signals, each associated with a partition of the plurality of partitions;
processing, by a destination, the packet; and
receiving, by a routing component, the packet from the source;
disabling, by the routing component, a clock signal for each storage element of the routing component assigned to store data of a given partition of the plurality of partitions, in response to determining the given partition has an associated negated enable signal in the packet.
10. The method as recited in claim 9 , further comprising conveying to the destination:
the negated enable signal; and
a previous value stored in each storage element assigned to store data of the given partition.
11. The method as recited in claim 9 , further comprising enabling a clock signal for each storage element of the routing component assigned to store data of the given partition, in response to:
determining the given partition has an associated asserted enable signal in the packet.
12. The method as recited in claim 9 , further comprising negating an enable signal for the given partition, in response to:
determining a type of the packet indicates the given partition has an associated asserted enable signal in the packet; and
determining the given partition comprises a given data pattern.
13. The method as recited in claim 12 , wherein the type of the packet indicating the given partition has an associated asserted enable signal in the packet comprises a cache victim type of packet.
14. The method as recited in claim 12 , wherein the type of the packet indicating the given partition has an associated asserted enable signal in the packet comprises a full size write type of packet.
15. The method as recited in claim 9 , further comprising:
receiving, by the destination, the packet from the routing component; and
inserting, by the destination, the given data pattern in the given partition of the packet, in response to:
determining a type of the packet indicates the given partition has an associated asserted enable signal in the packet; and
determining the given partition has an associated negated enable signal in the packet.
16. The method as recited in claim 9 , wherein the routing component comprises one of a switch and a repeater of a communication fabric between the source and the destination.
17. An apparatus, comprising:
a first interface configured to receive, from a source, a packet comprising:
a plurality of partitions of a data payload; and
a plurality of enable signals, each associated with a partition of the plurality of partitions;
a second interface configured to convey the packet to a destination;
a plurality of storage elements; and
circuitry configured to:
disable a clock signal for each storage element of the plurality of storage elements assigned to store data of a given partition of the plurality of partitions, in response to determining the given partition has an associated negated enable signal in the packet.
18. The apparatus as recited in claim 17 , wherein the circuitry is further configured to convey, via the second interface, to the destination:
the negated enable signal; and
a previous value stored in each storage element assigned to store data of the given partition.
19. The apparatus as recited in claim 17 , wherein the circuitry is further configured to enable a clock signal for each storage element of the plurality of storage elements assigned to store data of the given partition, in response to:
determining the given partition has an associated asserted enable signal in the packet.
20. The apparatus as recited in claim 17, wherein the apparatus comprises a switch of a communication fabric between the source and the destination.
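Claims 17–20 describe a fabric switch that reuses the per-partition byte enables as clock enables: storage elements assigned to a partition with a negated enable have their clock gated and simply hold their previous value, which is forwarded downstream along with the negated enable. A behavioral Python sketch of one such register stage (the class name, packet layout, and byte-level modeling of storage elements are illustrative assumptions):

```python
class PartitionRegisterStage:
    """Sketch of claims 17-20: a switch pipeline stage whose per-partition
    byte enables double as clock enables for the payload flops."""

    def __init__(self, num_partitions, partition_bytes):
        # Storage elements modeled as retained byte strings; contents are
        # arbitrary at reset (here, zeros).
        self.storage = [bytes(partition_bytes)] * num_partitions

    def clock(self, packet):
        out_partitions = []
        for i, data in enumerate(packet["partitions"]):
            if packet["enables"][i]:
                # Enable asserted (claim 19): clock runs, flops capture
                # the new partition data.
                self.storage[i] = data
            # Enable negated (claim 17): clock gated, flops hold their
            # previous value, saving the dynamic power those storage
            # elements would otherwise burn toggling.
            out_partitions.append(self.storage[i])
        # Forward the (possibly stale) payload together with the original
        # enables (claim 18).
        return {"type": packet["type"],
                "enables": list(packet["enables"]),
                "partitions": out_partitions}
```

The stale bytes forwarded for gated partitions are harmless: the destination either ignores partitions with negated enables or, per claim 15, refills them with the agreed-upon data pattern.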
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US17/548,398 US20220103489A1 (en) | 2019-12-23 | 2021-12-10 | Re-purposing byte enables as clock enables for power savings |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/725,901 US11223575B2 (en) | 2019-12-23 | 2019-12-23 | Re-purposing byte enables as clock enables for power savings |
US17/548,398 US20220103489A1 (en) | 2019-12-23 | 2021-12-10 | Re-purposing byte enables as clock enables for power savings |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/725,901 Continuation US11223575B2 (en) | 2019-12-23 | 2019-12-23 | Re-purposing byte enables as clock enables for power savings |
Publications (1)
Publication Number | Publication Date |
---|---|
US20220103489A1 true US20220103489A1 (en) | 2022-03-31 |
Family
ID=74191841
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/725,901 Active 2040-01-25 US11223575B2 (en) | 2019-12-23 | 2019-12-23 | Re-purposing byte enables as clock enables for power savings |
US17/548,398 Pending US20220103489A1 (en) | 2019-12-23 | 2021-12-10 | Re-purposing byte enables as clock enables for power savings |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/725,901 Active 2040-01-25 US11223575B2 (en) | 2019-12-23 | 2019-12-23 | Re-purposing byte enables as clock enables for power savings |
Country Status (6)
Country | Link |
---|---|
US (2) | US11223575B2 (en) |
EP (1) | EP4081908A1 (en) |
JP (1) | JP2023507330A (en) |
KR (1) | KR20220113515A (en) |
CN (1) | CN114830102A (en) |
WO (1) | WO2021133629A1 (en) |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5535373A (en) * | 1991-11-27 | 1996-07-09 | International Business Machines Corporation | Protocol-to-protocol translator for interfacing disparate serial network nodes to a common parallel switching network |
US5768608A (en) * | 1994-11-22 | 1998-06-16 | Seiko Epson Corporation | Data processing apparatus and method for making same |
US5956743A (en) * | 1997-08-25 | 1999-09-21 | Bit Microsystems, Inc. | Transparent management at host interface of flash-memory overhead-bytes using flash-specific DMA having programmable processor-interrupt of high-level operations |
US6076139A (en) * | 1996-12-31 | 2000-06-13 | Compaq Computer Corporation | Multimedia computer architecture with multi-channel concurrent memory access |
US20040133714A1 (en) * | 2000-06-30 | 2004-07-08 | Intel Corporation | Transaction partitioning |
US20040267481A1 (en) * | 2003-05-20 | 2004-12-30 | Resnick David R. | Apparatus and method for testing memory cards |
US20120089889A1 (en) * | 2010-10-06 | 2012-04-12 | Cleversafe, Inc. | Data transmission utilizing partitioning and dispersed storage error encoding |
US20120246369A1 (en) * | 2009-11-26 | 2012-09-27 | Toshiki Takeuchi | Bus monitor circuit and bus monitor method |
US20180165199A1 (en) * | 2016-12-12 | 2018-06-14 | Intel Corporation | Apparatuses and methods for a processor architecture |
US20180349288A1 (en) * | 2017-05-30 | 2018-12-06 | Intel Corporation | Input/output translation lookaside buffer prefetching |
US10558602B1 (en) * | 2018-09-13 | 2020-02-11 | Intel Corporation | Transmit byte enable information over a data bus |
US20200371960A1 (en) * | 2019-05-24 | 2020-11-26 | Texas Instruments Incorporated | Methods and apparatus for allocation in a victim cache system |
US20210048865A1 (en) * | 2019-08-16 | 2021-02-18 | Apple Inc. | Dashboard with push model for receiving sensor data |
Family Cites Families (80)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4760515A (en) | 1985-10-28 | 1988-07-26 | International Business Machines Corporation | Arbitration apparatus for determining priority of access to a shared bus on a rotating priority basis |
US5553223A (en) | 1990-04-03 | 1996-09-03 | U S West Advanced Technologies, Inc. | Method and system of selectively transmitting display formats and data between a host computer and an intelligent terminal |
US5903324A (en) * | 1994-06-30 | 1999-05-11 | Thomson Multimedia S.A. | Transport processor interface for a digital television system |
US5737748A (en) * | 1995-03-15 | 1998-04-07 | Texas Instruments Incorporated | Microprocessor unit having a first level write-through cache memory and a smaller second-level write-back cache memory |
US6023561A (en) | 1995-06-01 | 2000-02-08 | Advanced Micro Devices, Inc. | System for processing traceable cache trace information |
US6138140A (en) | 1995-07-14 | 2000-10-24 | Sony Corporation | Data processing method and device |
US5815653A (en) | 1995-11-13 | 1998-09-29 | You; Lawrence L. | Debugging system with portable debug environment-independent client and non-portable platform-specific server |
US6058393A (en) | 1996-02-23 | 2000-05-02 | International Business Machines Corporation | Dynamic connection to a remote tool in a distributed processing system environment used for debugging |
US5706502A (en) | 1996-03-25 | 1998-01-06 | Sun Microsystems, Inc. | Internet-enabled portfolio manager system and method |
US5761513A (en) | 1996-07-01 | 1998-06-02 | Sun Microsystems, Inc. | System and method for exception handling in dynamically linked programs |
US5923885A (en) | 1996-10-31 | 1999-07-13 | Sun Microsystems, Inc. | Acquisition and operation of remotely loaded software using applet modification of browser software |
US6618854B1 (en) | 1997-02-18 | 2003-09-09 | Advanced Micro Devices, Inc. | Remotely accessible integrated debug environment |
US5926838A (en) * | 1997-03-19 | 1999-07-20 | Micron Electronics | Interface for high speed memory |
US6119247A (en) | 1998-06-22 | 2000-09-12 | International Business Machines Corporation | Remote debugging of internet applications |
SE514430C2 (en) | 1998-11-24 | 2001-02-26 | Net Insight Ab | Method and system for determining network topology |
US6163263A (en) * | 1999-02-02 | 2000-12-19 | Pittway Corporation | Circuitry for electrical device in multi-device communications system |
US6667960B1 (en) | 2000-04-29 | 2003-12-23 | Hewlett-Packard Development Company, L.P. | Protocol for identifying components in a point-to-point computer system |
JP4782937B2 (en) | 2001-03-27 | 2011-09-28 | 株式会社東芝 | Semiconductor memory device |
US7027400B2 (en) | 2001-06-26 | 2006-04-11 | Flarion Technologies, Inc. | Messages and control methods for controlling resource allocation and flow admission control in a mobile communications system |
US20030035371A1 (en) | 2001-07-31 | 2003-02-20 | Coke Reed | Means and apparatus for a scaleable congestion free switching system with intelligent control |
US7200144B2 (en) | 2001-10-18 | 2007-04-03 | Qlogic, Corp. | Router and methods using network addresses for virtualization |
US7433948B2 (en) | 2002-01-23 | 2008-10-07 | Cisco Technology, Inc. | Methods and apparatus for implementing virtualization of storage within a storage area network |
US20050198459A1 (en) | 2004-03-04 | 2005-09-08 | General Electric Company | Apparatus and method for open loop buffer allocation |
US20050228531A1 (en) | 2004-03-31 | 2005-10-13 | Genovker Victoria V | Advanced switching fabric discovery protocol |
US7542473B2 (en) | 2004-12-02 | 2009-06-02 | Nortel Networks Limited | High-speed scheduling apparatus for a switching node |
US7644255B2 (en) | 2005-01-13 | 2010-01-05 | Sony Computer Entertainment Inc. | Method and apparatus for enable/disable control of SIMD processor slices |
US7813360B2 (en) | 2005-01-26 | 2010-10-12 | Emulex Design & Manufacturing Corporation | Controlling device access fairness in switched fibre channel fabric loop attachment systems |
US7724778B2 (en) | 2005-01-28 | 2010-05-25 | I/O Controls Corporation | Control network with data and power distribution |
US9171585B2 (en) | 2005-06-24 | 2015-10-27 | Google Inc. | Configurable memory circuit system and method |
KR100685300B1 (en) * | 2005-11-02 | 2007-02-22 | 엠텍비젼 주식회사 | Method for transferring encoded data and image pickup device performing the method |
KR100735756B1 (en) | 2006-01-02 | 2007-07-06 | 삼성전자주식회사 | Semiconductor integrated circuit |
US7596647B1 (en) | 2006-09-18 | 2009-09-29 | Nvidia Corporation | Urgency based arbiter |
US7657710B2 (en) | 2006-11-17 | 2010-02-02 | Sun Microsystems, Inc. | Cache coherence protocol with write-only permission |
US8028131B2 (en) | 2006-11-29 | 2011-09-27 | Intel Corporation | System and method for aggregating core-cache clusters in order to produce multi-core processors |
US8095816B1 (en) | 2007-04-05 | 2012-01-10 | Marvell International Ltd. | Processor management using a buffer |
US20090016355A1 (en) | 2007-07-13 | 2009-01-15 | Moyes William A | Communication network initialization using graph isomorphism |
US8549207B2 (en) | 2009-02-13 | 2013-10-01 | The Regents Of The University Of Michigan | Crossbar circuitry for applying an adaptive priority scheme and method of operation of such crossbar circuitry |
US8230152B2 (en) | 2009-02-13 | 2012-07-24 | The Regents Of The University Of Michigan | Crossbar circuitry and method of operation of such crossbar circuitry |
US8448001B1 (en) | 2009-03-02 | 2013-05-21 | Marvell International Ltd. | System having a first device and second device in which the main power management module is configured to selectively supply a power and clock signal to change the power state of each device independently of the other device |
JP5424726B2 (en) * | 2009-06-05 | 2014-02-26 | オリンパス株式会社 | Imaging device |
US8359421B2 (en) | 2009-08-06 | 2013-01-22 | Qualcomm Incorporated | Partitioning a crossbar interconnect in a multi-channel memory system |
US8392661B1 (en) | 2009-09-21 | 2013-03-05 | Tilera Corporation | Managing cache coherence |
US8713294B2 (en) | 2009-11-13 | 2014-04-29 | International Business Machines Corporation | Heap/stack guard pages using a wakeup unit |
US9081501B2 (en) | 2010-01-08 | 2015-07-14 | International Business Machines Corporation | Multi-petascale highly efficient parallel supercomputer |
JP5482466B2 (en) * | 2010-06-03 | 2014-05-07 | 富士通株式会社 | Data transfer device and operating frequency control method for data transfer device |
US8667197B2 (en) | 2010-09-08 | 2014-03-04 | Intel Corporation | Providing a fine-grained arbitration system |
WO2012037518A1 (en) | 2010-09-17 | 2012-03-22 | Oracle International Corporation | System and method for facilitating protection against run-away subnet manager instances in a middleware machine environment |
US20120221767A1 (en) | 2011-02-28 | 2012-08-30 | Apple Inc. | Efficient buffering for a system having non-volatile memory |
KR101842245B1 (en) | 2011-07-25 | 2018-03-26 | 삼성전자주식회사 | Bus system in SoC and method of gating root clocks therefor |
WO2013077845A1 (en) | 2011-11-21 | 2013-05-30 | Intel Corporation | Reducing power consumption in a fused multiply-add (fma) unit of a processor |
CN104169892A (en) * | 2012-03-28 | 2014-11-26 | 华为技术有限公司 | Concurrently accessed set associative overflow cache |
US9170971B2 (en) | 2012-12-26 | 2015-10-27 | Iii Holdings 2, Llc | Fabric discovery for a cluster of nodes |
US9535860B2 (en) | 2013-01-17 | 2017-01-03 | Intel Corporation | Arbitrating memory accesses via a shared memory fabric |
US9436634B2 (en) | 2013-03-14 | 2016-09-06 | Seagate Technology Llc | Enhanced queue management |
JP6185291B2 (en) * | 2013-06-03 | 2017-08-23 | ローム株式会社 | Wireless power transmission apparatus, control circuit and control method thereof |
KR102114941B1 (en) | 2013-10-27 | 2020-06-08 | 어드밴스드 마이크로 디바이시즈, 인코포레이티드 | Input/output memory map unit and northbridge |
US10169256B2 (en) | 2014-01-31 | 2019-01-01 | Silicon Laboratories Inc. | Arbitrating direct memory access channel requests |
US9268970B2 (en) | 2014-03-20 | 2016-02-23 | Analog Devices, Inc. | System and method for security-aware master |
US9529400B1 (en) | 2014-10-29 | 2016-12-27 | Netspeed Systems | Automatic power domain and voltage domain assignment to system-on-chip agents and network-on-chip elements |
US9774503B2 (en) | 2014-11-03 | 2017-09-26 | Intel Corporation | Method, apparatus and system for automatically discovering nodes and resources in a multi-node system |
US10432586B2 (en) | 2014-12-27 | 2019-10-01 | Intel Corporation | Technologies for high-performance network fabric security |
US20160191420A1 (en) | 2014-12-27 | 2016-06-30 | Intel Corporation | Mitigating traffic steering inefficiencies in distributed uncore fabric |
US9594621B1 (en) | 2014-12-30 | 2017-03-14 | Juniper Networks, Inc. | Online network device diagnostic monitoring and fault recovery system |
US9652391B2 (en) | 2014-12-30 | 2017-05-16 | Arteris, Inc. | Compression of hardware cache coherent addresses |
GB2527165B (en) | 2015-01-16 | 2017-01-11 | Imagination Tech Ltd | Arbiter verification |
JP6883377B2 (en) * | 2015-03-31 | 2021-06-09 | シナプティクス・ジャパン合同会社 | Display driver, display device and operation method of display driver |
US10200261B2 (en) | 2015-04-30 | 2019-02-05 | Microsoft Technology Licensing, Llc | Multiple-computing-node system job node selection |
US20160378168A1 (en) | 2015-06-26 | 2016-12-29 | Advanced Micro Devices, Inc. | Dynamic power management optimization |
US9971700B2 (en) | 2015-11-06 | 2018-05-15 | Advanced Micro Devices, Inc. | Cache with address space mapping to slice subsets |
US9983652B2 (en) | 2015-12-04 | 2018-05-29 | Advanced Micro Devices, Inc. | Balancing computation and communication power in power constrained clusters |
US9918146B2 (en) | 2016-02-08 | 2018-03-13 | Intel Corporation | Computing infrastructure optimizations based on tension levels between computing infrastructure nodes |
US20170300427A1 (en) * | 2016-04-18 | 2017-10-19 | Mediatek Inc. | Multi-processor system with cache sharing and associated cache sharing method |
US20180048562A1 (en) | 2016-08-09 | 2018-02-15 | Knuedge Incorporated | Network Processor Inter-Device Packet Source ID Tagging for Domain Security |
US10298511B2 (en) | 2016-08-24 | 2019-05-21 | Apple Inc. | Communication queue management system |
US9946646B2 (en) * | 2016-09-06 | 2018-04-17 | Advanced Micro Devices, Inc. | Systems and method for delayed cache utilization |
US10146585B2 (en) | 2016-09-07 | 2018-12-04 | Pure Storage, Inc. | Ensuring the fair utilization of system resources using workload based, time-independent scheduling |
US10861504B2 (en) | 2017-10-05 | 2020-12-08 | Advanced Micro Devices, Inc. | Dynamic control of multi-region fabric |
US11196657B2 (en) | 2017-12-21 | 2021-12-07 | Advanced Micro Devices, Inc. | Self identifying interconnect topology |
KR101936951B1 (en) * | 2018-04-11 | 2019-01-11 | 주식회사 맴레이 | Memory controlling device and memory system including the same |
CN109032980B (en) * | 2018-06-30 | 2023-12-26 | 唯捷创芯(天津)电子技术股份有限公司 | Serial communication device and serial communication method |
2019
- 2019-12-23 US US16/725,901 patent/US11223575B2/en active Active

2020
- 2020-12-17 WO PCT/US2020/065567 patent/WO2021133629A1/en unknown
- 2020-12-17 CN CN202080086926.0A patent/CN114830102A/en active Pending
- 2020-12-17 JP JP2022536764A patent/JP2023507330A/en active Pending
- 2020-12-17 KR KR1020227024396A patent/KR20220113515A/en unknown
- 2020-12-17 EP EP20842753.4A patent/EP4081908A1/en active Pending

2021
- 2021-12-10 US US17/548,398 patent/US20220103489A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
CN114830102A (en) | 2022-07-29 |
US11223575B2 (en) | 2022-01-11 |
KR20220113515A (en) | 2022-08-12 |
US20210194827A1 (en) | 2021-06-24 |
EP4081908A1 (en) | 2022-11-02 |
JP2023507330A (en) | 2023-02-22 |
WO2021133629A1 (en) | 2021-07-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR101925266B1 (en) | Interconnect systems and methods using hybrid memory cube links | |
US11868299B2 (en) | Network-on-chip data processing method and device | |
US11831565B2 (en) | Method for maintaining cache consistency during reordering | |
US20060059292A1 (en) | Method and an apparatus to efficiently handle read completions that satisfy a read request | |
US20220138107A1 (en) | Cache for storing regions of data | |
US10601723B2 (en) | Bandwidth matched scheduler | |
WO2022269582A1 (en) | Transmission of address translation type packets | |
US9390017B2 (en) | Write and read collision avoidance in single port memory devices | |
US10540304B2 (en) | Power-oriented bus encoding for data transmission | |
US8995210B1 (en) | Write and read collision avoidance in single port memory devices | |
US11223575B2 (en) | Re-purposing byte enables as clock enables for power savings | |
US10445267B2 (en) | Direct memory access (DMA) unit with address alignment | |
US10684965B2 (en) | Method to reduce write responses to improve bandwidth and efficiency | |
US20230195368A1 (en) | Write Request Buffer | |
US20200192842A1 (en) | Memory request chaining on bus | |
US20200099993A1 (en) | Multicast in the probe channel | |
EP3841484B1 (en) | Link layer data packing and packet flow control scheme | |
US20240079036A1 (en) | Standalone Mode |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED