EP3149585B1 - Systems and methods for packing data in a scalable memory system protocol - Google Patents

Systems and methods for packing data in a scalable memory system protocol

Info

Publication number
EP3149585B1
Authority
EP
European Patent Office
Prior art keywords
packet
transaction
packets
data
memory
Prior art date
Legal status
Active
Application number
EP15802720.1A
Other languages
German (de)
English (en)
French (fr)
Other versions
EP3149585A1 (en)
EP3149585A4 (en)
Inventor
J. Thomas Pawlowski
Current Assignee
Micron Technology Inc
Original Assignee
Micron Technology Inc
Priority date
Filing date
Publication date
Application filed by Micron Technology Inc filed Critical Micron Technology Inc
Publication of EP3149585A1 publication Critical patent/EP3149585A1/en
Publication of EP3149585A4 publication Critical patent/EP3149585A4/en
Application granted granted Critical
Publication of EP3149585B1 publication Critical patent/EP3149585B1/en

Classifications

    • G06F3/0619 Improving the reliability of storage systems in relation to data integrity, e.g. data losses, bit errors
    • G06F3/0611 Improving I/O performance in relation to response time
    • G06F3/0659 Command handling arrangements, e.g. command buffers, queues, command scheduling
    • G06F11/07 Responding to the occurrence of a fault, e.g. fault tolerance
    • G06F11/0727 Error or fault processing not based on redundancy, i.e. by taking additional measures to deal with the error or fault not making use of redundancy in operation, in hardware, or in data representation, the processing taking place on a specific hardware platform or in a specific software environment, in a storage system, e.g. in a DASD or network based storage system
    • G06F11/073 Error or fault processing not based on redundancy, the processing taking place in a memory management context, e.g. virtual memory or cache management
    • G06F11/076 Error or fault detection not based on redundancy by exceeding a count or rate limit, e.g. word- or bit count limit
    • G06F11/10 Adding special bits or symbols to the coded information, e.g. parity check, casting out 9's or 11's
    • G06F11/1016 Error in accessing a memory location, i.e. addressing error
    • G06F11/1044 Adding special bits or symbols to the coded information in individual solid state devices with specific ECC/EDC distribution
    • G06F11/1068 Adding special bits or symbols to the coded information in individual solid state devices in sector programmable memories, e.g. flash disk
    • G06F11/1072 Adding special bits or symbols to the coded information in individual solid state devices in multilevel memories
    • G06F13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F13/38 Information transfer, e.g. on bus
    • G06F3/0604 Improving or facilitating administration, e.g. storage management
    • G06F3/061 Improving I/O performance
    • G06F3/0617 Improving the reliability of storage systems in relation to availability
    • G06F3/0644 Management of space entities, e.g. partitions, extents, pools
    • G06F3/0655 Vertical data movement, i.e. input-output transfer; data movement between one or more hosts and one or more storage devices
    • G06F3/0656 Data buffering arrangements
    • G06F3/0661 Format or protocol conversion arrangements
    • G06F3/0673 Single storage device
    • G06F3/0679 Non-volatile semiconductor memory device, e.g. flash memory, one time programmable memory [OTP]
    • G06F3/0683 Plurality of storage devices
    • G06F3/0685 Hybrid storage combining heterogeneous device types, e.g. hierarchical storage, hybrid arrays
    • G06F3/0688 Non-volatile semiconductor memory arrays
    • G11C29/52 Protection of memory contents; Detection of errors in memory contents
    • H04L1/189 Transmission or retransmission of more than one copy of a message
    • H04L47/12 Avoiding congestion; Recovering from congestion
    • H04L47/25 Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions
    • H04L47/52 Queue scheduling by attributing bandwidth to queues
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Definitions

  • US 2014/0112339 discloses a physical layer coupled to a serial, differential link that is to include a number of lanes.
  • the PHY includes a transmitter and a receiver to be coupled to each lane of the number of lanes.
  • US 2013/0195210 discloses devices and systems for transmitting data packets over a chip-to-chip communications link.
  • the present disclosure generally relates to a scalable memory system protocol. That is, the scalable memory system protocol may adjust certain operations based on characteristics of the data packets (e.g., requests, responses) being transferred.
  • the scalable memory system protocol (“scalable protocol”) may be a packet-based protocol that enables an efficient (e.g., power efficient, bit efficient) transmittal of packets of data between memory devices, computing devices, and the like.
  • the scalable protocol may be implemented in a number of combinations with various types of memory and processors such as Automata processors, a Processor-in-Memory, network devices, storage appliances, hierarchical memory, abstracted memory, and the like.
  • the scalable protocol may be designed to facilitate communication of data packets between various memory and processors while maintaining a lowest reasonable scalable protocol overhead.
  • the scalable protocol may be designed to provide a bit efficient transfer of data packets in that most, if not all, bits transferred via the scalable protocol are directly part of a corresponding data packet being transmitted.
  • the scalable protocol may enable request packets to be packed together without padding a signal with zeros unrelated to the respective packets, thereby maximizing a bit efficiency of data packets being transferred via transmission lanes of a bus.
  • the scalable protocol may, within reason, eliminate certain bits and messages that may be discerned from other bits or messages or may otherwise be unnecessary. For example, the scalable protocol may obviate the need for a device to transmit data related to information that may already be known to the receiver.
  • the scalable protocol may facilitate transactions that are "sent to the memory.”
  • the scalable protocol may also transfer local operations, where internal data flow is relatively low as compared to external control operations, with the external control operations.
  • the scalable protocol may implement an error control strategy that minimizes overhead using a dynamic field size that adjusts based on the amount of data (e.g., payload) being transmitted in the respective packet.
  • the scalable protocol may also be designed to use a minimum number of fields to convey data. As such, the scalable protocol may allow field size tuning and flexibility since every packet may not make use of all available fields.
  • the scalable protocol may also be designed to facilitate the coexistence of low-latency and high-latency data.
  • the scalable protocol may provide the ability to interlace the transmittal of low-latency data between the transmittal of high-latency data.
  • the design of the scalable protocol may be characterized as simple and generic in that the variable packet size may be determined in a single field of the respective packet. Further, the scalable protocol may maintain simplicity in terms of its operations while remaining capable of performing complex transactions and operations. In addition, the scalable protocol may be flexible enough to enable future functions that it may not currently be designed to provide.
  • the scalable protocol may limit the order in which packets are sent using local ordering schemes. That is, the scalable protocol may not enforce certain global synchronization ordering rules or the like. To stay true to the notion that the scalable protocol remains abstract, the scalable protocol may facilitate operations with a special device or with different types of channel properties.
  • FIG. 2 depicts a block diagram of an embodiment of the memory device 14.
  • the memory device 14 may include any storage device designed to retain digital data.
  • the memory device 14 may encompass a wide variety of memory components including volatile memory and non-volatile memory.
  • Volatile memory may include Dynamic Random Access Memory (DRAM) and/or Static Random Access Memory (SRAM).
  • the volatile memory may include a number of memory modules, such as single inline memory modules (SIMMs) or dual inline memory modules (DIMMs).
  • the non-volatile memory may include a read-only memory (ROM), such as an EPROM, and/or flash memory (e.g., NAND) to be used in conjunction with the volatile memory. Additionally, the non-volatile memory may include a high capacity memory such as a tape or disk drive memory. As will be appreciated, the volatile memory or the non-volatile memory may be considered a non-transitory tangible machine-readable medium for storing code (e.g., instructions).
  • the memory device 14 may include a system on chip (SoC) 22 that may be any suitable processor, such as a processor-in-memory (PIM) or a computer processor (CPU), tightly coupled to the memory components stored on the memory device 14.
  • SoC 22 may be on the same silicon chip as the memory components of the memory device 14.
  • the memory SoC 22 may manage the manner in which data requests and responses are transmitted and received between the memory components and the host SoC 12.
  • the memory SoC 22 may control the traffic between the memory components to reduce latency and increase bandwidth.
  • the memory device 14 may also include a buffer 23.
  • the buffer 23 may store one or more packets received by the memory SoC 22. Additional details with regard to how the memory SoC 22 may use the buffer 23 will be described below with reference to FIGS. 15-17 .
  • the memory device 14 may include memory types such as NAND memories 24, Reduced-latency Dynamic random access memory (RLDRAM) 26, double data rate fourth generation synchronous dynamic random-access memory (DDR4) 28, and the like.
  • the host SoC 12 and the memory SoC 22 may perform various operations based on computer-executable instructions provided via memory components, registers, and the like.
  • the memory components or storage may be any suitable articles of manufacture that can serve as media to store processor-executable code, data, or the like. These articles of manufacture may represent computer-readable media (i.e., any suitable form of memory or storage) that may store the processor-executable code used by the host SoC 12 or the memory SoC 22 to perform the presently disclosed techniques.
  • the memory and the storage may also be used to store the data, analysis of the data, and the like.
  • the memory and the storage may represent non-transitory computer-readable media (i.e., any suitable form of memory or storage) that may store the processor-executable code used by the host SoC 12 or the memory SoC 22 to perform various techniques described herein. It should be noted that non-transitory merely indicates that the media is tangible and not a signal.
  • the scalable protocol may facilitate communication between any two devices, such as communications between two processors, two memory modules, a processor and a memory module, and the like.
  • the memory SoC 22 may send packets of data structured according to a packet level view of a packet 30 illustrated in FIG. 3 .
  • the packet 30 may include a transaction type field 32, a payload field 34, and an error control code (ECC) field 36.
  • the transaction type field 32 may include data indicative of the type of transmittance, a type of packet being transmitted, or both.
  • the transaction type field 32 may also indicate a packet size that specifies the number of bits in the data payload and the number of bits in the ECC field, thereby indicating the number of bits in the entire packet.
  • the transaction type field 32 may indicate the size of the payload field 34 and the ECC field 36 in an indirect manner.
  • the data stored in the transaction type field 32 may serve as an index to a lookup table.
  • the lookup table may provide information regarding the sizes of the payload field 34 and the ECC field 36.
  • the memory SoC 22 may, in one example, receive the packet 30 and use the data stored in the transaction type field 32 as an index to a lookup table that may be stored within the memory device 14 to determine the sizes of the payload field 34 and the ECC field 36.
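As a rough illustration of the lookup described above, the receiving component can derive the total packet length from the transaction type field alone. The sketch below is hypothetical: the codes and field widths shown are assumptions made for illustration, not values specified by the protocol.

```python
# Hypothetical transaction-type lookup table: code -> payload and ECC widths.
TRANSACTION_TABLE = {
    0b00000: {"name": "8uRead",  "payload_bits": 58,  "ecc_bits": 6},
    0b00001: {"name": "message", "payload_bits": 58,  "ecc_bits": 6},
    0b00111: {"name": "32uData", "payload_bits": 314, "ecc_bits": 15},
}

def packet_size_bits(transaction_type: int) -> int:
    """Total packet size implied by the 5-bit transaction type field."""
    entry = TRANSACTION_TABLE[transaction_type]
    return 5 + entry["payload_bits"] + entry["ecc_bits"]  # type field + payload + ECC
```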
  • the transaction type field 32 may specify different types of packets based on whether the packet is being transmitted on a request bus Q or a response bus S, which may include the channels 16, the channels 29, or the like.
  • a request bus Q and the response bus S may be separate, unidirectional, or common inputs/outputs.
  • the request bus Q generally includes q lanes
  • the response bus S generally includes s lanes.
  • Example transaction type fields 32 for packets 30 transmitted on the request bus Q may include read operations (e.g., 8uRead, 8uRead2, varRead, where u might be an 8-bit unit or a 9-bit unit or possibly a non-integer unit size of data), message data (e.g., message), read-modify-write (RMW) operations (e.g., RMW1A, RMW2A, RMW3A, RMW4A), datasets (e.g., 32uData, 64uData, 128uData, 256uData), pattern write operations (e.g., 8uPatternWrite, 16uPatternWrite), write-with-enable operations (e.g., 8uWriteWithEnables, 16uWriteWithEnables), write operations (e.g., 8uWrite, 16uWrite, 32Write, 48uWrite, 64Write, 80uWrite, 96uWrite, 112uWrite, 128Write, 256Write), and the like.
  • Providing 32Write operations and 64Write operations may provide more flexibility to a system designer in picking a maximum packet size.
  • the scalable protocol may, in one embodiment, have a limit of 256Unit, but using a smaller maximum packet size may help with system latency. It should be understood that the difference between 32uWrite and 32Write is that 32uWrite is a single fixed size and the TransactionSize is not included in the packet. On the other hand, 32Write includes a TransactionSize and thus can involve additional 32U chunks of data, not just the 32U chunk included in the original request packet.
  • the packets 30 transmitted via the request bus Q may include a total of 26 native transactions (e.g., 8uRead, message, RMW1A, etc.), each of which may be represented using a 5-bit field for global systems (i.e., systems that include numerous CPU modules and/or numerous memory device modules in which packets may be relayed from unit to unit) or local systems (i.e., systems that include few modules in which packets move point to point between units without relaying).
  • the transaction type field 32 for a packet 30 on the request bus Q may be 5 bits.
  • example transaction type fields 32 for packets 30 transmitted on the response bus S may include message data (e.g., message), datasets (e.g., 8uData, 16uData, 32uData, 48uData, 64uData, 80uData, 96uData, 112uData, 128uData, 256uData), and the like.
  • the packets 30 transmitted via the response bus S may include a total of 11 native transactions (e.g., message, 8uData, etc.), each of which may be represented using a 4-bit or 5-bit field for a local system.
  • the transaction type field 32 for a packet 30 on the response bus S may be 4 bits.
  • the total number of transaction types used by the request bus Q and the response bus S may be 32. These 32 transaction types may thus be represented in a 5-bit field. Additional details regarding the transaction types will be discussed further below.
  • the ECC field 36 may include the error control code to determine whether the packet 30 received by the receiving component includes any errors.
  • the error control code may include various algorithms, such as adding redundant data or parity data to a message, such that the original data may be recovered by the receiving component even when a number of errors were introduced, either during the process of transmission or in storage.
  • the error control code may provide the ability to detect an error within the limits of the code and indicate a further action, such as retransmitting the errant packet, when the error is detected.
  • the scalable protocol may be employed in a system having one or more request bus Q transactions and one or more response bus S transactions.
  • Although the request bus Q and the response bus S have been described above as having a 5-bit field and a 4-bit field, respectively, it should be noted that the request bus Q and the response bus S may be designed to have a variety of different bit sizes.
  • request bus Q transactions may be indicated using a 5-bit field (e.g., 00000, 00001, ..., 11110, 11111), such that the possible transaction types associated with the 5-bit field are as follows (where the data unit u size is 8 bits):
  • the packet 30 may include the ECC field 36, which may be a fixed size as in conventional protocols. However, as will be appreciated, in certain embodiments, the ECC field 36 may be a variable size as will be discussed in greater detail below.
  • the example response bus S transactions above are listed in order of the ensuing packet size assuming a 5-bit transaction type on the request bus Q, a 4-bit transaction type on response bus S, a 4-bit transaction size, a 3-bit window, a 48-bit address, a 7-bit data sequence number, and extra bits in the data field which are stated specifically for each transaction type.
  • the transaction window may provide information related to a certain set of rules of engagement for each particular transaction.
  • the transaction window data may specify a set of lanes of a physical bus (e.g., channels 29) being used to transmit and receive packets for particular transactions.
  • the set of lanes specified by the transaction window may be referred to as a virtual channel accessible to the memory device 14.
  • the channels 29 described herein includes one or more lanes in which data may be transferred.
  • the scalable protocol may better manage the transmission of packets between processors.
  • the scalable protocol may designate a transaction window for each memory device.
  • the scalable protocol may use two fields to designate each transaction window: a 48-bit Address and a 3-bit Window (i.e., addressing Windows 0 through 7).
  • a transaction window field 42 and an address window field 44 may be part of the payload field 34.
  • the transaction window field 42 may specify a designated transaction window and the address window field 44 may specify the 48-bit address associated with the specified transaction window.
  • the 48-bit address may be a virtual address assigned to a virtual channel (i.e., window).
  • the virtual address space may reference a physical address located on a hard disk drive or some other storage device. As such, the memory device may have the ability to store more data than physically available.
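To make the window/address pairing concrete, the sketch below packs the 3-bit transaction window field 42 and the 48-bit address field 44 into a single payload value. The bit ordering is an assumption made only for illustration.

```python
# Pack and unpack the (window, address) pair carried in the payload.
def encode_window_and_address(window: int, address: int) -> int:
    assert 0 <= window < 8            # Windows 0 through 7 (3-bit field)
    assert 0 <= address < (1 << 48)   # 48-bit virtual address
    return (window << 48) | address

def decode_window_and_address(value: int) -> tuple[int, int]:
    return (value >> 48) & 0x7, value & ((1 << 48) - 1)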
  • the packet may include a start bit 46 and a level of indirection field 48.
  • the start bit 46 may indicate the beginning of a packet in a stream of bits.
  • the level of indirection field 48 may be part of the payload field 34 and may provide a value that indicates a number of levels of indirection the respective transaction may include. Additional details regarding the start bit field 46 and the level of indirection field 48 will be discussed in greater detail in other sections below.
  • each type of memory device may be assigned to a different transaction window.
  • DRAM0 may be assigned into Window0
  • DRAM1 into Window1
  • DRAM2 into Window2
  • NAND0 into Window3
  • SRAM buffers and control registers into Window7.
  • transactions 1, 3-6, 8, and 9 are part of Window0, which corresponds to a DRAM memory device.
  • Transactions 2 and 7, on the other hand, are part of Window3, which corresponds to a NAND memory device.
  • the receiving component may respond to the received requests using ordering rules established according to the respective transaction windows specified for each transaction. As such, the receiving component may use the transaction windows to provide a local ordering protocol between the transmitting component and the receiving component.
  • the ordering rules specified for a particular transaction window may be based on the respective latency associated with the respective transaction window. That is, the receiving component may respond to the requests involving lower latencies first before responding to the requests having longer latencies. Since the receiving component may be aware of the latency differences between each transaction window, the receiving component may decide to receive the transactions according to their window designations. As such, referring again to the example transactions described above, the receiving component implementing the scalable protocol may respond to the above requests as follows:
  • the receiving component may first respond to the low-latency requests of Window0 before responding to the higher latency requests of Window3. That is, the long latency requests may be transmitted later than the short latency requests.
  • As a result, the system bus servicing the requests is not hampered by the presence of different classes of memory on the same bus, and the protocol avoids various elaborate complications, such as adding a field with REQUEST PRIORITY.
  • the scalable protocol provides a complex system operation using a minimal number of bits in a relatively simple manner.
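The window-based ordering above can be sketched as a simple scheduling rule: requests targeting lower-latency windows are serviced before requests targeting higher-latency windows, while order within a window is preserved. The latency ranking per window below is an assumed configuration, not a field carried in the packet.

```python
# Assumed latency ranking: e.g., Window0 = DRAM (fast), Window3 = NAND (slow).
WINDOW_LATENCY_RANK = {0: 0, 3: 1}

def service_order(requests):
    """Order requests lowest-latency window first; a stable sort preserves
    the arrival order of requests within the same window."""
    return sorted(requests, key=lambda r: WINDOW_LATENCY_RANK.get(r["window"], 99))
```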
  • the receiving component may employ a local ordering scheme based on a corresponding transaction window specified for each transaction. For the following transaction:
  • Another feature associated with the transaction window includes a simple system-level addressability of other spaces such as Window0 SRAM and system control registers without creating additional commands in the protocol. That is, SRAM and system control registers may be addressed by simply using Window0.
  • Prior protocols may use additional commands such as register.read and register.write to interact with these types of memories.
  • the same read and write commands used for other memory devices may also be used for SRAM and system control registers. That is, the read and write commands may simply point to an appropriate window.
  • the scalable protocol may employ fewer commands, thereby reducing the number of bits used in the protocol.
  • a typical DDR3 DRAM may include eight banks, and an internal bus may include eight such DRAMs.
  • the eight DRAMS may be organized such that Window1 represents bank 0 of a group of eight DDR3 DRAMs and Window2 provides access to bank 1 of this same group.
  • each window may specify a particular virtual address space of each DRAM.
  • suitable grouping methods are available since there could be any number of DRAMs grouped in a lock-step operation, each with pages, banks and ranks.
  • NANDs may also be grouped with pages, planes, and blocks.
  • multichannel devices can be further separated per channel and various aggregations thereof. Generally, the grouping options may be determined based on a complexity of logic chip design.
  • the scalable protocol may use the transaction windows to establish predictable data ordering in a system that contains memories that have different latencies.
  • the scalable protocol may support high and low priority requests without having an explicit protocol field that specifies how the high and low priority requests are ordered.
  • FIG. 5 illustrates a flow chart of a method 50 for assigning transaction windows for various types of memories that are part of the memory device 14.
  • Although the method 50 is depicted in a particular order, it should be noted that the method 50 may be performed in any suitable order, and thus, is not limited to the order depicted in the figure. Additionally, the following description of the method 50 will be described as being performed by the memory SoC 22 for discussion purposes. As such, any suitable processor that is communicatively coupled to various types of memories may perform the operations described in the method 50.
  • the memory SoC 22 may receive an initialization signal from registers or other memory components stored within the memory SoC 22 itself.
  • the initialization signal may be received by the memory SoC 22 upon power up or when the memory device 14 initially receives power.
  • the memory SoC 22 may determine the memory types that it may be able to access. That is, the memory SoC 22 may scan its communication lanes (e.g., channels 29) and identify the different types of memories that may be communicatively coupled to the memory SoC 22. Referring back to the example memory device 14 depicted in FIG. 2 , the memory SoC 22 may determine that the RLDRAM 26, the DDR4 28, and the NAND 24 memory types are coupled to the memory SoC 22.
  • the memory SoC 22 may determine that the RLDRAM 26, the DDR4 28, and the NAND 24 memory types are coupled to the memory SoC 22.
  • the memory SoC 22 may, at block 58, assign a transaction window to each memory type identified at block 54 based on the respective capabilities of each memory type. Generally, the memory SoC 22 may assign each similar memory type to the same transaction window. That is, since each similar memory type has similar capabilities, the memory SoC 22 may assign the memory type to the same transaction window. For example, referring again to the example memory device 14 of FIG. 2 , the memory SoC 22 may assign the two DDR4 28 memories to the same transaction window because they are identical memory types. In the same manner, if two different memory types have a certain number of similar capabilities, the memory SoC 22 may also assign the two memory types to the same transaction window.
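A minimal sketch of blocks 54 through 58 of method 50, under the assumption that discovered memory types are reported as simple strings: identical (or sufficiently similar) types share one transaction window.

```python
# Assign each discovered memory type to a transaction window; identical types
# share a window, mirroring block 58 of method 50.
def assign_windows(discovered_memory_types):
    windows = {}
    next_window = 0
    for mem_type in discovered_memory_types:
        if mem_type not in windows:          # similar/identical types share a window
            windows[mem_type] = next_window
            next_window += 1
    return windows

# assign_windows(["DDR4", "DDR4", "RLDRAM", "NAND"])
# -> {"DDR4": 0, "RLDRAM": 1, "NAND": 2}
```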
  • the scalable protocol may include other optional fields into the packet 30 to condition a request, such as a read, write, move, read-modify-write, and the like.
  • a condition may include indicating a number of levels of indirection to apply to a request.
  • the requesting component may initially send a request for the particular dataset with a first pointer.
  • the requesting component may receive the second pointer.
  • the requesting component may then send a second request for the particular dataset with the second pointer. This process may continue until the requesting component receives the particular dataset.
  • the traffic on the request bus Q may involve multiple requests before actually receiving the dataset requested by one single initial request.
  • the scalable protocol may specify within a design of an application-specific integrated circuit (ASIC), the memory SoC 22, the host SoC 12, or the like that implements the scalable protocol an indication of a number of pointers that the requesting component may receive before actually receiving the requested data.
  • the memory system implementing the scalable protocol may identify the pointer chain between the original request and the location of the data and may service the request to the requested data based on the initial request from the requesting component. That is, one request, involving any number of levels of indirection from the requesting component may result in receiving just one response that includes the requested data.
  • Binary 11 may indicate 3 levels of indirection. As such, the supplied address may point to Address2, which may point to Address3, which may point to Address4, which may include the data content.
  • the memory system implementing the scalable protocol may provide the data content to the requesting component as the result of the read request.
  • the number of bits (e.g., size) used by the level of indirection field 48 may be determined based on a preference provided by the host SoC 12. For instance, upon power up, the host SoC 12 may discover the memory SoC 22 and determine that the memory SoC 22 is operating using the scalable protocol described herein. As such, the host SoC 12 may determine a maximum number of levels of indirection that it may be able to accommodate without compromising its performance. The maximum number of levels of indirection may be determined based on the write and/or read latencies of the host SoC 12 or other operating parameters of the host SoC 12.
  • the memory SoC 22 may determine the cause for the packet 30 to be transmitted. As such, the memory SoC 22 may determine what software command was used for the transfer of the packet 30.
  • the software command that generates the packet may correspond to a command to look up a pointer of a pointer, for example.
  • the memory SoC 22 may interpret this command as having two levels of indirection and thus may provide a binary value of 10 in the level of indirection field 48 when preparing the packet 30 for transmission.
  • the levels of indirection may be useful for various types of operations.
  • arrays of arbitrary dimensions may use levels of indirection to help requesting components identify the content of their respective requests without adding unnecessary traffic to the respective bus.
  • a 3-dimensional array may use three pointers to access data. Records of some defined structures may use pointers.
  • One example of such a record may include linked lists that have a head and tail pointer for every structure in the list. For linked lists, the abstraction of levels of indirection may enable the parsing of the linked list to occur more efficiently.
  • the memory system may retrieve the requested data (e.g., the 8th element of the list) using the single request provided by the requesting component.
  • the memory system may parse each of the 8 levels of indirection to determine the location of the requested data.
  • the memory system may provide the requesting component the requested data, thus limiting the bus traffic to one request from the requesting component and one response from the location of the requested data.
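The resolution of the level of indirection field can be sketched as the memory system walking the pointer chain itself, so one request yields one response. The accessor read_word below is a hypothetical helper standing in for whatever mechanism reads a word from the addressed memory.

```python
# Follow `levels` pointers starting at `address`, then return the data content.
def resolve_indirection(address, levels, read_word):
    for _ in range(levels):
        address = read_word(address)   # each hop returns the next pointer
    return read_word(address)          # final access returns the data content
```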
  • Another technique for reducing bus traffic may include not acknowledging received packets. That is, in conventional protocols, a recipient component may send an acknowledgment packet back to the transmitting component for each packet it receives. Since the vast majority of transmitted packets are received by the corresponding recipient component, sending acknowledgment packets may add to the traffic on the respective bus without providing much of a benefit.
  • the recipient component may transmit an acknowledge bit indicating success for 1×10^10 packets and 1 packet indicating an error. Effectively, the recipient component may have sent about 1×10^10 bits to indicate one error.
  • BER refers to the Bit Error Rate.
  • upon receiving each packet of the data, the request bus Q does not send an acknowledgement packet indicating that the packet was received successfully.
  • the response bus S may include two stages for the operations. That is, the response bus S may indicate that the message is ready and then the response bus S may send the corresponding data related to the read request.
  • the recipient component may still receive packets that have errors. As such, the recipient component may notify the transmitting component that the packet has not been received or that the received packet contains an error by sending a NOT_ACKNOWLEDGE packet to the transmitting component. In addition to indicating that the sent packet has not been received, the NOT_ACKNOWLEDGE packet may indicate a most recent known-to-be-good bus transaction. As such, when an error is detected via an ECC subsystem, the packet having the error should be re-transmitted. The recipient component may identify to the transmitting component the most recent successful bus transaction as a reference so that a retransmission can occur.
  • the scalable protocol may use 4 relevant fields to indicate to a transmitting component the identity of the last known-to-be-good bus transaction.
  • the relevant fields include a window, an address, a transaction, and an optional data sequence number. These four fields may identify any request/response in the system.
  • an additional ECC field may be used to detect an error in the transmission (e.g., a code which is guaranteed to detect the presence of 1, 2, 3, 4, or 5 random errors in the transmission packet, also known as an HD6 code, as will be described in more detail below).
  • the recipient component may send a NOT_ACKNOWLEDGE message to the transmitting component.
  • this packet may have one of many possible field sizes.
  • the NOT_ACKNOWLEDGE message may include a 4-bit transaction type, a 3-bit window, a 48-bit address, a 7-bit data sequence number, and a 5-bit original transaction type, for a sum of 67 bits.
  • a 15-bit ECC field may be added, thereby bringing the total to 82 bits.
  • 82 bits is significantly lower than the 1×10^10 bits sent for indicating one error in 1×10^10 packets, and thus is a more efficient way to indicate errant packets.
  • the data sequence number mentioned above may identify the erroneous packet. Additional details regarding the data sequence number and how it may be generated will be discussed below with reference to FIGS. 12-14 .
  • the recipient component may accurately indicate to the transmitting component the correct point at which a retransmission may occur.
  • the recipient component may incorporate one exception to the above rule when there has been no good transaction (e.g., the first transaction since power-on or reset was unsuccessful). In this case, the recipient component may populate all fields with 0's, such that all elements of the system will interpret the field of 0's as a "first transaction."
  • the scalable protocol may include an optional data sequence number field.
  • the optional data sequence number field may include 7 bits that reference any one of these 128 data packets. In this manner, if a NOT_ACKNOWLEDGE message is issued, the NOT_ACKNOWLEDGE message may correctly identify an exact point at which the transmission became unsuccessful.
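As a worked size check of the NOT_ACKNOWLEDGE packet described above, the field widths listed in the text sum as follows; only the field names used as dictionary keys are invented here for readability.

```python
# Field widths for the NOT_ACKNOWLEDGE packet as stated in the text.
NACK_FIELD_BITS = {
    "transaction_type": 4,
    "window": 3,
    "address": 48,
    "data_sequence_number": 7,
    "original_transaction_type": 5,
}
payload_bits = sum(NACK_FIELD_BITS.values())  # 67 bits
total_bits = payload_bits + 15                # plus the 15-bit ECC field -> 82 bits
```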
  • with a minimum TransactionSize of 8 bytes, TransactionSize values 0 through 15 may be 8 bytes, 16 bytes, 32 bytes, 48 bytes, 64 bytes, 80 bytes, 96 bytes, 112 bytes, and 128 bytes, as opposed to 2^N bytes, to conserve bits on the lower end.
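The non-power-of-two TransactionSize mapping above can be expressed as a simple lookup; only the sizes actually listed in the text are included here, and the remaining codes are left undefined rather than assumed.

```python
# TransactionSize code -> transfer size in bytes (values listed in the text).
TRANSACTION_SIZE_BYTES = [8, 16, 32, 48, 64, 80, 96, 112, 128]

def transaction_size_to_bytes(code: int) -> int:
    return TRANSACTION_SIZE_BYTES[code]
```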
  • the scalable protocol may employ data packing techniques when transmitting packets using any type of bus communication.
  • packet sizes are determined based on the type of request or response being sent, the data being sent, the operations being requested, etc., it may be difficult to anticipate what type of data channels to use before knowing more details regarding the packet.
  • the scalable protocol may be designed to maximize the use of the available channels by packing the data packets being transmitted together without padding each individual packet with zeros, as done with conventional protocols.
  • the term "without padding" means that between the transmission of data packets, zeros (i.e., bits having the value of zero) are not transmitted across a respective channel. Instead, the next scheduled packet ready to be transmitted will be transmitted on the clock cycle immediately after the previous packet is transmitted.
  • a request bus Q that includes 10 signal lanes and a response bus S that includes 8 signal lanes.
  • the present example assumes that there is no data encoding and that the transactions include only simple bit transmissions (i.e., no symbol transmissions). If the sizes of occupancy on the Q bus are: 4.3, 7.3, 9.7, 13.5, 14.3, 14.9, 20.0, 20.1, 21.6, 33.0, 36.2, 58.8, 65.2, 105.4, 110.5, and 123.0, a conventional protocol may pad the values having fractional components associated with them.
  • the conventional protocol may add zeros to the remaining portion of each fractional value such that the sizes of occupancy on the Q bus become 5, 8, 10, 14, 15, 15, 20, 21, 22, 33, 37, 59, 66, 106, 111, and 123, respectively.
  • zeros may be added to the transmission, which may adversely impact an overall bus utilization efficiency because the transmitted zeros are not truly representative of data being transmitted. In this manner, these zeros utilize the bus without conveying information, thereby reducing the bus utilization efficiency.
  • the scalable protocol may allow requests to be packed together.
  • the bus signal is thus left without padded zeros.
  • FIG. 8 illustrates a lane packing example 61 in which the scalable protocol packs two 18-bit requests together.
  • the scalable protocol may regard transmissions as symbols instead of bits. In the example of FIG. 8, one bit may represent one symbol. Since the bus 62 in FIG. 8 includes 12 lanes (i.e., it may transmit 12 bits in one flit), the scalable protocol may transmit the two 18-bit requests by packing the requests together. That is, a second 18-bit request 66 may be transmitted immediately after a first 18-bit request 64. As such, the transmission bus includes no wasted bits (e.g., padded zeros).
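To illustrate the FIG. 8 example, the sketch below packs two 18-bit requests onto a 12-lane bus so that the second request begins in the same flit in which the first one ends. The function name and bit patterns are illustrative assumptions, not the patent's implementation.

```python
def pack_into_flits(requests_bits, lanes=12):
    """Concatenate request bit strings and slice them into flits of `lanes` bits.

    No zero padding is inserted between requests; only the final flit of the
    stream may be partially filled.
    """
    stream = "".join(requests_bits)
    return [stream[i:i + lanes] for i in range(0, len(stream), lanes)]

# Two 18-bit requests with arbitrary example bit patterns.
first_request = "1" + "0" * 17
second_request = "1" + "1" * 17

flits = pack_into_flits([first_request, second_request])
# 36 bits on a 12-lane bus fill exactly 3 flits; the second request begins
# in the middle of flit 1 rather than waiting for a new flit boundary.
print(flits)
```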
  • the transmitting component may start each new packet 30 with a start bit (e.g., a value of 1), which may be specified in the start bit field 46, as mentioned above.
  • when a receiving component receives the packets packed together, it may identify the beginning of each new packet, determine the transaction type of the packet based on the transaction type field 32, the transaction window based on the transaction window field 42, the address for the operation based on the address field 44, the number of levels of indirection based on the level of indirection field 48, and the error checking code based on the ECC field 36.
  • FIG. 9 illustrates a flow chart of a method 70 for generating a packet for transmission, such that the packet can be transmitted using the lane-packing scheme described above.
  • the method 70 will be described below as being performed by the memory SoC 22 (i.e., the transmitting/requesting component), but it should be understood that any processor that is part of the memory device 14 may perform the operations described in the method 70.
  • the memory SoC 22 may determine a transaction window based on the memory type associated with the requested data operation. That is, the memory SoC 22 may determine what type of memory will be accessed when performing the data operation and determine a corresponding transaction window based on the type of memory using a look-up table or the like. In addition to the transaction window, the memory SoC 22 may determine a memory address that refers to a location of data related to the data operation and the transaction window. For example, for a read operation, the address may refer to the location of the data that is to be read from a specified memory.
  • the memory SoC 22 may generate an error control code (ECC) value for the packet 30.
  • the ECC value may be used by the receiving component to ensure that the packet 30 is received without error.
  • the memory SoC 22 may first determine an appropriate error control code (ECC) algorithm to use to encode the packet 30.
  • the software application requesting the transmission may specify the ECC algorithm to use.
  • the host SoC 12 or the memory SoC 22 may specify a particular ECC algorithm to use to encode and decode all of the transmitted and received packets.
  • the ECC value for the packet 30 may be determined based on the bits provided in the transaction type field 32 and the payload field 34.
  • the memory SoC 22 may, at block 82, generate the packet 30 according to the values determined at blocks 72, 74, 76, and 80.
  • the memory SoC 22 may initially provide a 1 for the start bit field 46 to indicate to a receiving component that a new packet is being transmitted. After inserting the 1 in the start bit field 46, the memory SoC 22 may provide a value that represents the transaction type identified at 74 in the transaction type field 32.
  • the memory SoC 22 may then generate the payload field 34 of the packet 30 using the transaction window and address determined at block 76 and the number of levels of indirection determined at block 78. That is, the memory SoC 22 may enter the transaction window value after the transaction type field 32 and into the transaction window field 42. The memory SoC 22 may then enter the address for the data operation into the address field 44 and the number of levels of indirection into the level of indirection field 48.
  • the memory SoC 22 may, at block 84, transmit the packet 30 via the channels 16, the channels 29, or the like depending on the destination of the packet 30. After the generated packet 30 is transmitted, the memory SoC 22 may proceed to block 86 and determine whether the next packet to be transmitted is ready for transmission. Generally, the next packet for transmission may be generated according to the process described above with regard to blocks 72-82. If the next packet is ready for transmission, the memory SoC 22 may proceed to block 84 again and transmit the next packet immediately after the previous packet is transmitted. By transmitting each subsequent packet immediately after another packet is transmitted, the memory SoC 22 may transmit packets according to a packed lane scheme, which does not involve padding zeros on a bus when all of the lanes of a bus are not utilized.
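A minimal sketch of the packet-generation flow of method 70 is shown below. The field widths, helper names, and the ECC stub are illustrative assumptions chosen for this sketch; the patent describes variable-width fields and a negotiable ECC algorithm.

```python
def compute_ecc(bits, ecc_bits=13):
    # Placeholder: a real implementation would apply the negotiated ECC
    # algorithm over the transaction type and payload bits.
    return "0" * ecc_bits

def build_packet(transaction_type, window, address, indirection_levels,
                 type_bits=5, window_bits=3, addr_bits=36, lvl_bits=2):
    """Assemble a packet bit string: start bit, transaction type, payload
    (window, address, levels of indirection), then the ECC value."""
    def field(value, width):
        return format(value, "0{}b".format(width))

    start = "1"                                        # start bit field 46
    ttype = field(transaction_type, type_bits)         # transaction type field 32
    payload = (field(window, window_bits)              # transaction window field 42
               + field(address, addr_bits)             # address field 44
               + field(indirection_levels, lvl_bits))  # level of indirection field 48
    return start + ttype + payload + compute_ecc(ttype + payload)  # ECC field 36

# Example: a hypothetical request with transaction type 3 in window 1.
packet = build_packet(transaction_type=3, window=1, address=0x123456, indirection_levels=0)
```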
  • the number of UIs used to transmit the same packets is respectively 10 (80/8), 10.5 (84/8), 18.125 (145/8), 34.25 (274/8), 50.25 (402/8), 66.375 (531/8), 82.375 (659/8), 98.375 (787/8), 114.375 (915/8), 130.5 (1044/8), and 258.5 (2068/8).
  • the average savings for randomly selected packet sizes is 0.5 UI per transaction, hence the bit savings grows as the number of lanes is increased.
  • This example is indicative of any width of the request bus Q or the response bus S, whether they are equal or unequal widths on the two buses.
  • the host SoC 12 or any other receiver may use the following transmission/receiving scheme: receive the packet 30, parse contents of the packet 30 to identify the transaction type, size of the payload, and a location of the ECC field 36 within the packet 30, verify a correctness of the packet 30 based on the ECC, and then act upon the transmission with certitude.
  • the scalable protocol may be designed to facilitate a maximum bit efficiency.
  • the packet 30 may have an arbitrary size that does not correspond to an integer multiple of the utilized physical bus.
  • the transmission of arbitrarily sized packets maintains bit efficiency by packing the packets tightly together, such that each succeeding packet is transmitted immediately after the preceding packet without padding either packet with zeros.
  • the receiver (e.g., the host SoC 12) may implement certain techniques described herein for parsing the received packets.
  • the scalable protocol may specify a parsing method for the receiver to employ on received packets. This parsing method may include shift operations, error detection, and buffer management as pipelined operations at the head of the logical operations utilized in a system implementation.
  • one flit is considered to be one unit interval of data being present on a bus. That is, one flit may include 8 bits of data being transferred via the bus.
  • the smallest packet, with a 36-bit Address, a 3-bit Window, and Hamming density (HD6) error coverage of 59 bits, may include a 5-bit Transaction Type, a 41-bit data payload, and a 13-bit ECC.
  • the receiver may first receive the 160-bit value immediately available from the FIFO. In the particular example described above, the entire first packet resides within that 160-bit zone.
  • the receiver may then know with certainty that the transaction type value was correct and hence the receiver may have the proper framing of the received packet.
  • the 59 known-to-be-correct bits may then be forwarded to the next pipeline stage for further packet processing (i.e., determine the exact request being made and process the request.)
  • the receiver may then barrel-shift the remaining 101 bits of the 160-bit wide FIFO to align to bit 0 and repeat the above process.
  • in some cases, the receiver may have too little data available to parse the next packet (i.e., everything from the transaction type field 32 through the payload field 34 and the ECC field 36 should be available before parsing).
  • the receiver may continue fetching information until it is all available.
  • although large packets may exceed a single 160-bit section, since the receiver knows from the transaction type where the ECC starts and ends, the receiver may forward the ECC bits to the appropriate ECC logical blocks.
  • since the transaction type is at the head of the packet, the receiver easily knows where to look for it. Further, the receiver may determine that the payload field 34 includes everything between the transaction type field 32 and the ECC field 36. Upon identifying the payload field 34, the receiver may send the data payload to the appropriate ECC logical blocks.
  • the ECC logic may be implemented in situ at register bits that temporarily store the data, depending on physical layout optimization uses.
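The receive-side parsing just described can be sketched as follows. This is an illustrative reconstruction under simplifying assumptions: the 160-bit staging zone is modeled as a string of bits, every transaction type is assumed to map to the 59-bit smallest packet, and the ECC check is a stub.

```python
def packet_length_for(ttype_bits):
    # Placeholder lookup: assume the 59-bit smallest packet (5-bit transaction
    # type + 41-bit payload + 13-bit ECC).  A real table would vary by type.
    return 59

def ecc_ok(packet_bits):
    # Placeholder for the ECC verification described in the text.
    return True

def parse_packets(bitstream, stage_bits=160):
    """Read the transaction type at bit 0, derive the packet length, verify
    the ECC, forward the packet, and barrel-shift the remainder to bit 0."""
    staged, pos, packets = "", 0, []
    while True:
        need = stage_bits - len(staged)          # top up the 160-bit staging zone
        staged += bitstream[pos:pos + need]
        pos += need
        if len(staged) < 6:                      # not even a transaction type yet
            break
        length = packet_length_for(staged[:5])
        if len(staged) < length:
            if pos >= len(bitstream):
                break                            # incomplete packet at end of stream
            continue                             # keep fetching until it is all available
        packet, staged = staged[:length], staged[length:]
        if ecc_ok(packet):                       # forward the known-to-be-correct bits
            packets.append(packet)               # e.g. 59 bits on, 101 bits remain staged
    return packets

stream = ("10110" + "0" * 41 + "1" * 13) * 3     # three 59-bit packets back to back
assert len(parse_packets(stream)) == 3
```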
  • FIG. 11 illustrates a flow chart of a method 100 that may be employed by a receiving component (e.g., host SoC 12) that receives packets according to the lane-packing scheme mentioned above.
  • the method 100 may be performed by any suitable receiving component that receives packets that have been lane packed according to the embodiments described herein.
  • the host SoC 12 may receive a stream of bits via the bus 62, the channels 16, or the like. As depicted in FIG. 10 , the host SoC 12 may receive a number of bits at a time based on the number of bit lanes available on the bus 62.
  • the host SoC 12 may identify a start bit of a new packet. As such, the host SoC 12 may monitor the stream of bits until it receives a 1. For example, at bit time 0, the host SoC 12 may detect the start bit and begin parsing the first packet 92.
  • the host SoC 12 may determine whether the respective packet is free of errors. If the host SoC 12 verifies that the respective packet is error free, the host SoC 12 returns to block 102 and continues receiving the stream of bits. However, if the host SoC 12 determines that the respective packet is not error free, the host SoC 12 may proceed to block 114 and send a NOT_ACKNOWLEDGE packet back to the component that transmitted the respective packet. As discussed above, the NOT_ACKNOWLEDGE packet may indicate a most recent known-to-be-good bus transaction. As such, the NOT_ACKNOWLEDGE packet may indicate the transaction type and the address of the last successfully received packet. Since the transmitting component knows the order in which each packet was transmitted, the transmitting component may then resend the packet immediately following the packet referenced in the NOT_ACKNOWLEDGE packet.
  • the transmitting component may not disregard, delete, erase, or write over sent packets from its buffer until a certain amount of time has passed after a respective packet has been transmitted.
  • the transmitting component may wait a certain amount of time before it deletes the transmitted packet from its buffer component.
  • Some of the factors involved in determining the expected time for the various operations described above to be performed include the size of the packet being transmitted, the number of lanes on the request bus Q and the response bus S, an amount of time for a UI of data to be transmitted across each bus, a number of pipeline delays that are expected in the receiving component before the receiving component verifies that the received packet is error free, a maximum depth of queues in the transmitting component, information related to a policy of the transmitting component for sending urgent messages (e.g., are urgent messages placed in the front of the queue), and the like. It should be noted that the factors listed above are provided as examples and do not limit the scope of the factors that may be used to determine the expected time for the various operations to be performed.
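The retransmission and buffer-retention behavior described above can be modeled with a small sketch. This is an assumption-laden illustration: the hold time is shown as a simple clock comparison, and the class and record layout are hypothetical.

```python
import time

class RetransmitBuffer:
    """Transmit-side sketch: sent packets are retained until a hold time has
    elapsed, and a NOT_ACKNOWLEDGE naming the last known-to-be-good packet
    causes every later packet to be resent in its original order."""

    def __init__(self, hold_seconds):
        # hold_seconds would be derived from packet size, lane counts, UI time,
        # receiver pipeline depth, queue depth, urgent-message policy, and so on.
        self.hold_seconds = hold_seconds
        self.sent = []                               # (packet_id, packet, sent_at)

    def record(self, packet_id, packet):
        self.sent.append((packet_id, packet, time.monotonic()))

    def expire(self):
        # Packets are only discarded once the expected round-trip window passes.
        now = time.monotonic()
        self.sent = [e for e in self.sent if now - e[2] < self.hold_seconds]

    def resend_after(self, last_good_id):
        """Return the packets transmitted after the last known-to-be-good one."""
        ids = [pid for pid, _, _ in self.sent]
        start = ids.index(last_good_id) + 1 if last_good_id in ids else 0
        return [pkt for _, pkt, _ in self.sent[start:]]
```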
  • although the transaction windows may be used to indicate an order for a given transaction window, in some instances performing the transaction operations according to the order of the respective transaction windows may be undesirable.
  • a DRAM might involve a refresh operation, which cannot be postponed by other DRAM operations.
  • Another example may include when a NAND memory may be shuffling data to prepare for an erase operation.
  • a range of addresses associated with the data being shuffled may be temporarily unavailable if a transaction operation is trying to access the same range of addresses.
  • the receiving component may send a data reorder message when it is desirable to depart from the natural response sequence.
  • the receiving component may determine that reordering may be preferred based on the transaction type indicated in the transaction type field 32. That is, the transaction type field 32 may inherently indicate that a reordering is preferred.
  • the reorder message may be a 64-bit message that includes 16 4-bit order identifiers. These identifiers may indicate the order of the next 16 responses, if there are 16 responses pending.
  • the reorder message may be sent any time that a new ordering is preferred.
  • a new reorder message may be sent.
  • the very next response would be response 0, not response 8, because an order counter is reset to zero any time a reorder message is sent.
  • the new relative order of 0 through 15 may be determined according to the most advantageous ordering.
  • all data may be in a "natural" order of the requests received per window.
  • the scalable protocol may save a large amount of overhead that is otherwise used in conventional protocols.
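The reorder-message format described above (sixteen 4-bit order identifiers in a 64-bit message) can be illustrated as follows; the nibble packing order and function names are assumptions made for this sketch.

```python
def encode_reorder_message(order):
    """Pack up to 16 4-bit order identifiers into a 64-bit reorder message.

    `order` lists the relative positions (0-15) in which the next pending
    responses should be returned.
    """
    if len(order) > 16 or any(not 0 <= v < 16 for v in order):
        raise ValueError("order identifiers must be 4-bit values, at most 16 of them")
    message = 0
    for slot, value in enumerate(order):
        message |= value << (4 * slot)
    return message

def decode_reorder_message(message, count=16):
    return [(message >> (4 * slot)) & 0xF for slot in range(count)]

# Reorder the next 16 pending responses; the order counter resets to zero
# each time a reorder message is sent.
message = encode_reorder_message(list(range(16)))
assert decode_reorder_message(message) == list(range(16))
```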
  • the host SoC 12 may receive a number of packets from the transmitting component (e.g., memory SoC 22).
  • the received packets may generally include operations requested to be performed by the host SoC 12 in a preferred order.
  • the transmitting component (e.g., the memory SoC 22) may send packets that correspond to data operations in a particular order, which may reflect a preferred order of operations.
  • the diagram 140 of FIG. 13 illustrates an example original order of packets received by the host SoC 12 in row 142. As shown in FIG. 13 , ten packets transmitted by the transmitting component may be initially numbered 1-10.
  • the host SoC 12 may determine whether the operations indicated in the received packets should be performed in a different order. That is, for example, if the host SoC 12 is unable to perform a particular operation for some reason (e.g., requested memory address is busy, unavailable, etc.), the host SoC 12 may instead perform a later operation before performing the previously requested operation. If the host SoC 12 determines that the operations should not be performed in a different order, the host SoC 12 may proceed to block 126 and perform the operations of the received packets in the preferred order (e.g., as transmitted by the transmitting component).
  • the host SoC 12 may determine a new order in which to perform the requested operations. To perform operations in a different order, the host SoC 12 may identify a particular packet that corresponds to an operation that may not be performed in the requested order. The host SoC 12 may then determine whether any subsequent operation is dependent on the results of the identified operation. That is, the host SoC 12 may determine whether performing the identified operation at a later time may cause an error in any remaining operations to be performed. In certain embodiments, the host SoC 12 may evaluate the transaction windows of each packet to determine whether operations may be reordered.
  • the host SoC 12 may delay the third Win2 request to perform the first Win3 request because they refer to different transaction windows and thus likely operate on different memory types. Using the transaction windows of each packet, the host SoC 12 may then determine a new order to perform the requested operations.
  • the host SoC 12 may rename a number of packets that are received after a packet immediately preceding the packet that corresponds with the identified operation. In one embodiment, the host SoC 12 may rename the packets according to their current position in the queue. For instance, referring again to FIG. 13 , if the host SoC 12 identifies original packet 5 as a packet containing an operation that should be performed at a later time, the host SoC 12 may rename the packets after packet 4 according to their current position in the queue. As such, packets 5-10 may be renamed to packets 0-5 as illustrated in row 144 of the diagram 140. In this manner, the remaining packets may be renamed according to their relative position in the queue.
  • the host SoC 12 may generate a reorder message that indicates a new order in which the remaining packets will be addressed by the host SoC 12 or according to the order of corresponding operations that will be performed by the host SoC 12.
  • the reorder message may be determined based on the new order determined at block 128 and according to the renamed packets, as provided in block 130. For instance, referring to the example in FIG. 13 again, if the host SoC 12 determined that the original 5 th packet operation should be performed after the original 7 th packet operation, the reorder message may be presented as 1, 2, 3, 0, 4, 5, as shown in row 146. Row 146 indicates the new order of operation according to the renamed packets. For illustrative purposes, row 148 indicates the order in which the reorder message specifies that the remaining packet operations will be according to their original packet numbers.
  • the host SoC 12 may transmit the reorder message to the transmitting component.
  • the transmitting component may use the reorder message to adjust the order in which the response packets transmitted from the host SoC 12 are associated with a respective request packet. That is, the transmitting component may associate each response packet received after the reorder message according to the renamed relative order indicated in the reorder message.
  • the host SoC 12 may provide a reference order to the transmitting component that is relative to the remaining response packets that are to be received by the transmitting component. As such, since the host SoC 12 and the transmitting component may know the order in which packets have already been sent, the packets renamed according to their relative order enables the host SoC 12 to associate the response packets without having to send a packet identification number with each packet, thereby providing a more bit-efficient communication scheme.
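The renaming scheme of FIG. 13 can be reproduced with a short sketch. The function names are illustrative, and the deferred execution order used here (original packets 6, 7, 8, 5, 9, 10) is an assumption chosen because it reproduces the reorder message 1, 2, 3, 0, 4, 5 shown in row 146.

```python
def rename_remaining(original_ids):
    """Rename the packets still in the queue to 0..N-1 by position, as in
    row 144 of FIG. 13 (original packets 5-10 become 0-5)."""
    return {orig: new for new, orig in enumerate(original_ids)}

def reorder_message(renaming, new_order_original_ids):
    """Express a new execution order (given in original packet numbers) in
    terms of the renamed, relative identifiers."""
    return [renaming[orig] for orig in new_order_original_ids]

remaining = [5, 6, 7, 8, 9, 10]                # packets after original packet 4
renaming = rename_remaining(remaining)          # {5: 0, 6: 1, ..., 10: 5}

new_order = [6, 7, 8, 5, 9, 10]                 # assumed order deferring original packet 5
print(reorder_message(renaming, new_order))     # [1, 2, 3, 0, 4, 5]
```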
  • the scalable protocol may determine the order in which transaction operations are performed, as follows. If there are 4 request buses associated with 4 respective response buses, an associated pair of request and response buses may be named by the scalable protocol as a channel. As such, in one embodiment, a transaction operation may be defined as "channel.window.address." Here, the ordering may then be defined as "channel.window.dataSequenceNumber." Oftentimes, just one datum may be part of the transaction operation, such that the data sequence number is often unimportant, save for transaction requests larger than a largest supported packet size. Otherwise, the scalable protocol may follow an ordering within the channel.window. Even when two channels are using the same window, the scalable protocol may not incorporate any ordering between them.
  • the scalable protocol may provide an order within each channel.window combination.
  • the scalable protocol may greatly simplify the operation of the system because channels have the possibility of asynchronous timing inter-relationships.
  • the scalable protocol keeps the ordering simple and also reduces a number of times arbitration may be performed.
  • this ordering technique may also reduce a number of reorder messages that have otherwise been sent.
  • although the scalable protocol has been described as being capable of providing a new relative order for transaction operations being sent, it may be difficult to incorporate this type of reordering scheme in large systems that may have a high frequency of reordering requests. That is, if reorder messages are sent at some high frequency (i.e., above a certain threshold), it may no longer be an efficient use of time and resources to send reorder messages and reorder the transaction operations. In other words, for some types of systems the frequency of data reordering could become so high that the amount of communication between the transmitting component and the receiving component becomes inefficient. For such systems, the scalable protocol may reduce the bit traffic of transaction identifications even when large numbers of reorder events are preferred.
  • the request bus Q sequence number may be denoted as "channel.window.Qseq,” such that Qseq may be assigned in round robin order for each respective channel and respective window, thereby preserving bandwidth by not transmitting known data. For instance, if an order of requests (all on channel 0) is as follows: Win2, Win2, Win2, Win3, Win3, Win2, and Win3 and these are the first transactions, the assigned Qseq numbers appended by the receiver would be: 0, 1, 2, 0, 1, 3, and 2 respectively. That is, each window may be associated with a round robin Qseq sequence based on the receipt of each type (i.e., channel/window) of request.
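The per-window round-robin Qseq assignment is easy to illustrate. The sketch below reproduces the example request sequence on channel 0 and its assigned Qseq numbers; the class name is hypothetical.

```python
from collections import defaultdict

class QseqAssigner:
    """Assign request bus Q sequence numbers in round-robin order per
    (channel, window), as in the channel.window.Qseq scheme above."""

    def __init__(self):
        self.counters = defaultdict(int)

    def assign(self, channel, window):
        key = (channel, window)
        qseq = self.counters[key]
        self.counters[key] += 1
        return qseq

assigner = QseqAssigner()
# Win2, Win2, Win2, Win3, Win3, Win2, Win3 -- all on channel 0.
requests = [(0, 2), (0, 2), (0, 2), (0, 3), (0, 3), (0, 2), (0, 3)]
print([assigner.assign(ch, win) for ch, win in requests])   # [0, 1, 2, 0, 1, 3, 2]
```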
  • the host SoC 12 may determine whether a number of reordering messages transmitted to the transmitting component over some period of time exceeds some threshold.
  • the threshold may be related to a declining performance of the memory device 14, an average number of cycles involved when performing an operation, an average queue depth for each requested operation, or the like.
  • the host SoC 12 may continue sending reorder messages according to the method 120 described above. However, if the host SoC 12 determines that the number of reordering requests is greater than the threshold, the host SoC 12 may proceed to block 164. At block 164, the host SoC 12 may add a sequence value to each received packet in a round robin fashion according to the transaction window of each packet.
  • the transmitting component may store an order in which each packet has been transmitted, such that the order of transmission may correspond to the order in which each packet was received.
  • the receiving component may use a request bus Q sequence number (Qseq) and a data sequence number (DataSequence) to identify each packet when an error occurs, so that the pipeline may be flushed and the corresponding packets within the pipeline may be resent. For instance, if the error occurred in a packet on the response bus S, a last known-to-be-good packet received by the transmitting component may include a Qseq number in it to use as a reference. As a result of employing this technique, some of the messages are actually now shorter, since a transaction type is not referenced to indicate a transaction.
  • for transaction operations that are reordered at a high frequency, or for systems designed as such, the presently disclosed technique may still be economical as compared with conventional protocols, which may add 16 bits to every response.
  • since the presently disclosed technique includes a sequence number for each response, the scalable protocol may not issue reorder messages or packets. Further, since each transaction operation is associated with a particular sequence number, the transaction operation may be transmitted in a round robin order to ensure that known data is not transmitted.
  • the scalable protocol may provide a flexible programming option for ordering transaction operations or packets in a system.
  • the flexible programming option (e.g., an ordering effort field) may set a degree of effort that the scalable protocol should use in maintaining the original order of transactions. That is, the flexible ordering effort field may indicate to the scalable protocol how hard it should work to ensure that the packets are transmitted in order.
  • the flexible ordering effort field may be associated with a range of values between a first value that corresponds to keeping every packet in order and a second value that corresponds to allowing anything to be reordered.
  • transaction window 0 may be used as a general purpose control area for memory SoC 22.
  • transaction window 0 may reside in registers, SRAM buffers, cache SRAM, and other addressable control features.
  • the scalable protocol may enable configurable information that can be user programmed.
  • one type of configurable information (e.g., the ordering effort field) may have a large variation in implementations. For instance, the ordering effort may be encoded in a 2-bit field.
  • the ordering zone may be related to a combination of a channel, a system window, and a transaction window (e.g., channel.syswin.window).
  • Channel may be a channel number from which the request was received.
  • System window may be an optional pair of fields that, for example, specifies which SoC in the system originated the request.
  • the host SoC 12 may monitor the capacity of the buffer 23 and determine whether the capacity of the buffer 23 of the receiver is less than or equal to some threshold. If the capacity of the buffer 23 is above the threshold, the host SoC 12 may proceed to block 184 and continue receiving packets at the present transmission rate from the transmitting component.
  • the windowMax field may not be relevant or may be considered to be equal to the channelMax field.
  • different backpressure functions may be defined for each respective transaction window. For instance, consider the following 4 examples of transaction windows that use a variety of different memory types as described below.
  • the receiving component may include a system failsafe mechanism to indicate to the transmitting component that the buffer 23 is about to be overrun or exceed its capacity.
  • the receiving component may send a message similar to the not-acknowledged message described above. This message may have the same effect as the not-acknowledged message except that it may create an entry in a data log of the transmitting component to note that a message was rejected due to the buffer 23 being unable to accept the packet.
  • the transmitting component may determine a reason for the delay in bus traffic.
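A minimal sketch of the buffer-capacity failsafe described above; the threshold, the message name, and the logging are illustrative assumptions.

```python
def accept_packet(buffer_free_bytes, threshold_bytes, send_message, data_log):
    """Receive-side failsafe: when the remaining capacity of the receive buffer
    drops to or below a threshold, reject the packet with a not-acknowledged
    style message and record the rejection in a data log."""
    if buffer_free_bytes <= threshold_bytes:
        send_message("NOT_ACKNOWLEDGE_BUFFER_FULL")   # hypothetical message name
        data_log.append("packet rejected: receive buffer near capacity")
        return False                                  # transmitter should slow down and retry
    return True                                       # keep receiving at the present rate
```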

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Quality & Reliability (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Communication Control (AREA)
  • Information Transfer Systems (AREA)
  • Dram (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Techniques For Improving Reliability Of Storages (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Memory System Of A Hierarchy Structure (AREA)
  • Memory System (AREA)
EP15802720.1A 2014-06-02 2015-06-01 Systems and methods for packing data in a scalable memory system protocol Active EP3149585B1 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201462006668P 2014-06-02 2014-06-02
US14/724,473 US9747048B2 (en) 2014-06-02 2015-05-28 Systems and methods for packing data in a scalable memory system protocol
PCT/US2015/033568 WO2015187574A1 (en) 2014-06-02 2015-06-01 Systems and methods for packing data in a scalable memory system protocol

Publications (3)

Publication Number Publication Date
EP3149585A1 EP3149585A1 (en) 2017-04-05
EP3149585A4 EP3149585A4 (en) 2018-04-11
EP3149585B1 true EP3149585B1 (en) 2020-08-12

Family

ID=54701759

Family Applications (6)

Application Number Title Priority Date Filing Date
EP15802720.1A Active EP3149585B1 (en) 2014-06-02 2015-06-01 Systems and methods for packing data in a scalable memory system protocol
EP15802662.5A Active EP3149592B1 (en) 2014-06-02 2015-06-01 Systems and methods for improving efficiencies of a memory system
EP15802804.3A Active EP3149602B1 (en) 2014-06-02 2015-06-01 Systems and methods for reordering packet transmissions in a scalable memory system protocol
EP15803606.1A Active EP3149599B1 (en) 2014-06-02 2015-06-01 Systems and methods for throttling packet transmission in a scalable memory system protocol
EP15803148.4A Active EP3149586B1 (en) 2014-06-02 2015-06-01 Systems and methods for transmitting packets in a scalable memory system protocol
EP15802464.6A Active EP3149595B1 (en) 2014-06-02 2015-06-01 Systems and methods for segmenting data structures in a memory system

Family Applications After (5)

Application Number Title Priority Date Filing Date
EP15802662.5A Active EP3149592B1 (en) 2014-06-02 2015-06-01 Systems and methods for improving efficiencies of a memory system
EP15802804.3A Active EP3149602B1 (en) 2014-06-02 2015-06-01 Systems and methods for reordering packet transmissions in a scalable memory system protocol
EP15803606.1A Active EP3149599B1 (en) 2014-06-02 2015-06-01 Systems and methods for throttling packet transmission in a scalable memory system protocol
EP15803148.4A Active EP3149586B1 (en) 2014-06-02 2015-06-01 Systems and methods for transmitting packets in a scalable memory system protocol
EP15802464.6A Active EP3149595B1 (en) 2014-06-02 2015-06-01 Systems and methods for segmenting data structures in a memory system

Country Status (6)

Country Link
US (16) US9733847B2 (ko)
EP (6) EP3149585B1 (ko)
KR (3) KR102197401B1 (ko)
CN (8) CN106575257B (ko)
TW (6) TWI582588B (ko)
WO (6) WO2015187577A1 (ko)

Families Citing this family (71)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9606928B2 (en) * 2014-08-26 2017-03-28 Kabushiki Kaisha Toshiba Memory system
US10127383B2 (en) * 2014-11-06 2018-11-13 International Business Machines Corporation Resource usage optimized auditing of database shared memory
US9817602B2 (en) * 2014-11-13 2017-11-14 Violin Systems Llc Non-volatile buffering for deduplication
GB2539443B (en) * 2015-06-16 2020-02-12 Advanced Risc Mach Ltd A transmitter, a receiver, a data transfer system and a method of data transfer
EP3417583B1 (en) * 2016-02-19 2020-12-30 Viasat, Inc. Methods and systems for multi-level network capacity allocation
US9997232B2 (en) * 2016-03-10 2018-06-12 Micron Technology, Inc. Processing in memory (PIM) capable memory device having sensing circuitry performing logic operations
TWI587133B (zh) * 2016-05-20 2017-06-11 慧榮科技股份有限公司 資料儲存裝置之資料頁對齊方法及其查找表的製作方法
JP2018049387A (ja) * 2016-09-20 2018-03-29 東芝メモリ株式会社 メモリシステム及びプロセッサシステム
US11314648B2 (en) 2017-02-08 2022-04-26 Arm Limited Data processing
US10216671B2 (en) 2017-02-27 2019-02-26 Qualcomm Incorporated Power aware arbitration for bus access
US10784986B2 (en) 2017-02-28 2020-09-22 Intel Corporation Forward error correction mechanism for peripheral component interconnect-express (PCI-e)
US10318381B2 (en) * 2017-03-29 2019-06-11 Micron Technology, Inc. Selective error rate information for multidimensional memory
CN109478168B (zh) 2017-06-23 2020-12-04 华为技术有限公司 内存访问技术及计算机系统
US10713189B2 (en) * 2017-06-27 2020-07-14 Qualcomm Incorporated System and method for dynamic buffer sizing in a computing device
US10387242B2 (en) 2017-08-21 2019-08-20 Qualcomm Incorporated Dynamic link error protection in memory systems
US10908820B2 (en) * 2017-09-14 2021-02-02 Samsung Electronics Co., Ltd. Host-based and client-based command scheduling in large bandwidth memory systems
GB2569275B (en) * 2017-10-20 2020-06-03 Graphcore Ltd Time deterministic exchange
US10963003B2 (en) 2017-10-20 2021-03-30 Graphcore Limited Synchronization in a multi-tile processing array
GB201717295D0 (en) 2017-10-20 2017-12-06 Graphcore Ltd Synchronization in a multi-tile processing array
GB2569276B (en) 2017-10-20 2020-10-14 Graphcore Ltd Compiler method
CN107943611B (zh) * 2017-11-08 2021-04-13 天津国芯科技有限公司 一种快速产生crc的控制装置
US10824376B2 (en) 2017-12-08 2020-11-03 Sandisk Technologies Llc Microcontroller architecture for non-volatile memory
US10622075B2 (en) 2017-12-12 2020-04-14 Sandisk Technologies Llc Hybrid microcontroller architecture for non-volatile memory
CN110022268B (zh) * 2018-01-09 2022-05-03 腾讯科技(深圳)有限公司 一种数据传输控制方法、装置及存储介质
CN108388690B (zh) * 2018-01-16 2021-04-30 电子科技大学 元胞自动机实验平台
KR20190099879A (ko) * 2018-02-20 2019-08-28 에스케이하이닉스 주식회사 메모리 컨트롤러 및 그 동작 방법
US10810304B2 (en) 2018-04-16 2020-10-20 International Business Machines Corporation Injecting trap code in an execution path of a process executing a program to generate a trap address range to detect potential malicious code
US11003777B2 (en) 2018-04-16 2021-05-11 International Business Machines Corporation Determining a frequency at which to execute trap code in an execution path of a process executing a program to generate a trap address range to detect potential malicious code
US10831653B2 (en) * 2018-05-15 2020-11-10 Micron Technology, Inc. Forwarding code word address
US11003375B2 (en) 2018-05-15 2021-05-11 Micron Technology, Inc. Code word format and structure
US10496478B1 (en) * 2018-05-24 2019-12-03 Micron Technology, Inc. Progressive length error control code
US10409680B1 (en) * 2018-05-24 2019-09-10 Micron Technology, Inc. Progressive length error control code
US10969994B2 (en) * 2018-08-08 2021-04-06 Micron Technology, Inc. Throttle response signals from a memory system
US11074007B2 (en) 2018-08-08 2021-07-27 Micron Technology, Inc. Optimize information requests to a memory system
TWI819072B (zh) * 2018-08-23 2023-10-21 美商阿爾克斯股份有限公司 在網路運算環境中用於避免環路衝突的系統、非暫態電腦可讀取儲存媒體及電腦實現方法
KR102541897B1 (ko) 2018-08-27 2023-06-12 에스케이하이닉스 주식회사 메모리 시스템
US11080210B2 (en) 2018-09-06 2021-08-03 Micron Technology, Inc. Memory sub-system including an in package sequencer separate from a controller
US11061751B2 (en) 2018-09-06 2021-07-13 Micron Technology, Inc. Providing bandwidth expansion for a memory sub-system including a sequencer separate from a controller
US10838909B2 (en) 2018-09-24 2020-11-17 Hewlett Packard Enterprise Development Lp Methods and systems for computing in memory
US10771189B2 (en) 2018-12-18 2020-09-08 Intel Corporation Forward error correction mechanism for data transmission across multi-lane links
WO2020135385A1 (zh) * 2018-12-29 2020-07-02 上海寒武纪信息科技有限公司 通用机器学习模型、模型文件的生成和解析方法
CN109815043B (zh) * 2019-01-25 2022-04-05 华为云计算技术有限公司 故障处理方法、相关设备及计算机存储介质
US11637657B2 (en) 2019-02-15 2023-04-25 Intel Corporation Low-latency forward error correction for high-speed serial links
US10997111B2 (en) 2019-03-01 2021-05-04 Intel Corporation Flit-based packetization
US11249837B2 (en) * 2019-03-01 2022-02-15 Intel Corporation Flit-based parallel-forward error correction and parity
US10777240B1 (en) 2019-03-07 2020-09-15 Sandisk Technologies Llc Efficient control of memory core circuits
TWI810262B (zh) * 2019-03-22 2023-08-01 美商高通公司 用於計算機器的可變位元寬資料格式的單打包和拆包網路及方法
US10983795B2 (en) * 2019-03-27 2021-04-20 Micron Technology, Inc. Extended memory operations
US11296994B2 (en) 2019-05-13 2022-04-05 Intel Corporation Ordered sets for high-speed interconnects
US10877889B2 (en) * 2019-05-16 2020-12-29 Micron Technology, Inc. Processor-side transaction context memory interface systems and methods
US10971199B2 (en) 2019-06-20 2021-04-06 Sandisk Technologies Llc Microcontroller for non-volatile memory with combinational logic
US11740958B2 (en) 2019-11-27 2023-08-29 Intel Corporation Multi-protocol support on common physical layer
EP4082012A4 (en) 2019-12-26 2024-01-10 Micron Technology, Inc. METHOD FOR NON-DETERMINISTIC OPERATION OF A STACKED MEMORY SYSTEM
WO2021133692A1 (en) 2019-12-26 2021-07-01 Micron Technology, Inc. Truth table extension for stacked memory systems
WO2021133690A1 (en) 2019-12-26 2021-07-01 Micron Technology, Inc. Host techniques for stacked memory systems
KR20210091404A (ko) 2020-01-13 2021-07-22 삼성전자주식회사 메모리 장치, 메모리 모듈 및 메모리 장치의 동작 방법
US11507498B2 (en) 2020-03-05 2022-11-22 Sandisk Technologies Llc Pre-computation of memory core control signals
EP4133048A1 (en) 2020-04-10 2023-02-15 The Procter & Gamble Company Cleaning implement with a rheological solid composition
US12004009B2 (en) * 2020-05-04 2024-06-04 Qualcomm Incorporated Methods and apparatus for managing compressor memory
US11979330B2 (en) * 2020-06-22 2024-05-07 Google Llc Rate update engine for reliable transport protocol
US11474743B2 (en) * 2020-08-13 2022-10-18 Micron Technology, Inc. Data modification
US11494120B2 (en) * 2020-10-02 2022-11-08 Qualcomm Incorporated Adaptive memory transaction scheduling
TWI763131B (zh) * 2020-11-18 2022-05-01 瑞昱半導體股份有限公司 網路介面裝置、包含該網路介面裝置之電子裝置,及網路介面裝置的操作方法
US11409608B2 (en) * 2020-12-29 2022-08-09 Advanced Micro Devices, Inc. Providing host-based error detection capabilities in a remote execution device
US11481270B1 (en) * 2021-06-16 2022-10-25 Ampere Computing Llc Method and system for sequencing data checks in a packet
CN113840272B (zh) * 2021-10-12 2024-05-14 北京奕斯伟计算技术股份有限公司 数据传输方法、数据传输装置以及电子装置
US11886367B2 (en) * 2021-12-08 2024-01-30 Ati Technologies Ulc Arbitration allocating requests during backpressure
CN114301995B (zh) * 2021-12-30 2023-07-18 上海交通大学 实时工业以太网协议的转换切换与互通融合系统及其方法
US20230236992A1 (en) * 2022-01-21 2023-07-27 Arm Limited Data elision
US11922026B2 (en) 2022-02-16 2024-03-05 T-Mobile Usa, Inc. Preventing data loss in a filesystem by creating duplicates of data in parallel, such as charging data in a wireless telecommunications network
US11914473B1 (en) * 2022-10-20 2024-02-27 Micron Technology, Inc. Data recovery using ordered data requests

Family Cites Families (216)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7380092B2 (en) 2002-06-28 2008-05-27 Rambus Inc. Memory device and system having a variable depth write buffer and preload method
USRE36751E (en) * 1987-07-15 2000-06-27 Hitachi, Ltd. ATM switching system connectable to I/O links having different transmission rates
EP0453863A2 (en) 1990-04-27 1991-10-30 National Semiconductor Corporation Methods and apparatus for implementing a media access control/host system interface
US5379297A (en) * 1992-04-09 1995-01-03 Network Equipment Technologies, Inc. Concurrent multi-channel segmentation and reassembly processors for asynchronous transfer mode
JPH05308373A (ja) * 1992-04-28 1993-11-19 Matsushita Electric Ind Co Ltd スター型分散制御ネットワークおよびそれに用いる端末装置
US5771247A (en) * 1994-10-03 1998-06-23 International Business Machines Corporation Low latency error reporting for high performance bus
US6725349B2 (en) 1994-12-23 2004-04-20 Intel Corporation Method and apparatus for controlling of a memory subsystem installed with standard page mode memory and an extended data out memory
US5781449A (en) * 1995-08-10 1998-07-14 Advanced System Technologies, Inc. Response time measurement apparatus and method
US5978874A (en) * 1996-07-01 1999-11-02 Sun Microsystems, Inc. Implementing snooping on a split-transaction computer system bus
US5935213A (en) * 1996-05-02 1999-08-10 Fore Systems, Inc. System and method for generating explicit rate value information for flow control in ATAM network
US5918182A (en) * 1996-08-30 1999-06-29 Motorola, Inc. Method and apparatus for mitigating data congestion in an integrated voice/data radio communications system
US5754567A (en) 1996-10-15 1998-05-19 Micron Quantum Devices, Inc. Write reduction in flash memory systems through ECC usage
US6272600B1 (en) 1996-11-15 2001-08-07 Hyundai Electronics America Memory request reordering in a data processing system
US6208655B1 (en) * 1996-11-27 2001-03-27 Sony Europa, B.V., Method and apparatus for serving data
US6292834B1 (en) * 1997-03-14 2001-09-18 Microsoft Corporation Dynamic bandwidth selection for efficient transmission of multimedia streams in a computer network
KR100247022B1 (ko) * 1997-06-11 2000-04-01 윤종용 Atm 스위칭 시스템의 단일 스위치 소자 및 버퍼 문턱값 결정 방법
US6021124A (en) * 1997-08-19 2000-02-01 Telefonaktiebolaget Lm Ericsson Multi-channel automatic retransmission query (ARQ) method
US6516442B1 (en) * 1997-12-07 2003-02-04 Conexant Systems, Inc. Channel interface and protocols for cache coherency in a scalable symmetric multiprocessor system
JP2881418B1 (ja) * 1998-02-20 1999-04-12 一男 佐藤 識別データー記載シリコン基板およびその製造方法
JP3650262B2 (ja) * 1998-03-20 2005-05-18 富士通株式会社 セルの転送レート制御装置およびその方法
US6782490B2 (en) * 1999-03-17 2004-08-24 At&T Corp. Network-based service for the repair of IP multicast sessions
US6952401B1 (en) * 1999-03-17 2005-10-04 Broadcom Corporation Method for load balancing in a network switch
US7668189B1 (en) * 1999-07-08 2010-02-23 Thomson Licensing Adaptive transport protocol
US6751698B1 (en) * 1999-09-29 2004-06-15 Silicon Graphics, Inc. Multiprocessor node controller circuit and method
DE60036453T2 (de) * 1999-11-22 2008-06-19 Sony Corp. Videobandaufzeichnungs- und wiedergabegerät und videobandabspielgerät
US6799220B1 (en) * 2000-04-13 2004-09-28 Intel Corporation Tunneling management messages over a channel architecture network
US6715007B1 (en) * 2000-07-13 2004-03-30 General Dynamics Decision Systems, Inc. Method of regulating a flow of data in a communication system and apparatus therefor
CN1276372C (zh) * 2000-09-29 2006-09-20 艾拉克瑞技术公司 智能网络存储接口系统和装置
US20020159552A1 (en) 2000-11-22 2002-10-31 Yeshik Shin Method and system for plesiosynchronous communications with null insertion and removal
US20020069317A1 (en) 2000-12-01 2002-06-06 Chow Yan Chiew E-RAID system and method of operating the same
GB0031535D0 (en) * 2000-12-22 2001-02-07 Nokia Networks Oy Traffic congestion
US7469341B2 (en) * 2001-04-18 2008-12-23 Ipass Inc. Method and system for associating a plurality of transaction data records generated in a service access system
US7287649B2 (en) 2001-05-18 2007-10-30 Broadcom Corporation System on a chip for packet processing
US7006438B2 (en) * 2001-05-31 2006-02-28 Turin Networks Distributed control of data flow in a network switch
US20030033421A1 (en) * 2001-08-02 2003-02-13 Amplify.Net, Inc. Method for ascertaining network bandwidth allocation policy associated with application port numbers
US20030031178A1 (en) * 2001-08-07 2003-02-13 Amplify.Net, Inc. Method for ascertaining network bandwidth allocation policy associated with network address
US7072299B2 (en) * 2001-08-20 2006-07-04 International Business Machines Corporation Credit-based receiver using selected transmit rates and storage thresholds for preventing under flow and over flow-methods, apparatus and program products
KR100790131B1 (ko) * 2001-08-24 2008-01-02 삼성전자주식회사 패킷 통신시스템에서 매체 접속 제어 계층 엔터티들 간의 시그널링 방법
DE60213616T2 (de) * 2001-08-24 2007-08-09 Intel Corporation, Santa Clara Eine allgemeine eingabe-/ausgabearchitektur, protokoll und entsprechende verfahren zur umsetzung der flusssteuerung
US7062609B1 (en) * 2001-09-19 2006-06-13 Cisco Technology, Inc. Method and apparatus for selecting transfer types
US20030093632A1 (en) * 2001-11-12 2003-05-15 Intel Corporation Method and apparatus for sideband read return header in memory interconnect
KR100415115B1 (ko) * 2001-11-29 2004-01-13 삼성전자주식회사 통신시스템의 데이터 혼잡 통보 방법 및 장치
JP3912091B2 (ja) * 2001-12-04 2007-05-09 ソニー株式会社 データ通信システム、データ送信装置、データ受信装置、および方法、並びにコンピュータ・プログラム
WO2003063423A1 (en) * 2002-01-24 2003-07-31 University Of Southern California Pseudorandom data storage
US20030152096A1 (en) * 2002-02-13 2003-08-14 Korey Chapman Intelligent no packet loss networking
DE60205014T2 (de) * 2002-02-14 2005-12-29 Matsushita Electric Industrial Co., Ltd., Kadoma Verfahren zum Steuern der Datenrate in einem drahtlosen Paketdatenkommunikationssystem, Sender und Empfänger zu seiner Verwendung
US6970978B1 (en) * 2002-04-03 2005-11-29 Advanced Micro Devices, Inc. System and method for providing a pre-fetch memory controller
KR100429904B1 (ko) * 2002-05-18 2004-05-03 한국전자통신연구원 차등화된 QoS 서비스를 제공하는 라우터 및 그것의고속 IP 패킷 분류 방법
US6963868B2 (en) * 2002-06-03 2005-11-08 International Business Machines Corporation Multi-bit Patricia trees
US7133972B2 (en) 2002-06-07 2006-11-07 Micron Technology, Inc. Memory hub with internal cache and/or memory access prediction
US7043599B1 (en) 2002-06-20 2006-05-09 Rambus Inc. Dynamic memory supporting simultaneous refresh and data-access transactions
US7408876B1 (en) * 2002-07-02 2008-08-05 Extreme Networks Method and apparatus for providing quality of service across a switched backplane between egress queue managers
US7051150B2 (en) * 2002-07-29 2006-05-23 Freescale Semiconductor, Inc. Scalable on chip network
US7124260B2 (en) 2002-08-26 2006-10-17 Micron Technology, Inc. Modified persistent auto precharge command protocol system and method for memory devices
US7143264B2 (en) * 2002-10-10 2006-11-28 Intel Corporation Apparatus and method for performing data access in accordance with memory access patterns
US7372814B1 (en) * 2003-02-27 2008-05-13 Alcatel-Lucent Network system with color-aware upstream switch transmission rate control in response to downstream switch traffic buffering
US7080217B2 (en) 2003-03-31 2006-07-18 Intel Corporation Cycle type based throttling
US6988173B2 (en) 2003-05-12 2006-01-17 International Business Machines Corporation Bus protocol for a switchless distributed shared memory computer system
US7167942B1 (en) 2003-06-09 2007-01-23 Marvell International Ltd. Dynamic random access memory controller
KR100807446B1 (ko) * 2003-06-18 2008-02-25 니폰덴신뎅와 가부시키가이샤 무선 패킷 통신방법 및 통신장치
US7342881B2 (en) * 2003-06-20 2008-03-11 Alcatel Backpressure history mechanism in flow control
US7277978B2 (en) * 2003-09-16 2007-10-02 Micron Technology, Inc. Runtime flash device detection and configuration for flash data management software
US7174441B2 (en) * 2003-10-17 2007-02-06 Raza Microelectronics, Inc. Method and apparatus for providing internal table extensibility with external interface
KR100526187B1 (ko) * 2003-10-18 2005-11-03 삼성전자주식회사 모바일 애드 혹 네트워크 환경에서 최적의 전송율을 찾기위한 조절 방법
US20050108501A1 (en) * 2003-11-03 2005-05-19 Smith Zachary S. Systems and methods for identifying unending transactions
US7420919B1 (en) * 2003-11-10 2008-09-02 Cisco Technology, Inc. Self converging communication fair rate control system and method
KR100560748B1 (ko) * 2003-11-11 2006-03-13 삼성전자주식회사 알피알 공평 메카니즘을 이용한 대역폭 할당 방법
US7451381B2 (en) * 2004-02-03 2008-11-11 Phonex Broadband Corporation Reliable method and system for efficiently transporting dynamic data across a network
JP4521206B2 (ja) * 2004-03-01 2010-08-11 株式会社日立製作所 ネットワークストレージシステム、コマンドコントローラ、及びネットワークストレージシステムにおけるコマンド制御方法
US7475174B2 (en) * 2004-03-17 2009-01-06 Super Talent Electronics, Inc. Flash / phase-change memory in multi-ring topology using serial-link packet interface
US20050210185A1 (en) * 2004-03-18 2005-09-22 Kirsten Renick System and method for organizing data transfers with memory hub memory modules
US20050223141A1 (en) * 2004-03-31 2005-10-06 Pak-Lung Seto Data flow control in a data storage system
JP2005318429A (ja) * 2004-04-30 2005-11-10 Sony Ericsson Mobilecommunications Japan Inc 再送制御方法及び無線通信端末
US20060056308A1 (en) * 2004-05-28 2006-03-16 International Business Machines Corporation Method of switching fabric for counteracting a saturation tree occurring in a network with nodes
US7984179B1 (en) * 2004-06-29 2011-07-19 Sextant Navigation, Inc. Adaptive media transport management for continuous media stream over LAN/WAN environment
CN100502531C (zh) * 2004-07-13 2009-06-17 Ut斯达康通讯有限公司 无线基站系统中无线信号的分组传输方法
US7441087B2 (en) * 2004-08-17 2008-10-21 Nvidia Corporation System, apparatus and method for issuing predictions from an inventory to access a memory
US7433363B2 (en) 2004-08-23 2008-10-07 The United States Of America As Represented By The Secretary Of The Navy Low latency switch architecture for high-performance packet-switched networks
US7660245B1 (en) * 2004-09-16 2010-02-09 Qualcomm Incorporated FEC architecture for streaming services including symbol-based operations and packet tagging
US7340582B2 (en) * 2004-09-30 2008-03-04 Intel Corporation Fault processing for direct memory access address translation
TWI254849B (en) * 2004-10-13 2006-05-11 Via Tech Inc Method and related apparatus for data error checking
US7830801B2 (en) * 2004-10-29 2010-11-09 Broadcom Corporation Intelligent fabric congestion detection apparatus and method
US7859996B2 (en) * 2004-10-29 2010-12-28 Broadcom Corporation Intelligent congestion feedback apparatus and method
US20060143678A1 (en) * 2004-12-10 2006-06-29 Microsoft Corporation System and process for controlling the coding bit rate of streaming media data employing a linear quadratic control technique and leaky bucket model
US7702742B2 (en) * 2005-01-18 2010-04-20 Fortinet, Inc. Mechanism for enabling memory transactions to be conducted across a lossy network
US7877566B2 (en) 2005-01-25 2011-01-25 Atmel Corporation Simultaneous pipelined read with multiple level cache for improved system performance using flash technology
US8085755B2 (en) * 2005-04-01 2011-12-27 Cisco Technology, Inc. Data driven route advertisement
US7987306B2 (en) * 2005-04-04 2011-07-26 Oracle America, Inc. Hiding system latencies in a throughput networking system
US7743183B2 (en) * 2005-05-23 2010-06-22 Microsoft Corporation Flow control for media streaming
TWI305890B (en) 2005-05-27 2009-02-01 Darfon Electronics Corp Button mechanism
US8027256B1 (en) * 2005-06-02 2011-09-27 Force 10 Networks, Inc. Multi-port network device using lookup cost backpressure
DE102005035207A1 (de) * 2005-07-27 2007-02-01 Siemens Ag Verfahren und Vorrichtung zur Datenübertragung zwischen zwei relativ zueinander bewegten Komponenten
JP2009503743A (ja) * 2005-08-03 2009-01-29 サンディスク コーポレイション データファイルを直接記憶するメモリブロックの管理
US7630307B1 (en) * 2005-08-18 2009-12-08 At&T Intellectual Property Ii, Lp Arrangement for minimizing data overflow by managing data buffer occupancy, especially suitable for fibre channel environments
US8291295B2 (en) 2005-09-26 2012-10-16 Sandisk Il Ltd. NAND flash memory controller exporting a NAND interface
US7652922B2 (en) * 2005-09-30 2010-01-26 Mosaid Technologies Incorporated Multiple independent serial link memory
US7961621B2 (en) * 2005-10-11 2011-06-14 Cisco Technology, Inc. Methods and devices for backward congestion notification
US8149846B2 (en) * 2005-11-10 2012-04-03 Hewlett-Packard Development Company, L.P. Data processing system and method
US7698498B2 (en) 2005-12-29 2010-04-13 Intel Corporation Memory controller with bank sorting and scheduling
WO2007095551A2 (en) * 2006-02-13 2007-08-23 Digital Fountain, Inc. Fec streaming with aggregation of concurrent streams for fec computation
US7617437B2 (en) * 2006-02-21 2009-11-10 Freescale Semiconductor, Inc. Error correction device and method thereof
KR100695435B1 (ko) 2006-04-13 2007-03-16 주식회사 하이닉스반도체 반도체 메모리 소자
US7756028B2 (en) * 2006-04-27 2010-07-13 Alcatel Lucent Pulsed backpressure mechanism for reduced FIFO utilization
WO2008013528A1 (en) * 2006-07-25 2008-01-31 Thomson Licensing Recovery from burst packet loss in internet protocol based wireless networks using staggercasting and cross-packet forward error correction
US8407395B2 (en) * 2006-08-22 2013-03-26 Mosaid Technologies Incorporated Scalable memory system
US7739576B2 (en) * 2006-08-31 2010-06-15 Micron Technology, Inc. Variable strength ECC
EP2084864A1 (en) * 2006-10-24 2009-08-05 Medianet Innovations A/S Method and system for firewall friendly real-time communication
US7818489B2 (en) 2006-11-04 2010-10-19 Virident Systems Inc. Integrating data from symmetric and asymmetric memory
JP2008123330A (ja) * 2006-11-14 2008-05-29 Toshiba Corp 不揮発性半導体記憶装置
US7818389B1 (en) * 2006-12-01 2010-10-19 Marvell International Ltd. Packet buffer apparatus and method
US9116823B2 (en) * 2006-12-06 2015-08-25 Intelligent Intellectual Property Holdings 2 Llc Systems and methods for adaptive error-correction coding
KR101364443B1 (ko) 2007-01-31 2014-02-17 삼성전자주식회사 메모리 시스템, 이 시스템을 위한 메모리 제어기와 메모리,이 시스템의 신호 구성 방법
US7596643B2 (en) * 2007-02-07 2009-09-29 Siliconsystems, Inc. Storage subsystem with configurable buffer
US8693406B2 (en) * 2007-08-09 2014-04-08 Intel Corporation Multi-user resource allocation and medium access control (MAC) overhead reduction for mobile worldwide interoperability for microwave access (WiMAX) systems
US7937631B2 (en) * 2007-08-28 2011-05-03 Qimonda Ag Method for self-test and self-repair in a multi-chip package environment
JP4564520B2 (ja) * 2007-08-31 2010-10-20 株式会社東芝 半導体記憶装置およびその制御方法
US7769015B2 (en) 2007-09-11 2010-08-03 Liquid Computing Corporation High performance network adapter (HPNA)
US7821939B2 (en) * 2007-09-26 2010-10-26 International Business Machines Corporation Method, system, and computer program product for adaptive congestion control on virtual lanes for data center ethernet architecture
US8130649B2 (en) * 2007-10-18 2012-03-06 Alcatel Lucent Ingress traffic flow control in a data communications system
US8305991B1 (en) * 2007-11-14 2012-11-06 Sprint Spectrum L.P. Method and system for sector switching during packet transmission
US7870351B2 (en) * 2007-11-15 2011-01-11 Micron Technology, Inc. System, apparatus, and method for modifying the order of memory accesses
US8762620B2 (en) * 2007-12-27 2014-06-24 Sandisk Enterprise Ip Llc Multiprocessor storage controller
US8120990B2 (en) 2008-02-04 2012-02-21 Mosaid Technologies Incorporated Flexible memory operations in NAND flash devices
US8355336B2 (en) * 2008-02-13 2013-01-15 Qualcomm Incorporated Methods and apparatus for formatting headers in a communication frame
JP5141606B2 (ja) * 2008-03-26 2013-02-13 セイコーエプソン株式会社 印刷装置
US8724636B2 (en) * 2008-03-31 2014-05-13 Qualcomm Incorporated Methods of reliably sending control signal
EP2279576A4 (en) * 2008-04-24 2012-02-01 Ericsson Telefon Ab L M ERROR RATE MANAGEMENT
US8374986B2 (en) * 2008-05-15 2013-02-12 Exegy Incorporated Method and system for accelerated stream processing
US8223796B2 (en) * 2008-06-18 2012-07-17 Ati Technologies Ulc Graphics multi-media IC and method of its operation
US8542588B2 (en) * 2008-06-25 2013-09-24 Qualcomm Incorporated Invoking different wireless link rate selection operations for different traffic classes
KR101431760B1 (ko) * 2008-06-25 2014-08-20 삼성전자주식회사 Ecc 알고리즘을 이용한 플래시 메모리 장치 및 그구동방법
US7937419B2 (en) * 2008-06-26 2011-05-03 Tatu Ylonen Oy Garbage collection via multiobjects
US8547846B1 (en) * 2008-08-28 2013-10-01 Raytheon Bbn Technologies Corp. Method and apparatus providing precedence drop quality of service (PDQoS) with class-based latency differentiation
KR101003102B1 (ko) * 2008-09-24 2010-12-21 한국전자통신연구원 멀티 프로세싱 유닛에 대한 메모리 매핑방법, 및 장치
JP5659791B2 (ja) * 2008-10-09 2015-01-28 日本電気株式会社 コンテンツ配信システム、コンテンツ配信方法及びプログラム
US8402190B2 (en) * 2008-12-02 2013-03-19 International Business Machines Corporation Network adaptor optimization and interrupt reduction
US20100161938A1 (en) * 2008-12-23 2010-06-24 Marco Heddes System-On-A-Chip Supporting A Networked Array Of Configurable Symmetric Multiprocessing Nodes
US8737374B2 (en) * 2009-01-06 2014-05-27 Qualcomm Incorporated System and method for packet acknowledgment
JP5168166B2 (ja) * 2009-01-21 2013-03-21 富士通株式会社 通信装置および通信制御方法
EP2214100A1 (en) * 2009-01-30 2010-08-04 BRITISH TELECOMMUNICATIONS public limited company Allocation of processing tasks
JP5342658B2 (ja) * 2009-03-06 2013-11-13 アスペラ,インク. I/o駆動の速度適応のための方法およびシステム
TWI384810B (zh) 2009-05-07 2013-02-01 Etron Technology Inc 可節省通用串列匯流排協定中用來儲存封包之記憶體之資料傳輸方法及其裝置
US8880716B2 (en) * 2009-05-08 2014-11-04 Canon Kabushiki Kaisha Network streaming of a single data stream simultaneously over multiple physical interfaces
CN101924603B (zh) * 2009-06-09 2014-08-20 华为技术有限公司 数据传输速率的自适应调整方法、装置及系统
US20110035540A1 (en) * 2009-08-10 2011-02-10 Adtron, Inc. Flash blade system architecture and method
US8238244B2 (en) * 2009-08-10 2012-08-07 Micron Technology, Inc. Packet deconstruction/reconstruction and link-control
US8281065B2 (en) * 2009-09-01 2012-10-02 Apple Inc. Systems and methods for determining the status of memory locations in a non-volatile memory
US8543893B2 (en) * 2009-09-02 2013-09-24 Agere Systems Llc Receiver for error-protected packet-based frame
FR2949931B1 (fr) * 2009-09-10 2011-08-26 Canon Kk Procedes et dispositifs de transmission d'un flux de donnees, produit programme d'ordinateur et moyen de stockage correspondants.
US8966110B2 (en) 2009-09-14 2015-02-24 International Business Machines Corporation Dynamic bandwidth throttling
US8312187B2 (en) * 2009-09-18 2012-11-13 Oracle America, Inc. Input/output device including a mechanism for transaction layer packet processing in multiple processor systems
JP5404798B2 (ja) * 2009-09-21 2014-02-05 Toshiba Corp. Virtual storage management device and storage management device
JP2013506917A (ja) * 2009-09-30 2013-02-28 Samplify Systems Inc. Improved multi-processor waveform data exchange using compression and decompression
US9477636B2 (en) 2009-10-21 2016-10-25 Micron Technology, Inc. Memory having internal processors and data communication methods in memory
US8719516B2 (en) * 2009-10-21 2014-05-06 Micron Technology, Inc. Memory having internal processors and methods of controlling memory access
US8281218B1 (en) * 2009-11-02 2012-10-02 Western Digital Technologies, Inc. Data manipulation engine
US8473669B2 (en) * 2009-12-07 2013-06-25 Sandisk Technologies Inc. Method and system for concurrent background and foreground operations in a non-volatile memory array
US9081501B2 (en) * 2010-01-08 2015-07-14 International Business Machines Corporation Multi-petascale highly efficient parallel supercomputer
US8183452B2 (en) * 2010-03-23 2012-05-22 Yamaha Corporation Tone generation apparatus
US8321753B2 (en) 2010-04-13 2012-11-27 Juniper Networks, Inc. Optimization of packet buffer memory utilization
CN101883446B (zh) * 2010-06-28 2014-03-26 Huawei Device Co., Ltd. SD control chip and data communication method
US8680886B1 (en) * 2010-05-13 2014-03-25 Altera Corporation Apparatus for configurable electronic circuitry and associated methods
US8295292B2 (en) * 2010-05-19 2012-10-23 Telefonaktiebolaget L M Ericsson (Publ) High performance hardware linked list processors
US20110299589A1 (en) * 2010-06-04 2011-12-08 Apple Inc. Rate control in video communication via virtual transmission buffer
US8539311B2 (en) * 2010-07-01 2013-09-17 Densbits Technologies Ltd. System and method for data recovery in multi-level cell memories
KR101719395B1 (ko) * 2010-07-13 2017-03-23 SanDisk Technologies LLC Method for dynamically optimizing a back-end memory system interface
US20120192026A1 (en) * 2010-07-16 2012-07-26 Industrial Technology Research Institute Methods and Systems for Data Transmission Management Using HARQ Mechanism for Concatenated Coded System
US8751903B2 (en) 2010-07-26 2014-06-10 Apple Inc. Methods and systems for monitoring write operations of non-volatile memory
CN103229155B (zh) 2010-09-24 2016-11-09 Texas Memory Systems Inc. High-speed memory system
US10397823B2 (en) * 2010-10-01 2019-08-27 Signify Holding B.V. Device and method for scheduling data packet transmission in wireless networks
EP2447842A1 (en) * 2010-10-28 2012-05-02 Thomson Licensing Method and system for error correction in a memory array
US8842536B2 (en) * 2010-12-23 2014-09-23 Brocade Communications Systems, Inc. Ingress rate limiting
JP2012150152A (ja) * 2011-01-17 2012-08-09 Renesas Electronics Corp Data processing device and semiconductor device
WO2012129191A2 (en) * 2011-03-18 2012-09-27 Fusion-Io, Inc. Logical interfaces for contextual storage
JP5800565B2 (ja) 2011-05-11 2015-10-28 Canon Inc. Data transfer device and data transfer method
KR20130021865A (ko) * 2011-08-24 2013-03-06 Samsung Electronics Co., Ltd. Method and apparatus for allocating fixed resources in a mobile communication system
US8832331B2 (en) * 2011-08-29 2014-09-09 Ati Technologies Ulc Data modification for device communication channel packets
WO2013048493A1 (en) * 2011-09-30 2013-04-04 Intel Corporation Memory channel that supports near memory and far memory access
US8588221B2 (en) * 2011-10-07 2013-11-19 Intel Mobile Communications GmbH Method and interface for interfacing a radio frequency transceiver with a baseband processor
US20130094472A1 (en) * 2011-10-14 2013-04-18 Qualcomm Incorporated Methods and apparatuses for reducing voice/data interruption during a mobility procedure
US8793543B2 (en) * 2011-11-07 2014-07-29 Sandisk Enterprise Ip Llc Adaptive read comparison signal generation for memory systems
US8954822B2 (en) * 2011-11-18 2015-02-10 Sandisk Enterprise Ip Llc Data encoder and decoder using memory-specific parity-check matrix
US9048876B2 (en) * 2011-11-18 2015-06-02 Sandisk Enterprise Ip Llc Systems, methods and devices for multi-tiered error correction
US9740484B2 (en) * 2011-12-22 2017-08-22 Intel Corporation Processor-based apparatus and method for processing bit streams using bit-oriented instructions through byte-oriented storage
TW201346572A (zh) 2012-01-27 2013-11-16 Marvell World Trade Ltd Transmitter apparatus and transmitter system
EP2815529B1 (en) * 2012-02-17 2019-12-11 Samsung Electronics Co., Ltd. Data packet transmission/reception apparatus and method
US9135192B2 (en) 2012-03-30 2015-09-15 Sandisk Technologies Inc. Memory system with command queue reordering
US8694698B2 (en) * 2012-04-12 2014-04-08 Hitachi, Ltd. Storage system and method for prioritizing data transfer access
US9436625B2 (en) * 2012-06-13 2016-09-06 Nvidia Corporation Approach for allocating virtual bank managers within a dynamic random access memory (DRAM) controller to physical banks within a DRAM
WO2014000172A1 (en) * 2012-06-27 2014-01-03 Qualcomm Incorporated Low overhead and highly robust flow control apparatus and method
US10034023B1 (en) * 2012-07-30 2018-07-24 Google Llc Extended protection of digital video streams
US9444751B1 (en) * 2012-08-03 2016-09-13 University Of Southern California Backpressure with adaptive redundancy
GB2505956B (en) * 2012-09-18 2015-08-05 Canon Kk Method and apparatus for controlling the data rate of a data transmission between an emitter and a receiver
US9215174B2 (en) * 2012-10-18 2015-12-15 Broadcom Corporation Oversubscription buffer management
US9418035B2 (en) * 2012-10-22 2016-08-16 Intel Corporation High performance interconnect physical layer
US9424228B2 (en) 2012-11-01 2016-08-23 Ezchip Technologies Ltd. High performance, scalable multi chip interconnect
US8713311B1 (en) 2012-11-07 2014-04-29 Google Inc. Encryption using alternate authentication key
US9438511B2 (en) * 2012-12-11 2016-09-06 Hewlett Packard Enterprise Development Lp Identifying a label-switched path (LSP) associated with a multi protocol label switching (MPLS) service and diagnosing a LSP related fault
US9229854B1 (en) * 2013-01-28 2016-01-05 Radian Memory Systems, LLC Multi-array operation support and related devices, systems and software
US9652376B2 (en) * 2013-01-28 2017-05-16 Radian Memory Systems, Inc. Cooperative flash memory control
KR20140100008A (ko) * 2013-02-05 2014-08-14 Samsung Electronics Co., Ltd. Method of driving a volatile memory device and method of testing a volatile memory device
US9569612B2 (en) * 2013-03-14 2017-02-14 Daniel Shawcross Wilkerson Hard object: lightweight hardware enforcement of encapsulation, unforgeability, and transactionality
US9030771B2 (en) * 2013-04-26 2015-05-12 Oracle International Corporation Compressed data verification
WO2014205119A1 (en) * 2013-06-18 2014-12-24 The Regents Of The University Of Colorado, A Body Corporate Software-defined energy communication networks
US9967778B2 (en) * 2013-06-19 2018-05-08 Lg Electronics Inc. Reception method of MTC device
KR102123439B1 (ko) * 2013-11-20 2020-06-16 Samsung Electronics Co., Ltd. Congestion mitigation method and apparatus that account for optimizing user satisfaction for video traffic in a mobile network
GB2520724A (en) * 2013-11-29 2015-06-03 St Microelectronics Res & Dev Debug circuitry
US9699079B2 (en) * 2013-12-30 2017-07-04 Netspeed Systems Streaming bridge design with host interfaces and network on chip (NoC) layers
JP6249403B2 (ja) * 2014-02-27 2017-12-20 National Institute of Information and Communications Technology Optical packet buffer control device combining optical delay lines and electronic buffers
US9813815B2 (en) * 2014-05-20 2017-11-07 Gn Hearing A/S Method of wireless transmission of digital audio
KR102310580B1 (ko) * 2014-10-24 2021-10-13 SK Hynix Inc. Memory system and operating method of memory system
US9740646B2 (en) * 2014-12-20 2017-08-22 Intel Corporation Early identification in transactional buffered memory
US9185045B1 (en) * 2015-05-01 2015-11-10 Ubitus, Inc. Transport protocol for interactive real-time media
US10003529B2 (en) * 2015-08-04 2018-06-19 Telefonaktiebolaget Lm Ericsson (Publ) Method and system for memory allocation in a software-defined networking (SDN) system
KR102525295B1 (ko) * 2016-01-06 2023-04-25 Samsung Electronics Co., Ltd. Data management method and apparatus
KR102589410B1 (ko) * 2017-11-10 2023-10-13 Samsung Electronics Co., Ltd. Memory device and power control method thereof

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
CN106489134A (zh) 2017-03-08
CN110262751A (zh) 2019-09-20
EP3149585A1 (en) 2017-04-05
EP3149599A4 (en) 2018-01-03
US20150347019A1 (en) 2015-12-03
US11531472B2 (en) 2022-12-20
US20200150884A1 (en) 2020-05-14
CN113971004A (zh) 2022-01-25
EP3149599A1 (en) 2017-04-05
TW201614476A (en) 2016-04-16
US10572164B2 (en) 2020-02-25
TWI625632B (zh) 2018-06-01
US10146457B2 (en) 2018-12-04
EP3149602B1 (en) 2019-05-22
US20200097190A1 (en) 2020-03-26
US9733847B2 (en) 2017-08-15
CN106471460A (zh) 2017-03-01
US20210247915A1 (en) 2021-08-12
KR20170012400A (ko) 2017-02-02
TW201610688A (zh) 2016-03-16
EP3149595A1 (en) 2017-04-05
KR101796413B1 (ko) 2017-12-01
EP3149602A1 (en) 2017-04-05
CN106575257B (zh) 2019-05-28
US11526280B2 (en) 2022-12-13
CN106471474A (zh) 2017-03-01
US20170329545A1 (en) 2017-11-16
EP3149595A4 (en) 2018-03-28
US9690502B2 (en) 2017-06-27
EP3149592A1 (en) 2017-04-05
US20150347048A1 (en) 2015-12-03
EP3149585A4 (en) 2018-04-11
EP3149592B1 (en) 2022-05-04
US11461017B2 (en) 2022-10-04
CN106471485B (zh) 2019-01-08
TWI582588B (zh) 2017-05-11
KR102196747B1 (ko) 2020-12-31
TWI554883B (zh) 2016-10-21
EP3149595B1 (en) 2022-11-16
US9823864B2 (en) 2017-11-21
CN109032516B (zh) 2021-10-22
US10921995B2 (en) 2021-02-16
TW201614501A (en) 2016-04-16
WO2015187572A1 (en) 2015-12-10
WO2015187578A1 (en) 2015-12-10
TWI570569B (zh) 2017-02-11
US20150347225A1 (en) 2015-12-03
TWI545497B (zh) 2016-08-11
EP3149602A4 (en) 2017-08-09
CN106489136A (zh) 2017-03-08
EP3149592A4 (en) 2018-01-03
WO2015187576A1 (en) 2015-12-10
WO2015187577A1 (en) 2015-12-10
US9600191B2 (en) 2017-03-21
US20190102095A1 (en) 2019-04-04
US20150347226A1 (en) 2015-12-03
US11194480B2 (en) 2021-12-07
KR20170012399A (ko) 2017-02-02
TW201617868A (zh) 2016-05-16
TW201617879A (zh) 2016-05-16
CN106489134B (zh) 2018-08-14
US20170168728A1 (en) 2017-06-15
CN106471474B (zh) 2019-08-20
US20210141541A1 (en) 2021-05-13
KR20170005498A (ko) 2017-01-13
WO2015187575A1 (en) 2015-12-10
US11003363B2 (en) 2021-05-11
US9696920B2 (en) 2017-07-04
EP3149599B1 (en) 2022-09-21
US20170300382A1 (en) 2017-10-19
US20150347015A1 (en) 2015-12-03
US20210247914A1 (en) 2021-08-12
US11461019B2 (en) 2022-10-04
US9747048B2 (en) 2017-08-29
US10540104B2 (en) 2020-01-21
CN106471485A (zh) 2017-03-01
KR102197401B1 (ko) 2021-01-04
US20200097191A1 (en) 2020-03-26
CN106575257A (zh) 2017-04-19
EP3149586A1 (en) 2017-04-05
WO2015187574A1 (en) 2015-12-10
CN106471460B (zh) 2019-05-10
CN109032516A (zh) 2018-12-18
TW201610687A (zh) 2016-03-16
CN106489136B (zh) 2020-03-06
TWI547799B (zh) 2016-09-01
EP3149586B1 (en) 2022-07-20
US20150350082A1 (en) 2015-12-03
EP3149586A4 (en) 2018-08-29

Similar Documents

Publication Publication Date Title
US11461019B2 (en) Systems and methods for packing data in a scalable memory system protocol
CN113971004B (zh) Systems and methods for packing data in a scalable memory system protocol

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20161202

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20180309

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 3/06 20060101AFI20180306BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20190403

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 602015057431

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: G06F0011100000

Ipc: G06F0003060000

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 3/06 20060101AFI20200206BHEP

Ipc: G06F 11/10 20060101ALI20200206BHEP

Ipc: G06F 13/16 20060101ALI20200206BHEP

INTG Intention to grant announced

Effective date: 20200225

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602015057431

Country of ref document: DE

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1302207

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200915

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20200812

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201113

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201112

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201112

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1302207

Country of ref document: AT

Kind code of ref document: T

Effective date: 20200812

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201212

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602015057431

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

26N No opposition filed

Effective date: 20210514

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20210630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210601

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210630

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210601

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20210630

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20201214

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20150601

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230622

Year of fee payment: 9

Ref country code: DE

Payment date: 20230627

Year of fee payment: 9

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230620

Year of fee payment: 9

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20200812