EP4134830B1 - Multi-function flexible computational storage device - Google Patents

Multi-function flexible computational storage device

Info

Publication number
EP4134830B1
Authority
EP
European Patent Office
Prior art keywords
function
computational
data
port
storage
Prior art date
Legal status
Active
Application number
EP22188425.7A
Other languages
English (en)
French (fr)
Other versions
EP4134830A1 (de)
Inventor
Ramdas P. Kachare
Hingkwan Huen
Jimmy Lau
Howard R. Butler
Xuebin Yao
Current Assignee
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date
Filing date
Publication date
Priority claimed from US 17/669,351 (published as US20240211418A9)
Application filed by Samsung Electronics Co Ltd
Publication of EP4134830A1
Application granted
Publication of EP4134830B1
Legal status: Active
Anticipated expiration


Classifications

    • G06F 13/105 - Program control for peripheral devices where the programme performs an input/output emulation function
    • G06F 13/1684 - Details of memory controller using multiple buses
    • G06F 13/1673 - Details of memory controller using buffers
    • G06F 13/385 - Information transfer, e.g. on bus, using a universal interface adapter for adaptation of a particular data processing system to different peripheral devices
    • G06F 13/4022 - Coupling between buses using switching circuits, e.g. switching matrix, connection or expansion network
    • G06F 13/4027 - Coupling between buses using bus bridges
    • G06F 13/4221 - Bus transfer protocol, e.g. handshake, synchronisation, on a parallel bus being an input/output bus, e.g. ISA bus, EISA bus, PCI bus, SCSI bus
    • G06F 13/4282 - Bus transfer protocol, e.g. handshake, synchronisation, on a serial bus, e.g. I2C bus, SPI bus
    • G06F 15/17331 - Distributed shared memory [DSM], e.g. remote direct memory access [RDMA]
    • H04L 67/104 - Peer-to-peer [P2P] networks
    • H04L 67/1097 - Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • G06F 2213/0026 - PCI express
    • G06F 2213/0028 - Serial attached SCSI [SAS]
    • G06F 2213/0032 - Serial ATA [SATA]
    • G06F 2213/0036 - Small computer system interface [SCSI]

Definitions

  • a Lightweight Bridge (LWB) may be a circuit.
  • An endpoint of the LWB may expose a plurality of Physical Functions (PFs) to a host.
  • a root port of the LWB may connect to a device and determine the PFs and Virtual Functions (VFs) exposed by the device.
  • An Application Layer-Endpoint (APP-EP) and an Application Layer-Root Port (APP-RP) may translate between the PFs exposed by the endpoint and the PFs/VFs exposed by the device.
  • the APP-EP and the APP-RP may implement a mapping between the PFs exposed by the endpoint and the PFs/VFs exposed by the device.
  • the bridge may also include a buffer to enable the storage devices and/or computational storage units to share data without transferring the data through main memory.
  • FIG. 1 shows a machine including an accelerator to reduce data dimensionality and perform calculations, according to embodiments of the disclosure.
  • machine 105 which may also be termed a host or a system, may include processor 110, memory 115, and storage device 120.
  • Processor 110 may be any variety of processor. (Processor 110, along with the other components discussed below, is shown outside the machine for ease of illustration: embodiments of the disclosure may include these components within the machine.)
  • Multi-function device 135 may be implemented using any desired hardware.
  • multi-function device 135, or components thereof may be implemented using a Field Programmable Gate Array (FPGA), an Application-Specific Integrated Circuit (ASIC), a central processing unit (CPU), a System-on-a-Chip (SoC), a graphics processing unit (GPU), a general purpose GPU (GPGPU), a data processing unit (DPU), a neural processing unit (NPU), a Network Interface Card (NIC), or a tensor processing unit (TPU), to name a few possibilities.
  • device driver 130 may provide application programming interfaces (APIs) to access storage device 120 and/or computational storage unit 140.
  • existing applications may be executed by processor 110 without change to the applications (although embodiments of the disclosure may involve modifications to other elements in a software stack).
  • a TPU may have a TPU device driver
  • a GPU may have a GPU device driver: applications that access functions of the TPU or the GPU may continue to use the existing TPU device driver or GPU device driver.
  • computational storage unit 140 may be any computational storage unit, even if manufactured by a different manufacturer from storage device 120 and/or multi-function device 135.
  • device driver 130 (or other device drivers) may be proprietary.
  • memory 115 might not include enough memory to reach such a physical address, but memory 115 is not necessarily required to actually include enough memory to cover such an address.
  • memory 115 might include 2 gigabytes (GB) of memory, but might support addressing up to 4 GB of memory.
  • a subset of addresses, such as those between 2 GB and 3 GB, might be used to identify commands for peer-to-peer communication, even though memory 115 might not be able to process a request for those particular addresses.
  • Multi-function device 135 may identify such commands based on the address assigned to the command, and may intercept such commands for processing (an illustrative sketch of this address-range check appears after this list).
  • Endpoint 310 may be connected to (or implemented as part of) connector 305. Endpoint 310 may function as an endpoint for queries from processor 110 of FIG. 1 . Endpoint 310 may expose functions of devices attached to other connectors of multi-function device 135, such as connectors 315 and 320, as discussed further below.
  • multi-function device 135 may operate using the same clock cycle as processor 110 of FIG. 1 .
  • asynchronous buffer 325 may be omitted entirely, or replaced with a synchronous buffer (to permit temporary storage of requests, messages, and/or data received from or to be transmitted to processor 110 of FIG. 1 ).
  • Bridges 335 and 340 may be connected to asynchronous buffers 345 and 350, respectively.
  • Asynchronous buffers 345 and 350, like asynchronous buffer 325, may enable multi-function device 135 to operate at a different clock cycle than the various devices connected to connectors 315 and 320.
  • multi-function device 135 may operate using the same clock cycle as the device(s) connected to connectors 315 and/or 320.
  • asynchronous buffer 345 and/or 350 may be omitted entirely, or replaced with synchronous buffers (to permit temporary storage of requests, messages, and/or data received from or to be transmitted to the devices connected to connectors 315 and/or 320).
  • devices may enumerate their functions starting at zero: if the devices connected to connectors 315 and 320 were both assigned function numbers starting at zero, multiplexer/demultiplexer 330 might not be able to determine for which device a particular request associated with function number zero is intended.
  • multiplexer/demultiplexer 330 may therefore assign numbers 0, 1, and 2 to the PFs of the device connected to connector 315, and numbers 3 and 4 to the PFs of the device connected to connector 320 (the function-numbering sketch after this list illustrates this mapping).
  • multiplexer/demultiplexer 330 may map functions in any desired manner.
  • VFs exposed by the devices connected to connectors 315 and/or 320 may be exposed as VFs or PFs (that is, VFs of the devices may map to PFs exposed by multi-function device 135).
  • multi-function device 135 may also include additional bridges like bridges 335 and 340, additional asynchronous buffers like asynchronous buffers 345 and 350, and additional root ports like root ports 355 and 360, to support additional devices.
  • FIG. 3 also includes multiplexer/demultiplexer 365, which may be interposed between bridge 340 and asynchronous buffer 350.
  • Multiplexer/demultiplexer 365 is used in peer-to-peer communication between the devices connected to connectors 315 and 320. That is, using multiplexer/demultiplexer 365, the device attached to connector 320 may communicate with the device attached to connector 315 without such communications passing through processor 110 of FIG. 1 (via connector 305). To achieve this result, multiplexer/demultiplexer 365 may examine information in a request, message, or data received at multiplexer/demultiplexer 365.
  • multiplexer/demultiplexer 365 may receive requests, messages, and/or data from a device attached to connector 315, and from processor 110 of FIG. 1 attached to connector 305.
  • requests received by multiplexer/demultiplexer 365 from processor 110 of FIG. 1 and from the device attached to connector 315 may include tags that identify the request.
  • read requests may include a tag that identifies the request, so that data may be returned associated with the same tag.
  • tags from a single source will not conflict: for example, multiplexer/demultiplexer 365 may reasonably assume that processor 110 of FIG. 1 would not assign the same tag to two different read requests. Across sources, however, the same tag value might appear twice.
  • One solution may be to process requests from only one source at a time, and the other source might wait until no requests from the first source are active. But this solution might not offer the best performance.
  • Another solution may be to permit only requests with unique tags to be active at any time. Thus, so long as each request has a different tag from any other active request, the request may be processed; if the request replicates a tag that is associated with another active request, the new request may be buffered until the active request with that tag is complete. This solution may offer better performance.
  • Alternatively, multiplexer/demultiplexer 365 may provide tags to be used by the various sources: so long as each source is provided a set of tags that does not intersect with the set of tags assigned to another source, tag conflicts may be avoided. Yet another solution may be for multiplexer/demultiplexer 365 to introduce a level of indirection, mapping tags from each source to new tags (used internally to multiplexer/demultiplexer 365). When a request is received, the tag may be mapped, and the mapping from the original tag to the new tag may be stored in a table in multiplexer/demultiplexer 365. When the request is completed, multiplexer/demultiplexer 365 may determine the original tag from the new tag received with the response (the tag-mapping sketch after this list illustrates this indirection).
  • bridge 335 may direct the DMA request to multiplexer/demultiplexer 365 (rather than to multiplexer/demultiplexer 330). In this manner processor 110 of FIG. 1 may be bypassed, which may result in the request being processed more expeditiously.
  • one device may need a memory, and the other device may need a circuit to read data from or write data to that memory (in the first device). If either element is lacking (for example, if computational storage unit 140 of FIG. 1 includes neither memory nor a circuit to read or write a memory in storage device 120 of FIG. 1 ), then DMA might not be possible.
  • Data processor 375 may perform any desired processing on data in buffer 370.
  • Data processor 375 may include a circuit and/or software to perform some expected processing. But data processor 375 may also be general enough to support processing as instructed by processor 110 of FIG. 1. That is, processor 110 of FIG. 1 may download a program to data processor 375, which may then execute that program on data in buffer 370 to transform the data into a format expected by the destination device (the buffer-transform sketch after this list illustrates this idea).
  • FIG. 4 shows details of storage device 120 of FIG. 1 , according to embodiments of the disclosure.
  • storage device 120 may include host interface layer (HIL) 405, controller 410, and various flash memory chips 415-1 through 415-8 (also termed "flash memory storage"), which may be organized into various channels 420-1 through 420-4.
  • Host interface layer 405 may manage communications between storage device 120 and other components (such as processor 110 of FIG. 1).
  • Host interface layer 405 may also manage communications with devices remote from storage device 120: that is, devices that are not considered part of multi-function device 135 of FIG. 1 but are in communication with storage device 120, for example over one or more network connections. These communications may include read requests to read data from storage device 120, write requests to write data to storage device 120, and delete requests to delete data from storage device 120.
  • the PCIe bridge-storage may use the P2P control information and may intercept all the Transaction Layer Packets (TLPs) coming from the NVMe SSD controller that fall in the P2P address range.
  • the intercepted P2P PCIe transactions (Memory Write, and Memory Read) then may be presented to the Multiplexer/Demultiplexer module on the Root Port side.
  • This Multiplexer/Demultiplexer module may merge TLPs coming from the host and coming from the PCIe bridge-storage and destined for the compute resource. This module may allow the host and the PCIe bridge-storage to concurrently access the compute resource.
  • For Write transactions, which may be posted transactions, a simple TLP multiplexing may achieve the desired functionality.
  • completion packets coming back from the compute resource may be separated for the host and the PCIe bridge-storage. Such separation may be achieved using a Read Request tag value contained in each Read Request.
  • conflicting tag values between the host and the PCIe bridge-storage may need to be managed, and multiple ways to handle Read Request tag conflicts may be used (the tag-mapping sketch after this list illustrates one such approach).
  • Associated data may be delivered over transmission environments, including the physical and/or logical network, in the form of packets, serial data, parallel data, propagated signals, etc., and may be used in a compressed or encrypted format. Associated data may be used in a distributed environment, and stored locally and/or remotely for machine access.
  • a software module may reside in Random Access Memory (RAM), flash memory, Read Only Memory (ROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), registers, hard disk, a removable disk, a CD ROM, or any other form of storage medium known in the art.
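
The illustrative sketches below model a few of the mechanisms excerpted above. They are minimal Python approximations written for this summary; the names, data structures, and constants in them are assumptions for illustration, not the claimed implementation.

First, the address-range check used to mark commands for peer-to-peer handling: a reserved window of host addresses, such as the 2 GB to 3 GB range in the example above, identifies commands that the multi-function device may intercept rather than forward toward host memory.

```python
# Minimal sketch of address-range-based interception of peer-to-peer commands.
# The window bounds and return strings are illustrative assumptions only.

GB = 1 << 30
P2P_WINDOW = range(2 * GB, 3 * GB)  # reserved addresses that mark P2P commands


def route_command(address: int) -> str:
    """Decide where a memory request should be steered.

    Requests whose address falls inside the reserved window are intercepted
    by the multi-function device for peer-to-peer handling; all other
    requests are forwarded toward host memory as usual.
    """
    if address in P2P_WINDOW:
        return "intercept-for-p2p"
    return "forward-to-host-memory"


if __name__ == "__main__":
    assert route_command(1 * GB) == "forward-to-host-memory"
    assert route_command(2 * GB + 4096) == "intercept-for-p2p"
```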
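
Next, the function numbering performed by multiplexer/demultiplexer 330: host-visible function numbers are assigned consecutively across the downstream devices (for example, 0 to 2 for the device on connector 315 and 3 to 4 for the device on connector 320), which amounts to a table mapping each host-visible number to a device and its local function. The helper below is a hypothetical illustration of that bookkeeping.

```python
# Minimal sketch of host-visible function numbering across downstream devices.
# The connector labels and PF counts mirror the example above; the data
# structures are illustrative assumptions.


def build_function_map(pf_counts):
    """Assign consecutive host-visible function numbers across devices.

    pf_counts maps a connector label to the number of PFs its device exposes.
    Returns a dict of host-visible function number -> (connector, local PF),
    so a request for, say, function 3 can be steered to the right device.
    """
    mapping, next_fn = {}, 0
    for connector, count in pf_counts.items():
        for local_pf in range(count):
            mapping[next_fn] = (connector, local_pf)
            next_fn += 1
    return mapping


if __name__ == "__main__":
    fmap = build_function_map({"connector-315": 3, "connector-320": 2})
    assert fmap[0] == ("connector-315", 0)
    assert fmap[3] == ("connector-320", 0)  # host function 3 -> device on 320, local PF 0
```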
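
Third, the tag indirection described for multiplexer/demultiplexer 365: each incoming read request's tag is mapped to a new internal tag, the mapping is stored in a table, and the original source and tag are restored when the completion returns. Because the internal tag records who issued the request, the same table can also separate completions between the host and the PCIe bridge-storage. The class below is a hypothetical model of that table.

```python
# Minimal sketch of read-request tag indirection, as described above.
# Internal tags come from a free pool; the table remembers the originating
# source and its original tag so the completion can be restored and routed.
# All names and the pool size are illustrative assumptions.


class TagMapper:
    def __init__(self, num_tags: int = 256):
        self._free = list(range(num_tags))  # pool of internal tags
        self._table = {}                    # internal tag -> (source, original tag)

    def map_request(self, source: str, original_tag: int) -> int:
        """Allocate an internal tag for an outgoing read request."""
        if not self._free:
            raise RuntimeError("no free internal tags; request must be buffered")
        internal = self._free.pop()
        self._table[internal] = (source, original_tag)
        return internal

    def complete(self, internal_tag: int):
        """Look up and release the mapping when the completion arrives."""
        source, original_tag = self._table.pop(internal_tag)
        self._free.append(internal_tag)
        return source, original_tag


if __name__ == "__main__":
    mapper = TagMapper()
    t_host = mapper.map_request("host", original_tag=7)
    t_bridge = mapper.map_request("bridge-storage", original_tag=7)  # same tag, no conflict
    assert mapper.complete(t_host) == ("host", 7)
    assert mapper.complete(t_bridge) == ("bridge-storage", 7)
```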
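
Finally, the programmable role of data processor 375: a program downloaded by the host is applied to data staged in buffer 370 so that it reaches the destination device in the expected format. The function below only illustrates the idea of applying a caller-supplied transform to buffered bytes; the word-swapping example transform is an arbitrary assumption.

```python
# Minimal sketch of a downloadable transform applied to buffered data.
# The transform callable stands in for a program the host might download;
# the word-swapping example transform is purely illustrative.

from typing import Callable


def process_buffer(buffer: bytearray, transform: Callable[[bytes], bytes]) -> bytearray:
    """Apply a host-supplied transform to the data staged in the buffer."""
    buffer[:] = transform(bytes(buffer))
    return buffer


if __name__ == "__main__":
    # Example transform: reverse the byte order of each 4-byte word.
    def swap_words(data: bytes) -> bytes:
        return b"".join(data[i:i + 4][::-1] for i in range(0, len(data), 4))

    staged = bytearray(b"\x01\x00\x00\x00\x02\x00\x00\x00")
    process_buffer(staged, swap_words)
    assert staged == bytearray(b"\x00\x00\x00\x01\x00\x00\x00\x02")
```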

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Information Transfer Systems (AREA)
  • Warehouses Or Storage Devices (AREA)

Claims (12)

  1. Multi-function device (135), comprising:
    a first port (310) configured to communicate with a host processor (110);
    a second port (355) configured to communicate with a storage device (120);
    a third port (360) configured to communicate with a computational storage unit (140);
    a first circuit configured to forward a message from the host processor (110) to at least one of the storage device (120) or the computational storage unit (140);
    wherein the multi-function device (135) further comprises:
    a second circuit for peer-to-peer communication, wherein the second circuit includes a third circuit for communication between the storage device (120) and the computational storage unit (140); and
    the multi-function device (135) further comprises:
    a first multiplexer/demultiplexer (365) configured to enable the computational storage unit (140) to communicate with the host processor (110) and the storage device (120).
  2. Multi-function device (135) according to claim 1, wherein the first circuit includes a second multiplexer/demultiplexer (330) configured to identify at least one of the storage device (120) or the computational storage unit (140) as a destination for the message, based at least in part on data in the message.
  3. Multi-function device (135) according to claim 2, wherein the first circuit further includes:
    a first bridge (335) connected to the second port (355); and
    a second bridge (340) connected to the third port (360),
    wherein the second multiplexer/demultiplexer (330) is configured to forward the message to either the first bridge (335) or the second bridge (340), based at least in part on the data in the message.
  4. Multi-function device (135) according to any one of claims 1 to 3, wherein:
    the first port (310) includes an endpoint;
    the second port (355) includes a first root port; and
    the third port (360) includes a second root port.
  5. Multi-function device (135) according to claim 4, wherein:
    the first root port is configured to identify at least a first exposed function of the storage device (120);
    the second root port is configured to identify at least a second exposed function of the computational storage unit (140); and
    the endpoint is configured to expose at least the first exposed function and the second exposed function,
    wherein the first exposed function includes at least one of a Physical Function, PF, or a Virtual Function, VF, and wherein the second exposed function includes at least one of a PF or a VF.
  6. Multi-function device (135) according to any one of claims 1 to 5, wherein the second circuit includes a buffer for storing data to be shared with the computational storage unit (140).
  7. Multi-function device (135) according to any one of claims 1 to 6, wherein the computational storage unit (140) includes a network interface card (NIC).
  8. Multi-function device (135) according to any one of claims 1 to 7, wherein the multi-function device (135) is configured to communicate with at least one of the host processor (110), the storage device (120), or the computational storage unit (140) using a protocol, wherein the protocol includes at least one of: Peripheral Component Interconnect Express, PCIe, Ethernet, Remote Direct Memory Access, RDMA, Transmission Control Protocol/Internet Protocol, TCP/IP, InfiniBand, Serial Attached Small Computer System Interface, SCSI, SAS, Internet SCSI, iSCSI, or Serial AT Attachment, SATA.
  9. A method, comprising:
    receiving a request at a first port (310) of a multi-function device (135) from a host processor (110);
    identifying a first device from the request;
    identifying a second port (355) of the multi-function device (135) that is connected to the first device; and
    sending the request to the first device via the second port (355) of the multi-function device (135),
    wherein the multi-function device (135) includes a third port (360) connected to a second device,
    the first device is taken from a set comprising a storage device (120) or a computational storage unit (140); and
    the second device is taken from the set comprising the storage device (120) or the computational storage unit (140),
    wherein the multi-function device (135) comprises:
    a first circuit for forwarding the request from the host processor (110) to at least one of the storage device (120) or the computational storage unit (140);
    wherein the multi-function device (135) comprises:
    a second circuit for peer-to-peer communication, wherein the second circuit includes a third circuit for communicating between the storage device (120) and the computational storage unit (140); and
    the multi-function device (135) comprises:
    a first multiplexer/demultiplexer (365) configured to enable the computational storage unit (140) to communicate with the host processor (110) and the storage device (120).
  10. Method according to claim 9, wherein:
    receiving the request at the first port (310) of the multi-function device (135) from the host processor (110) includes receiving the request at an endpoint of the multi-function device (135) from the host processor (110); and
    sending the request to the first device via the second port (355) of the multi-function device (135) includes sending the request to the first device via a root port of the multi-function device (135).
  11. Method according to claim 10, wherein:
    the third port (360) includes a second root port of the multi-function device (135); and
    the method further comprises:
    identifying at least a first exposed function of the first device via the root port;
    identifying at least a second exposed function of the second device via the second root port; and
    exposing at least the first exposed function and the second exposed function to the host processor (110) at the endpoint of the multi-function device (135),
    wherein the first exposed function includes at least one of a Physical Function, PF, or a Virtual Function, VF, and wherein the second exposed function includes at least one of a PF or a VF.
  12. Method according to any one of claims 9 to 11, further comprising providing data from the second device to the first device.
EP22188425.7A 2021-08-12 2022-08-03 Multi-function flexible computational storage device Active EP4134830B1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163232631P 2021-08-12 2021-08-12
US17/669,351 US20240211418A9 (en) 2019-06-24 2022-02-10 Multi-function flexible computational storage device

Publications (2)

Publication Number Publication Date
EP4134830A1 (de) 2023-02-15
EP4134830B1 (de) 2024-07-03

Family

ID=83115444

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22188425.7A Active EP4134830B1 (de) 2021-08-12 2022-08-03 Multi-function flexible computational storage device

Country Status (4)

Country Link
EP (1) EP4134830B1 (de)
KR (1) KR20230024843A (de)
CN (1) CN115705306A (de)
TW (1) TW202341347A (de)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117744118B (zh) * 2023-12-21 2024-05-28 北京星驰致远科技有限公司 一种基于fpga的高速加密存储装置和方法

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10628353B2 (en) * 2014-03-08 2020-04-21 Diamanti, Inc. Enabling use of non-volatile media-express (NVMe) over a network
US10509758B1 (en) * 2017-09-28 2019-12-17 Amazon Technologies, Inc. Emulated switch with hot-plugging
US11809799B2 (en) * 2019-06-24 2023-11-07 Samsung Electronics Co., Ltd. Systems and methods for multi PF emulation using VFs in SSD controller

Also Published As

Publication number Publication date
TW202341347A (zh) 2023-10-16
KR20230024843A (ko) 2023-02-21
CN115705306A (zh) 2023-02-17
EP4134830A1 (de) 2023-02-15

Similar Documents

Publication Publication Date Title
US7849260B2 (en) Storage controller and control method thereof
US8949486B1 (en) Direct memory access to storage devices
CN113253919A (zh) 多功能存储装置和处理消息的方法
EP3201753B1 (de) Gemeinsame virtualisierte lokale speicherung
US10564898B2 (en) System and method for storage device management
US20130198450A1 (en) Shareable virtual non-volatile storage device for a server
US11940933B2 (en) Cross address-space bridging
US8332593B2 (en) Memory space management and mapping for memory area network
US7562111B2 (en) Multi-processor architecture with high capacity I/O
EP4134830B1 (de) Multifunktionale flexible rechnerische speichervorrichtung
US11029847B2 (en) Method and system for shared direct access storage
US20230195320A1 (en) Systems and methods for integrating a compute resource with a storage device
US20230198740A1 (en) Systems and methods for integrating fully homomorphic encryption (fhe) with a storage device
US20240211418A9 (en) Multi-function flexible computational storage device
JP5728088B2 (ja) 入出力制御装置及び入出力制御装置のフレーム処理方法
EP4332748A1 (de) Systeme und verfahren zur integration von vollständig homomorpher verschlüsselung mit einer speichervorrichtung
EP4332747A1 (de) Systeme und verfahren zur integration einer rechenressource mit einer speichervorrichtung
US10628042B2 (en) Control device for connecting a host to a storage device
JP6825263B2 (ja) ストレージ制御装置、およびストレージシステム
JP4983133B2 (ja) 入出力制御装置およびその制御方法、並びにプログラム
TW202424719A (zh) 多功能裝置和多功能裝置的整合方法
TWI840641B (zh) 多重功能儲存元件以及用於操作多重功能儲存元件的方法
CN117648203A (zh) 多功能装置和用于多功能装置的方法
CN117648052A (zh) 多功能装置和用于多功能装置的方法
CN117349202A (zh) 一种数据处理设备、数据处理方法以及系统

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230228

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230520

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20240205

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR