CN220041094U - Service storage integrated device - Google Patents

Service storage integrated device

Info

Publication number
CN220041094U
CN220041094U (application number CN202320823039.8U)
Authority
CN
China
Prior art keywords
storage
service
pcie
pool
server
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202320823039.8U
Other languages
Chinese (zh)
Inventor
寗树梁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiangbolong Digital Technology Co ltd
Original Assignee
Shanghai Jiangbolong Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiangbolong Digital Technology Co ltd
Priority to CN202320823039.8U
Application granted
Publication of CN220041094U
Legal status: Active
Anticipated expiration

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management


Abstract

The utility model discloses a service storage integrated device, comprising: a separate storage pool; and a plurality of servers, each connected to the separate storage pool, the servers being symmetrically stacked against the two opposite sides of the separate storage pool. With this structure, the utility model can improve the computing efficiency of the servers and of the service storage integrated device, and guarantee the bandwidth between the servers and the separate storage pool.

Description

Service storage integrated device
Technical Field
The utility model relates to the technical field of data processing, and in particular to a service storage integrated device.
Background
With the rapid development of science and technology, more and more fields need to use big data technology for data management.
However, as the amount of data in big-data applications grows explosively, the demands on the multi-core heterogeneous processors in servers also grow. The storage capacity that can be allocated to each processor core therefore shrinks and computing efficiency drops sharply; at the same time, the storage bandwidth available to each processor core is reduced, which leads to insufficient computing energy efficiency of the server.
Therefore, a method for improving the computing energy efficiency of the server and guaranteeing the bandwidth of the server is needed.
Disclosure of Invention
The utility model provides a service storage integrated device to solve the problem of insufficient computing energy efficiency of a server.
In order to solve the above technical problem, the present utility model provides a service storage integrated device, comprising: a separate storage pool; and a plurality of servers, each connected to the separate storage pool, the servers being symmetrically stacked against the two opposite sides of the separate storage pool.
Wherein the service storage integrated device comprises a plurality of PCIE links; each PCIE link connects a corresponding server to the separate storage pool.
Wherein a first port and a second port are formed at the two opposite ends of each PCIE link; each PCIE link is connected to the corresponding server through its first port and to the separate storage pool through its second port; the projections of the first port and the second port of each PCIE link onto the separate storage pool coincide.
Wherein the length of each PCIE link is less than 50 cm.
Wherein the separate storage pool comprises a main chip and at least one storage unit; each storage unit is connected to the main chip, and the main chip is connected to each server through the corresponding PCIE link.
Wherein the main chip comprises a plurality of PCIE interfaces, a network switching element, and at least one storage master element; each storage unit comprises a plurality of storage media; each PCIE interface is connected to the corresponding server through the corresponding PCIE link, and each PCIE interface is also connected to the network switching element; the network switching element is further connected to the at least one storage master element, and each storage master element is connected to the corresponding storage unit.
Wherein the storage type of each storage master element is the same as the storage type of the storage unit connected to it, and the storage media within each storage unit all have the same storage type.
Wherein the type of the storage unit comprises one or more of DDR, DDR4, DDR3, DDR5, DDR6, MRAM, PMEM, RMEM, PRAM, and FRAM.
Wherein the separate storage pool further comprises a power supply element; the power supply element is connected to the main chip and to each storage unit.
Wherein the separate storage pool further comprises a heat dissipation element; the heat dissipation element is attached to the main chip, or the heat dissipation element is electrically connected to the main chip.
In order to solve the above technical problem, the utility model arranges the separate storage pool in the service storage integrated device. This enlarges the storage capacity of the service storage integrated device, increases the storage capacity that can be allocated to the processors in the servers, guarantees the computing efficiency of the processors, and thereby improves the computing efficiency of the servers and of the service storage integrated device. In addition, the plurality of servers are symmetrically stacked against the two opposite sides of the separate storage pool, so that the separate storage pool sits in the middle of the servers. The total distance between each server and the separate storage pool is reduced, the total connection length between the servers and the separate storage pool is shortened, the bandwidth between the servers and the separate storage pool is increased, and the error rate of the communication ports is reduced.
Drawings
FIG. 1 is a schematic diagram of an embodiment of a service storage integrated device according to the present utility model;
FIG. 2 is a schematic structural diagram of an implementation of the separate storage pool in the embodiment of FIG. 1.
Detailed Description
The technical solutions in the embodiments of the present utility model will be described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only some, not all, of the embodiments of the present utility model. All other embodiments obtained by those skilled in the art on the basis of the embodiments of the present utility model without inventive effort fall within the scope of protection of the present utility model.
It should be noted that any directional indications (such as up, down, left, right, front, and rear) in the embodiments of the present utility model are used only to explain the relative positional relationships, movement conditions, and the like between components in a specific posture (as shown in the drawings); if the specific posture changes, the directional indications change accordingly.
In addition, descriptions such as "first" and "second" in the embodiments of the present utility model are for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined by "first" or "second" may explicitly or implicitly include at least one such feature. The technical solutions of the embodiments may be combined with each other, but only where those skilled in the art can realize the combination; when a combination of technical solutions is contradictory or cannot be realized, the combination shall be regarded as absent and outside the scope of protection claimed by the present utility model.
Referring to fig. 1, fig. 1 is a schematic structural diagram of an embodiment of a service storage integrated device according to the present utility model.
The service storage integrated device 10 of the present embodiment includes a separate storage pool 11 and a plurality of servers 12. Each server 12 is connected to the separate storage pool 11, and the servers 12 are symmetrically stacked on the two opposite sides of the separate storage pool 11.
The separate storage pool 11 is a storage device independent of the servers 12. A storage pool is a technique that connects a group of storage devices so that together they form one memory space; the storage devices can then be used more effectively, and their performance and availability are improved. The separate storage pool 11 may be a CXL pool or another type of pool; CXL (Compute Express Link) is an industry-supported cache-coherent interconnect protocol for processors, memory expansion, and accelerators.
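For illustration only (this sketch is not part of the utility model, and the class, method names, and capacities are invented for the example): the idea of a storage pool is that several storage devices are aggregated into one logical space from which each server draws capacity as needed.

```python
# Illustrative sketch only: a storage pool aggregates several storage devices
# into one address space and hands out regions to servers on demand.

class StoragePool:
    def __init__(self, device_sizes_gb):
        # One contiguous logical space formed by concatenating the devices.
        self.capacity_gb = sum(device_sizes_gb)
        self.allocations = {}          # server_id -> allocated size in GB
        self.free_gb = self.capacity_gb

    def allocate(self, server_id, size_gb):
        # Grant a slice of the pooled capacity to a server, if available.
        if size_gb > self.free_gb:
            raise MemoryError("pool exhausted")
        self.allocations[server_id] = self.allocations.get(server_id, 0) + size_gb
        self.free_gb -= size_gb

pool = StoragePool([512, 512, 512, 512])   # four 512 GB storage units (assumed sizes)
for server in ("server_0", "server_1"):
    pool.allocate(server, 768)             # each server draws from the shared pool
print(pool.free_gb)                        # 512 GB still unallocated
```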
By providing the separate storage pool 11 in the service storage integrated device 10, the storage capacity of the service storage integrated device 10 is enlarged, so that the storage capacity that can be allocated to the processors in each server 12 is increased, the computing efficiency of the processors is guaranteed, and the computing efficiency of the server 12 and of the service storage integrated device 10 is improved.
Each server 12 includes at least one processor for data processing. Each server 12 is connected to the separate storage pool 11 so that it can store data into the separate storage pool 11 or retrieve data from it.
The number of servers 12 may be an even number such as 2, 4, 6, 8, or 10, and the servers 12 are symmetrically stacked against and attached to the two opposite sides of the separate storage pool 11.
In a specific application scenario, when the number of servers 12 in the service storage integrated device 10 is 2, the device includes, stacked and attached in sequence, a server 12, the separate storage pool 11, and a server 12; that is, one server 12 is attached to each of the two opposite sides of the separate storage pool 11. In another specific application scenario, when the number of servers 12 is 6, the device includes, stacked and attached in sequence, three servers 12, the separate storage pool 11, and three further servers 12; that is, three servers 12 are attached to each of the two opposite sides of the separate storage pool 11. When the number of servers 12 in the service storage integrated device 10 is some other even number, the structure is similar to the above application scenarios and is not described again.
Because the servers 12 are symmetrically stacked against the two opposite sides of the separate storage pool 11 and are connected to it point-to-point in a symmetrical star topology, the total distance between the servers 12 and the separate storage pool 11 is reduced, the total connection length between the servers 12 and the separate storage pool 11 is shortened, the bandwidth between the servers 12 and the separate storage pool 11 is increased, and the error rate of the communication ports is reduced.
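A back-of-the-envelope comparison (the board spacing and server count are assumed values, not taken from the utility model) shows why placing the separate storage pool 11 in the middle of the stack shortens the summed connection length compared with placing it at one end:

```python
# Illustrative only: summed server-to-pool distances for a centred pool
# versus a pool at one end of the stack, with assumed board spacing.
PITCH_CM = 5          # assumed spacing between adjacent stacked boards
SERVERS = 6           # three servers on each side of the pool

def total_distance(pool_index, n_servers, pitch):
    # Boards occupy slots 0..n_servers; the pool sits at pool_index.
    slots = [i for i in range(n_servers + 1) if i != pool_index]
    return sum(abs(slot - pool_index) * pitch for slot in slots)

print(total_distance(pool_index=3, n_servers=SERVERS, pitch=PITCH_CM))  # pool centred: 60 cm
print(total_distance(pool_index=0, n_servers=SERVERS, pitch=PITCH_CM))  # pool at one end: 105 cm
```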
The service storage integrated device 10 of the present embodiment may be applied to the fields of cloud servers and the like.
With the above structure, the separate storage pool is arranged in the service storage integrated device, which enlarges the storage capacity of the service storage integrated device, increases the storage capacity that can be allocated to the processors in the servers, guarantees the computing efficiency of the processors, and thereby improves the computing efficiency of the servers and of the service storage integrated device. The plurality of servers are symmetrically stacked against the two opposite sides of the separate storage pool, so that the separate storage pool sits in the middle of the servers; the total distance between each server and the separate storage pool is reduced, the total connection length between the servers and the separate storage pool is shortened, the bandwidth between the servers and the separate storage pool is increased, and the error rate of the communication ports is reduced.
In other embodiments, the service storage integrated device 10 includes a plurality of PCIE links 13; each PCIE link 13 connects a corresponding server 12 to the separate storage pool 11. The number of PCIE links 13 in the service storage integrated device 10 is the same as the number of servers 12, so that the links and servers are connected in one-to-one correspondence.
PCIE (peripheral component interconnect express) is a high-speed serial computer expansion bus standard. PCIE devices communicate through logical connections called interconnects or links. A link is a point-to-point communication channel between two PCIE ports.
A PCIE link 13 is a high-speed signal link, but if the PCIE link 13 is too long, signal transmission is degraded. In the service storage integrated device 10 of this embodiment, the servers 12 are symmetrically stacked against the two opposite sides of the separate storage pool 11, so the total distance between each server 12 and the separate storage pool 11 is reduced and the length of the PCIE link 13 between each server 12 and the separate storage pool 11 is shortened. This ensures high-speed transmission over the PCIE links 13 between the servers 12 and the separate storage pool 11, reduces the influence of distance on the transmission quality of the PCIE links 13, lowers the transmission error rate of the PCIE links 13, and improves the reliability of the service storage integrated device 10.
In other embodiments, a first port 131 and a second port 132 are formed at the two opposite ends of each PCIE link 13; each PCIE link 13 is connected to the corresponding server 12 through its first port 131 and to the separate storage pool 11 through its second port 132.
The projections of the first port 131 and the second port 132 of each PCIE link 13 onto the separate storage pool 11 coincide; that is, the first port 131 and the second port 132 of each PCIE link 13 lie on the same straight line, and this line is perpendicular to the plane of the separate storage pool 11. Aligning the two ports in this way reduces the distance between the first port 131 and the second port 132 as much as possible and therefore reduces the length of the corresponding PCIE link 13, which further ensures high-speed transmission over the PCIE links 13 between the servers 12 and the separate storage pool 11, reduces the influence of distance on the transmission quality of the PCIE links 13, lowers the transmission error rate of the PCIE links 13, and improves the reliability of the service storage integrated device 10.
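The benefit of coincident port projections can be seen with simple geometry (illustrative only; the dimensions are assumed): for a lateral offset (dx, dy) between the two port projections and a board-to-board height h, the straight-line distance between the ports is sqrt(dx^2 + dy^2 + h^2), which is minimal exactly when dx = dy = 0.

```python
# Illustrative geometry sketch: link length as a function of the lateral
# offset between the port projections and the board-to-board height.
import math

def link_length(dx_cm, dy_cm, h_cm):
    return math.sqrt(dx_cm**2 + dy_cm**2 + h_cm**2)

print(link_length(0, 0, 4))    # coincident projections: 4.0 cm, the minimum
print(link_length(30, 10, 4))  # offset ports: roughly 31.9 cm of routing
```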
In other embodiments, the length of each PCIE link is less than 50 cm, and may specifically be 5 cm, 10 cm, 16 cm, 20 cm, 25 cm, 28 cm, 30 cm, 34 cm, 35 cm, 39 cm, 40 cm, 42 cm, 45 cm, 48 cm, 50 cm, or the like.
By limiting the length of the PCIE link 13 relative to the port-to-port distance, over-long PCIE links 13 are avoided and length redundancy is reduced while the necessary link length is retained. This ensures high-speed transmission over the PCIE links 13 between the servers 12 and the separate storage pool 11 and reduces the influence of distance on the transmission quality of the PCIE links 13.
In other embodiments, please further refer to FIG. 2; FIG. 2 is a schematic structural diagram of an implementation of the separate storage pool in the embodiment of FIG. 1.
The separate storage pool 11 includes a main chip 111 and at least one storage unit 115. Each storage unit 115 is connected to the main chip 111, so that the main chip 111 controls the storage, data retrieval, and similar operations of each storage unit 115. The main chip 111 is also connected to each server 12 through the corresponding PCIE link 13, so that it can receive externally transmitted data and control its storage into a specific storage unit 115.
The storage type of the storage unit 115 may include one or more of memory, flash memory, and working memory, set according to actual requirements, which is not limited here. The main chip 111 may be of a corresponding type chosen according to the type of the storage unit 115, so as to control the storage unit 115.
In other embodiments, the main chip 111 includes a plurality of PCIE interfaces 112, a network switching element 113, and at least one storage master 114.
Each PCIE interface 112 is connected to the corresponding server 12 through the corresponding PCIE link 13, and each PCIE interface 112 is also connected to the network switching element 113. The number of PCIE interfaces 112 in the main chip 111 may be the same as the number of servers 12 and the number of PCIE links 13 in the service storage integrated device 10, so that they can be connected one-to-one through the corresponding PCIE links 13.
The interface type of the PCIE interface 112 may include 2-lane PCIE 5.0, 4-lane PCIE 5.0, 8-lane PCIE 5.0, 16-lane PCIE 5.0, 2-lane PCIE 6.0, 4-lane PCIE 6.0, 8-lane PCIE 6.0, 16-lane PCIE 6.0, and so on, which is not limited here.
The main chip 111 is connected to each server 12 through the PCIE interfaces 112 and the PCIE links 13, so that data are transmitted at high speed between the main chip 111 and each server 12 via the PCIE protocol, which improves the information transmission efficiency within the service storage integrated device 10.
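As a rough orientation (the per-lane figures are common approximations, not values stated in the utility model), the aggregate bandwidth of the listed interface types scales with lane count and PCIE generation:

```python
# Rough bandwidth estimate: PCIe 5.0 runs at about 32 GT/s per lane and
# PCIe 6.0 at about 64 GT/s per lane, i.e. roughly 4 GB/s and 8 GB/s of
# usable bandwidth per lane per direction, before protocol overhead.
PER_LANE_GBPS = {"PCIe 5.0": 4.0, "PCIe 6.0": 8.0}   # approximate GB/s per lane

def link_bandwidth(generation, lanes):
    return PER_LANE_GBPS[generation] * lanes

for gen in ("PCIe 5.0", "PCIe 6.0"):
    for lanes in (2, 4, 8, 16):
        print(f"{lanes:>2}-lane {gen}: ~{link_bandwidth(gen, lanes):.0f} GB/s per direction")
```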
The network switching element 113 is also connected to the at least one storage master element 114, and each storage master element 114 is connected to the corresponding storage unit 115. The number of storage master elements 114 equals the number of storage units 115, and they are connected in one-to-one correspondence.
The network switching element 113 is connected to the PCIE interfaces 112 so that it can receive or send transmission data through the PCIE interfaces 112; it manages the exchange of data between the separate storage pool 11 and the servers 12.
The storage master element 114 is used to control the corresponding storage unit 115. Each storage unit 115 includes a plurality of storage media 116. The storage master element 114 is connected to the corresponding storage media 116 through the corresponding storage protocol, so as to control data storage and data retrieval.
In a specific application scenario, when a server 12 writes data to the separate storage pool 11, the data are first transmitted over the PCIE link 13 and through the PCIE interface 112 to the network switching element 113; the network switching element 113 then forwards the data to the corresponding storage master element 114, and the storage master element 114 finally stores the data in one or more specific storage media 116 of the corresponding storage unit 115.
In a specific application scenario, when a server 12 retrieves data from the separate storage pool 11, the storage master element 114 reads the data from one or more specific storage media 116 of the corresponding storage unit 115 and passes the data to the network switching element 113; the network switching element 113 then transmits the data to the corresponding server 12 through the PCIE interface 112 and the PCIE link 13, completing the data retrieval.
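The write and read paths described above can be summarised in a small sketch (the class, method names, and sizes are hypothetical and chosen only for the example):

```python
# Illustrative data-path sketch: a write travels PCIe interface -> network
# switching element -> storage master element -> storage medium, and a read
# retraces the same path in reverse.

class SeparateStoragePool:
    def __init__(self, n_units, media_per_unit):
        # storage[unit][medium] holds whatever bytes were written there
        self.storage = [[{} for _ in range(media_per_unit)] for _ in range(n_units)]

    def write(self, pcie_if, unit, medium, address, data):
        # PCIe interface -> network switching element -> storage master -> medium
        self.storage[unit][medium][address] = data

    def read(self, pcie_if, unit, medium, address):
        # medium -> storage master -> network switching element -> PCIe interface
        return self.storage[unit][medium].get(address)

pool = SeparateStoragePool(n_units=2, media_per_unit=4)
pool.write(pcie_if=0, unit=1, medium=2, address=0x1000, data=b"hello")
print(pool.read(pcie_if=0, unit=1, medium=2, address=0x1000))   # b'hello'
```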
In other embodiments, the storage type of the storage master element 114 is the same as the storage type of the storage unit 115 connected to it, and the storage media 116 within each storage unit 115 all have the same storage type.
In a specific application scenario, when the storage master element 114 is a DDR4 storage master, the storage unit 115 connected to it is also of the DDR4 type, and each storage medium 116 in that DDR4 storage unit is DDR4. Each storage medium 116 is coupled to the storage master element 114 via the DDR4 protocol.
In other embodiments, the type of the storage unit 115 may include one or more of the DDR, DDR4, DDR3, DDR5, DDR6, MRAM (Magnetoresistive Random Access Memory), PMEM (Persistent Memory), RMEM, PRAM, and FRAM (Ferroelectric Random Access Memory) storage types, which is not limited here.
The type of the storage unit 115, the type of the storage media 116 within the storage unit 115, the type of the corresponding storage master element 114, and the type of the protocol between the storage media 116 and the storage master element 114 are all correspondingly the same.
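A minimal consistency check illustrating this matching rule (illustrative only; the type names are examples, not an exhaustive list):

```python
# Sketch: the storage master element, its storage unit, every storage medium
# in that unit, and the protocol between them must all use the same type.
from enum import Enum

class StorageType(Enum):
    DDR3 = "DDR3"
    DDR4 = "DDR4"
    DDR5 = "DDR5"
    MRAM = "MRAM"
    PMEM = "PMEM"
    FRAM = "FRAM"

def types_consistent(master_type, unit_type, media_types, protocol_type):
    # Consistent only if every listed type collapses to one and the same value.
    return {master_type, unit_type, protocol_type, *media_types} == {master_type}

print(types_consistent(StorageType.DDR4, StorageType.DDR4,
                       [StorageType.DDR4] * 8, StorageType.DDR4))   # True
print(types_consistent(StorageType.DDR4, StorageType.DDR5,
                       [StorageType.DDR4] * 8, StorageType.DDR4))   # False
```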
In other embodiments, the separate storage pool 11 further includes a power supply element 117. The power supply element 117 is connected to the main chip 111 and to each storage unit 115 to supply power to them, ensuring the operation of the separate storage pool 11.
Specifically, the power supply element 117 may be connected to the network switching element 113, the storage master elements 114, each PCIE interface 112, and each storage medium 116 to supply power.
In a specific application scenario, the power supply element 117 may include several power supply modules, so that different voltages can be supplied to the connected elements. For example, the network switching element 113 may be supplied with 2 V, the storage master element 114 with 3 V, and each PCIE interface 112 and each storage medium 116 with 1 V, which is not limited here.
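A configuration sketch of such per-component supply rails (the voltages are the example values above; the dictionary layout and key names are purely illustrative):

```python
# Illustrative mapping of components to supply voltages in the power
# supply element; three distinct rails cover the four component groups.
POWER_RAILS_V = {
    "network_switching_element": 2.0,
    "storage_master_element": 3.0,
    "pcie_interfaces": 1.0,
    "storage_media": 1.0,
}

distinct_rails = len(set(POWER_RAILS_V.values()))
print(f"{distinct_rails} distinct voltage levels required")   # 3
```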
In other embodiments, the separate storage pool 11 further includes a heat dissipation element 118. The heat dissipation element 118 is attached to the main chip 111. When the heat dissipation element 118 is attached to the main chip 111 to dissipate heat, it may include a metal block, heat dissipation fins, heat dissipation posts, or the like to conduct heat, and in this case the heat dissipation element 118 is not electrically connected to the main chip 111.
Alternatively, the heat dissipation element 118 is electrically connected to the main chip 111. When the heat dissipation element 118 is electrically connected to the main chip 111, it may include an active cooling device such as a fan or a refrigeration unit, which is started to dissipate heat. This is not limited here.
Since the main chip 111 generates heat when the separate storage pool 11 is operating, arranging the heat dissipation element 118 accelerates the heat dissipation of the main chip 111 and the separate storage pool 11, safeguards their operating environments, and improves the reliability of the separate storage pool 11.
With the above structure, the separate storage pool is arranged in the service storage integrated device, which enlarges the storage capacity of the service storage integrated device, increases the storage capacity that can be allocated to the processors in the servers, guarantees the computing efficiency of the processors, and thereby improves the computing efficiency of the servers and of the service storage integrated device. The plurality of servers are symmetrically stacked against the two opposite sides of the separate storage pool, so that the separate storage pool sits in the middle of the servers; the total distance between each server and the separate storage pool is reduced, the total connection length between the servers and the separate storage pool is shortened, the bandwidth between the servers and the separate storage pool is increased, and the error rate of the communication ports is reduced.
The foregoing is only the embodiments of the present utility model, and therefore, the patent scope of the utility model is not limited thereto, and all equivalent structures or equivalent processes using the descriptions of the present utility model and the accompanying drawings, or direct or indirect application in other related technical fields, are included in the scope of the utility model.

Claims (10)

1. A service storage integration apparatus, comprising:
a separate storage pool;
and a plurality of servers, each connected to the separate storage pool, wherein the servers are symmetrically stacked against the two opposite sides of the separate storage pool.
2. The service storage integration apparatus of claim 1, wherein the service storage integration apparatus comprises a plurality of PCIE links;
each PCIE link connects a corresponding server to the separate storage pool.
3. The service storage integration apparatus of claim 2, wherein,
a first port and a second port are formed at the two opposite ends of each PCIE link, respectively; each PCIE link is connected to the corresponding server through the first port and to the separate storage pool through the second port;
the projections of the first port and the second port of each PCIE link onto the separate storage pool coincide.
4. The service storage integration apparatus of claim 3, wherein,
the length of each PCIE link is less than 50 cm.
5. The service storage integration apparatus of claim 1, wherein the separate storage pool comprises a main chip and at least one storage unit;
each storage unit is connected to the main chip, and the main chip is connected to each server through a corresponding PCIE link.
6. The service storage integration apparatus of claim 5, wherein the main chip comprises a plurality of PCIE interfaces, a network switching element, and at least one storage master element; each storage unit comprises a plurality of storage media;
each PCIE interface is connected to the corresponding server through the corresponding PCIE link, and each PCIE interface is also connected to the network switching element;
the network switching element is further connected to the at least one storage master element, and each storage master element is connected to the corresponding storage unit.
7. The service storage integration apparatus according to claim 6, wherein the storage type of each storage master element is the same as the storage type of the corresponding connected storage unit, and the storage type of the storage medium in each storage unit is the same.
8. The service storage integration apparatus of claim 7, wherein,
the types of memory cells include one or more of DDR, DDR4, DDR3, DDR5, DDR6, MRAM, PMEM, RMEM, PRAM, and FRAM.
9. The service storage integration apparatus of claim 5, wherein the separate storage pool further comprises a power supply element;
the power supply element is respectively connected with the main chip and each storage unit.
10. The service storage integration apparatus of claim 5, wherein the separate storage pool further comprises a heat sink element;
the heat dissipation element is attached to the main chip; or (b)
The heat dissipation element is electrically connected with the main chip.
CN202320823039.8U 2023-04-13 2023-04-13 Service storage integrated device Active CN220041094U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202320823039.8U CN220041094U (en) 2023-04-13 2023-04-13 Service storage integrated device


Publications (1)

Publication Number Publication Date
CN220041094U 2023-11-17

Family

ID=88740085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202320823039.8U Active CN220041094U (en) 2023-04-13 2023-04-13 Service storage integrated device

Country Status (1)

Country Link
CN (1) CN220041094U (en)


Legal Events

Date Code Title Description
GR01 Patent grant