CN111352475A - Server - Google Patents


Info

Publication number
CN111352475A
Authority
CN
China
Prior art keywords
fpga
service data
server
acceleration module
acceleration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811580994.3A
Other languages
Chinese (zh)
Inventor
偶瑞军
吕俊杰
赵存
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aisino Corp
Original Assignee
Aisino Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aisino Corp filed Critical Aisino Corp
Priority to CN201811580994.3A priority Critical patent/CN111352475A/en
Publication of CN111352475A publication Critical patent/CN111352475A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 1/00 Details not covered by groups G06F 3/00 - G06F 13/00 and G06F 21/00
    • G06F 1/16 Constructional details or arrangements
    • G06F 1/18 Packaging or power distribution
    • G06F 1/183 Internal mounting support structures, e.g. for printed circuit boards, internal connecting means

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Power Engineering (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Storage Device Security (AREA)

Abstract

The invention discloses a server comprising a storage module and an FPGA acceleration module, where the FPGA acceleration module includes at least one of an FPGA-AI acceleration module, an FPGA-IO acceleration module, and an FPGA-cryptographic-operation acceleration module. Because the FPGA acceleration module includes at least one of these modules, the data transmission and processing of the server are enhanced, so that high-performance, high-load service processing requirements are met.

Description

Server
Technical Field
The invention relates to the technical field of data processing, in particular to a server.
Background
Most existing blockchain systems use a general-purpose server as the node hardware platform, and because traffic volumes differ across regions, the capacity requirements on the server differ as well.
To meet the service requirements of high-performance, high-load blockchain network services, research and development in the prior art usually focuses on improving algorithms such as the consensus algorithm, identity management, transaction privacy, and service load distribution; however, a general-purpose server may still fail to meet the requirements of higher performance and load.
Disclosure of Invention
The invention provides a server to solve the problem that a general-purpose server in the prior art cannot meet the requirements of higher performance and load.
The invention provides a server comprising a storage module and a Field Programmable Gate Array (FPGA) acceleration module, where the FPGA acceleration module includes at least one of an FPGA-AI acceleration module, an FPGA-IO acceleration module, and an FPGA-cryptographic-operation acceleration module.
Further, if the FPGA acceleration module comprises an FPGA-AI acceleration module;
and the FPGA-AI acceleration module is used for carrying out AI operation acceleration on the service data to be processed in the storage module.
Further, if the FPGA acceleration module comprises an FPGA-IO acceleration module;
the FPGA-IO acceleration module is used for performing input and/or output acceleration on the service data to be processed.
Further, if the FPGA acceleration module comprises an FPGA-cryptographic-operation acceleration module;
the FPGA-cryptographic-operation acceleration module is used for accelerating encryption and/or decryption operations on the service data to be processed in the storage module.
Further, the storage module comprises a Solid State Disk (SSD) and/or a magnetic disk hard disk.
Further, if the storage module comprises both an SSD and a magnetic disk hard disk;
the SSD is used for storing first service data, wherein the first service data is service data whose storage time does not exceed a set time threshold;
and the magnetic disk hard disk is used for storing second service data, wherein the second service data is service data whose storage time exceeds the set time threshold.
Further, the SSD is also used for storing service data whose access frequency is higher than a set access frequency threshold and/or service data whose access speed is higher than a set access speed threshold.
Further, the server is a node in the blockchain network service system.
Further, the running environment of the server comprises an Intel SGX chipset and/or a trusted execution environment (TEE).
The invention provides a server comprising a storage module and an FPGA acceleration module, where the FPGA acceleration module includes at least one of an FPGA-AI acceleration module, an FPGA-IO acceleration module, and an FPGA-cryptographic-operation acceleration module. Because the FPGA acceleration module includes at least one of these modules, the data transmission and processing of the server are enhanced, so that high-performance, high-load service processing requirements are met.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of a server according to embodiment 1 of the present invention;
fig. 2 is a schematic structural diagram of a server according to embodiment 1 of the present invention;
fig. 3 is a schematic structural diagram of a server according to embodiment 1 of the present invention;
fig. 4 is a schematic structural diagram of a server according to embodiment 1 of the present invention;
fig. 5 is a schematic structural diagram of a server according to embodiment 2 of the present invention;
fig. 6 is a schematic structural diagram of a server according to embodiment 4 of the present invention;
fig. 7 is a schematic structural diagram of a server according to embodiment 4 of the present invention.
Detailed Description
In order to meet the requirement of high-performance and high-load service processing, the embodiment of the invention provides a server.
To make the objects, technical solutions, and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. The described embodiments are clearly only a part of the embodiments of the present invention, not all of them. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present invention.
Example 1:
fig. 1 is a schematic structural diagram of a server according to an embodiment of the present invention, where the server includes a storage module 101 and an FPGA (Field-Programmable Gate Array) acceleration module 102, where the FPGA acceleration module 102 includes at least one of an FPGA-AI acceleration module, an FPGA-IO acceleration module, and an FPGA-cryptographic operation acceleration module.
Most existing blockchain systems use a general-purpose server as the node hardware platform, and research and development of blockchain systems focuses on improving software design in aspects such as the consensus algorithm, identity management, transaction privacy, and service load. However, general-purpose server hardware is usually fixed and cannot meet the requirements for higher performance and higher load; a typical general-purpose server is configured with a hot-standby power supply, high-speed disk hard disks, large-capacity memory, high-speed dual network ports, and the like.
In the embodiment of the invention, the FPGA acceleration module, which comprises at least one of an FPGA-AI acceleration module, an FPGA-IO acceleration module, and an FPGA-cryptographic-operation acceleration module, serves as a reinforcing module that improves the hardware performance of the server.
The server may be a node in a blockchain network service system, i.e., a blockchain node.
As shown in fig. 2, if the FPGA acceleration module 102 includes the FPGA-AI acceleration module 102-1;
and the FPGA-AI acceleration module is used for carrying out AI operation acceleration on the service data to be processed in the storage module.
The FPGA-AI acceleration module can acquire the service data to be processed in the storage module.
The process of performing AI operation acceleration on the service data to be processed in the storage module by the FPGA-AI acceleration module may be implemented by using the prior art, or may be implemented by using an improved technique of the prior art, which is not described in detail in the embodiment of the present invention.
If the server is a blockchain node, the FPGA-AI acceleration module in the node handles AI operation acceleration for the intelligent functions applied to the service data; an AI algorithm model is trained on the chain data accumulated over time to provide intelligent function services.
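The routing of AI operations to the FPGA-AI acceleration module can be sketched as follows. This is a minimal illustration assuming a hypothetical accelerator driver interface (`FpgaAiAccelerator`); the patent does not specify the driver API, so the names and fallback behavior here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ServiceData:
    """Pending service data held in the storage module."""
    payload: bytes

class FpgaAiAccelerator:
    """Stand-in for a driver handle to the FPGA-AI acceleration module."""
    def infer(self, batch):
        # A real driver would DMA the batch to the FPGA and read back results;
        # here each item is tagged to show the accelerated path was taken.
        return [("fpga", d.payload) for d in batch]

def cpu_infer(batch):
    # Software fallback when no FPGA-AI module is installed.
    return [("cpu", d.payload) for d in batch]

def run_ai_acceleration(batch, accelerator=None):
    """Route pending service data through the FPGA-AI module if available."""
    if accelerator is not None:
        return accelerator.infer(batch)
    return cpu_infer(batch)
```

Since the FPGA acceleration module is optional ("at least one of"), the dispatcher degrades gracefully to the CPU path when the module is absent.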
As shown in fig. 3, if the FPGA acceleration module 102 includes the FPGA-IO acceleration module 102-2;
the FPGA-IO acceleration module is used for performing input and/or output acceleration on the service data to be processed.
After the FPGA-IO acceleration module accelerates the input of the service data to be processed, the input data may be stored in the storage module.
After the FPGA-IO acceleration module accelerates the output of the service data to be processed, the FPGA-IO acceleration module may specifically accelerate the output of the service data to be processed in the storage module.
The process by which the FPGA-IO acceleration module accelerates input and/or output of the service data to be processed can be implemented using the prior art or an improvement of it, and is not described in detail in the embodiment of the present invention.
The FPGA-IO acceleration module can optimize the data flow direction in the aspect of data throughput, so that the external operation performance of the node is improved.
As shown in fig. 4, if the FPGA acceleration module 102 includes the FPGA-cryptographic-operation acceleration module 102-3;
the FPGA-password operation acceleration module is used for carrying out encryption acceleration operation and/or decryption operation acceleration on the service data to be processed in the storage module.
The FPGA-password operation acceleration module can perform encryption acceleration operation on the service data to be processed in the storage module.
The FPGA-password operation acceleration module can be used for decrypting and accelerating the service data to be processed in the storage module.
The process by which the FPGA-cryptographic-operation acceleration module accelerates encryption and/or decryption of the service data to be processed in the storage module can be implemented using the prior art or an improvement of it, and is not described in detail in the embodiment of the present invention.
If the server is a blockchain node, the blockchain system involves a large number of cryptographic operations, and the FPGA-cryptographic-operation acceleration module can accelerate cryptographic processing performance.
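The encrypt/decrypt routing described above can be sketched as below. The `FpgaCryptoAccelerator`-style interface and the software fallback are assumptions for illustration; in particular, the XOR keystream cipher here is a toy used only to make the sketch self-contained and is NOT a secure cipher.

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Toy keystream from counter-mode SHA-256 -- illustration only, not secure."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

class SoftwareCrypto:
    """Software fallback path when no FPGA cryptographic module is present."""
    def encrypt(self, key: bytes, data: bytes) -> bytes:
        return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))
    decrypt = encrypt  # XOR with the same keystream is its own inverse

def crypto_accelerate(op: str, key: bytes, data: bytes, accelerator=None) -> bytes:
    """Prefer the FPGA-cryptographic-operation module; else use software."""
    engine = accelerator if accelerator is not None else SoftwareCrypto()
    return getattr(engine, op)(key, data)
```

A real FPGA engine object would expose the same `encrypt`/`decrypt` methods, so callers stay unchanged whether or not the accelerator is installed.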
It is believed to be clear to those skilled in the art how to select one or more of the FPGA-AI acceleration module, the FPGA-IO acceleration module, and the FPGA-cryptographic-operation acceleration module to process service data of different service types or with different requirements, as well as the order in which multiple selected acceleration modules perform the service processing.
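One way such a selection could look is a static mapping from service type to an ordered module pipeline. The service types and module names below are illustrative assumptions; the patent leaves this choice open.

```python
# Hypothetical mapping from service type to the ordered list of FPGA modules
# applied to it (order matters: ingest acceleration first, then compute).
PIPELINES = {
    "inference":   ["fpga_io", "fpga_ai"],      # ingest, then AI operations
    "transaction": ["fpga_io", "fpga_crypto"],  # ingest, then encrypt/verify
    "archive":     ["fpga_io"],                 # throughput acceleration only
}

def plan_acceleration(service_type: str) -> list:
    """Return the ordered acceleration modules for a service type
    (empty list means: process on the CPU without acceleration)."""
    return PIPELINES.get(service_type, [])
```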
In the embodiment of the invention, the data transmission process of the server is enhanced by the FPGA acceleration module, which comprises at least one of the FPGA-AI acceleration module, the FPGA-IO acceleration module, and the FPGA-cryptographic-operation acceleration module, so that high-performance, high-load service processing requirements are met.
Example 2:
on the basis of the above embodiments, in the embodiments of the present invention, the storage module includes an SSD (Solid state disk) and/or a magnetic disk hard disk.
If the storage module only comprises the SSD, the service data to be processed can be stored in the SSD.
If the storage module only comprises a magnetic disk hard disk, the service data to be processed can be stored in the magnetic disk hard disk.
As shown in fig. 5, if the storage module includes both an SSD and a magnetic disk hard disk;
the SSD is used for storing first service data, wherein the first service data is service data whose storage time does not exceed a set time threshold;
and the magnetic disk hard disk is used for storing second service data, wherein the second service data is service data whose storage time exceeds the set time threshold.
The server may pre-store a set time threshold, where the set time threshold may be set by a user or a server administrator, or may be obtained by the server from big-data statistics; this is not described in detail in the embodiment of the present invention.
When the service data received by the server needs to be stored or the service data obtained through calculation needs to be stored, the server can determine the storage time of the service data.
The storage time of the service data determined by the server may be an actual storage time of the service data or a predicted storage time of the service data.
If the server determines the storage time of the service data according to the actual storage time of the service data, the service data of which the actual storage time does not exceed the set time threshold value can be stored in the SSD as the first service data, and the service data of which the actual storage time exceeds the set time threshold value can be stored in the disk hard disk as the second service data.
If the server determines the storage time of the service data from a predicted storage time, it may determine the predicted storage time in any of the following ways: from the service type of the service data and a correspondence between service type and storage time; from the time at which the service data was obtained and a correspondence between obtaining time and storage time; or from a pre-trained storage time prediction model.
The service data of which the predicted storage time does not exceed the set time threshold may be stored in the SSD as the first service data, and the service data of which the predicted storage time exceeds the set time threshold may be stored in the disk hard disk as the second service data.
Compared with a magnetic disk hard disk, the SSD has a higher read/write speed, so service data can be read and written faster; the magnetic disk hard disk stores historically settled data, which forms an isolation effect and further enhances data security. If the server is a blockchain node, the SSD supports storage access for the consensus operations of the blockchain system, while the magnetic disk hard disk does not participate in consensus operations.
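The placement rule of this embodiment can be sketched as follows. The threshold value and the type-to-predicted-storage-time table are illustrative assumptions (the patent only requires that some set threshold and some prediction method exist).

```python
# Assumed set time threshold (days); in practice configured by an
# administrator or derived from big-data statistics.
SET_TIME_THRESHOLD_DAYS = 30

# Hypothetical correspondence between service type and predicted storage time.
PREDICTED_DAYS_BY_TYPE = {
    "pending_tx": 1,
    "recent_block": 7,
    "ledger_archive": 365,
}

def choose_tier(service_type: str, actual_days: int = None) -> str:
    """Return 'ssd' or 'disk': use the actual storage time when known,
    otherwise fall back to the predicted storage time for the service type
    (unknown types default to the disk tier)."""
    if actual_days is not None:
        days = actual_days
    else:
        days = PREDICTED_DAYS_BY_TYPE.get(service_type, SET_TIME_THRESHOLD_DAYS + 1)
    return "ssd" if days <= SET_TIME_THRESHOLD_DAYS else "disk"
```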
Example 3:
on the basis of the foregoing embodiments, in an embodiment of the present invention, the SSD is further configured to store service data with an access frequency higher than a set access frequency threshold, and/or store service data with an access speed higher than a set access speed threshold.
The server may pre-store a set access frequency threshold, where the set access frequency threshold may be set by a user or a server administrator, or may be obtained by the server from big-data statistics; this is not described in detail in the embodiment of the present invention.
When the service data received by the server needs to be stored or the service data obtained through calculation needs to be stored, the server can also determine the access frequency of the service data.
The access frequency of the service data determined by the server may be an actual access frequency of the service data or a predicted access frequency of the service data.
If the server determines the access frequency from the actual access frequency of the service data, service data whose actual access frequency exceeds the set access frequency threshold can be stored in the SSD, and service data whose actual access frequency does not exceed the threshold can be stored in the magnetic disk hard disk.
If the server determines the access frequency of the service data from a predicted access frequency, it may determine the predicted access frequency in any of the following ways: from the service type of the service data and a correspondence between service type and access frequency; from the time at which the service data was obtained and a correspondence between obtaining time and access frequency; or from a pre-trained access frequency prediction model.
The service data with the predicted access frequency exceeding the set access frequency threshold can be stored in the SSD, and the service data with the predicted access frequency not exceeding the access frequency threshold can be stored in the magnetic disk hard disk.
The server may pre-store a set access speed threshold, where the set access speed threshold may be set by a user or a server administrator, or may be obtained by the server from big-data statistics; this is not described in detail in the embodiment of the present invention.
When the service data received by the server needs to be stored or the service data obtained through calculation needs to be stored, the server can also determine the access speed of the service data.
The access speed of the service data determined by the server may be an actual access speed of the service data or a predicted access speed of the service data.
If the server determines the access speed from the actual access speed of the service data, service data whose actual access speed exceeds the set access speed threshold can be stored in the SSD, and service data whose actual access speed does not exceed the threshold can be stored in the magnetic disk hard disk.
If the server determines the access speed of the service data from a predicted access speed, it may determine the predicted access speed in any of the following ways: from the service type of the service data and a correspondence between service type and access speed; from the time at which the service data was obtained and a correspondence between obtaining time and access speed; or from a pre-trained access speed prediction model.
The service data with the predicted access speed exceeding the set access speed threshold can be stored in the SSD, and the service data with the predicted access speed not exceeding the access speed threshold can be stored in the disk hard disk.
The service data whose access frequency is higher than the set access frequency threshold and/or whose access speed is higher than the set access speed threshold may be selected from the service data whose storage time exceeds the set time threshold, or from all of the data.
Compared with a magnetic disk hard disk, the SSD has a higher read/write speed; storing service data whose access frequency is higher than the set access frequency threshold and/or whose access speed is higher than the set access speed threshold on the SSD further ensures that such service data can be read and written faster.
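Embodiment 3's override of the time-based placement can be sketched as below: hot or speed-sensitive data is promoted to the SSD regardless of the tier the storage-time rule would choose. The threshold values are illustrative assumptions.

```python
# Assumed thresholds; in practice configured or derived from statistics.
FREQ_THRESHOLD_PER_DAY = 100   # set access frequency threshold
SPEED_THRESHOLD_MBPS = 200     # set access speed threshold

def place_by_access(freq_per_day: float, speed_mbps: float,
                    default_tier: str = "disk") -> str:
    """Promote service data to the SSD when its access frequency or required
    access speed exceeds the set threshold; otherwise keep the tier chosen
    by the storage-time rule (passed in as default_tier)."""
    if freq_per_day > FREQ_THRESHOLD_PER_DAY or speed_mbps > SPEED_THRESHOLD_MBPS:
        return "ssd"
    return default_tier
```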
Example 4:
on the basis of the foregoing embodiments, in the embodiment of the present invention, the execution environment of the server includes an Intel SGX chipset and/or a TEE (Trusted Execution Environment).
In order to further enhance data security, a more secure hardware operating environment may be provided in the server.
The more secure hardware operating environment may include the Intel SGX chipset and/or the TEE. Specifically, the Intel SGX chipset may be an Intel x86 SGX chipset.
A general-purpose server model with Intel SGX technical characteristics and a TEE environment is selected; on this basis, an SSD hard disk is added, along with FPGA-AI acceleration, FPGA-IO acceleration, and FPGA-cryptographic-operation acceleration, and these modules can be combined in different configurations for different use cases.
As shown in FIG. 6, the server's operating environment includes the Intel SGX chipset and the TEE.
Fig. 7 is a schematic structural diagram showing a combination of the hardware environment and the software environment of the server. The hardware environment comprises an SSD storage module, an FPGA-AI acceleration module, an FPGA-IO acceleration module, an FPGA-cryptographic-operation acceleration module, an Intel SGX chipset, and a TEE; the software environment comprises an operating system and a blockchain node program.
While the preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (9)

1. A server, characterized by comprising a storage module and a Field Programmable Gate Array (FPGA) acceleration module, wherein the FPGA acceleration module comprises at least one of an FPGA-AI acceleration module, an FPGA-IO acceleration module, and an FPGA-cryptographic-operation acceleration module.
2. The server of claim 1, wherein if the FPGA acceleration module comprises an FPGA-AI acceleration module;
and the FPGA-AI acceleration module is used for carrying out AI operation acceleration on the service data to be processed in the storage module.
3. The server of claim 1, wherein if the FPGA acceleration module comprises an FPGA-IO acceleration module;
the FPGA-IO acceleration module is used for performing input and/or output acceleration on the service data to be processed.
4. The server of claim 1, wherein if the FPGA acceleration module comprises an FPGA-cryptographic-operation acceleration module;
the FPGA-cryptographic-operation acceleration module is used for accelerating encryption and/or decryption operations on the service data to be processed in the storage module.
5. The server of claim 1, wherein the storage module comprises a Solid State Disk (SSD) and/or a magnetic disk hard disk.
6. The server according to claim 5, wherein if the storage module comprises both an SSD and a magnetic disk hard disk;
the SSD is used for storing first service data, wherein the first service data is service data whose storage time does not exceed a set time threshold;
and the magnetic disk hard disk is used for storing second service data, wherein the second service data is service data whose storage time exceeds the set time threshold.
7. The server according to claim 6, wherein the SSD is further configured to store service data whose access frequency is higher than a set access frequency threshold and/or service data whose access speed is higher than a set access speed threshold.
8. The server according to any of claims 1-7, wherein the server is a node in a blockchain network service system.
9. The server according to any of claims 1-7, wherein the execution environment of the server comprises an Intel SGX chipset and/or a trusted execution environment (TEE).
CN201811580994.3A 2018-12-24 2018-12-24 Server Pending CN111352475A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811580994.3A CN111352475A (en) 2018-12-24 2018-12-24 Server

Publications (1)

Publication Number Publication Date
CN111352475A true CN111352475A (en) 2020-06-30

Family

ID=71192728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811580994.3A Pending CN111352475A (en) 2018-12-24 2018-12-24 Server

Country Status (1)

Country Link
CN (1) CN111352475A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103064926A (en) * 2012-12-21 2013-04-24 华为技术有限公司 Data processing method and device
CN104267912A (en) * 2014-09-19 2015-01-07 北京联创信安科技有限公司 NAS (Network Attached Storage) accelerating method and system
CN104657308A (en) * 2015-03-04 2015-05-27 浪潮电子信息产业股份有限公司 Method for realizing server hardware acceleration by using FPGA
CN105631343A (en) * 2014-10-29 2016-06-01 航天信息股份有限公司 Password operation realization method and device based on encryption card and server
CN106354574A (en) * 2016-08-30 2017-01-25 浪潮(北京)电子信息产业有限公司 Acceleration system and method used for big data K-Mean clustering algorithm
CN107135078A (en) * 2017-06-05 2017-09-05 浙江大学 PBKDF2 cryptographic algorithms accelerated method and equipment therefor



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200630