CN116760850B - Data processing method, device, equipment, medium and system

Info

Publication number
CN116760850B
CN116760850B (application CN202311034910.7A)
Authority
CN
China
Prior art keywords
storage
server
data
target
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311034910.7A
Other languages
Chinese (zh)
Other versions
CN116760850A (en)
Inventor
刘伟 (Liu Wei)
李仁刚 (Li Rengang)
徐亚明 (Xu Yaming)
邓子为 (Deng Ziwei)
刘杨 (Liu Yang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Inspur Electronic Information Industry Co Ltd
Original Assignee
Inspur Electronic Information Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Inspur Electronic Information Industry Co Ltd filed Critical Inspur Electronic Information Industry Co Ltd
Priority to CN202311034910.7A
Publication of CN116760850A
Application granted
Publication of CN116760850B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1097 Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L41/22 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks comprising specially adapted graphical user interfaces [GUI]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/04 Processing captured monitoring data, e.g. for logfile generation
    • H04L43/045 Processing captured monitoring data, e.g. for logfile generation for graphical visualisation of monitoring data
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L43/00 Arrangements for monitoring or testing data switching networks
    • H04L43/08 Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
    • H04L43/0876 Network utilisation, e.g. volume of load or congestion level
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1408 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic by monitoring network traffic
    • H04L63/1416 Event detection, e.g. attack signature detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/14 Network architectures or network communication protocols for network security for detecting or protecting against malicious traffic
    • H04L63/1441 Countermeasures against malicious traffic
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L63/00 Network architectures or network communication protocols for network security
    • H04L63/20 Network architectures or network communication protocols for network security for managing network security; network security policies in general
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00 Network arrangements or protocols for supporting network services or applications
    • H04L67/01 Protocols
    • H04L67/10 Protocols in which an application is distributed across nodes in the network
    • H04L67/1095 Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L9/00 Cryptographic mechanisms or cryptographic arrangements for secret or secure communications; Network security protocols
    • H04L9/40 Network security protocols
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a data processing method, apparatus, device, medium and system in the field of computer technology. The method uses a heterogeneous platform that communicates with the client, the metadata server and the storage servers, and that stores a storage view in advance. On receiving a storage request from the client, the heterogeneous platform determines an available storage structure in its local storage view and generates an operation identifier from the client information of the client and the structure information of the available storage structure. If the operation identifier passes verification, the corresponding storage server for the target data to be stored is determined, among the storage servers, according to the structure information, and the target data is stored to that server. Because the data storage location is determined from the storage view pre-stored on the heterogeneous platform, the frequency with which the metadata server is accessed is reduced, the metadata server is prevented from becoming an access bottleneck, and the heterogeneous platform further improves access efficiency and performance.

Description

Data processing method, device, equipment, medium and system
Technical Field
The present invention relates to the field of computer technologies, and in particular, to a data processing method, apparatus, device, medium, and system.
Background
In a distributed storage system, every client must obtain data storage locations through the metadata management end, so the metadata management end is under heavy access pressure and is also vulnerable to network attacks. Once the metadata management end fails, access to the whole system is affected. Because monitoring, scheduling and maintenance of data storage locations all go through the metadata management end, system access bottlenecks arise easily, degrading system access efficiency and performance.
How to remove this access bottleneck and improve access efficiency and performance is therefore a problem to be solved by those skilled in the art.
Disclosure of Invention
In view of the above, the present invention aims to provide a data processing method, apparatus, device, medium and system, so as to solve the system access bottleneck and improve the access efficiency and performance. The specific scheme is as follows:
in a first aspect, the present invention provides a data processing method applied to a heterogeneous platform, where the heterogeneous platform is communicatively connected to a client, a metadata server, and a storage server, and the method includes:
receiving a data processing request sent by the client;
if the data processing request is a storage request, determining an available storage structure in a local storage view, and generating an operation identifier according to client information of the client and structure information of the available storage structure; the local storage view is obtained in advance from the metadata server;
and if the operation identifier passes verification, determining, among the storage servers, a corresponding storage server for the target data to be stored by the storage request according to the structure information, and storing the target data to that storage server.
Optionally, the determining the available storage structure in the local storage view includes:
selecting an idle target storage unit in the local storage view according to the data amount of the target data;
constructing a plurality of target storage blocks; each target memory block includes: a plurality of target storage units belonging to different storage servers;
building the available storage structure including the target storage block;
the structure information including block information of the target storage block is generated.
Optionally, the determining, according to the structure information, a corresponding storage server for the target data to be stored in the storage request at the storage server includes:
adding the target memory block to an in-use queue;
determining the target memory block in the in-use queue;
and determining the storage server according to the block information of the target storage block.
Optionally, after the storing the target data in the storage server, the method further includes:
adding the storage units of stored data in the target storage block to a used queue, and recycling the storage units without stored data in the target storage block to the available queue;
transmitting an address of a storage unit of stored data in the target storage block to the metadata server so that the metadata server marks the storage unit as used;
deleting the target storage block from the in-use queue.
Optionally, the determining, at the storage server, a corresponding storage server for the target data to be stored for the storage request according to the structure information, and storing the target data to the storage server includes:
performing erasure calculation on the target data;
and determining, among the storage servers, a corresponding storage server for the erasure-calculated data according to the structure information, and storing the erasure-calculated data to that storage server.
Optionally, the heterogeneous platform includes: a control chip and an integrated circuit;
accordingly, the generating an operation identifier according to the client information of the client and the structure information of the available storage structure includes:
and carrying out hash calculation on the IP information of the client and the metadata information in the structure information by using the control chip to obtain a hash result, and splicing the hash result and the storage unit position information in the structure information to obtain the operation identifier.
Optionally, the heterogeneous platform comprises a control chip and an integrated circuit;
correspondingly, if the operation identifier passes the verification, determining a corresponding storage server at the storage server for the target data to be stored in the storage request according to the structure information, and storing the target data to the storage server, including:
and verifying the operation identifier by using the integrated circuit, determining a corresponding storage server at the storage server for target data to be stored in the storage request according to the structure information after the operation identifier passes the verification, and storing the target data to the storage server.
Optionally, the verifying the operation identifier with the integrated circuit includes:
the integrated circuit judges whether the operation identifier generated by the integrated circuit is consistent with the operation identifier generated by the control chip;
if they are consistent, determining that the operation identifier passes verification; otherwise, determining that the operation identifier fails verification, and generating an operation identifier error prompt message.
Optionally, the method further comprises:
judging whether the request information of the data processing request is dangerous or not according to a preset security policy; the request information includes: source port, source IP, protocol type, and/or operational behavior;
if yes, adding a danger mark to the data processing request and intercepting the data processing request;
if not, judging the type of the data processing request, wherein the type comprises the following steps: storage requests, read requests, and copy requests.
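The pre-screening step above can be sketched as follows; the blocklist and allowlist values and the dictionary field names are illustrative assumptions, not taken from the patent:

```python
# Sketch of the pre-filtering step: mark and intercept dangerous requests,
# otherwise dispatch by request type. Rule values are illustrative only.
DANGEROUS_PORTS = {23, 445}          # example source-port blocklist
ALLOWED_PROTOCOLS = {"tcp", "rdma"}  # example protocol allowlist

def screen_request(request: dict) -> str:
    """Return 'intercepted' for dangerous requests, else the request type."""
    if (request.get("source_port") in DANGEROUS_PORTS
            or request.get("protocol") not in ALLOWED_PROTOCOLS):
        request["danger_mark"] = True  # add the danger mark, then intercept
        return "intercepted"
    # not dangerous: dispatch by type (store / read / copy)
    assert request["type"] in {"store", "read", "copy"}
    return request["type"]
```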
Optionally, the method further comprises:
if the data processing request is a read request, forwarding the read request to the metadata server;
receiving read address information returned by the metadata server;
and reading corresponding data from the storage server according to the read address information.
Optionally, the method further comprises:
if the data processing request is a copy request, forwarding the copy request to the metadata server;
receiving copy address information returned by the metadata server;
reading corresponding data from the storage server according to the copy address information, taking the read data as the target data, and executing the steps of determining an available storage structure in the local storage view and generating an operation identifier according to the client information of the client and the structure information of the available storage structure; and if the operation identifier passes verification, determining, among the storage servers, a corresponding storage server for the target data according to the structure information, and storing the target data to that storage server.
Optionally, the method further comprises:
if the copy request requires copying multiple copies, then multiple copies are copied in parallel using multiple heterogeneous platforms.
Optionally, the method further comprises:
if the target storage server and the source storage server are the same, sending a notice to the storage server so that the storage server autonomously performs copying.
Optionally, the method further comprises:
when communicating with any storage server in the storage server, selecting a corresponding outlet port according to the IP of the currently determined storage server and a preset redirection strategy; the preset redirection strategy is determined according to the network segment to which the IP belongs and/or the data transmission protocol.
Optionally, the method further comprises:
when communicating with any storage server in the storage server, selecting a corresponding outlet port according to the flow size and resource use condition of the receiving ports of each storage server in the storage server, the flow size of all outlet ports, the security rule and the preset redirection policy.
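The network-segment-based redirection described above might look like the following sketch; the policy-table shape and the /24-segment granularity are assumptions made for illustration:

```python
def select_egress_port(server_ip: str, policy: dict) -> int:
    """Pick an egress port for a given storage server IP.
    The preset redirection policy is modeled as a mapping from the IP's
    /24 network segment to a port, with a fallback entry (assumed shape)."""
    segment = ".".join(server_ip.split(".")[:3])  # e.g. '192.168.1'
    return policy.get(segment, policy["default"])
```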
Optionally, the heterogeneous platforms are multiple, and the process of applying for the local storage view by each heterogeneous platform includes:
if the available storage structure does not exist in the local storage view, sending a view application request to the metadata server;
receiving a writable storage view returned by the metadata server according to the view application request;
merging the writable storage view to the local storage view;
the local storage views of different heterogeneous platform applications are part of the global storage view, and a distributed storage view is formed.
In a second aspect, the present invention provides a data processing apparatus, applied to a heterogeneous platform, where the heterogeneous platform is communicatively connected to a client, a metadata server, and a storage server, and includes:
the receiving module is used for receiving the data processing request sent by the client;
the determining module is used for determining an available storage structure in the local storage view if the data processing request is a storage request, and generating an operation identifier according to the client information of the client and the structure information of the available storage structure; the local storage view is obtained in advance from the metadata server;
and the storage module is used for determining a corresponding storage server at the storage server according to the structure information for the target data to be stored in the storage request if the operation identifier passes the verification, and storing the target data to the storage server.
In a third aspect, the present invention provides an electronic device, comprising:
a memory for storing a computer program;
and a processor for executing the computer program to implement the previously disclosed data processing method.
In a fourth aspect, the present invention provides a readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the previously disclosed data processing method.
In a fifth aspect, the present invention provides a data processing system comprising: the system comprises a client, a hardware execution end comprising a plurality of heterogeneous platforms, a metadata server and a storage server; the heterogeneous platform implements the method of any of the preceding claims based on a data plane programming language.
As can be seen from the above solution, the present invention provides a data processing method, which is applied to a heterogeneous platform, where the heterogeneous platform is communicatively connected with a client, a metadata server, and a storage server, and includes: receiving a data processing request sent by the client; if the data processing request is a storage request, determining an available storage structure in a local storage view, and generating an operation identifier according to client information of the client and structure information of the available storage structure; the local storage view is obtained in advance from the metadata server; and if the operation identifier passes the verification, determining a corresponding storage server at the storage server for the target data to be stored for the storage request according to the structure information, and storing the target data to the storage server.
The beneficial effects of the invention are as follows: the heterogeneous platform is used for communicating with the client, the metadata server and the storage server, and the storage view is stored in advance on the heterogeneous platform, so that the heterogeneous platform can determine an available storage structure in the local storage view according to a storage request sent by the client, and generate an operation identifier according to client information of the client and structure information of the available storage structure; if the operation identifier passes the verification, determining a corresponding storage server at the storage server according to the structure information for the target data to be stored in the storage request, and storing the target data to the storage server. Therefore, the metadata server does not need to be accessed to determine the data storage position during writing operation, and the data storage position can be determined through the pre-stored storage view on the heterogeneous platform, so that the frequency of the metadata server being accessed can be reduced, the metadata server is prevented from becoming an access bottleneck, and the heterogeneous platform can also improve the access efficiency and performance.
Correspondingly, the data processing device, the medium and the system provided by the invention also have the technical effects.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required to be used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only embodiments of the present invention, and that other drawings can be obtained according to the provided drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a data processing method disclosed by the invention;
FIG. 2 is a schematic diagram of a memory block according to the present disclosure;
FIG. 3 is a schematic diagram of a memory structure according to the present disclosure;
FIG. 4 is a schematic diagram of a distributed storage system according to the present disclosure;
FIG. 5 is a functional schematic of a distributed storage system according to the present disclosure;
FIG. 6 is a schematic diagram of an in-use queue, an available queue, and a used queue according to the present disclosure;
FIG. 7 is a flow chart of a read/write operation of the present disclosure;
FIG. 8 is a diagram illustrating queue management according to the present disclosure;
FIG. 9 is a schematic diagram of a data processing apparatus according to the present disclosure;
FIG. 10 is a schematic diagram of an electronic device according to the present disclosure;
FIG. 11 is a diagram illustrating a server configuration according to the present invention;
fig. 12 is a diagram of a terminal structure according to the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
At present, in a distributed storage system, all clients need to acquire data storage positions through a metadata management end, so that the access pressure of the metadata management end is high and the metadata management end is also easy to attack by a network. Once the metadata management end fails, the access of the whole system is affected. Therefore, the metadata management end is used for monitoring, scheduling and maintaining the data storage position, system access bottlenecks are easy to occur, and system access efficiency and performance are affected. Therefore, the invention provides a data processing scheme which can solve the system access bottleneck and improve the access efficiency and performance.
Referring to fig. 1, the embodiment of the invention discloses a data processing method, which is applied to a heterogeneous platform, wherein the heterogeneous platform is in communication connection with a client, a metadata server and a storage server, and comprises the following steps:
s101, receiving a data processing request sent by a client.
In this embodiment, the data processing request may be a storage request, a read request, or a copy request. S102 and S103 describe the processing of a storage request. The heterogeneous platform is built from hardware devices such as complex programmable logic devices (CPLDs) and field-programmable gate arrays (FPGAs); specifically, the heterogeneous platform comprises a control chip and an integrated circuit, where the integrated circuit includes CPLDs, FPGAs, and the like.
S102, if the data processing request is a storage request, determining an available storage structure in the local storage view, and generating an operation identifier according to client information of the client and structure information of the available storage structure.
The local storage view is obtained in advance from the metadata server. When the heterogeneous platform is initialized, it can send a view application request to the metadata server, carrying the size of the storage space to be applied for. For example: at initialization the heterogeneous platform requests a 1 GB writable storage view from the metadata server, and the metadata server returns a 1 GB writable storage view. The metadata server autonomously decides what the writable storage view contains, such as the position of each storage unit.
The metadata server may be embodied as a metadata server or a meta server. A metadata server describes where a file is stored in the system. A meta server is a management server that mainly manages the mapping between files and storage blocks; this design greatly improves scalability, improves system performance and reliability, and makes locating files very fast.
In one embodiment, there are multiple heterogeneous platforms, and each applies for its local storage view as follows: if no available storage structure exists in the local storage view, a view application request is sent to the metadata server; a writable storage view returned by the metadata server according to the view application request is received; and the writable storage view is merged into the local storage view. The local storage views applied for by different heterogeneous platforms are each part of the global storage view, together forming a distributed storage view. The writable storage view includes a number of free storage units, each recording an associated disk address and the IP (Internet Protocol) address of the storage server where the disk is located. The storage units are equal in size, e.g., 1 MB.
In this embodiment, every two storage units from different storage servers may constitute a storage block, and a plurality of storage blocks may constitute an available storage structure. The structure of a single storage block is shown in FIG. 2: a block identification plus two mutually backed-up storage units, each unit recording metadata, IP/location, a flag indicating whether the stored data is valid, and a priority. The IP is that of the storage server where the disk is located, and the location is the disk address. When the stored data is valid, the valid flag is 1 and the priority is 0; when the stored data is invalid, the valid flag is 0 and the priority is 1. One available storage structure includes a plurality of free storage blocks such as the one in FIG. 2; a single available storage structure is shown in FIG. 3.
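The block layout of FIG. 2 can be modeled as a small data structure; the field names here are paraphrases of the figure's labels, not identifiers from the patent:

```python
from dataclasses import dataclass

@dataclass
class StorageUnit:
    server_ip: str    # IP of the storage server holding the disk
    location: str     # disk address of the unit
    valid: int = 1    # 1 = stored data valid (priority 0), 0 = invalid (priority 1)
    priority: int = 0

@dataclass
class StorageBlock:
    block_id: str
    units: tuple      # two mutually backed-up units

    def well_formed(self) -> bool:
        # the two replicas must live on different storage servers
        return (len(self.units) == 2
                and self.units[0].server_ip != self.units[1].server_ip)
```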
It should be noted that the number of free storage blocks in the available storage structure differs between storage requests; the specific number depends on the data amount of the target data to be stored. Thus, in one embodiment, determining the available storage structure in the local storage view includes: selecting free target storage units in the local storage view according to the data amount of the target data; constructing a plurality of target storage blocks, each including a plurality of target storage units belonging to different storage servers; constructing the available storage structure comprising the target storage blocks; and generating structure information including the block information of the target storage blocks. The block information of a target storage block is the information of each storage unit it includes, such as the storage server IP and priority of each unit. The structure information of the available storage structure includes the storage server IP and disk address of each storage unit.
S103, if the operation identifier passes verification, determining, among the storage servers, a corresponding storage server for the target data to be stored by the storage request according to the structure information, and storing the target data to that storage server.
To facilitate management of the storage process, an in-use queue and a used queue may be set up. The storage units about to be used for storing data are recorded in the in-use queue, and the storage units that have stored data are recorded in the used queue. In one embodiment, after the available storage structure is determined, the target storage blocks it includes are added to the in-use queue; specifically, the two storage units of each target storage block may be added to the in-use queue. Accordingly, in one embodiment, determining, among the storage servers, a corresponding storage server for the target data according to the structure information includes: determining the target storage block in the in-use queue; and determining the storage server according to the block information of the target storage block. In one embodiment, after the target data is stored to the storage server, the method further includes: adding the storage units of stored data in the target storage block to the used queue, and recycling the storage units without stored data back to the available queue. In one embodiment, after adding the storage units of stored data to the used queue, the method further includes: sending the addresses of those storage units to the metadata server so that the metadata server marks them as used. In one embodiment, the method further includes: deleting the target storage block from the in-use queue and deleting the storage units of stored data from the local storage view. Each storage block in the respective queues may be represented by the two storage units it comprises, i.e., the elements of a queue are individual storage units.
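The queue lifecycle described above can be sketched as follows (the unit identifiers are illustrative):

```python
from collections import deque

class BlockQueues:
    """Minimal sketch of the available / in-use / used queue lifecycle."""
    def __init__(self, free_units):
        self.available = deque(free_units)  # free storage units
        self.in_use = deque()               # units reserved for a pending write
        self.used = deque()                 # units holding stored data

    def allocate(self, n):
        """Move n free units into the in-use queue for an upcoming write."""
        picked = [self.available.popleft() for _ in range(n)]
        self.in_use.extend(picked)
        return picked

    def commit(self, written, unwritten):
        """After the write: written units go to the used queue,
        untouched units are recycled back to the available queue."""
        for u in written:
            self.in_use.remove(u)
            self.used.append(u)
        for u in unwritten:
            self.in_use.remove(u)
            self.available.append(u)
```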
In the embodiment, the heterogeneous platform is used for communicating with the client, the metadata server and the storage server, and the storage view is stored in advance on the heterogeneous platform, so that the heterogeneous platform can determine an available storage structure in the local storage view according to the storage request sent by the client, and generate an operation identifier according to the client information of the client and the structure information of the available storage structure; if the operation identifier passes the verification, determining a corresponding storage server at the storage server according to the structure information for the target data to be stored in the storage request, and storing the target data to the storage server. Therefore, the metadata server does not need to be accessed to determine the data storage position during writing operation, and the data storage position can be determined through the pre-stored storage view on the heterogeneous platform, so that the frequency of the metadata server being accessed can be reduced, the metadata server is prevented from becoming an access bottleneck, and the heterogeneous platform can also improve the access efficiency and performance.
In order to improve the fault tolerance of data storage, erasure calculation can be performed during storage. In one embodiment, determining a corresponding storage server at the storage server side according to the structure information for the target data to be stored for the storage request, and storing the target data to the storage server, includes: performing erasure calculation on the target data; and determining, for the erasure-calculated data, a corresponding storage server at the storage server side according to the structure information, and storing the erasure-calculated data to the storage server. Performing the erasure calculation on the heterogeneous platform improves computing efficiency by leveraging the platform's high-speed computing capability, and also reduces the amount of data the client has to send. For example: when the erasure calculation is performed by the client, the client needs to send M pieces of original data and N pieces of erasure data, whereas when the erasure calculation is performed by the heterogeneous platform, the client only needs to send the M pieces of original data.
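The bandwidth saving claimed above can be checked with simple arithmetic. The 4+2 layout and piece size below are illustrative assumptions, not values from the source:

```python
# With client-side erasure coding the client uploads M data + N parity
# pieces; when the heterogeneous platform computes the parity, the client
# uploads only the M data pieces.
def client_upload_bytes(piece_size, m, n, platform_side):
    pieces = m if platform_side else m + n
    return pieces * piece_size

PIECE = 256 * 1024 * 1024  # 256 MB per piece (illustrative)
client_side = client_upload_bytes(PIECE, m=4, n=2, platform_side=False)
platform_side = client_upload_bytes(PIECE, m=4, n=2, platform_side=True)
saving = 1 - platform_side / client_side  # fraction of upload traffic saved
```

With M=4 and N=2 the client's upload volume drops by one third.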
In one embodiment, the heterogeneous platform includes: a control chip and an integrated circuit. Accordingly, generating the operation identifier according to the client information of the client and the structure information of the available storage structure includes: performing a hash calculation on the IP information of the client and the metadata information in the structure information by using the control chip to obtain a hash result, and splicing the hash result with the storage unit position information in the structure information to obtain the operation identifier. The hash algorithm may be MD5, etc. The metadata information in the structure information is the metadata information of each storage unit in the structure information. The control chip may be, for example, an ARM (Advanced RISC Machine) chip.
Correspondingly, if the operation identifier passes the verification, determining a corresponding storage server at the storage server side for the target data to be stored for the storage request according to the structure information, and storing the target data to the storage server, includes: verifying the operation identifier by using the integrated circuit and, after the operation identifier passes the verification, determining the corresponding storage server according to the structure information and storing the target data to that storage server. Verifying the operation identifier with the integrated circuit includes: the integrated circuit judges whether the operation identifier it generates is consistent with the operation identifier generated by the control chip; if they are consistent, the operation identifier is determined to pass the verification; otherwise, the operation identifier is determined to fail the verification, and an error prompt message for the operation identifier is generated.
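A sketch of the identifier generation and check described above, under stated assumptions: the exact field layout, separators and metadata encoding are not given in the source, so the formats below are hypothetical; only the overall scheme (MD5 over client IP plus per-unit metadata, spliced with storage-unit position information, then recomputed and compared for verification) follows the text.

```python
import hashlib

def make_operation_id(client_ip, unit_metadata, unit_positions):
    # Hash the client IP together with the per-unit metadata, then splice
    # the digest with the storage-unit position information.
    digest = hashlib.md5(
        (client_ip + "|" + "|".join(unit_metadata)).encode()).hexdigest()
    return digest + ":" + ",".join(unit_positions)

def verify_operation_id(op_id, client_ip, unit_metadata, unit_positions):
    # The verifier recomputes the identifier and compares for consistency.
    return op_id == make_operation_id(client_ip, unit_metadata, unit_positions)

op_id = make_operation_id("10.0.0.8", ["srv1/disk0/unit42"], ["srv1:0x2A"])
ok = verify_operation_id(op_id, "10.0.0.8", ["srv1/disk0/unit42"], ["srv1:0x2A"])
```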
It should be noted that the control chip can be flexibly configured, so traffic forwarding can be performed flexibly, and the integrated circuit has parallel high-speed processing capability; therefore, the heterogeneous platform can serve as a switch or a router to perform traffic forwarding with better flexibility and processing efficiency.
Further, this embodiment may also perform security detection on requests, for example: intercepting dangerous requests according to the source port, source IP, protocol type and/or operation behavior, thereby protecting the metadata server and the storage servers from attack. In one embodiment, the method further comprises: judging, according to a preset security policy, whether the request information of a data processing request is dangerous; the request information includes: source port, source IP, protocol type and/or operation behavior; if yes, adding a danger mark to the data processing request and intercepting it; if not, judging the type of the data processing request, the types including: storage requests, read requests and copy requests. Dangerous source ports, source IPs, protocol types and/or operation behaviors may be recorded in a blacklist in advance, and the blacklist is then used to judge whether the request information of a data processing request is dangerous.
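The blacklist check above can be sketched as follows. The field names and the example blacklist entries are assumptions for illustration only:

```python
# Requests matching any blacklist dimension are marked dangerous and
# intercepted; otherwise the request type is classified.
BLACKLIST = {
    "source_ports": {23, 2323},
    "source_ips": {"203.0.113.7"},
    "protocols": {"telnet"},
    "behaviors": {"bulk-delete"},
}

def is_dangerous(request):
    return (request.get("source_port") in BLACKLIST["source_ports"]
            or request.get("source_ip") in BLACKLIST["source_ips"]
            or request.get("protocol") in BLACKLIST["protocols"]
            or request.get("behavior") in BLACKLIST["behaviors"])

def classify(request):
    if is_dangerous(request):
        request["danger"] = True   # add the danger mark, then intercept
        return "intercepted"
    return request["type"]         # "storage", "read" or "copy"

verdict = classify({"source_ip": "203.0.113.7", "source_port": 4000,
                    "protocol": "tcp", "behavior": "write", "type": "storage"})
```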
As previously described, the data processing request may be a storage request, a read request or a copy request. When the data processing request is a read request: the read request is forwarded to the metadata server; the read address information returned by the metadata server is received; and the corresponding data is read from the storage server according to the read address information. When the data processing request is a copy request: the copy request is forwarded to the metadata server; the copy address information returned by the metadata server is received; the corresponding data is read from the storage server according to the copy address information and taken as target data; the step of determining an available storage structure in the local storage view and generating an operation identifier according to the client information of the client and the structure information of the available storage structure is executed; and if the operation identifier passes the verification, the corresponding storage server is determined for the target data according to the structure information, and the target data is stored to that storage server. In one embodiment, if the copy request requires multiple copies, the copies are replicated in parallel using multiple heterogeneous platforms. In one embodiment, if the destination storage server and the source storage server are the same, a notification is sent to that storage server so that it replicates autonomously.
In order to split the elephant flow, when the heterogeneous platform in this embodiment communicates with any storage server, it selects a corresponding egress port according to the IP of the currently determined storage server and a preset redirection policy; the preset redirection policy is determined according to the network segment and/or the data transmission protocol to which the IP belongs. In one implementation, when communicating with any storage server, the heterogeneous platform in this embodiment may also select the corresponding egress port according to the traffic size and resource usage of each storage server's receiving port, the traffic sizes of all egress ports, the security rules, and the preset redirection policy. The elephant flow refers to the traffic of data storage; such traffic typically occupies more than 80% of the bandwidth of the overall system while also requiring a higher throughput rate.
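The segment-and-protocol redirection policy described above can be sketched as a lookup, under assumptions: the policy table format, the example segments and the protocol override below are illustrative, not from the source.

```python
import ipaddress

# Egress port chosen from the network segment the destination storage
# server IP belongs to, with an optional per-protocol override.
REDIRECT_POLICY = {
    "segments": {"192.168.1.0/24": "port0", "192.168.2.0/24": "port1"},
    "protocols": {"rdma": "port2"},   # hypothetical protocol override
}

def select_egress_port(server_ip, protocol):
    if protocol in REDIRECT_POLICY["protocols"]:
        return REDIRECT_POLICY["protocols"][protocol]
    addr = ipaddress.ip_address(server_ip)
    for segment, port in REDIRECT_POLICY["segments"].items():
        if addr in ipaddress.ip_network(segment):
            return port
    return "port-default"
```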
Thus, this embodiment is no longer limited to conventional centralized monitoring: the traffic of each party is monitored through the heterogeneous platform, so that elephant flows can be effectively identified and split, avoiding network congestion. This embodiment also reduces the access frequency of the metadata server by utilizing the storage view pre-stored on the heterogeneous platform; the heterogeneous platform can complete write operations directly without accessing the metadata server, reducing the pressure on the metadata server. Meanwhile, the heterogeneous platform, with its strong programmability, can uniformly perform security risk detection on the requests sent by clients, protecting the security of the system. The erasure calculation is completed by the heterogeneous platform, and the storage server only needs to receive the original data and the check data obtained by the erasure calculation without performing erasure calculation itself, which reduces the complexity of and pressure on the storage server. It should be noted that these functions can be implemented in the heterogeneous platform by means of the P4 (Programming Protocol-independent Packet Processors) programming language. P4 is a high-level data-plane programming language commonly used in network switching scenarios such as switches, intelligent network cards and DPUs (Data Processing Units); compared with the OpenFlow protocol, it has better protocol extensibility, supports parallelism (match-action), and is easier to develop with. The amount of resources consumed by the P4 entries implemented in the heterogeneous platform to realize redirection is much less than the amount of resources the corresponding entries would require at the storage server.
In a specific implementation, separation of the data plane and the control plane can be realized by means of P4. Unlike a conventional simple separation, the read operations and write operations of the control plane can be further separated, so that the heterogeneous platform undertakes the control-plane relay between the client and the metadata server and, in addition, takes on the write-operation tasks.
P4 can realize the forwarding functions of existing protocols, supports custom packet structures, allows new functions to be developed on top of existing protocols, and supports software-style development. The roles of the P4 stages are as follows: the Parser determines what the packet contains and what each bit means; the Ingress is a pipeline of match-action stages that determines which bits constitute the information the user cares about, for example: if the destination IP is 8.8.8.8, the header is rewritten so that the packet can be forwarded to where it should go; the Switching Logic implements the logic the user wants, or provides a buffer for high performance, holding packets that have just been processed by the previous stage and are ready for the next; the Egress is likewise a pipeline of match-action stages, for example rewriting the source MAC (Media Access Control) address, providing further room to operate on the packet; the Deparser writes the rewritten header back into the packet and then sends the packet out.
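The match-action idea behind the Ingress stage can be illustrated with a toy table in Python (real P4 tables run in hardware; the entries and field names here are made up for illustration):

```python
# Match on the destination IP; the actions rewrite the header and set the
# egress port, mirroring the 8.8.8.8 example above.
TABLE = {
    "8.8.8.8": {"rewrite_dst_mac": "aa:bb:cc:dd:ee:ff", "egress": 3},
}

def ingress(packet):
    entry = TABLE.get(packet["dst_ip"])
    if entry is None:
        packet["egress"] = None   # no match: drop or punt to control plane
        return packet
    packet["dst_mac"] = entry["rewrite_dst_mac"]  # action: rewrite header
    packet["egress"] = entry["egress"]            # action: set egress port
    return packet

pkt = ingress({"dst_ip": "8.8.8.8", "dst_mac": "00:00:00:00:00:00"})
```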
Further, a system including the heterogeneous platforms, the clients, the metadata server and the storage servers may refer to fig. 4. As shown in fig. 4, the system includes a plurality of clients, two heterogeneous platforms, a plurality of storage servers, and a meta-server (i.e., the metadata server). Each heterogeneous platform comprises a control chip and a hardware integrated circuit (such as an FPGA), and the hardware integrated circuit forwards traffic through P4 forwarding entries. Using the system shown in fig. 4 includes: initializing each storage server and the meta-server; initializing the heterogeneous platforms so that each heterogeneous platform obtains a storage view locally from the meta-server; each heterogeneous platform then processes the data streams from the clients, performing security authentication, separating the control plane from the data plane, redirecting and forwarding according to the P4 entries, monitoring the traffic size of the data streams, processing write operations using the storage view, and so on; for details please refer to fig. 5. Redirecting and forwarding according to the P4 entries solves the congestion problem, processing write operations through the storage view solves the access bottleneck caused by the meta-server, and the other functions facilitate cluster management.
The heterogeneous platform may include an SOC (System On Chip) control chip and an FPGA (Field Programmable Gate Array), or an ARM control chip and an ASIC (Application Specific Integrated Circuit). The SOC chip or ARM chip implements rule management and view application and management, while the FPGA or ASIC is responsible for data-plane forwarding. It can be seen that each heterogeneous platform is composed of a control chip and a hardware integrated circuit, giving it parallel, real-time processing capability. The heterogeneous platform is communicatively coupled to other devices via a 100G or 200G high-speed network interface and can process data-center-level network data. When there are multiple heterogeneous platforms, the switch between the heterogeneous platforms and the clients distributes the requests issued by the clients to the heterogeneous platforms in a load-balanced manner. High-speed data processing is completed under the cooperation of software and hardware by utilizing the high-speed, parallel, real-time processing characteristics of the heterogeneous platform's hardware and the flexibility of its control chip; monitoring the whole process of client-initiated transactions reduces the complexity of managing the storage servers.
Referring to fig. 6, the meta-server is provided with an in-use queue, an available queue and a used queue; the in-use queue comprises all storage units applied for by the heterogeneous platforms, the available queue comprises all free storage units of the storage servers, and the used queue comprises all storage units with stored data. The heterogeneous platform may likewise be provided with an in-use queue comprising the storage units for data to be stored, an available queue comprising the free storage units it has applied for, and a used queue comprising all its storage units with stored data. As shown in fig. 6, each queue may be a circular queue, with shading in each circular queue indicating stored data.
In one example, during initialization, the heterogeneous platform obtains a portion of the storage view from the meta-server via the network; the heterogeneous platform only needs to indicate to the meta-server the size of the storage space it wants (e.g., 10T), from which the meta-server determines and returns a storage view of the corresponding size. The storage view returned by the meta-server is specifically: the number of storage units and their position information. The heterogeneous platform forms a local storage view from the number of storage units and their position information. Then, the in-use queue, the available queue and the used queue of the meta-server are updated accordingly, as are the in-use queue, the available queue and the used queue of the heterogeneous platform. For example: after the meta-server allocates the 10T storage view to the heterogeneous platform, it deletes the relevant storage units from its available queue and adds them to its in-use queue; accordingly, the heterogeneous platform adds the storage units involved to its own available queue.
Referring to fig. 7, when the heterogeneous platform receives a 10GB data storage request initiated by a client, the heterogeneous platform selects, from the local storage view (10T), storage units capable of storing more than 10GB of data (storage locations for erasure codes are reserved) to form an available storage structure, and assigns it a structure ID; it then generates a transaction ID (i.e., the operation identifier) according to the client IP and the metadata information (structure information) of the selected storage units, after which the control chip in the heterogeneous platform issues the transaction ID, the request information and the metadata information of the selected storage units to the integrated circuit. Meanwhile, the heterogeneous platform responds to the request information of the client; after receiving the response information fed back by the heterogeneous platform, the client starts to send its data (excluding erasure codes); after receiving the data, the integrated circuit performs erasure code calculation on the data if the transaction ID is matched successfully, and then stores the data according to the storage server IPs in the metadata information of the selected storage units. Specifically: a P4 entry is generated according to the storage server IPs, the original data to be stored is redirected and forwarded to the corresponding storage servers according to the P4 entry, and the integrated circuit sends the erasure data directly to the corresponding storage servers through a network protocol. The heterogeneous platform also listens for and forwards the response information of the corresponding storage servers (responses for the erasure data do not need to be forwarded).
When the heterogeneous platform sends the original data, the client IP is used as a source IP, and the corresponding storage server IP is used as a destination IP; when the heterogeneous platform sends erasure data, the heterogeneous platform IP is used as a source IP, and the corresponding storage server IP is used as a destination IP.
The response information returned by a storage server includes the address offset of the successful write; from this address, the heterogeneous platform can find the specific storage block used for storing the data. After receiving the write-success response returned by the storage server, the heterogeneous platform parses the response information, finds the corresponding storage block identifier, places the storage units in that block whose data-valid flag is 1 into the used queue, and releases the storage units whose data-valid flag is 0 into the available queue. When the heterogeneous platform observes that the whole storage transaction is completed (the file write is finished), it notifies the meta-server, over the network, of the storage units marked as used, and deletes those storage units from its local used queue; after receiving this notification, the meta-server writes those storage units into its used queue while deleting them from its in-use queue. If the write fails, the heterogeneous platform re-marks the used storage units as available. Whether the write succeeds or fails, the heterogeneous platform needs to delete the transaction ID after the transaction flow ends, so that the transaction ID is eliminated. It should be noted that one transaction ID may correspond to the following information: operation type (read or write), status (start, in progress, success or failure) and structure ID; the heterogeneous platform manages the full life cycle of the corresponding transaction by maintaining this information.
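The per-transaction record described above can be sketched as a small lifecycle table. The status names and dictionary layout are assumptions; only the mapping (transaction ID to operation type, status and structure ID) and the final deletion of the ID follow the text:

```python
# One transaction ID maps to operation type, status and structure ID;
# the ID is removed once the transaction ends, whatever the outcome.
transactions = {}

def start(tx_id, op_type, structure_id):
    transactions[tx_id] = {"op": op_type, "status": "start",
                           "structure": structure_id}

def finish(tx_id, success):
    transactions[tx_id]["status"] = "success" if success else "fail"
    # The transaction ID is always deleted at the end of the flow.
    return transactions.pop(tx_id)

start("tx-1", "write", "struct-9")
record = finish("tx-1", success=True)
```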
It should be noted that the meta-server establishes a global storage view for all disks in all storage servers. The global storage view is obtained by dividing all disk space by a minimum storage unit (1M), with each storage unit marked with information such as whether it is available and the storage server to which it belongs. The meta-server shuffles all storage units and combines them into the global storage view.
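A minimal sketch of building such a global view, assuming a made-up disk list and a simple tuple encoding of each unit (server, disk, byte offset, available flag):

```python
import random

UNIT = 1 * 1024 * 1024  # 1M minimum storage unit

def build_global_view(disks, seed=0):
    """disks: list of (server_ip, disk_id, disk_bytes).
    Divide every disk into 1M units tagged with their owning server,
    then shuffle all units together into one global view."""
    units = [(server, disk, i * UNIT, True)
             for server, disk, size in disks
             for i in range(size // UNIT)]
    random.Random(seed).shuffle(units)
    return units

view = build_global_view([("192.168.1.2", "sda", 8 * UNIT),
                          ("192.168.1.3", "sda", 8 * UNIT)])
```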
Referring to fig. 7, the heterogeneous platform handles the client read process as follows: the client initiates a read request; the heterogeneous platform receives the read request, performs security detection, and forwards it to the meta-server; the meta-server retrieves the storage units where the file is located and passes the storage unit information directly to the client through the heterogeneous platform; using the network information in the storage unit information, the client reads data directly and in parallel from the corresponding storage servers through the heterogeneous platform. In this process, the heterogeneous platform is mainly responsible for data forwarding.
In this embodiment, the heterogeneous platform may also monitor traffic in real time on its ports leading to the storage servers, and collect the server traffic information and server network processing performance information counted in real time by the storage servers, so as to determine whether an elephant flow exists. When traffic is overloaded or performance is insufficient, an elephant flow can be identified, and the heterogeneous platform then splits the flow. The heterogeneous platform may also receive traffic instructions from an SDN (Software Defined Network) controller to split traffic, thereby avoiding network-side congestion.
FIG. 8 provides a queue management flow. As shown in fig. 8, the meta-server may take a portion of the storage units (with control information) out of its available queue and send them to the heterogeneous platform; after the storage units are taken out, the meta-server removes them from the available queue and attaches them to the in-use queue. When the meta-server receives the write-transaction completion information, it marks the relevant storage units as completed, removes them from the in-use queue and attaches them to the used queue. When the meta-server receives a read transaction request, it retrieves the read-file information from the used queue and returns the information to the client through the heterogeneous platform. The meta-server and the storage servers keep routine information synchronized, such as disk damage, and handle related faults in time; meanwhile, the meta-server is responsible for management work such as cluster capacity expansion and copy replication, with the related data-moving work assisted by the heterogeneous platforms.
The heterogeneous platform can monitor the storage traffic initiated by the client to the storage servers and the information of the corresponding disks in use; specifically: monitoring whether the storage node in the two parties' communication protocol replies with confirmation of successful storage. Further, when successful storage is observed, the heterogeneous platform can mark the corresponding disk information as a success state; otherwise the disk information is marked as an available state. One purpose of this monitoring is to obtain disk usage in time and avoid the corresponding disk being occupied for a long time after a failure; another is to obtain new available disks from the storage cluster in time, replenishing consumed disks in real time upon success. The heterogeneous platform determines whether a data stream has completed its disk operations normally by monitoring the communication between the client and the storage server; frequent data confirmation between the storage server and the meta-server is not needed, as the heterogeneous platform directly reports the final completion state of the storage transaction, reducing the monitoring and management burden of the meta-server.
It can be seen that, in this embodiment, the programmable network path and the distributed storage technology are deeply combined. Each storage block used in storing a data stream has corresponding storage server IP information, so the redirection target storage server can be determined from that IP information; the same data stream can thus have multiple available forwarding links, and the P4 forwarding plane can flexibly select among them according to the forwarding policy and the actual situation, redirecting each storage block contained in the data stream and thereby achieving network balance, which is very beneficial for operations such as copying, reading and writing. Since there are few flow entries of the storage class, the P4 entry rules generally do not run short. When a client initiates a storage request, the heterogeneous platform acquires and locks storage units from the portion of the storage view mapped in advance and maps those storage units to the requested data stream; when the actual storage data arrives at the heterogeneous platform, the platform extracts the IP and the position from the locked storage units and redirects the corresponding data stream to the storage server corresponding to that IP.
Copy replication stores one or more copies of a file in the cluster (files are usually stored across multiple hosts), and a heterogeneous platform can easily split one data stream into multiple copies. When a client initiates a copy replication request, the heterogeneous platform acquires the storage view corresponding to the file through the meta-server; this view contains the IPs and position information where the original file is stored, and with these two pieces of information the heterogeneous platform can read the contents from the relevant hosts and copy them to a free storage view. Thanks to the parallel processing capability of heterogeneous platforms, multiple copies can be replicated at once. If the destination host and the source host of a copy are found to be the same, the heterogeneous platform may also directly notify the source host to replicate by itself. P4 table rules are used to implement traffic forwarding, and the forwarding path may be set according to network segment and protocol, for example: only allowing storage-class protocols to pass, only allowing protocols for a specific destination port to pass, and so on; the final rule table is a combination of such rules.
Further, this embodiment may monitor and split the elephant flow by combining data from multiple sources. Specifically, the heterogeneous platform calculates the traffic of each network egress online, receives the receive-port traffic information and processing-capability information counted and fed back by the storage servers, and reads the user-customized control policies in the controller (such as egress bandwidth limits and security rules); it then comprehensively judges whether to forward traffic to a specific network egress. For example: if storage server 0 feeds back that its traffic is too large or its processing capability is approaching its limit, the heterogeneous platform, in combination with the redirection rules, will not forward traffic to this server. Or: if the traffic of one egress of the heterogeneous platform is approaching the maximum bandwidth set by the controller, traffic on that egress is suspended. In this way, traffic can be dispersed to different storage servers through different egresses of the heterogeneous platform, splitting the elephant flow and reducing the network load on the storage servers.
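The multi-source forwarding decision above can be sketched as follows. The thresholds (80% server load, 95% egress cap) and field names are assumptions for illustration; the source only says overloaded servers and saturated egresses are avoided:

```python
# Skip a server whose reported traffic or load is near its limit; pause an
# egress that is approaching the controller's bandwidth cap.
def pick_destination(servers, egress_load, egress_cap):
    if egress_load >= 0.95 * egress_cap:
        return None                     # pause this egress, try another
    for s in servers:                   # servers sorted by preference
        if s["traffic"] < 0.8 * s["capacity"] and s["load"] < 0.8:
            return s["ip"]
    return None

servers = [
    {"ip": "192.168.1.2", "traffic": 95, "capacity": 100, "load": 0.5},  # hot
    {"ip": "192.168.1.3", "traffic": 40, "capacity": 100, "load": 0.3},
]
dest = pick_destination(servers, egress_load=50, egress_cap=100)
```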
With the distributed mapping of storage views in this embodiment, the heterogeneous platform maps a portion of writable storage views from the meta-server in advance; for a client write request, it then only needs to notify the meta-server of the result, i.e., whether the write succeeded or failed. A client read request is directly parsed into flow table rules and handled directly by the heterogeneous platform. The meta-server thus only needs to face the heterogeneous platform; the client does not need to fetch additional information from the meta-server, and the meta-server can focus on its communication with the heterogeneous platform.
This embodiment also performs the erasure code calculation by virtue of the parallel, fast characteristics of the heterogeneous platform, so that erasure codes do not need to be generated by the client, reducing the implementation complexity of the client and the network bandwidth from the client to the storage cluster. Specifically, taking a client write as an example: after receiving the client's data, the heterogeneous platform completes the erasure code calculation in FPGA hardware by utilizing the parallel real-time processing capability of the FPGA, generates check fragments P and Q, allocates storage blocks for the check fragments, and fills in the metadata information; the metadata information here refers to the information of the file, through which the meta-server can locate the storage blocks related to the file; finally the heterogeneous platform stores the check fragments into the cluster through the network. From the client's perspective, the client only needs to transmit the original file.
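A minimal RAID-6-style sketch of the P and Q parity computation mentioned above: P as a bytewise XOR of the data shards, Q as a Reed-Solomon-style sum over GF(2^8) with coefficients 1, 2, 4, ... This is an illustration of the general technique, not the patented FPGA implementation, and the tiny shards are made up.

```python
def gf_mul(a, b, poly=0x11D):
    """Carry-less multiplication in GF(2^8) with the usual AES-style poly."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def parity(shards):
    """Return (P, Q) check fragments for equal-length data shards."""
    length = len(shards[0])
    p, q = bytearray(length), bytearray(length)
    for i, shard in enumerate(shards):
        coeff = 1 << i            # 1, 2, 4, ... (valid for small shard counts)
        for j, byte in enumerate(shard):
            p[j] ^= byte
            q[j] ^= gf_mul(coeff, byte)
    return bytes(p), bytes(q)

data = [b"\x10\x20", b"\x30\x40", b"\x50\x60"]
P, Q = parity(data)
# A single lost data shard can be rebuilt from P and the survivors:
rebuilt = bytes(a ^ b ^ c for a, b, c in zip(P, data[0], data[2]))
```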
A unified network security policy is also implemented in the heterogeneous platform: the client cannot directly access the meta-server or the storage servers, which protects the security of cluster equipment such as the meta-server and the storage servers. By means of the protocol independence of P4, the heterogeneous platform identifies and filters known or unknown protocols and malicious attacks, effectively protecting the security of the meta-server and the storage servers. For example, to cope with DDoS (Distributed Denial of Service) attacks, the programmability of P4 allows customers to flexibly customize DDoS detection methods and mitigation measures; specifically, source IPs and ports are defined by black and white lists, DDoS is identified and handled in real time by a state machine combined with an observation window, and the API provided by P4 can extract the network packet protocol and discard packets of unknown protocols.
A data processing apparatus according to an embodiment of the present invention is described below; the data processing apparatus described below and the other embodiments described herein may be cross-referenced.
Referring to fig. 9, an embodiment of the present invention discloses a data processing apparatus, which is applied to a heterogeneous platform, wherein the heterogeneous platform is in communication connection with a client, a metadata server and a storage server, and includes:
a receiving module 901, configured to receive a data processing request sent by a client;
a determining module 902, configured to determine an available storage structure in the local storage view if the data processing request is a storage request, and generate an operation identifier according to client information of the client and structure information of the available storage structure; the local storage view is obtained in advance from a metadata server;
the storage module 903 is configured to determine, if the operation identifier passes the verification, a corresponding storage server at the storage server according to the structure information for the target data to be stored in the storage request, and store the target data to the storage server.
In one embodiment, the determining module is specifically configured to:
selecting an idle target storage unit in the local storage view according to the data quantity of the target data;
Constructing a plurality of target storage blocks; each target memory block includes: a plurality of target storage units belonging to different storage servers;
constructing an available storage structure comprising a target storage block;
generating structure information including the block information of the target storage blocks.
In one embodiment, the method further comprises:
and the queue updating module is used for adding the target storage block to the in-use queue.
In one embodiment, the storage module is specifically configured to:
determining a target memory block in the in-use queue;
and determining a storage server according to the block information of the target storage block.
In one embodiment, the storage module is specifically configured to:
performing erasure calculation on the target data;
determining on the storage server side a corresponding storage server for the erasure-calculated data according to the structure information, and storing the erasure-calculated data to that storage server.
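The patent does not specify which erasure code is used; the sketch below uses a single XOR parity shard (RAID-5 style) purely to illustrate the idea that erasure-calculated data spread over different servers can survive the loss of one of them. The function names are assumptions.

```python
def xor_parity(shards):
    """Compute one parity shard over equal-length data shards."""
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, b in enumerate(shard):
            parity[i] ^= b
    return bytes(parity)

def recover_missing(shards, parity):
    """Rebuild the single shard given as None from the survivors and parity."""
    rebuilt = bytearray(parity)
    for shard in shards:
        if shard is not None:
            for i, b in enumerate(shard):
                rebuilt[i] ^= b
    return bytes(rebuilt)
```

If the three shards are written to three different storage servers and one server fails, its shard can be recomputed from the other two plus the parity.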
In one embodiment, the queue updating module is further configured to:
after storing the target data to the storage server, adding the storage units in the target storage block that hold stored data to the used queue, and recycling the storage units in the target storage block that hold no data back to the available queue.
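The queue transitions described above (available, in-use, used) can be sketched as below; the class and method names are illustrative assumptions, not the patent's terminology.

```python
class BlockQueues:
    """Track storage units through available -> in-use -> used."""
    def __init__(self, available_units):
        self.available = list(available_units)  # free storage units
        self.in_use = []                        # blocks currently being written
        self.used = []                          # units that now hold data

    def start_write(self, block):
        self.in_use.append(block)

    def finish_write(self, block, written_units):
        # Units that received data join the used queue; untouched units
        # are recycled to the available queue; the block itself leaves
        # the in-use queue.
        self.in_use.remove(block)
        for unit in block:
            (self.used if unit in written_units else self.available).append(unit)
```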
In one embodiment, the apparatus further comprises:
a synchronization module, configured to send the addresses of the storage units holding stored data in the target storage block to the metadata server, so that the metadata server marks those storage units as used.
In one embodiment, the queue updating module is further configured to:
deleting the target storage block from the in-use queue.
In one embodiment, the determining module is specifically configured to:
generating the operation identifier according to the IP information of the client and the structure information.
In one embodiment, the heterogeneous platform comprises: a control chip and an integrated circuit; the determining module is specifically configured to: perform a hash calculation, using the control chip, on the IP information of the client and the metadata information in the structure information to obtain a hash result, and splice the hash result with the storage unit position information in the structure information to obtain the operation identifier.
In one embodiment, the heterogeneous platform comprises: a control chip and an integrated circuit; correspondingly, the storage module is specifically configured to: verify the operation identifier using the integrated circuit and, after the operation identifier passes verification, determine on the storage server side a corresponding storage server for the target data to be stored by the storage request according to the structure information, and store the target data to that storage server. Specifically, the integrated circuit judges whether the operation identifier it generates itself is consistent with the operation identifier generated by the control chip; if the two are consistent, the operation identifier is determined to pass verification; otherwise, the operation identifier is determined to fail verification, and an error prompt message for the operation identifier is generated.
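The hash-and-splice identifier and its verification can be sketched as follows. The patent does not name the hash function; SHA-256 here is an assumption standing in for whatever the control chip computes, and all function names are illustrative.

```python
import hashlib

def make_operation_id(client_ip, metadata, unit_position):
    """Hash the client IP together with the metadata information, then
    splice the storage unit position information onto the result."""
    digest = hashlib.sha256(f"{client_ip}|{metadata}".encode()).hexdigest()
    return f"{digest}:{unit_position}"

def verify_operation_id(op_id, client_ip, metadata, unit_position):
    """Mimic the integrated circuit: regenerate the identifier from the
    same inputs and compare it with the one the control chip produced."""
    return op_id == make_operation_id(client_ip, metadata, unit_position)
```

A mismatch in any input (for example, a different client IP) makes verification fail, which is what triggers the error prompt message.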
In one embodiment, the apparatus further comprises:
a security detection module, configured to judge, according to a preset security policy, whether the request information of the data processing request is dangerous; the request information includes: source port, source IP, protocol type, and/or operation behavior; if so, add a danger mark to the data processing request and intercept it; if not, judge the type of the data processing request, the type including: storage requests, read requests, and copy requests.
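A hedged sketch of this security detection step follows. The rule fields mirror the request information listed above, but the policy format and function names are assumptions for illustration.

```python
def is_dangerous(request, policy):
    """Match the request's source port/IP, protocol type, and operation
    behavior against blocklists in the preset security policy."""
    return (request.get("source_port") in policy.get("blocked_ports", ())
            or request.get("source_ip") in policy.get("blocked_ips", ())
            or request.get("protocol") in policy.get("blocked_protocols", ())
            or request.get("operation") in policy.get("blocked_operations", ()))

def dispatch(request, policy):
    """Intercept dangerous requests; otherwise return the request type."""
    if is_dangerous(request, policy):
        request["danger_mark"] = True
        return "intercepted"
    return request["type"]  # "storage", "read", or "copy"
```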
In one embodiment, the apparatus further comprises:
a reading module, configured to: if the data processing request is a read request, forward the read request to the metadata server; receive the read address information returned by the metadata server; and read the corresponding data from the storage server according to the read address information.
In one embodiment, the apparatus further comprises:
a copying module, configured to: if the data processing request is a copy request, forward the copy request to the metadata server; receive the copy address information returned by the metadata server; read the corresponding data from the storage server according to the copy address information, take the read data as the target data, perform the step of determining an available storage structure in the local storage view, and generate an operation identifier according to the client information of the client and the structure information of the available storage structure; and, if the operation identifier passes verification, determine a corresponding storage server for the target data according to the structure information and store the target data to that storage server.
In one embodiment, the copying module is further configured to:
if the replication request requires replication of multiple copies, multiple copies are replicated in parallel using multiple heterogeneous platforms.
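Copying multiple replicas in parallel, one per heterogeneous platform, can be sketched with a thread pool; the platform objects and names below are illustrative stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

def copy_in_parallel(platforms, data, n_copies):
    """Let each of n_copies platforms store one replica concurrently
    and return, in order, which platform handled each copy."""
    workers = platforms[:n_copies]
    with ThreadPoolExecutor(max_workers=len(workers)) as pool:
        return list(pool.map(lambda p: p.store(data), workers))
```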
In one embodiment, the copying module is further configured to:
if the target storage server and the source storage server are the same, sending a notification to that storage server so that the storage server performs the copy on its own.
In one embodiment, the apparatus further comprises:
a distribution module, configured to, when communicating with any storage server on the storage server side, select a corresponding outlet port according to the IP of the currently determined storage server and a preset redirection policy; the preset redirection policy is determined according to the network segment to which the IP belongs and/or the data transmission protocol.
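A redirection policy keyed by network segment and transport protocol can be sketched as a rule table; the table format, wildcard convention, and function name are assumptions for illustration.

```python
import ipaddress

def select_outlet_port(server_ip, protocol, rules):
    """Return the outlet port of the first rule whose network segment
    contains the server IP and whose protocol matches (or is '*')."""
    addr = ipaddress.ip_address(server_ip)
    for segment, proto, port in rules:
        if addr in ipaddress.ip_network(segment) and proto in ("*", protocol):
            return port
    raise LookupError("no redirection rule matches " + server_ip)
```

For example, one rule can pin TCP traffic for one network segment to a specific outlet port while a wildcard rule covers another segment regardless of protocol.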
In one embodiment, the distribution module is further configured to:
when communicating with any storage server on the storage server side, select a corresponding outlet port according to the traffic and resource usage of the receiving ports of the storage servers on the storage server side, the traffic of all outlet ports, the security rules, and the preset redirection policy.
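The load-aware variant can be sketched as follows: among the ports the security rules allow, pick the one with the least combined outlet and receive-side traffic. The names and the simple additive scoring are assumptions, not the patent's actual selection formula.

```python
def select_port_by_load(ports, outlet_traffic, server_rx_traffic, allowed):
    """Pick the allowed outlet port with the lowest combined load."""
    candidates = [p for p in ports if p in allowed]
    if not candidates:
        raise LookupError("security rules leave no usable outlet port")
    return min(candidates,
               key=lambda p: outlet_traffic[p] + server_rx_traffic.get(p, 0))
```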
In one embodiment, the apparatus further comprises:
an application module, used when there are a plurality of heterogeneous platforms; the process by which each heterogeneous platform applies for its local storage view includes: if no available storage structure exists in the local storage view, sending a view application request to the metadata server; receiving a writable storage view returned by the metadata server according to the view application request; and merging the writable storage view into the local storage view; the local storage views applied for by different heterogeneous platforms are each part of the global storage view and together form a distributed storage view.
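The view-application flow can be sketched as below. `MetadataStub` is a hypothetical stand-in for the metadata server, which hands out disjoint writable slices of the global storage view; all names are assumptions.

```python
class MetadataStub:
    """Stand-in metadata server granting disjoint writable view slices."""
    def __init__(self, global_view):
        self._free = list(global_view)   # slices not yet granted

    def grant_writable_view(self, n=2):
        granted, self._free = self._free[:n], self._free[n:]
        return granted

def ensure_writable(local_view, metadata):
    """Apply for more view only when no available structure remains
    locally, then merge the granted writable view into the local view."""
    if not local_view:
        local_view.extend(metadata.grant_writable_view())
    return local_view
```

Because each grant removes slices from the stub's free pool, views held by different platforms never overlap, mirroring how the local views partition the global view.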
The more specific working process of each module and unit in this embodiment may refer to the corresponding content disclosed in the foregoing embodiment, and will not be described herein.
Therefore, the embodiment provides a data processing device, which can solve the system access bottleneck and improve the access efficiency and performance.
An electronic device provided in the embodiments of the present invention is described below, and an electronic device described below may refer to other embodiments described herein.
Referring to fig. 10, an embodiment of the present invention discloses an electronic device, including:
a memory 1001 for storing a computer program;
a processor 1002 for executing the computer program to implement the method disclosed in any of the embodiments above.
Further, the embodiment of the present invention also provides an electronic device. The electronic device may be the server 50 shown in fig. 11 or the terminal 60 shown in fig. 12. Fig. 11 and 12 are structural diagrams of electronic devices according to exemplary embodiments, and their contents should not be construed as limiting the scope of use of the present invention in any way.
Fig. 11 is a schematic structural diagram of a server according to an embodiment of the present invention. The server 50 may specifically include: at least one processor 51, at least one memory 52, a power supply 53, a communication interface 54, an input output interface 55, and a communication bus 56. Wherein the memory 52 is adapted to store a computer program that is loaded and executed by the processor 51 to implement the relevant steps in the data processing disclosed in any of the foregoing embodiments.
In this embodiment, the power supply 53 is configured to provide an operating voltage for each hardware device on the server 50; the communication interface 54 can create a data transmission channel between the server 50 and an external device, and the communication protocol to be followed is any communication protocol applicable to the technical solution of the present invention, which is not specifically limited herein; the input/output interface 55 is used for acquiring external input data or outputting external output data, and the specific interface type thereof may be selected according to the specific application needs, which is not limited herein.
The memory 52 may be a carrier for storing resources, such as a read-only memory, a random access memory, a magnetic disk, or an optical disk, and the resources stored thereon include an operating system 521, a computer program 522, and data 523, and the storage may be temporary storage or permanent storage.
The operating system 521, which may be Windows Server, NetWare, Unix, Linux, etc., is used to manage and control the hardware devices on the server 50 and the computer program 522, so that the processor 51 can operate on and process the data 523 in the memory 52. Besides a computer program capable of performing the data processing method disclosed in any of the preceding embodiments, the computer program 522 may further comprise programs for other specific tasks. In addition to data such as application update information, the data 523 may also include data such as application developer information.
Fig. 12 is a schematic structural diagram of a terminal according to an embodiment of the present invention, and the terminal 60 may specifically include, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like.
Generally, the terminal 60 in this embodiment includes: a processor 61 and a memory 62.
Processor 61 may include one or more processing cores, such as a 4-core or 8-core processor. The processor 61 may be implemented in at least one hardware form among DSP (Digital Signal Processor), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 61 may also include a main processor and a coprocessor: the main processor, also called a CPU (Central Processing Unit), processes data in the awake state, while the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 61 may integrate a GPU (Graphics Processing Unit) responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, the processor 61 may also include an AI (Artificial Intelligence) processor for handling computing operations related to machine learning.
Memory 62 may include one or more computer-readable storage media, which may be non-transitory. Memory 62 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In the present embodiment, the memory 62 is at least used for storing a computer program 621 that, when loaded and executed by the processor 61, is capable of implementing the relevant steps in the data processing method performed by the terminal side as disclosed in any of the foregoing embodiments. In addition, the resources stored by the memory 62 may also include an operating system 622, data 623, and the like, and the storage manner may be transient storage or permanent storage. The operating system 622 may include Windows, unix, linux, among others. The data 623 may include, but is not limited to, update information of the application.
In some embodiments, the terminal 60 may further include a display 63, an input-output interface 64, a communication interface 65, a sensor 66, a power supply 67, and a communication bus 68.
Those skilled in the art will appreciate that the structure shown in fig. 12 is not limiting of the terminal 60 and may include more or fewer components than shown.
A readable storage medium provided by embodiments of the present invention is described below, and the readable storage medium described below may be referred to with respect to other embodiments described herein.
A readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the data processing method disclosed in the foregoing embodiments. The readable storage medium is a computer readable storage medium, and can be used as a carrier for storing resources, such as read-only memory, random access memory, magnetic disk or optical disk, wherein the resources stored on the readable storage medium comprise an operating system, a computer program, data and the like, and the storage mode can be transient storage or permanent storage.
A data processing system according to embodiments of the present invention is described below, and reference may be made to other embodiments described herein.
The embodiment of the invention discloses a data processing system, which comprises: the system comprises a client, a hardware execution end comprising a plurality of heterogeneous platforms, a metadata server and a storage server; the heterogeneous platform implements the method described in any of the preceding embodiments based on a data plane programming language.
The data processing system disclosed in this embodiment is specifically a distributed storage system. Distributed storage offers several advantages: it is easy to expand, readily reaching the PB level and beyond; it improves read-write performance; it provides high data availability; it avoids bringing down the whole architecture through a single-node failure; and it is relatively inexpensive, since a large number of low-cost devices can be used. By architecture, distributed storage systems can be divided into fully decentralized architectures and architectures with an intermediate control node: a fully decentralized architecture achieves capacity expansion mainly through methods such as consistent hashing, while an architecture with an intermediate control node uses a metadata server to manage all disks uniformly. By type, distributed storage can be classified into block storage, object storage, and file storage; files, blocks, and objects are three different ways of storing data. Block storage is suitable for a single client: the data is split into volumes of equal, arbitrarily divisible size, and it is used for Docker containers, remotely mounted disks for virtual machines, log storage, and the like. File storage is suitable for multi-client data with a directory structure: the data is organized and presented in a hierarchy of files and folders, and it is used for log storage, shared file storage for multiple users with directory structures, and the like. Object storage is suitable for data that changes infrequently: it has no directory structure and files cannot be opened or modified directly; instead, the data is managed together with its associated metadata, and it is used to store pictures, videos, files, software installation packages, archive data, and the like.
In one embodiment, the heterogeneous platform is specifically for: receiving a data processing request sent by the client; if the data processing request is a storage request, determining an available storage structure in a local storage view, and generating an operation identifier according to client information of the client and structure information of the available storage structure; the local storage view is obtained in advance from the metadata server; and if the operation identifier passes verification, determining on the storage server side a corresponding storage server for the target data to be stored by the storage request according to the structure information, and storing the target data to that storage server.
In one embodiment, the heterogeneous platform is specifically for: selecting an idle target storage unit in the local storage view according to the data quantity of the target data; constructing a plurality of target storage blocks; each target memory block includes: a plurality of target storage units belonging to different storage servers; constructing an available storage structure comprising a target storage block; structural information including block information of the target memory block is generated.
In one embodiment, the heterogeneous platform is specifically for: the target memory block is added to the in-use queue.
In one embodiment, the heterogeneous platform is specifically for: determining a target memory block in the in-use queue; and determining a storage server according to the block information of the target storage block.
In one embodiment, the heterogeneous platform is specifically for: performing erasure calculation on the target data; and determining on the storage server side a corresponding storage server for the erasure-calculated data according to the structure information, and storing the erasure-calculated data to that storage server.
In one embodiment, the heterogeneous platform is specifically for: and adding the storage units of the stored data in the target storage block to the used queue, and recycling the storage units of the non-stored data in the target storage block to the available queue.
In one embodiment, the heterogeneous platform is specifically for: and sending the address of the storage unit of the stored data in the target storage block to the metadata server so that the metadata server marks the storage unit as used.
In one embodiment, the heterogeneous platform is specifically for: the target memory block is deleted from the in-use queue.
In one embodiment, the heterogeneous platform is specifically for: and generating an operation identifier according to the IP information and the structure information of the client.
In one embodiment, the heterogeneous platform is specifically for: performing a hash calculation, using the control chip, on the IP information of the client and the metadata information in the structure information to obtain a hash result, and splicing the hash result with the storage unit position information in the structure information to obtain the operation identifier.
In one embodiment, the heterogeneous platform is specifically for: verifying the operation identifier using the integrated circuit and, after the operation identifier passes verification, determining on the storage server side a corresponding storage server for the target data to be stored by the storage request according to the structure information, and storing the target data to that storage server. Specifically, the integrated circuit judges whether the operation identifier it generates itself is consistent with the operation identifier generated by the control chip; if the two are consistent, the operation identifier is determined to pass verification; otherwise, the operation identifier is determined to fail verification, and an error prompt message for the operation identifier is generated.
In one embodiment, the heterogeneous platform is specifically for: judging, according to a preset security policy, whether the request information of the data processing request is dangerous; the request information includes: source port, source IP, protocol type, and/or operation behavior; if so, adding a danger mark to the data processing request and intercepting it; if not, judging the type of the data processing request, the type including: storage requests, read requests, and copy requests.
In one embodiment, the heterogeneous platform is specifically for: if the data processing request is a read request, forwarding the read request to a metadata server; receiving read address information returned by the metadata server; and reading corresponding data from the storage server according to the read address information.
In one embodiment, the heterogeneous platform is specifically for: if the data processing request is a copy request, forwarding the copy request to the metadata server; receiving copy address information returned by the metadata server; reading corresponding data from a storage server according to the copy address information, taking the read data as target data, executing the determination of an available storage structure in a local storage view, and generating an operation identifier according to the client information of the client and the structure information of the available storage structure; and if the operation identifier passes the verification, determining a corresponding storage server for the target data according to the structure information, and storing the target data to the storage server.
In one embodiment, the heterogeneous platform is specifically for: if the replication request requires replication of multiple copies, multiple copies are replicated in parallel using multiple heterogeneous platforms.
In one embodiment, the heterogeneous platform is specifically for: if the target storage server and the source storage server are the same, sending a notification to that storage server so that the storage server performs the copy on its own.
Replication comes in two modes: strongly synchronous replication and asynchronous replication. Strongly synchronous replication requires that a user's write request be synchronized to all standby copies before success can be returned; this ensures strong consistency between the primary and standby copies, but may block the normal write service of the storage system and affect its availability. With asynchronous replication, the primary copy does not wait for responses from the standby copies; as soon as the local modification succeeds, the client can be notified that the write succeeded. Availability is comparatively good, but data consistency is not guaranteed: if the primary copy fails unrecoverably, the most recent data may be lost.
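The contrast between the two replication modes can be sketched as below; the `Replica` class, the in-memory pending queue, and all names are illustrative assumptions.

```python
class Replica:
    """Toy standby copy that acks every write."""
    def __init__(self):
        self.log = []

    def write(self, data):
        self.log.append(data)
        return True   # ack

def replicate_sync(primary, replicas, data):
    """Strongly synchronous: succeed only once every standby has acked."""
    primary.append(data)
    return all(r.write(data) for r in replicas)

def replicate_async(primary, replicas, data, pending):
    """Asynchronous: ack right after the local write; a background task
    would later drain the pending queue to bring the standbys up to date."""
    primary.append(data)
    pending.append((replicas, data))
    return True
```

In the synchronous path the standbys hold the data before the caller sees success; in the asynchronous path the data sits only in the pending queue, which is exactly the window in which an unrecoverable primary failure loses it.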
In one embodiment, the heterogeneous platform is specifically for: when communicating with any storage server on the storage server side, selecting a corresponding outlet port according to the IP of the currently determined storage server and a preset redirection policy; the preset redirection policy is determined according to the network segment to which the IP belongs and/or the data transmission protocol.
In one embodiment, the heterogeneous platform is specifically for: when communicating with any storage server on the storage server side, selecting a corresponding outlet port according to the traffic and resource usage of the receiving ports of the storage servers on the storage server side, the traffic of all outlet ports, the security rules, and the preset redirection policy.
In one embodiment, each heterogeneous platform is specifically configured to: if no available storage structure exists in the local storage view, send a view application request to the metadata server; receive a writable storage view returned by the metadata server according to the view application request; and merge the writable storage view into the local storage view; the local storage views applied for by different heterogeneous platforms are each part of the global storage view and together form a distributed storage view.
The more specific working process of the related content in this embodiment may refer to the corresponding content disclosed in the foregoing embodiment, and will not be described herein.
Therefore, the embodiment provides a data processing system, which can solve the system access bottleneck and improve the access efficiency and performance.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, so that the same or similar parts between the embodiments are referred to each other.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in Random Access Memory (RAM), memory, read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of readable storage medium known in the art.
The principles and embodiments of the present invention have been described herein with reference to specific examples; the description of these embodiments is intended only to help in understanding the method of the present invention and its core ideas. At the same time, those skilled in the art may make changes to the specific embodiments and the scope of application in accordance with the ideas of the present invention; in view of the above, the contents of this description should not be construed as limiting the present invention.

Claims (19)

1. A data processing method, characterized in that it is applied to a heterogeneous platform, wherein the heterogeneous platform is communicatively connected to a client, a metadata server and a storage server side; the heterogeneous platform comprises: a control chip and an integrated circuit; the method comprises the following steps:
receiving a data processing request sent by the client;
if the data processing request is a storage request, determining an available storage structure in a local storage view, performing a hash calculation, using the control chip, on the IP information of the client and the metadata information in the structure information of the available storage structure to obtain a hash result, and splicing the hash result with the storage unit position information in the structure information to obtain an operation identifier; the local storage view is obtained in advance from the metadata server;
If the operation identifier passes the verification, determining a corresponding storage server on the storage server for target data to be stored for the storage request according to the structure information, and storing the target data to the storage server;
wherein target storage units whose total storable data amount is greater than the data amount of the target data are selected from the local storage view to form the available storage structure; each free target storage block included in the available storage structure comprises: a plurality of target storage units belonging to different storage servers.
2. The method of claim 1, wherein the determining available storage structures in the local storage view comprises:
selecting an idle target storage unit in the local storage view according to the data amount of the target data;
constructing a plurality of target storage blocks; each target memory block includes: a plurality of target storage units belonging to different storage servers;
building the available storage structure including the target storage block;
the structure information including block information of the target storage block is generated.
3. The method according to claim 2, wherein the determining, at the storage server, a corresponding storage server for the target data to be stored for the storage request according to the structure information includes:
Adding the target memory block to an in-use queue;
determining the target memory block in the in-use queue;
and determining the storage server according to the block information of the target storage block.
4. The method of claim 3, wherein after storing the target data to the storage server, further comprising:
adding the storage units of the stored data in the target storage block to an used queue, and recycling the storage units of the non-stored data in the target storage block to the available queue;
transmitting an address of a storage unit of stored data in the target storage block to the metadata server so that the metadata server marks the storage unit as used;
deleting the target storage block from the in-use queue.
5. The method according to claim 1, wherein the determining, at the storage server, the corresponding storage server for the target data to be stored for the storage request according to the structure information, and storing the target data to the storage server includes:
performing erasure calculation on the target data;
and determining a corresponding storage server at the storage server for the data subjected to erasure correction calculation according to the structure information, and storing the data subjected to erasure correction calculation to the storage server.
6. The method of claim 1, wherein the heterogeneous platform comprises a control chip and an integrated circuit;
correspondingly, if the operation identifier passes the verification, determining a corresponding storage server at the storage server for the target data to be stored in the storage request according to the structure information, and storing the target data to the storage server, including:
and verifying the operation identifier by using the integrated circuit, determining a corresponding storage server at the storage server for target data to be stored in the storage request according to the structure information after the operation identifier passes the verification, and storing the target data to the storage server.
7. The method of claim 6, wherein verifying the operation identifier with the integrated circuit comprises:
the integrated circuit judges whether the operation identifier generated by the integrated circuit is consistent with the operation identifier generated by the control chip;
if the two are consistent, determining that the operation identifier passes verification; otherwise, determining that the operation identifier fails verification, and generating an error prompt message for the operation identifier.
8. The method as recited in claim 1, further comprising:
judging whether the request information of the data processing request is dangerous or not according to a preset security policy; the request information includes: source port, source IP, protocol type, and/or operational behavior;
if yes, adding a dangerous mark to the data processing request, and intercepting the data processing request;
if not, judging the type of the data processing request, wherein the type comprises the following steps: storage requests, read requests, and copy requests.
9. The method as recited in claim 1, further comprising:
if the data processing request is a read request, forwarding the read request to the metadata server;
receiving read address information returned by the metadata server;
and reading corresponding data from the storage server according to the read address information.
10. The method as recited in claim 1, further comprising:
if the data processing request is a copy request, forwarding the copy request to the metadata server;
receiving copy address information returned by the metadata server;
reading corresponding data from the storage server according to the copy address information, taking the read data as the target data, executing the determination of an available storage structure in a local storage view, and generating an operation identifier according to the client information of the client and the structure information of the available storage structure; and if the operation identifier passes the verification, determining a corresponding storage server for the target data at the storage server according to the structure information, and storing the target data to the storage server.
11. The method of claim 10, wherein there are a plurality of heterogeneous platforms;
correspondingly, the method further comprises:
if the copy request requires copying multiple copies, copying the multiple copies in parallel using the plurality of heterogeneous platforms.
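A minimal sketch of the parallel-copy idea in claim 11: the requested copies are spread across the available platforms and run concurrently. Here `platforms` is a list of callables standing in for real hardware copy paths; the round-robin assignment is an assumption:

```python
from concurrent.futures import ThreadPoolExecutor

def copy_in_parallel(platforms, data, n_copies):
    """Distribute n_copies copy jobs across the heterogeneous platforms
    in round-robin fashion and execute them in parallel."""
    with ThreadPoolExecutor(max_workers=len(platforms)) as pool:
        futures = [pool.submit(platforms[i % len(platforms)], data)
                   for i in range(n_copies)]
        return [f.result() for f in futures]
```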
12. The method as recited in claim 10, further comprising:
if the target storage server and the source storage server are the same, sending a notification to the storage server so that the storage server performs the copy autonomously.
13. The method according to any one of claims 1 to 12, further comprising:
when communicating with any one of the storage servers, selecting a corresponding egress port according to the IP of the currently determined storage server and a preset redirection policy; the preset redirection policy being determined according to the network segment to which the IP belongs and/or the data transmission protocol.
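The segment-and-protocol redirection of claim 13 can be sketched as a first-match lookup table. The (segment, protocol, port) layout and the fallback port are assumptions; the patent only specifies the matching criteria:

```python
import ipaddress

def select_egress_port(server_ip, protocol, redirect_policy):
    """Pick an egress port from the network segment that the storage
    server's IP belongs to and/or the data transmission protocol."""
    ip = ipaddress.ip_address(server_ip)
    for segment, proto, port in redirect_policy:
        # proto=None means the rule applies to any protocol.
        if ip in ipaddress.ip_network(segment) and proto in (None, protocol):
            return port
    return "default-port"

policy = [("192.168.1.0/24", "tcp", "eth1"),   # segment + protocol match
          ("10.0.0.0/8", None, "eth2")]        # segment match, any protocol
```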
14. The method according to any one of claims 1 to 12, further comprising:
when communicating with any one of the storage servers, selecting a corresponding egress port according to the traffic and resource usage of each storage server's receiving ports, the traffic of all egress ports, the security rules, and the preset redirection policy.
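The load-aware selection of claim 14 can be sketched as follows. Reducing "traffic size and resource usage" to one number per port, and modeling the security rules as a set of blocked ports, are simplifying assumptions:

```python
def select_port_by_load(candidate_ports, egress_traffic, blocked_ports):
    """Among the egress ports the redirection policy allows, drop any
    forbidden by the security rules and pick the least-loaded one."""
    allowed = [p for p in candidate_ports if p not in blocked_ports]
    return min(allowed, key=lambda p: egress_traffic[p])
```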
15. The method of any one of claims 1 to 12, wherein there are a plurality of heterogeneous platforms, and the process by which each heterogeneous platform applies for its local storage view comprises:
if the available storage structure does not exist in the local storage view, sending a view application request to the metadata server;
receiving a writable storage view returned by the metadata server according to the view application request;
merging the writable storage view into the local storage view;
the local storage views applied for by the different heterogeneous platforms are each part of the global storage view and together form a distributed storage view.
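The view-application loop of claim 15 can be sketched as follows. A view is modeled here as a dict of unit-id to free flag, and `MetaServerStub` is an illustrative metadata server; the real structures are richer:

```python
class MetaServerStub:
    """Hands out a writable slice of the global storage view; different
    platforms receive disjoint slices, which together form a
    distributed storage view."""
    def grant_view(self):
        return {"unit-7": True, "unit-8": True}

def ensure_available_structure(local_view, metadata_server):
    """If no available storage structure exists in the local storage
    view, apply for a writable view and merge it in."""
    if not any(local_view.values()):             # no free unit locally
        writable = metadata_server.grant_view()  # view application request
        local_view.update(writable)              # merge into the local view
    return local_view
```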
16. A data processing device, applied to a heterogeneous platform, the heterogeneous platform being communicatively connected to a client, a metadata server and a storage server; the heterogeneous platform comprising: a control chip and an integrated circuit; the device comprising:
the receiving module is used for receiving the data processing request sent by the client;
the determining module is used for: if the data processing request is a storage request, determining an available storage structure in a local storage view, performing hash calculation on the IP information of the client and the metadata information in the structure information of the available storage structure by using the control chip to obtain a hash result, and splicing the hash result with the storage unit position information in the structure information to obtain an operation identifier; the local storage view being obtained in advance from the metadata server;
the storage module is used for: if the operation identifier passes verification, determining a corresponding target storage server according to the structure information for the target data to be stored by the storage request, and storing the target data to the target storage server;
wherein target storage units capable of storing a data amount greater than that of the target data are selected from the local storage view to form the available storage structure; each free target storage block included in the available storage structure comprises: a plurality of target storage units belonging to different storage servers.
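The identifier construction described in claim 16 (hash the client IP with the metadata information, then splice on the storage-unit positions) can be sketched as follows. SHA-256 and the `:` / `,` separators are assumptions; the patent names neither a hash function nor a splice format:

```python
import hashlib

def build_operation_id(client_ip, metadata_info, unit_positions):
    """Hash client IP + metadata info, then splice the hash result with
    the storage-unit position information to form the operation identifier."""
    h = hashlib.sha256((client_ip + metadata_info).encode()).hexdigest()
    return h + ":" + ",".join(unit_positions)

op_id = build_operation_id("192.168.1.5", "meta-v3", ["srvA/blk2", "srvB/blk9"])
```

Verification (claim 7) would then recompute this identifier on the platform and compare it with the one carried by the request.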
17. An electronic device, comprising:
a memory for storing a computer program;
a processor for executing the computer program to implement the method of any one of claims 1 to 15.
18. A readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method of any one of claims 1 to 15.
19. A data processing system, comprising: the system comprises a client, a hardware execution end comprising a plurality of heterogeneous platforms, a metadata server and a storage server; the heterogeneous platform implements the method of any of claims 1 to 15 based on a data plane programming language.
CN202311034910.7A 2023-08-17 2023-08-17 Data processing method, device, equipment, medium and system Active CN116760850B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311034910.7A CN116760850B (en) 2023-08-17 2023-08-17 Data processing method, device, equipment, medium and system


Publications (2)

Publication Number Publication Date
CN116760850A CN116760850A (en) 2023-09-15
CN116760850B true CN116760850B (en) 2024-01-12

Family

ID=87961239


Country Status (1)

Country Link
CN (1) CN116760850B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103023982A (en) * 2012-11-22 2013-04-03 中国人民解放军国防科学技术大学 Low-latency metadata access method of cloud storage client
CN104283960A (en) * 2014-10-15 2015-01-14 福建亿榕信息技术有限公司 System for achieving heterogeneous network storage virtualization integration and hierarchical management
US9507799B1 (en) * 2009-12-08 2016-11-29 Netapp, Inc. Distributed object store for network-based content repository
CN109327539A (en) * 2018-11-15 2019-02-12 上海天玑数据技术有限公司 A kind of distributed block storage system and its data routing method
CN112988683A (en) * 2021-02-07 2021-06-18 北京金山云网络技术有限公司 Data processing method and device, electronic equipment and storage medium
WO2021169113A1 (en) * 2020-02-26 2021-09-02 平安科技(深圳)有限公司 Data management method and apparatus, and computer device and storage medium
CN114827145A (en) * 2022-04-24 2022-07-29 阿里巴巴(中国)有限公司 Server cluster system, and metadata access method and device
CN115484276A (en) * 2022-08-24 2022-12-16 国家气象信息中心(中国气象局气象数据中心) Meteorological data file directory service generation method and system based on virtual data lake
CN116436962A (en) * 2023-03-31 2023-07-14 之江实验室 Method and device for persistent caching of global aggregation namespaces crossing computing nodes facing DFS

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8819208B2 (en) * 2010-03-05 2014-08-26 Solidfire, Inc. Data deletion in a distributed data storage system
US8533231B2 (en) * 2011-08-12 2013-09-10 Nexenta Systems, Inc. Cloud storage system with distributed metadata
US20150106468A1 (en) * 2012-05-17 2015-04-16 Nec Corporation Storage system and data access method
US10223035B2 (en) * 2015-08-28 2019-03-05 Vmware, Inc. Scalable storage space allocation in distributed storage systems
CN106850710B (en) * 2015-12-03 2020-02-28 杭州海康威视数字技术股份有限公司 Data cloud storage system, client terminal, storage server and application method


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Metadata Management for Distributed Multimedia Storage System; Ling Zhan et al.; 2008 International Symposium on Electronic Commerce and Security; full text *
SAN file system model and implementation method; Liu Yuejun; Journal of Henan University of Science and Technology (Natural Science) (04); full text *
Design of a scalable distributed metadata management system; Huang Qiulan; Cheng Yaodong; Du Ran; Chen Gang; Computer Engineering (05); full text *
Two-level metadata management for clustered multimedia storage systems; Wan Jiguang; Zhan Ling; Journal of Chinese Computer Systems (04); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant