WO2016192573A1 - A data processing method and apparatus - Google Patents

A data processing method and apparatus

Info

Publication number
WO2016192573A1
Authority
WO
WIPO (PCT)
Prior art keywords
acceleration
module
group
group routing
intra
Prior art date
Application number
PCT/CN2016/083471
Other languages
English (en)
French (fr)
Inventor
陈显波
袁宏辉
姚滨滨
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to EP16802496.6A (EP3291089B1)
Publication of WO2016192573A1
Priority to US15/824,032 (US10432506B2)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/302: Route determination based on requested QoS
    • H04L45/306: Route determination based on the nature of the carried application
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/48: Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806: Task transfer initiation or dispatching
    • G06F9/4843: Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881: Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00: Arrangements for program control, e.g. control units
    • G06F9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46: Multiprogramming arrangements
    • G06F9/54: Interprogram communication
    • G06F9/546: Message passing systems or structures, e.g. queues
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/54: Organization of routing tables
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L45/00: Routing or path finding of packets in data switching networks
    • H04L45/64: Routing or path finding of packets in data switching networks using an overlay routing layer

Definitions

  • the present invention relates to the field of data processing technologies, and in particular, to a data processing method and a data processing device.
  • the CPU (which can be considered as “host”) is used to execute the business layer (generally referred to as “application layer”, “upper layer”) and the underlying driver code.
  • the service layer is used to generate the original acceleration source data that needs to be accelerated, or to receive original acceleration source data scheduled from other service layers.
  • the underlying driver is used to cooperate with the business layer to complete the analysis, data conversion, data encapsulation, data transmission, etc. of the scheduling instructions.
  • the FPGA is configured to receive data sent by the underlying driver, complete the accelerated processing of the data, and return the processed data to the service layer through the underlying driver.
  • the solution needs to rely on the underlying driver when accelerating different types of services; that is, the service layer can complete FPGA acceleration of the corresponding function only through a dedicated underlying driver matched to each service type. Therefore, in the prior art, every service type that needs to be accelerated requires a customized underlying driver, resulting in poor dynamics and flexibility.
  • the embodiment of the invention provides a data processing method and device, to solve the prior-art problem that a customized underlying driver is required for each service acceleration scenario, resulting in poor dynamics and flexibility.
  • an embodiment of the present invention provides a data processing method, which is applied to a scenario in which a hardware processing unit accelerates service data sent by a host, where the method is performed by the hardware processing unit, and includes:
  • the request message includes multiple acceleration type identifiers, and the request message further includes an acceleration order identifier corresponding to each acceleration type identifier, where the acceleration order identifier is used to indicate the order of acceleration processing;
  • the performing at least one acceleration processing corresponding to the at least one acceleration type identifier on the service data includes:
  • the hardware processing unit includes a receiving module, an inter-group routing module, and at least one accelerated processing group;
  • the inter-group routing module includes an inter-group routing table, where the inter-group routing table includes a correspondence between an acceleration type identifier and an acceleration processing group.
  • the receiving the request message sent by the host service layer and transparently transmitted through the host driver layer includes: the receiving module receiving the request message sent by the host service layer and transparently transmitted through the host driver layer;
  • the at least one acceleration processing that performs the one-to-one correspondence with the at least one acceleration type identifier on the service data includes:
  • the inter-group routing module receives the request message sent by the receiving module
  • the inter-group routing module parses out an acceleration type identifier in the request message
  • the inter-group routing module forwards the request message to the destination acceleration processing group according to the parsed acceleration type identifier and the inter-group routing table;
  • the destination acceleration processing group performs acceleration processing on the service data.
  • the acceleration processing group includes a parsing module, an intra-group routing module, and at least one acceleration processing module, wherein each acceleration processing module is configured to perform a different type of acceleration processing on the same service;
  • the intra-group routing module includes an intra-group routing table, where the intra-group routing table includes a correspondence between an acceleration type identifier and an acceleration processing module.
  • the performing, by the destination acceleration processing group, acceleration processing on the service data includes:
  • the parsing module of the destination acceleration processing group parses the request message, caches the service data, and generates an internal forwarding message according to the parsing result, where the internal forwarding message includes the acceleration type identifier and a cache address of the service data.
  • the parsing module sends the internal forwarding message to an intra-group routing module of the destination acceleration processing group;
  • the intra-group routing module sends the internal forwarding message to the destination acceleration processing module according to the acceleration type identifier and the intra-group routing table;
  • the destination acceleration processing module acquires the service data according to the cache address included in the internal forwarding message and performs acceleration processing on the service data.
  • when the internal forwarding message includes an acceleration order identifier, the performing, by the destination acceleration processing group, acceleration processing on the service data further includes:
  • the destination acceleration processing module caches the accelerated processed service data and notifies the intra-group routing module
  • the intra-group routing module sends the internal forwarding message to the next destination acceleration processing module according to the acceleration order identifier, so that the next destination acceleration processing module performs acceleration processing on the data cached by the destination acceleration processing module, until the acceleration sequence indicated by the acceleration order identifier ends.
  • the method further includes:
  • the destination acceleration processing module caches the processed service data
  • when all the service data has been accelerated, the destination intra-group routing module reads the cached processed service data
  • the routing module in the destination group generates a feedback message corresponding to the request message according to the processed service data
  • the destination group routing module sends the feedback message to the inter-group routing module, so that the inter-group routing module sends the feedback message to the host.
  • the feedback message has the same message structure as the request message, where the message structure includes a message type field for distinguishing the feedback message from the request message.
  • the request message is provided with a field area and a data area, where the field area includes a service header field and a control header field, the service header field includes the acceleration type identifier, and the data area is used to carry the service data.
  • the inter-group routing table is further configured with an aging switch and an aging time; the method further includes:
  • when the aging switch of the inter-group routing table is enabled and the aging time is reached, the inter-group routing module reports to the host to request the host to configure a new inter-group routing table.
  • the intra-group routing table is further configured with an aging switch and an aging time; the method further includes:
  • when the aging switch of the intra-group routing table is enabled and the aging time is reached, the intra-group routing module reports to the host to request the host to configure a new intra-group routing table.
  • the embodiment of the present invention further provides a data processing device, which is applied to a scenario for accelerating service data sent by a host, where the device includes:
  • a receiving module configured to receive a request message sent by the host service layer and transparently transmitted through the host driver layer, where the request message includes at least one acceleration type identifier and service data to be accelerated, and each acceleration type identifier corresponds to one kind of acceleration processing;
  • a processing module configured to perform, on the service data received by the receiving module, at least one acceleration processing that is in one-to-one correspondence with the at least one acceleration type identifier.
  • the request message includes multiple acceleration type identifiers, and the request message further includes an acceleration order identifier corresponding to each acceleration type identifier, where the acceleration order identifier is used to indicate the order of acceleration processing;
  • the processing module is further configured to perform acceleration processing corresponding to the plurality of acceleration type identifiers on the service data in an order indicated by the multiple acceleration order identifiers.
  • the processing module includes an inter-group routing module, and at least one accelerated processing group;
  • the inter-group routing module includes an inter-group routing table, where the inter-group routing table includes a correspondence between an acceleration type identifier and an acceleration processing group, and the inter-group routing module is configured to: receive the request message sent by the receiving module; parse out the acceleration type identifier in the request message; and forward the request message to the destination acceleration processing group according to the parsed acceleration type identifier and the inter-group routing table;
  • the acceleration processing group is configured to perform acceleration processing on the service data.
  • the acceleration processing group includes a parsing module, an intra-group routing module, and at least one acceleration processing module, wherein each acceleration processing module is configured to perform a different type of acceleration processing on the same service;
  • the parsing module is configured to parse the request message sent by the inter-group routing module, cache the service data, and generate an internal forwarding message according to the parsing result, where the internal forwarding message includes the acceleration type identifier and a cache address of the service data, and to send the internal forwarding message to the intra-group routing module;
  • the intra-group routing module includes an intra-group routing table, where the intra-group routing table includes a correspondence between an acceleration type identifier and an acceleration processing module, and the intra-group routing module is configured to send the internal forwarding message received from the parsing module to the destination acceleration processing module according to the acceleration type identifier and the intra-group routing table;
  • the acceleration processing module is configured to acquire the service data according to the cache address included in the internal forwarding message received from the intra-group routing module, and perform acceleration processing on the service data.
  • the acceleration processing module is further configured to: when the internal forwarding message includes an acceleration order identifier, cache the accelerated service data, and notify the intra-group routing module;
  • the intra-group routing module is further configured to, after receiving the notification sent by the acceleration processing module, send the internal forwarding message to the next destination acceleration processing module according to the acceleration order identifier, so that the next destination acceleration processing module performs acceleration processing on the data cached by the destination acceleration processing module until the acceleration sequence indicated by the acceleration order identifier ends.
  • the acceleration processing module is further configured to cache the processed service data.
  • the intra-group routing module is further configured to: when all the service data has been accelerated, read the processed service data cached by the acceleration processing module; generate a feedback message corresponding to the request message according to the processed service data; and send the feedback message to the inter-group routing module, so that the inter-group routing module sends the feedback message to the host.
  • the feedback message has the same message structure as the request message, where the message structure includes a message type field for distinguishing the feedback message from the request message.
  • the request message is provided with a field area and a data area, where the field area includes a service header field and a control header field, the service header field includes the acceleration type identifier, and the data area is used to carry the service data.
  • the inter-group routing table is further configured with an aging switch and an aging time.
  • the inter-group routing module is further configured to report to the host to request the host to configure a new inter-group routing table when the aging switch of the inter-group routing table is enabled and the aging time is reached.
  • the intra-group routing table is also configured with an aging switch and an aging time;
  • the intra-group routing module is further configured to report to the host to request the host to configure a new intra-group routing table when the aging switch of the intra-group routing table is enabled and the aging time is reached.
  • the embodiments of the invention include the following advantages:
  • in the embodiments of the present invention, the message structure is agreed between the host service layer and the hardware processing unit, so that the host can send the message directly to the hardware processing unit after it is transparently transmitted through the host driver layer, and the hardware processing unit performs acceleration processing according to the corresponding identifiers in the message.
  • therefore, in this method the interaction between the host service layer and the hardware processing unit does not require a dedicated driver, which shields the service layer from dependence on specific underlying drivers.
  • the hardware processing unit can run on different service platforms, and the heterogeneous capabilities of the logic are enhanced, thereby improving the dynamics and flexibility in the business process.
  • FIG. 1 is a flow chart showing the steps of an embodiment of a data processing method of the present invention
  • FIG. 2 is a schematic structural diagram of a message according to an embodiment of the present invention.
  • FIG. 3 is a flow chart of steps of a method for speeding up processing of service data in an embodiment of the present invention
  • FIG. 4 is a schematic diagram showing the internal structure of an inter-group routing module according to an embodiment of the present invention.
  • FIG. 5 is a flow chart showing the steps of a method for the acceleration processing group to notify each acceleration processing module to accelerate the service data in the embodiment of the present invention
  • FIG. 6 is a schematic diagram showing the internal structure of an acceleration processing group in an embodiment of the present invention.
  • FIG. 7 is a flowchart of a method for sending a feedback message by an intra-group routing module according to an embodiment of the present invention
  • FIG. 8 is a schematic structural diagram of a data processing system according to an embodiment of the present invention.
  • FIG. 9 is a schematic structural diagram of another data processing system according to an embodiment of the present invention.
  • Figure 10 is a block diagram showing the structure of an embodiment of a data processing apparatus of the present invention.
  • FIG. 11 is a structural block diagram of a processing module in an embodiment of the present invention.
  • FIG. 12 is a structural block diagram of an acceleration processing group in an embodiment of the present invention.
  • FIG. 13 is a schematic structural diagram of another data processing system according to an embodiment of the present invention.
  • FIG. 1 is a flowchart of a data processing method according to an embodiment of the present invention.
  • the method is applied to a scenario in which the hardware processing unit accelerates the service data sent by the host.
  • the concept of the host and the hardware processing unit is the same as the prior art, that is, the host generally refers to a system mainly composed of one or more CPUs.
  • the functions of the service layer and the driver layer are implemented by the CPU executing the software code stored in the memory; the hardware processing unit refers to a unit implemented by a hardware device such as an FPGA or an ASIC, and is used to process the data sent by the host service layer (mainly to perform acceleration processing).
  • the host and the hardware processing unit are connected through an interconnection interface.
  • the data processing method in the embodiment of the present invention is implemented by a hardware processing unit, and may include:
  • Step 101: Receive a request message sent by the host service layer and transparently transmitted through the host driver layer, where the request message includes at least one acceleration type identifier and service data to be accelerated, and each acceleration type identifier corresponds to one kind of acceleration processing.
  • the "transparent transmission" in this step means that when the request message passes through the driver layer of the host, the driver layer does not change the content of the request message, but only encapsulates the message and then passes it on. In this process, regardless of the acceleration task, the driver layer only completes the encapsulation and transmission of the request message and is not involved in parsing or changing its content. Therefore, in this embodiment, even if the hardware processing unit changes, the functionality of the driver layer does not need to change, so the service layer's dependence on specific underlying drivers can be shielded.
  • the "request message" in this embodiment refers to a request message with a fixed message structure agreed between the host and the hardware processing unit; it can be "transparently transmitted" to the hardware processing unit without the driver layer needing to perceive its specific content or the data processing involved, and the hardware processing unit then parses the request message and performs data processing according to the parsing result.
  • the request message includes at least an acceleration type identifier and service data to be accelerated, wherein each acceleration type identifier corresponds to one kind of acceleration processing, and the hardware processing unit can learn from the acceleration type identifier which acceleration processing to perform on the service.
  • Step 102 Perform at least one acceleration processing corresponding to the at least one acceleration type identifier on the service data.
  • after the hardware processing unit parses the acceleration type identifier and the service data in the request message, the hardware processing unit can perform the acceleration processing corresponding to the acceleration type identifier on the service data.
  • the message structure is agreed between the host and the hardware processing unit, so that the host can directly send a request message to the hardware processing unit, and the hardware processing unit performs request message parsing and data processing.
  • the interaction between the service layer of the host and the hardware processing unit does not require dedicated driver cooperation, so the service layer can be shielded from dependence on specific underlying drivers.
  • the hardware processing unit can run on different service platforms, and the heterogeneous capabilities of the logic are enhanced, thereby improving the dynamics and flexibility in the business process.
  • when the request message sent by the host to the hardware processing unit includes multiple acceleration type identifiers and the hardware processing unit is required to perform multiple kinds of acceleration processing, the request message may further include an acceleration order identifier in one-to-one correspondence with each acceleration type identifier, which is used to indicate the order of acceleration processing.
  • the acceleration processing corresponding to the multiple acceleration type identifiers may be performed on the service data in the order indicated by the multiple acceleration order identifiers.
  • the hardware processing unit may perform the acceleration processing corresponding to the multiple acceleration type identifiers on the service data in the order indicated by the multiple acceleration order identifiers, thereby implementing pipelined processing of the service data and improving processing efficiency.
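  • As an illustrative sketch only (none of the type or function names below appear in the patent; they are assumptions), the top-level behaviour of steps 101 and 102 with multiple acceleration type identifiers can be pictured as parsing the request message and invoking one acceleration routine per acceleration type identifier, in the order given by the acceleration order identifiers:

```c
/* Hypothetical sketch of the hardware processing unit's top-level dispatch.
 * All names (acc_step_t, request_msg_t, run_acceleration) are illustrative,
 * not identifiers defined by the patent. */
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint16_t acc_type;   /* acceleration type identifier (Type_acc)   */
    uint16_t acc_order;  /* acceleration order identifier (ACC_seq)   */
} acc_step_t;

typedef struct {
    acc_step_t *steps;   /* parsed from the service header            */
    size_t      n_steps;
    uint8_t    *data;    /* service data to be accelerated (Acc_data) */
    size_t      len;
} request_msg_t;

/* Placeholder for the acceleration logic selected by acc_type. */
extern void run_acceleration(uint16_t acc_type, uint8_t *data, size_t len);

static int by_order(const void *a, const void *b)
{
    return ((const acc_step_t *)a)->acc_order -
           ((const acc_step_t *)b)->acc_order;
}

/* Step 102: perform the acceleration processing corresponding to each
 * acceleration type identifier, in the order indicated by ACC_seq. */
void process_request(request_msg_t *req)
{
    qsort(req->steps, req->n_steps, sizeof(acc_step_t), by_order);
    for (size_t i = 0; i < req->n_steps; i++)
        run_acceleration(req->steps[i].acc_type, req->data, req->len);
}
```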
  • a field area and a data area may be set in a message transmitted between the host and the hardware processing unit, where the field area includes a service header field and a control header field, and the data area is used to carry the service data and the processed service data.
  • the message structure of the message may include a service header, a control header, and service data.
  • other information may also be included in the message.
  • the service header includes a Ser_type field, a Ser_cntn field, an ACC_seqn field, a Type_accn field, a slice_numn field, and a port_numn field.
  • the Ser_type field indicates the direction of the message, for example, whether it is sent by the host to the hardware processing unit or returned by the hardware processing unit to the host; the value of this field can be used to distinguish the request message sent by the host from the feedback message sent by the hardware processing unit.
  • the ACC_seqn field indicates a specific acceleration order.
  • the Type_accn field indicates a specific acceleration type
  • the Ser_cntn field indicates the number of slice_numn/port_numn pairs.
  • the slice_numn field indicates the identifier of the acceleration processing group, and the port_numn field indicates the identifier of the acceleration processing module.
  • the control header (Reg_cmd) is used to construct a virtual register read and write channel.
  • the control header contains Reg_act field, Reg_cntn field, Re_err field, Addrn field and valuen field.
  • the Reg_act field indicates the type of the control message, whether it is configuration information or other read/write information;
  • the Re_err field carries flag information indicating whether the control state is correct or erroneous;
  • the Reg_cntn field indicates the number of Addrn/valuen pairs;
  • the Addrn field indicates the address information that the acceleration logic can operate on, and the valuen field indicates the value corresponding to the Addrn field.
  • the acceleration data (Acc_data) is used to carry the service data that needs to be processed or the result of completed processing; Len represents the data length, and Checksum represents the checksum used for verification.
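  • For concreteness, a minimal C sketch of the message layout described above is given below; the patent does not specify field widths or byte ordering, so the sizes and struct packing here are assumptions for illustration only.

```c
#include <stdint.h>

/* Service header: a Ser_type/Ser_cnt prefix followed by Ser_cnt entries,
 * each pairing an acceleration step with its target group and module.
 * Field widths are assumed, not taken from the patent. */
typedef struct {
    uint8_t acc_seq;     /* ACC_seq:  acceleration order identifier          */
    uint8_t type_acc;    /* Type_acc: acceleration type identifier           */
    uint8_t slice_num;   /* identifier of the acceleration processing group  */
    uint8_t port_num;    /* identifier of the acceleration processing module */
} ser_entry_t;

typedef struct {
    uint8_t     ser_type;  /* message direction: request vs. feedback        */
    uint8_t     ser_cnt;   /* number of slice_num/port_num pairs             */
    ser_entry_t entry[1];  /* ser_cnt entries in practice                    */
} service_header_t;

/* Control header (Reg_cmd): builds a virtual register read/write channel. */
typedef struct {
    uint32_t addr;         /* Addr: register address the acceleration logic
                              can operate on                                 */
    uint32_t value;        /* value for the corresponding Addr               */
} reg_pair_t;

typedef struct {
    uint8_t    reg_act;    /* type of control message (config / other R/W)   */
    uint8_t    reg_cnt;    /* number of Addr/value pairs                     */
    uint8_t    re_err;     /* control-state flag: correct or error           */
    reg_pair_t pair[1];    /* reg_cnt pairs in practice                      */
} control_header_t;

/* Acceleration data (Acc_data): service data or the processed result. */
typedef struct {
    uint32_t len;          /* Len: data length                               */
    uint32_t checksum;     /* Checksum: verification of the payload          */
    uint8_t  payload[];    /* service data to be accelerated                 */
} acc_data_t;
```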
  • the host can send the request message to the hardware processing unit via the interconnect interface.
  • the hardware processing unit may specifically include a receiving module, an inter-group routing module, and at least one accelerated processing group.
  • there may be one or more inter-group routing modules, and each inter-group routing module may be implemented by programming an FPGA chip and can select among different intra-group routing modules; the intra-group routing modules may also be implemented by programming FPGA resources.
  • the intra-group routing module in each acceleration processing group is connected to multiple acceleration processing modules; different acceleration processing modules can implement different acceleration processing, and the acceleration processing modules of different groups may be completely or partially identical.
  • the inter-group routing module includes an inter-group routing table, where the inter-group routing table includes a correspondence between the acceleration type identifier and the acceleration processing group.
  • when the hardware processing unit receives a message sent by the host, the message may specifically be received by the receiving module of the hardware processing unit.
  • the process of the at least one acceleration processing corresponding to the at least one acceleration type identifier of the service data by the hardware processing unit, as shown in FIG. 3, may include:
  • Step 301 The inter-group routing module receives the request message sent by the receiving module.
  • the inter-group routing module can be programmed by an FPGA chip, and its structure can be as shown in FIG. 4, and is mainly composed of four parts:
  • the adaptation module (Adaption) mainly completes the protocol stack interface adaptation work, adapting the inter-group routing to the transport interface protocol.
  • the service header parsing engine parses the message structure constructed by the host's service layer, and the service header parsing engine performs different process operations through different acceleration type identifiers in the service header.
  • the slice forwarding table (Slice Table), that is, the inter-group routing configuration information, records the forwarding relationship between Type_acc in the message structure and the intra-group route (Slice_num).
  • the forwarding table may be sent by the service layer to the inter-group route in advance through a configuration message, and obtained by the service parsing engine from the configuration message.
  • the scheduling module includes:
  • the data delivery direction from the service layer (Switch_out): forwards the message to the intra-group routing module in the corresponding service aggregation resource pool according to the slice information in the forwarding table.
  • Accelerate data return direction (Switch_in): Acquire the reported acceleration data result from the intra-group routing module and pass the result to the internal adaptation module.
  • after receiving the message, the inter-group routing module performs step 302.
  • Step 302 The inter-group routing module parses the acceleration type identifier in the request message.
  • the inter-group routing module parses the request message through the service header parsing engine to obtain an acceleration type identifier.
  • Step 303 The inter-group routing module forwards the request message to the destination acceleration processing group according to the parsed acceleration type identifier and the inter-group routing table.
  • the inter-group routing module uses the inter-group routing table, that is, the slice forwarding table, to find the intra-group routing module corresponding to the acceleration type identifier, that is, the destination intra-group routing module, and obtains the identifier Slice_num of that intra-group routing module.
  • the inter-group routing module then sends the message through Switch_out to the destination acceleration processing group to which the destination intra-group routing module belongs.
  • the processing may be performed in sequence according to the acceleration order identifier.
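  • The forwarding decision of steps 302 and 303 amounts to looking up Type_acc in the Slice forwarding table and handing the message to Switch_out. A minimal sketch, with hypothetical names (slice_table_t, switch_out) that are not defined by the patent:

```c
#include <stddef.h>
#include <stdint.h>

/* One Slice-table entry: Type_acc -> Slice_num (intra-group route id). */
typedef struct {
    uint16_t type_acc;
    uint16_t slice_num;
} slice_entry_t;

typedef struct {
    slice_entry_t *entry;
    size_t         count;
} slice_table_t;

/* Placeholder for the Switch_out scheduling path that delivers the request
 * message to the intra-group routing module identified by slice_num. */
extern void switch_out(uint16_t slice_num, const void *msg, size_t len);

/* Returns 0 on success, -1 if no route exists for this acceleration type
 * (for example because the entry has aged out and the table needs to be
 * reconfigured by the host). */
int intergroup_forward(const slice_table_t *tbl, uint16_t type_acc,
                       const void *msg, size_t len)
{
    for (size_t i = 0; i < tbl->count; i++) {
        if (tbl->entry[i].type_acc == type_acc) {
            switch_out(tbl->entry[i].slice_num, msg, len);
            return 0;
        }
    }
    return -1; /* missing route: report to the host (see aging handling) */
}
```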
  • Step 304: The destination acceleration processing group performs acceleration processing on the service data.
  • the acceleration processing group includes a parsing module, an intra-group routing module, and at least one acceleration processing module, wherein each acceleration processing module is configured to perform different types of acceleration processing on the same service.
  • the intra-group routing module includes an intra-group routing table, and the intra-group routing table includes a correspondence between the acceleration type identifier and the acceleration processing module.
  • the process in which the destination acceleration processing group performs acceleration processing on the service data may include:
  • Step 501 The parsing module of the destination acceleration processing group parses the request message, caches the service data, and generates an internal forwarding message according to the parsing result, where the internal forwarding message includes an acceleration type identifier and a cache address of the service data.
  • the acceleration processing group is mainly composed of three parts:
  • the parsing module is configured to complete the parsing of the request message, the separation of the service header, the control header, and the service data.
  • the acceleration processing group is provided with a unified cache space; after the parsing module requests a cache address from the cache space, it caches the parsed service data at that cache address in the corresponding cache space, combines the requested cache address with the service information in the service header and the control information in the control header to generate an internal forwarding message, and forwards the internal forwarding message to the intra-group routing module.
  • an intra-group routing module, configured to store the correspondence between each Type_acc and an acceleration processing module (Port) in the group, that is, the intra-group routing table (Acc Table), where this information may be sent by the service layer to the intra-group routing module in advance through a configuration message.
  • the acceleration processing module (Port), that is, the acceleration logic, is a unit that implements a specific service function or service logic.
  • its entry consists of a data channel and a virtual register channel; the register channel is used for internal register configuration and reading, and the data channel is used to carry the data to be accelerated into the acceleration logic for acceleration processing.
  • the data parsing engine caches the parsed service data at the requested cache address, and generates the internal forwarding message from the requested address information together with the service information in the service header and the control information in the control header.
  • Step 502 The parsing module sends an internal forwarding message to the intra-group routing module of the destination acceleration processing group.
  • Step 503 The intra-group routing module sends an internal forwarding message to the destination acceleration processing module according to the acceleration type identifier and the intra-group routing table.
  • the destination intra-group routing module parses the received internal forwarding message to learn the acceleration type identifier, searches the intra-group routing table for the acceleration processing module (Port) corresponding to the acceleration type identifier, that is, the destination acceleration processing module, and then sends the internal forwarding message to the destination acceleration processing module.
  • Step 504 The destination acceleration processing module acquires service data according to the cache address included in the internal forwarding message and accelerates the service data.
  • the destination acceleration processing module reads the service data from the cache space according to the cache address included in the internal forwarding message, processes the service data, and then caches the processed data in the cache space according to the cache address.
  • the cache space may also mark the processed data written by the acceleration processing module to indicate that the data stored at the cache address has been processed.
  • when the internal forwarding message includes an acceleration order identifier, the destination acceleration processing module may cache the accelerated service data and notify the intra-group routing module; the intra-group routing module then sends the internal forwarding message to the next destination acceleration processing module according to the acceleration order identifier, the next destination acceleration processing module performs acceleration processing on the data cached by the previous destination acceleration processing module and repeats the actions of the previous destination acceleration processing module, and this continues until the acceleration sequence indicated by the acceleration order identifier ends.
  • the intra-group routing module may also send the internal forwarding message to the destination acceleration processing modules corresponding to the acceleration type identifiers of this group of services either simultaneously or in sequence according to the acceleration order identifier included in the service control information.
  • the destination acceleration processing module obtains and processes the data in the cache space corresponding to the cache address carried in the internal forwarding message, and stores the processed data back into the cache space corresponding to that cache address.
  • the acceleration processing module can read the mark information of the data in the cache space corresponding to the cache address to determine whether the previous destination acceleration processing module in the acceleration order has finished processing the data; once it has, the module reads and processes the data in that cache space, and this continues until all acceleration processing modules in the acceleration order have processed the data.
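  • The chained behaviour described above can be sketched as follows: the intra-group routing module walks the acceleration steps in the order given by the acceleration order identifier, and each destination acceleration processing module works in place on the shared cache space and marks the cache slot as processed so the next module knows it may start. The names (cache_slot_t, port_accelerate) are illustrative assumptions, not identifiers from the patent.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* One slot in the acceleration group's unified cache space. */
typedef struct {
    uint8_t *data;
    size_t   len;
    bool     processed;  /* set by a module when its pass is complete */
} cache_slot_t;

/* Intra-group routing table entry: Type_acc -> Port (acceleration module). */
typedef struct {
    uint16_t type_acc;
    uint16_t port_num;
} acc_table_entry_t;

/* Placeholder for the acceleration logic behind a given Port: it reads the
 * service data at the cache address, processes it, and writes the result
 * back into the same cache slot. */
extern void port_accelerate(uint16_t port_num, cache_slot_t *slot);

/* Dispatch the steps of one internal forwarding message in ACC_seq order
 * (type_acc_in_order[] is assumed to be sorted by the order identifier). */
void intragroup_dispatch(const acc_table_entry_t *acc_table, size_t table_len,
                         const uint16_t *type_acc_in_order, size_t n_steps,
                         cache_slot_t *slot)
{
    for (size_t s = 0; s < n_steps; s++) {
        for (size_t i = 0; i < table_len; i++) {
            if (acc_table[i].type_acc == type_acc_in_order[s]) {
                slot->processed = false;
                port_accelerate(acc_table[i].port_num, slot);
                slot->processed = true;   /* next module may now proceed */
                break;
            }
        }
    }
}
```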
  • the method further includes the following steps:
  • Step 701 The destination acceleration processing module caches the processed service data.
  • Step 702 When the service data is all accelerated, the routing module in the destination group reads the cached processed service data.
  • when the destination intra-group routing module learns, from the mark information of the data in the cache space, that the service data has been completely processed, it reads the processed data from the cache space.
  • Step 703 The routing module in the destination group generates a feedback message of the request message according to the processed service data.
  • the routing module in the destination group generates a feedback message according to the processed data according to the same fixed message structure as the request message.
  • Step 704 The routing module in the destination group sends a feedback message to the inter-group routing module, so that the inter-group routing module sends the feedback message to the host.
  • the destination group routing module returns the feedback message to the host service layer according to the reverse path of the message transmission path.
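  • Because the feedback message reuses the request message's structure and differs only in the message type field (Ser_type), generating it can be sketched as copying the headers, flipping the direction, and attaching the processed data read from the cache. The layout and helper names below are assumptions for illustration.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Assumed direction codes for Ser_type; the patent only states that the
 * field distinguishes request messages from feedback messages. */
enum { SER_TYPE_REQUEST = 0, SER_TYPE_FEEDBACK = 1 };

/* Minimal stand-in for the agreed message layout (see the struct sketch
 * earlier): a direction byte, the remaining header bytes, then Len and
 * the payload. The header size is an arbitrary illustrative choice. */
typedef struct {
    uint8_t  ser_type;
    uint8_t  header[31];   /* remaining service/control header bytes */
    uint32_t len;
    uint8_t  payload[];
} msg_t;

/* Build the feedback message for a completed request: same structure,
 * Ser_type flipped, payload replaced with the processed service data.
 * The caller must have allocated fb with room for processed_len bytes. */
void build_feedback(const msg_t *req, msg_t *fb,
                    const uint8_t *processed, uint32_t processed_len)
{
    memcpy(fb->header, req->header, sizeof(fb->header));
    fb->ser_type = SER_TYPE_FEEDBACK;
    fb->len      = processed_len;
    memcpy(fb->payload, processed, processed_len);
    /* a Checksum over the new payload would be recomputed here */
}
```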
  • the inter-group routing table and the intra-group routing table can be obtained by:
  • the host service layer sends a message to the inter-group routing module and the intra-group routing module, where the message carries the correspondence among the acceleration type identifier, the intra-group routing module, and the acceleration processing module, for example, service number + Slice number + Port number.
  • the inter-group routing module and the intra-group routing module respectively establish an inter-group routing table and an intra-group routing table.
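  • A sketch of how the configuration message described above (service number + Slice number + Port number triples) could populate both tables; the entry layouts and function names are assumptions for illustration only.

```c
#include <stddef.h>
#include <stdint.h>

/* One triple carried by the host's configuration message. */
typedef struct {
    uint16_t type_acc;   /* service (acceleration type) number */
    uint16_t slice_num;  /* acceleration processing group      */
    uint16_t port_num;   /* acceleration processing module     */
} route_cfg_t;

typedef struct { uint16_t type_acc, slice_num; } slice_tbl_entry_t; /* inter-group */
typedef struct { uint16_t type_acc, port_num;  } acc_tbl_entry_t;   /* intra-group */

/* Populate the inter-group (Slice) table and the intra-group (Acc) table
 * from the same list of configuration triples; both arrays are assumed to
 * hold at least n entries. */
void install_routes(const route_cfg_t *cfg, size_t n,
                    slice_tbl_entry_t *slice_tbl, acc_tbl_entry_t *acc_tbl)
{
    for (size_t i = 0; i < n; i++) {
        slice_tbl[i].type_acc  = cfg[i].type_acc;
        slice_tbl[i].slice_num = cfg[i].slice_num;
        acc_tbl[i].type_acc    = cfg[i].type_acc;
        acc_tbl[i].port_num    = cfg[i].port_num;
    }
}
```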
  • the host service layer can also configure an aging switch and an aging time for the inter-group routing table and the intra-group routing table.
  • the aging time and the aging switch are configured by the service layer through the register channel, and the information of the register channel is carried in the Reg_cmd of the message.
  • when the aging switch of the inter-group routing table is enabled and the aging time is reached, the inter-group routing module reports to the host to request the host to configure a new inter-group routing table.
  • when the aging switch of the intra-group routing table is enabled and the aging time is reached, the intra-group routing module reports to the host to request the host to configure a new intra-group routing table.
  • the inter-group routing module and the intra-group routing module maintain the aging switch and the aging time of their routing tables. In some scenarios, if the routing entries in the inter-group routing table or the intra-group routing table have aged out when new service data is delivered, the inter-group routing module or the intra-group routing module records this abnormal scenario and returns the abnormality to the service layer, requesting the service layer to deliver the configuration message again.
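  • The aging behaviour can be sketched as a periodic check: when the aging switch is enabled and the aging time has elapsed, the routing module raises a report so that the host service layer re-delivers the configuration message. The timing source and reporting helper below are assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    bool     aging_enabled;   /* aging switch, configured by the service
                                 layer through the register channel       */
    uint64_t aging_deadline;  /* absolute time at which entries age out   */
    bool     valid;           /* false once the table has aged out        */
} routing_table_state_t;

/* Placeholder for the upstream report that asks the host to configure a
 * new routing table (for example carried in the Reg_cmd control header). */
extern void report_to_host_request_new_table(void);

/* Called periodically, or when new service data arrives and no valid
 * route exists for it. */
void check_aging(routing_table_state_t *t, uint64_t now)
{
    if (t->aging_enabled && t->valid && now >= t->aging_deadline) {
        t->valid = false;
        report_to_host_request_new_table();
    }
}
```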
  • the interconnection manner between the host and the hardware processing unit may be a PCIe interface, or may be another interface protocol
  • the FPGA of the inter-group routing module is interconnected with the acceleration processing groups through an interconnection interface (which may be PCIe or another interconnect interface), and the intra-group routing module and the acceleration processing modules can be interconnected by using common interface resources.
  • the specific structure is shown in FIG. 8 and FIG. 9, for example.
  • a server chassis has a backplane.
  • the backplane is equipped with multiple sets of CPU resources, memory, south bridge and other chips.
  • the host is interconnected with an FPGA resource as an inter-group routing module.
  • the interconnection mode may be a PCIe interface or other interface protocol.
  • the FPGA of the inter-group routing module is interconnected with the FPGA of each intra-group routing module through an interconnection interface (which may be a PCIe interface or another interconnection interface), and each intra-group routing module is interconnected with multiple FPGA acceleration resources, that is, multiple acceleration processing modules, through common interface resources. Each intra-group routing module and its acceleration processing modules can form an acceleration processing group.
  • the above structure is one in which the inter-group routing module manages the intra-group routing modules; each intra-group routing module can be extended to integrate new acceleration processing modules, and each inter-group routing module can also be extended to integrate new acceleration processing groups consisting of an intra-group routing module and acceleration processing modules, which enhances the sustainable integration capability and scalability of the entire system.
  • the structure shown in FIG. 9 differs from the structure shown in FIG. 8 only in that the host in this example is interconnected with the inter-group routing module through a network element such as a network, a virtual cloud, or an intermediate device.
  • in this example there is a device at the front end of the inter-group routing module that peers with the host's sender; this device can implement a standard protocol stack or another custom interaction protocol, and delivers the received messages to the inter-group routing module through an internal bus structure. After a message is sent to the inter-group routing module, the subsequent processing is identical.
  • in this entire structure the host is completely separated from the inter-group routing module and the acceleration processing groups; therefore, multiple server groups in the network may share the acceleration processing groups, the hierarchy of inter-group routing modules may also be extended, and the corresponding types can be defined in the service header of the message, so the sustainable integration capability and scalability of the entire system are enhanced.
  • the host service layer can organize acceleration processing groups according to the needs of the service and the urgency of acceleration, forming acceleration processing groups with different functions; different resources can be allocated to different services, so the acceleration capability can be refined per service.
  • the embodiment of the present invention discloses a data processing apparatus.
  • referring to FIG. 10, which is a structural block diagram of the data processing apparatus in this embodiment, the data processing apparatus is applied to a scenario in which service data sent by a host is accelerated, and the apparatus includes:
  • the receiving module 1001 is configured to receive a request message sent by the host service layer and transparently transmitted through the host driver layer, where the request message includes at least one acceleration type identifier and service data to be accelerated, where each acceleration The type identifier corresponds to an acceleration process;
  • the processing module 1002 is configured to perform, on the service data received by the receiving module 1001, at least one acceleration processing that is in one-to-one correspondence with the at least one acceleration type identifier.
  • the message structure is agreed between the host and the data processing device, so that the host can directly send a message to the data processing device, and the data processing device performs message parsing and data processing according to the foregoing unit.
  • the interaction between the data processing device and the host does not require a dedicated driver cooperation, so that the host business layer can be shielded from the specific underlying driver.
  • the data processing device can run on different service platforms, and the heterogeneous capabilities of the logic are enhanced, thereby improving the dynamics and flexibility in the business process.
  • the message may include multiple acceleration type identifiers, and the message further includes: an acceleration order identifier corresponding to each acceleration type identifier, the acceleration order identifier indicating an order of acceleration processing;
  • the processing module 1002 is further configured to perform acceleration processing corresponding to the plurality of acceleration type identifiers on the service data in an order indicated by the multiple acceleration order identifiers.
  • the processing module 1002 can include:
  • the inter-group routing module 1101 and at least one acceleration processing group 1102.
  • the inter-group routing module 1101 includes an inter-group routing table, where the inter-group routing table includes a correspondence between an acceleration type identifier and an acceleration processing group 1102.
  • the inter-group routing module 1101 is configured to: receive the request message sent by the receiving module 1001; parse out the acceleration type identifier in the request message; and forward the request message to the destination acceleration processing group 1102 according to the parsed acceleration type identifier and the inter-group routing table;
  • the acceleration processing group 1102 is configured to perform acceleration processing on the service data.
  • each acceleration processing group 1102 includes a parsing module 1120, an intra-group routing module 1121, and at least one acceleration processing module 1122, wherein each acceleration processing module 1122 is configured to perform a different type of acceleration processing on the same service.
  • the parsing module 1120 is configured to parse the request message sent by the inter-group routing module 1101, cache the service data, and generate an internal forwarding message according to the parsing result, where the internal forwarding message includes the acceleration type identifier and the cache address of the service data, and to send the internal forwarding message to the intra-group routing module 1121;
  • the intra-group routing module 1121 includes an intra-group routing table, where the intra-group routing table includes a correspondence between an acceleration type identifier and an acceleration processing module, and the intra-group routing module 1121 is configured to send the internal forwarding message received from the parsing module 1120 to the destination acceleration processing module 1122 according to the acceleration type identifier and the intra-group routing table;
  • the acceleration processing module 1122 is configured to acquire the service data according to the cache address included in the internal forwarding message received from the intra-group routing module 1121, and perform acceleration processing on the service data.
  • the acceleration processing module 1122 is further configured to: when the internal forwarding message includes an acceleration order identifier, cache the accelerated service data, and notify the intra-group routing module 1121;
  • the intra-group routing module 1121 is further configured to: after receiving the notification sent by the acceleration processing module, send the internal forwarding message to the next destination acceleration processing module according to the acceleration order identifier, so that the next destination acceleration processing module performs acceleration processing on the data cached by the destination acceleration processing module until the acceleration sequence indicated by the acceleration order identifier ends.
  • the acceleration processing module 1122 is further configured to cache the processed service data.
  • the intra-group routing module 1121 is further configured to read the processed service data cached by the acceleration processing module 1122 when the service data is all accelerated, and generate the request according to the processed service data. a feedback message corresponding to the message; the feedback message is sent to the inter-group routing module 1101, so that the inter-group routing module 1101 sends the feedback message to the host.
  • the inter-group routing table is further configured with an aging switch and an aging time
  • the inter-group routing module 1101 is further configured to report to the host to request the host to configure a new inter-group routing table when the aging switch of the inter-group routing table is enabled and the aging time is reached.
  • the routing table in the group is also configured with an aging switch and an aging time
  • the intra-group routing module 1121 is further configured to report to the host to request the host to configure a new intra-group routing table when the aging switch of the intra-group routing table is enabled and the aging time is reached.
  • the receiving module 1001 may be configured to receive a message sent by the host through a PCIe interface; or receive a message sent by the host through a network or a virtual cloud or an intermediate device.
  • the feedback message has the same message structure as the request message, wherein the message structure includes a message type field for distinguishing the feedback message from the request message.
  • the request message is provided with a field area and a data area, where the field area includes a service header field and a control header field, the service header field includes the acceleration type identifier, and the data area is used to carry the service data.
  • the embodiment of the present invention discloses a data processing apparatus.
  • the apparatus includes a CPU 131, a memory 132, and a hardware processing unit 133.
  • the CPU is respectively connected to a memory and a hardware processing unit;
  • the hardware processing unit can be an FPGA, an ASIC, or the like;
  • the CPU and the memory can be considered to constitute a host; the CPU reads the code in the memory to execute programs, and the executed programs include, at the software level, a host driver layer and a host service layer; these software and hardware architectures are known to those skilled in the art and are not described again here.
  • the hardware processing unit is configured to receive a request message sent from the host service layer and transparently transmitted through the host driver layer, where the request message includes at least one acceleration type identifier and service data to be accelerated, where each acceleration type The identification corresponds to an acceleration process;
  • the hardware processing unit is further configured to perform at least one acceleration processing on the service data in one-to-one correspondence with the at least one acceleration type identifier.
  • the device by agreeing on a message structure between the CPU and the hardware processing unit, enables the CPU to directly send a request message to the hardware processing unit, and the hardware processing unit performs request message parsing and data processing.
  • the interaction between the CPU and the hardware processing unit in the device does not require a dedicated driver cooperation, thereby shielding the CPU from the dependence on the specific underlying driver.
  • for the apparatus embodiment, the description is relatively simple, and for relevant parts, reference may be made to the description of the method embodiment.
  • the disclosed systems, devices, and methods may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • the division of units is only a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present invention provide a data processing method and apparatus. The data processing method is applied to a scenario in which a hardware processing unit accelerates service data sent by a host, and is performed by the hardware processing unit. The method includes: receiving a request message sent by the host service layer and transparently transmitted through the host driver layer, where the request message includes at least one acceleration type identifier and service data to be accelerated, and each acceleration type identifier corresponds to one kind of acceleration processing; and performing, on the service data, at least one kind of acceleration processing in one-to-one correspondence with the at least one acceleration type identifier. In this method, the interaction between the host service layer and the hardware processing unit requires no dedicated driver, so the service layer's dependence on specific underlying drivers can be shielded. In this method, the hardware processing unit can run on different service platforms, and the heterogeneous capability of the logic is enhanced, thereby improving the dynamics and flexibility of service processing.

Description

A data processing method and apparatus
This application claims priority to Chinese Patent Application No. CN201510288092.2, filed with the Chinese Patent Office on May 29, 2015 and entitled "A data processing method and apparatus", which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a data processing method and a data processing apparatus.
Background
With the development of Internet technologies, new types of services keep emerging, network data types keep diversifying, and network traffic surges, which places higher requirements on the processing capability of processing devices. To improve processing capability, current processing devices increasingly use hardware processing units (such as FPGAs and ASICs) to accelerate some services.
In the existing CPU+FPGA hardware acceleration solution, the CPU (which can be regarded as the "host") executes the code of the service layer (also commonly called the "application layer" or "upper layer") and of the underlying driver. The service layer generates the original acceleration source data that needs to be accelerated, or receives original acceleration source data scheduled from other service layers. The underlying driver cooperates with the service layer to complete the parsing of scheduling instructions, data conversion, data encapsulation, data transmission, and so on. The FPGA receives the data delivered by the underlying driver, completes the acceleration processing of the data, and returns the processed data to the service layer through the underlying driver.
However, when accelerating different service types, this solution depends on the underlying driver; that is, the service layer can complete FPGA acceleration of the corresponding function only through a dedicated underlying driver matched to each service type. Therefore, in the prior art, every service type that needs to be accelerated requires a customized underlying driver, resulting in poor dynamics and flexibility.
Summary
Embodiments of the present invention provide a data processing method and apparatus, to solve the prior-art problem that a customized underlying driver is required for every service acceleration scenario, resulting in poor dynamics and flexibility.
According to a first aspect, an embodiment of the present invention provides a data processing method, applied to a scenario in which a hardware processing unit accelerates service data sent by a host, where the method is performed by the hardware processing unit and includes:
receiving a request message sent by the host service layer and transparently transmitted through the host driver layer, where the request message includes at least one acceleration type identifier and service data to be accelerated, and each acceleration type identifier corresponds to one kind of acceleration processing; and
performing, on the service data, at least one kind of acceleration processing in one-to-one correspondence with the at least one acceleration type identifier.
With reference to the first aspect, in a first possible implementation, the request message includes multiple acceleration type identifiers, and the request message further includes an acceleration order identifier in one-to-one correspondence with each acceleration type identifier, where the acceleration order identifier is used to indicate the order of acceleration processing;
the performing, on the service data, at least one kind of acceleration processing respectively corresponding to the at least one acceleration type identifier includes:
performing, on the service data in the order indicated by the multiple acceleration order identifiers, multiple kinds of acceleration processing in one-to-one correspondence with the multiple acceleration type identifiers.
With reference to the first aspect and/or the first possible implementation, in a second possible implementation, the hardware processing unit includes a receiving module, an inter-group routing module, and at least one acceleration processing group;
the inter-group routing module includes an inter-group routing table, where the inter-group routing table includes correspondences between acceleration type identifiers and acceleration processing groups;
the receiving a request message sent by the host service layer and transparently transmitted through the host driver layer includes: receiving, by the receiving module, the request message sent by the host service layer and transparently transmitted through the host driver layer;
the performing, on the service data, at least one kind of acceleration processing in one-to-one correspondence with the at least one acceleration type identifier includes:
receiving, by the inter-group routing module, the request message sent by the receiving module;
parsing out, by the inter-group routing module, the acceleration type identifier in the request message;
forwarding, by the inter-group routing module, the request message to a destination acceleration processing group according to the parsed acceleration type identifier and the inter-group routing table; and
performing, by the destination acceleration processing group, acceleration processing on the service data.
With reference to the first aspect, and/or the first possible implementation, and/or the second possible implementation, in a third possible implementation, the acceleration processing group includes a parsing module, an intra-group routing module, and at least one acceleration processing module, where each acceleration processing module is configured to perform a different type of acceleration processing on the same service;
the intra-group routing module includes an intra-group routing table, where the intra-group routing table includes correspondences between acceleration type identifiers and acceleration processing modules;
the performing, by the destination acceleration processing group, acceleration processing on the service data includes:
parsing, by the parsing module of the destination acceleration processing group, the request message, caching the service data, and generating an internal forwarding message according to the parsing result, where the internal forwarding message includes the acceleration type identifier and a cache address of the service data;
sending, by the parsing module, the internal forwarding message to the intra-group routing module of the destination acceleration processing group;
sending, by the intra-group routing module, the internal forwarding message to a destination acceleration processing module according to the acceleration type identifier and the intra-group routing table; and
acquiring, by the destination acceleration processing module, the service data according to the cache address included in the internal forwarding message, and performing acceleration processing on the service data.
With reference to the first aspect, and/or the first possible implementation, and/or the second possible implementation, and/or the third possible implementation, in a fourth possible implementation, when the internal forwarding message includes an acceleration order identifier, the performing, by the destination acceleration processing group, acceleration processing on the service data further includes:
caching, by the destination acceleration processing module, the accelerated service data, and notifying the intra-group routing module; and
sending, by the intra-group routing module, the internal forwarding message to a next destination acceleration processing module according to the acceleration order identifier, so that the next destination acceleration processing module performs acceleration processing on the data cached by the destination acceleration processing module, until the acceleration sequence indicated by the acceleration order identifier ends.
With reference to the first aspect, and/or the first possible implementation, and/or the second possible implementation, and/or the third possible implementation, and/or the fourth possible implementation, in a fifth possible implementation, the method further includes:
caching, by the destination acceleration processing module, the processed service data;
when all the service data has been accelerated, reading, by the destination intra-group routing module, the cached processed service data;
generating, by the destination intra-group routing module, a feedback message corresponding to the request message according to the processed service data; and
sending, by the destination intra-group routing module, the feedback message to the inter-group routing module, so that the inter-group routing module sends the feedback message to the host.
With reference to the first aspect, and/or any of the first to fifth possible implementations, in a sixth possible implementation, the feedback message has the same message structure as the request message, where the message structure includes a message type field for distinguishing the feedback message from the request message.
With reference to the first aspect, and/or any of the first to sixth possible implementations, in a seventh possible implementation, the request message is provided with a field area and a data area, where the field area includes a service header field and a control header field, the service header field includes the acceleration type identifier, and the data area is used to carry the service data.
With reference to the first aspect, and/or any of the first to seventh possible implementations, in an eighth possible implementation, the inter-group routing table is further configured with an aging switch and an aging time; and the method further includes:
when the aging switch of the inter-group routing table is enabled and the aging time is reached, reporting, by the inter-group routing module, to the host to request the host to configure a new inter-group routing table.
With reference to the first aspect, and/or any of the first to eighth possible implementations, in a ninth possible implementation, the intra-group routing table is further configured with an aging switch and an aging time; and the method further includes:
when the aging switch of the intra-group routing table is enabled and the aging time is reached, reporting, by the intra-group routing module, to the host to request the host to configure a new intra-group routing table.
According to a second aspect, an embodiment of the present invention further provides a data processing apparatus, applied to a scenario of accelerating service data sent by a host, where the apparatus includes:
a receiving module, configured to receive a request message that is sent by the service layer of the host and transparently passed through the driver layer of the host, where the request message includes at least one acceleration type identifier and service data to be accelerated, and each acceleration type identifier corresponds to one type of acceleration processing; and
a processing module, configured to perform, on the service data received by the receiving module, at least one type of acceleration processing in one-to-one correspondence with the at least one acceleration type identifier.
With reference to the second aspect, in a first possible implementation, the request message includes multiple acceleration type identifiers, and the request message further includes an acceleration order identifier in one-to-one correspondence with each acceleration type identifier, where the acceleration order identifier indicates the order of acceleration processing; and
the processing module is further configured to perform, on the service data and in the order indicated by the multiple acceleration order identifiers, the acceleration processing respectively corresponding to the multiple acceleration type identifiers.
With reference to the second aspect and/or the first possible implementation, in a second possible implementation, the processing module includes an inter-group routing module and at least one acceleration processing group;
the inter-group routing module includes an inter-group routing table, and the inter-group routing table includes correspondences between acceleration type identifiers and acceleration processing groups; the inter-group routing module is configured to receive the request message sent by the receiving module, parse out the acceleration type identifier in the request message, and forward the request message to a destination acceleration processing group according to the parsed-out acceleration type identifier and the inter-group routing table; and
the acceleration processing group is configured to perform acceleration processing on the service data.
With reference to the second aspect and/or any combination of the first and second possible implementations, in a third possible implementation, the acceleration processing group includes a parsing module, an intra-group routing module, and at least one acceleration processing module, where the acceleration processing modules are configured to perform different types of acceleration processing on a same service;
the parsing module is configured to parse the request message sent by the inter-group routing module, buffer the service data, generate an internal forwarding message according to the parsing result, where the internal forwarding message includes the acceleration type identifier and the buffer address of the service data, and send the internal forwarding message to the intra-group routing module;
the intra-group routing module includes an intra-group routing table, and the intra-group routing table includes correspondences between acceleration type identifiers and acceleration processing modules; the intra-group routing module is configured to send the internal forwarding message received from the parsing module to a destination acceleration processing module according to the acceleration type identifier and the intra-group routing table; and
the acceleration processing module is configured to obtain the service data according to the buffer address contained in the internal forwarding message received from the intra-group routing module, and perform acceleration processing on the service data.
With reference to the second aspect and/or any combination of the first to third possible implementations, in a fourth possible implementation, the acceleration processing module is further configured to: when the internal forwarding message contains an acceleration order identifier, buffer the service data after acceleration processing and notify the intra-group routing module; and
the intra-group routing module is further configured to: after receiving the notification sent by the acceleration processing module, send the internal forwarding message to a next destination acceleration processing module according to the acceleration order identifier, so that the next destination acceleration processing module performs acceleration processing on the data buffered by the destination acceleration processing module, until the acceleration order indicated by the acceleration order identifier ends.
With reference to the second aspect and/or any combination of the first to fourth possible implementations, in a fifth possible implementation, the acceleration processing module is further configured to buffer the processed service data; and
the intra-group routing module is further configured to: when all the service data has been accelerated, read the processed service data buffered by the acceleration processing module; generate, according to the processed service data, a feedback message corresponding to the request message; and send the feedback message to the inter-group routing module, so that the inter-group routing module sends the feedback message to the host.
With reference to the second aspect and/or any combination of the first to fifth possible implementations, in a sixth possible implementation, the feedback message and the request message have the same message structure, where the message structure includes a message type field used to distinguish the feedback message from the request message.
With reference to the second aspect and/or any combination of the first to sixth possible implementations, in a seventh possible implementation, the request message is provided with a field area and a data area, the field area contains the fields of a service header and of a control header, the field of the service header contains the acceleration type identifier, and the data area is used to carry the service data.
With reference to the second aspect and/or any combination of the first to seventh possible implementations, in an eighth possible implementation, the inter-group routing table is further configured with an aging switch and an aging time; and
the inter-group routing module is further configured to: when the aging switch of the inter-group routing table is on and the aging time is reached, report to the host, to request the host to configure a new inter-group routing table.
With reference to the second aspect and/or any combination of the first to eighth possible implementations, in a ninth possible implementation, the intra-group routing table is further configured with an aging switch and an aging time; and
the intra-group routing module is further configured to: when the aging switch of the intra-group routing table is on and the aging time is reached, report to the host, to request the host to configure a new intra-group routing table.
Compared with the prior art, the embodiments of the present invention include the following advantages:
By agreeing on a message structure between the service layer of the host and the hardware processing unit, the embodiments of the present invention allow the host to send a message directly to the hardware processing unit after the message is transparently passed through the driver layer of the host, and the hardware processing unit performs acceleration processing according to the corresponding identifiers in the message. Therefore, in this method the interaction between the service layer of the host and the hardware processing unit requires no dedicated driver, so the service layer's dependence on a specific underlying driver is shielded. The hardware processing unit can run on different service platforms, the heterogeneous capability of the logic is enhanced, and the dynamism and flexibility of service processing are improved.
Brief Description of the Drawings
FIG. 1 is a flowchart of the steps of an embodiment of a data processing method according to the present invention;
FIG. 2 is a schematic structural diagram of a message in an embodiment of the present invention;
FIG. 3 is a flowchart of the steps of a method for performing acceleration processing on service data in an embodiment of the present invention;
FIG. 4 is a schematic diagram of the internal structure of an inter-group routing module in an embodiment of the present invention;
FIG. 5 is a flowchart of the steps of a method in which a destination acceleration processing group notifies each acceleration processing module to perform acceleration processing on service data in an embodiment of the present invention;
FIG. 6 is a schematic diagram of the internal structure of an acceleration processing group in an embodiment of the present invention;
FIG. 7 is a flowchart of the steps of a method in which an intra-group routing module sends a feedback message in an embodiment of the present invention;
FIG. 8 is a schematic structural diagram of a data processing system in an embodiment of the present invention;
FIG. 9 is a schematic structural diagram of another data processing system in an embodiment of the present invention;
FIG. 10 is a structural block diagram of an embodiment of a data processing apparatus according to the present invention;
FIG. 11 is a structural block diagram of a processing module in an embodiment of the present invention;
FIG. 12 is a structural block diagram of an acceleration processing group in an embodiment of the present invention;
FIG. 13 is a schematic structural diagram of another data processing system in an embodiment of the present invention.
Detailed Description
To make the above objectives, features, and advantages of the present invention clearer and easier to understand, the present invention is further described in detail below with reference to the accompanying drawings and specific implementations.
Embodiment 1
Referring to FIG. 1, a flowchart of a data processing method according to an embodiment of the present invention is shown.
The method is applied to a scenario in which a hardware processing unit accelerates service data sent by a host. The concepts of the host and the hardware processing unit are the same as in the prior art: the host generally refers to a system composed mainly of one or more CPUs, which execute software code stored in a memory to implement the functions of the service layer and the driver layer; the hardware processing unit refers to a unit implemented by hardware devices such as an FPGA or ASIC, and is used to process (mainly to accelerate) the data sent by the service layer of the host. The host and the hardware processing unit are connected through an interconnect interface. The data processing method in this embodiment of the present invention is performed by the hardware processing unit and may include:
Step 101: Receive a request message that is sent by the service layer of the host and transparently passed through the driver layer of the host, where the request message includes at least one acceleration type identifier and service data to be accelerated, and each acceleration type identifier corresponds to one type of acceleration processing.
Here, "transparently passed" means that when the request message passes through the driver layer in the host, the driver layer does not change the content of the request message but merely encapsulates the message and passes it on. In this process, regardless of the acceleration task, the role of the driver layer is only to encapsulate and transmit the request message; it never parses or changes the content. Therefore, in this embodiment, even if the hardware processing unit changes, the function of the driver layer does not need to change, so the service layer's dependence on a specific underlying driver is shielded.
The "request message" in this embodiment refers to a request message with a fixed message structure agreed between the host and the hardware processing unit. The request message sent by the service layer of the host can be "transparently passed" to the hardware processing unit without the driver layer perceiving or processing its specific content; the hardware processing unit then parses the request message and performs data processing according to the parsing result.
In this embodiment, the request message includes at least an acceleration type identifier and the service data to be accelerated, where each acceleration type identifier corresponds to one type of acceleration processing, and the hardware processing unit learns from the acceleration type identifier which acceleration service is to be executed.
Step 102: Perform, on the service data, at least one type of acceleration processing respectively corresponding to the at least one acceleration type identifier.
After parsing out the acceleration type identifier and the service data from the request message, the hardware processing unit performs, on the service data, the acceleration processing corresponding to the acceleration type identifier.
By agreeing on a message structure between the host and the hardware processing unit, this embodiment of the present invention allows the host to send the request message directly to the hardware processing unit, which parses the request message and processes the data. In this method the interaction between the service layer of the host and the hardware processing unit requires no dedicated driver, so the service layer's dependence on a specific underlying driver is shielded. The hardware processing unit can run on different service platforms, the heterogeneous capability of the logic is enhanced, and the dynamism and flexibility of service processing are improved.
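The following C sketch is offered only as an illustration of Steps 101 and 102: a top-level receive-and-accelerate routine that a hardware processing unit (or its embedded controller) might run. The helpers parse_request() and do_accel(), the MAX_ACC bound, and the in-place processing of the payload are assumptions introduced for this sketch and are not part of the described embodiment.

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_ACC 8   /* assumed upper bound on acceleration entries per request */

/* Parsed view of a request message: acceleration type identifiers and the payload. */
struct request {
    uint8_t  acc_count;            /* number of requested acceleration types */
    uint8_t  type_acc[MAX_ACC];    /* one identifier per required acceleration */
    uint8_t *data;                 /* service data to be accelerated */
    size_t   len;                  /* length of the service data */
};

/* Hypothetical helpers: extract the agreed message structure, run one acceleration. */
int parse_request(const void *raw, size_t raw_len, struct request *req);
int do_accel(uint8_t type_acc, uint8_t *data, size_t len);

/* Step 101 + Step 102: receive a raw message, parse it, run each requested acceleration. */
int handle_request(const void *raw, size_t raw_len)
{
    struct request req;
    if (parse_request(raw, raw_len, &req) != 0)       /* Step 101 */
        return -1;
    for (uint8_t i = 0; i < req.acc_count; i++)       /* Step 102 */
        if (do_accel(req.type_acc[i], req.data, req.len) != 0)
            return -1;
    return 0;
}
```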
Embodiment 2
Based on the above embodiment, in this embodiment, if the request message sent by the host to the hardware processing unit includes multiple acceleration type identifiers and requires the hardware processing unit to perform multiple types of acceleration processing, the request message may further include an acceleration order identifier in one-to-one correspondence with each acceleration type identifier, where the acceleration order identifier indicates the order of acceleration processing.
After parsing out the service data, the acceleration type identifiers, and the acceleration order identifiers, the hardware processing unit performs, on the service data and in the order indicated by the multiple acceleration order identifiers, the acceleration processing respectively corresponding to the multiple acceleration type identifiers.
By adding acceleration order identifiers to the request message, this embodiment enables the hardware processing unit to accelerate the service data in the order indicated by the multiple acceleration order identifiers, thereby achieving pipelined processing of the service data and improving processing efficiency.
Embodiment 3
Based on all the above embodiments, in this embodiment the messages transmitted between the host and the hardware processing unit may be provided with a field area and a data area, where the field area contains the fields of a service header and of a control header, and the data area is used to carry the service data and the processed service data.
In a specific implementation, as shown in FIG. 2, the message structure may include a service header, a control header, and service data. Of course, in other embodiments the message may also include other information.
The service header (Ser_header) contains a Ser_type field, a Ser_cntn field, ACC_seqn fields, Type_accn fields, slice_numn fields, and port_numn fields. The Ser_type field indicates the direction of the message, for example whether it is delivered by the host to the hardware processing unit or fed back by the hardware processing unit to the host; different values of this field distinguish the request message delivered by the host from the feedback message sent by the hardware processing unit. The ACC_seqn field indicates the specific acceleration order. The Type_accn field indicates the specific acceleration type. The Ser_cntn field indicates the number of slice_numn/port_numn pairs. The slice_numn field indicates the identifier of an acceleration processing group, and the port_numn field indicates the identifier of an acceleration processing module.
The control header (Reg_cmd) is used to construct a virtual register read/write channel. The control header contains a Reg_act field, a Reg_cntn field, a Re_err field, Addrn fields, and valuen fields. The Reg_act field indicates the type of the control message, that is, whether it carries configuration information or other read/write information; the Re_err field is a flag indicating whether the control state is correct or erroneous; the Reg_cntn field indicates the number of Addrn/valuen pairs; the Addrn field indicates address information that the acceleration logic can operate on, and the valuen field indicates the value corresponding to the Addrn field.
The acceleration data (Acc_data) carries the service data to be processed or the reported result of the completed processing, where Len indicates the data length and Checksum indicates the checksum.
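To make the layout concrete, the following C sketch shows one possible in-memory arrangement of such a message. The field widths, the fixed upper bounds MAX_ACC and MAX_REG, and the flat packing order are assumptions of this sketch; the embodiment itself does not prescribe them.

```c
#include <stdint.h>

#define MAX_ACC 8   /* assumed upper bound on acceleration entries        */
#define MAX_REG 8   /* assumed upper bound on register addr/value pairs   */

/* Service header (Ser_header): message direction and the acceleration chain. */
struct ser_header {
    uint8_t  ser_type;              /* Ser_type: request vs. feedback               */
    uint8_t  ser_cnt;               /* Ser_cnt: number of slice_num/port_num pairs  */
    uint8_t  acc_seq[MAX_ACC];      /* ACC_seq: acceleration order identifiers      */
    uint8_t  type_acc[MAX_ACC];     /* Type_acc: acceleration type identifiers      */
    uint8_t  slice_num[MAX_ACC];    /* slice_num: acceleration processing group id  */
    uint8_t  port_num[MAX_ACC];     /* port_num: acceleration processing module id  */
};

/* Control header (Reg_cmd): virtual register read/write channel. */
struct reg_cmd {
    uint8_t  reg_act;               /* Reg_act: configuration or other read/write   */
    uint8_t  reg_cnt;               /* Reg_cnt: number of addr/value pairs          */
    uint8_t  re_err;                /* Re_err: control status flag (ok / error)     */
    uint32_t addr[MAX_REG];         /* Addr: addresses the acceleration logic uses  */
    uint32_t value[MAX_REG];        /* value: value written to / read from Addr     */
};

/* Acceleration data (Acc_data): payload to accelerate or the reported result. */
struct acc_data_hdr {
    uint32_t len;                   /* Len: payload length in bytes */
    uint32_t checksum;              /* Checksum over the payload    */
    /* 'len' bytes of service data follow this header in the byte stream */
};

/* A full message as laid out in the transfer buffer (assumed packing order):
 *   ser_header | reg_cmd | acc_data_hdr | payload bytes                     */
```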
The host may send the request message to the hardware processing unit through the interconnect interface.
Embodiment 4
Based on all the above embodiments, in this embodiment the hardware processing unit may specifically include a receiving module, an inter-group routing module, and at least one acceleration processing group.
Specifically, there may be one or more inter-group routing modules, each of which may be programmed on one FPGA chip and is used to select among the different intra-group routing modules; an intra-group routing module may likewise be programmed on one FPGA resource, the intra-group routing module in each acceleration processing group is connected to multiple acceleration processing modules, different acceleration processing modules may implement different types of acceleration processing, and the acceleration processing modules may be completely identical or partially identical.
The inter-group routing module includes an inter-group routing table, and the inter-group routing table includes correspondences between acceleration type identifiers and acceleration processing groups.
In this embodiment, when the hardware processing unit receives a message sent by the host, it may specifically be the receiving module of the hardware processing unit that receives the message.
The process in which the hardware processing unit performs, on the service data, at least one type of acceleration processing respectively corresponding to the at least one acceleration type identifier, as shown in FIG. 3, may include:
Step 301: The inter-group routing module receives the request message sent by the receiving module.
In this embodiment, the inter-group routing module may be programmed on one FPGA chip. Its structure, as shown in FIG. 4, mainly consists of four parts:
The adaptation module (Adaption) mainly completes protocol-stack interface adaptation and is used to adapt the inter-group routing to the transport interface protocol.
The service header parsing engine parses the message structure constructed by the service layer of the host; according to the different acceleration type identifiers in the service header, the parsing engine executes different processing flows.
The Slice forwarding table (Slice Table), that is, the inter-group routing configuration information, records the forwarding relationship between the Type_acc of an acceleration service in the message structure and the intra-group routing module (Slice_num). The forwarding table may be sent in advance by the service layer to the inter-group routing module through a configuration message and obtained from the configuration message by the service header parsing engine.
The scheduling module includes:
the direction for data delivered by the service layer (Switch_out), which forwards the message, according to the Slice information in the forwarding table, to the intra-group routing module in the corresponding service aggregation resource pool; and
the return direction for accelerated data (Switch_in), which obtains the reported acceleration results from the intra-group routing modules and passes the results to the internal adaptation module.
After receiving the message, the inter-group routing module performs Step 302.
Step 302: The inter-group routing module parses out the acceleration type identifier in the request message.
The inter-group routing module parses the request message through the service header parsing engine to obtain the acceleration type identifier.
Step 303: The inter-group routing module forwards the request message to the destination acceleration processing group according to the parsed-out acceleration type identifier and the inter-group routing table.
Through the inter-group routing table, that is, the Slice forwarding table, the inter-group routing module finds the intra-group routing module corresponding to the acceleration type identifier, that is, the destination intra-group routing module, and specifically obtains the identifier Slice_num of that intra-group routing module. The inter-group routing module sends the message through Switch_out to the destination acceleration processing group to which the destination intra-group routing module belongs. When the inter-group routing module parses out multiple acceleration type identifiers, it may process them in turn according to the acceleration order identifiers.
Step 304: The destination acceleration processing group performs acceleration processing on the service data.
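A minimal C sketch of the dispatch performed in Steps 302 and 303 is given below. The table size, the SLICE_INVALID sentinel, and the forward_to_slice() transport hook are hypothetical names introduced only for illustration, and the Type_acc entries are assumed to be presented in the order already implied by the acceleration order identifiers.

```c
#include <stdint.h>
#include <stddef.h>

#define SLICE_INVALID 0xFF
#define TYPE_ACC_MAX  256

/* Inter-group routing table (Slice Table): Type_acc -> Slice_num. */
struct slice_table {
    uint8_t slice_of_type[TYPE_ACC_MAX];   /* SLICE_INVALID if not configured */
};

/* Hypothetical transport hook: hand the message to the chosen acceleration group. */
int forward_to_slice(uint8_t slice_num, const void *msg, size_t len);

/* Steps 302-303: for each parsed Type_acc, look up its Slice_num and forward. */
int dispatch_request(const struct slice_table *tbl,
                     const uint8_t *type_acc, uint8_t count,
                     const void *msg, size_t len)
{
    for (uint8_t i = 0; i < count; i++) {
        uint8_t slice = tbl->slice_of_type[type_acc[i]];
        if (slice == SLICE_INVALID)
            return -1;                      /* no route configured: report upward */
        if (forward_to_slice(slice, msg, len) != 0)
            return -1;
    }
    return 0;
}
```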
Embodiment 5
Based on all the above embodiments, in this embodiment the acceleration processing group includes a parsing module, an intra-group routing module, and at least one acceleration processing module, where the acceleration processing modules are used to perform different types of acceleration processing on a same service. The intra-group routing module includes an intra-group routing table, and the intra-group routing table includes correspondences between acceleration type identifiers and acceleration processing modules.
The process in which the destination acceleration processing group performs acceleration processing on the service data, as shown in FIG. 5, may include:
Step 501: The parsing module of the destination acceleration processing group parses the request message, buffers the service data, and generates an internal forwarding message according to the parsing result, where the internal forwarding message includes the acceleration type identifier and the buffer address of the service data.
As shown in FIG. 6, the acceleration processing group mainly consists of three parts:
The parsing module parses the request message and separates the service header, the control header, and the service data. The acceleration processing group is provided with a unified buffer space; after applying for a buffer address in the buffer space, the parsing module stores the parsed service data into the corresponding buffer space at that address, generates an internal forwarding message from the applied buffer address together with the service information and control information in the service header and the control header, and forwards it to the intra-group routing module.
The intra-group routing module stores the correspondence between each Type_acc in the group and the acceleration processing modules (Port), that is, the intra-group routing table (Acc Table); this information may be sent in advance by the service layer to the intra-group routing module through a configuration message.
The acceleration processing modules (FPGAn), that is, the acceleration logic, are the units implementing the specific service functions or service logic; their entry consists of a data channel and a virtual register channel, where the register channel is used for internal register configuration and reading, and the data channel is used to pass the data to be accelerated into the acceleration logic for acceleration processing.
After the destination acceleration processing group receives the message sent by the inter-group routing module, the data parsing engine applies for an address and buffers the parsed service data, and generates an internal forwarding message from the applied address information together with the service information and control information in the service header and the control header.
Step 502: The parsing module sends the internal forwarding message to the intra-group routing module of the destination acceleration processing group.
Step 503: The destination intra-group routing module sends the internal forwarding message to the destination acceleration processing module according to the acceleration type identifier and the intra-group routing table.
The destination intra-group routing module parses the received internal forwarding message to learn the acceleration type identifier therein, looks up, in the intra-group routing table, the acceleration processing module (Port) corresponding to the acceleration type identifier, that is, the destination acceleration processing module, and then sends the internal forwarding message to the destination acceleration processing module.
Step 504: The destination acceleration processing module obtains the service data according to the buffer address contained in the internal forwarding message and performs acceleration processing on the service data.
The destination acceleration processing module reads the service data from the buffer space according to the buffer address contained in the internal forwarding message, processes the service data, and then stores the processed data back into the buffer space at the same buffer address. The buffer space may also mark the processed data sent back by the destination acceleration processing module, to indicate that the data stored at that buffer address is processed data.
In another embodiment, when the internal forwarding message contains acceleration order identifiers, the destination acceleration processing module may buffer the accelerated service data and notify the intra-group routing module; the intra-group routing module sends the internal forwarding message to the next destination acceleration processing module according to the acceleration order identifier, and the next destination acceleration processing module accelerates the data buffered by the previous destination acceleration processing module and repeats the actions of the previous destination acceleration processing module, until the acceleration order indicated by the acceleration order identifiers ends.
The intra-group routing module may also send the internal forwarding message, simultaneously or in the order of the acceleration order identifiers contained in the service control information, to the destination acceleration processing modules corresponding to the acceleration type identifiers of this group of services. These destination acceleration processing modules in turn obtain, according to the buffer address in the internal forwarding message, the service data in the buffer space corresponding to that buffer address, process it, and store the processed data back into the buffer space corresponding to that buffer address. A destination acceleration processing module may read the marking information of the data in the buffer space corresponding to the buffer address to determine whether the previous destination acceleration processing module in the acceleration order has finished processing the data, and only after that processing is finished does it read the data from that buffer space for processing, until all destination acceleration processing modules in the acceleration order have finished processing the data.
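For illustration only, the following C sketch outlines how such an acceleration chain might be driven inside a group. The Acc Table layout, the accel_run() hook, the single shared buffer with a done marker, and the assumption that the order identifiers are 0-based and consecutive are choices made for this sketch rather than requirements of the embodiment.

```c
#include <stdint.h>
#include <stddef.h>

#define PORT_INVALID 0xFF
#define TYPE_ACC_MAX 256

/* Intra-group routing table (Acc Table): acceleration type -> acceleration module (Port). */
struct acc_table {
    uint8_t port_of_type[TYPE_ACC_MAX];    /* PORT_INVALID if not configured */
};

/* One entry in the group's unified buffer space. */
struct buf_entry {
    uint8_t *data;       /* service data, processed in place             */
    size_t   len;        /* data length                                  */
    int      done;       /* marker: previous stage in the chain finished */
};

/* Hypothetical hook: run acceleration logic 'port' in place on the buffer. */
int accel_run(uint8_t port, struct buf_entry *buf);

/* Drive a chain of acceleration types in the order given by acc_seq (assumed 0..cnt-1). */
int run_chain(const struct acc_table *tbl,
              const uint8_t *type_acc, const uint8_t *acc_seq, uint8_t cnt,
              struct buf_entry *buf)
{
    for (uint8_t step = 0; step < cnt; step++) {
        /* find the entry whose order identifier equals the current step */
        for (uint8_t i = 0; i < cnt; i++) {
            if (acc_seq[i] != step)
                continue;
            uint8_t port = tbl->port_of_type[type_acc[i]];
            if (port == PORT_INVALID || accel_run(port, buf) != 0)
                return -1;
            buf->done = 1;   /* mark the buffer so the next stage may start */
        }
    }
    return 0;
}
```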
Embodiment 6
Based on all the above embodiments, in this embodiment, after the destination acceleration processing module finishes processing the service data, the following steps, as shown in FIG. 7, may further be included:
Step 701: The destination acceleration processing module buffers the processed service data.
Step 702: When all the service data has been accelerated, the destination intra-group routing module reads the buffered processed service data.
When the destination intra-group routing module learns, from the marking information of the data in the buffer space, that all the service data has been processed, it reads the processed data from the buffer space.
Step 703: The destination intra-group routing module generates a feedback message for the request message according to the processed service data.
The destination intra-group routing module generates the feedback message from the processed data, following the same fixed message structure as the request message.
Step 704: The destination intra-group routing module sends the feedback message to the inter-group routing module, so that the inter-group routing module sends the feedback message to the host.
The destination intra-group routing module returns the feedback message to the service layer of the host along the reverse of the message transmission path.
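As a sketch only, the fragment below shows how a feedback message could be derived from the request by reusing the same message structure and changing only the message type field; the SER_TYPE_REQUEST and SER_TYPE_FEEDBACK values and the build_feedback() helper are hypothetical names introduced for illustration.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical values of the Ser_type message-type field. */
#define SER_TYPE_REQUEST  0x01   /* host -> hardware processing unit */
#define SER_TYPE_FEEDBACK 0x02   /* hardware processing unit -> host */

struct ser_header_min {
    uint8_t ser_type;            /* message direction */
    /* remaining service-header fields omitted for brevity */
};

/* Build a feedback header from the request header: same structure, type flipped. */
static void build_feedback(const struct ser_header_min *req,
                           struct ser_header_min *fb)
{
    memcpy(fb, req, sizeof(*fb));        /* keep the agreed message structure   */
    fb->ser_type = SER_TYPE_FEEDBACK;    /* mark it as a feedback message       */
    /* the processed service data, its length, and its checksum would then be
       attached in the Acc_data area before the message is returned to the host */
}
```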
Embodiment 7
Based on all the above embodiments, the inter-group routing table and the intra-group routing table may be obtained as follows:
The service layer of the host sends a message to the inter-group routing module and to each intra-group routing module respectively, where the message carries the correspondence between acceleration type identifiers, intra-group routing modules, and acceleration processing modules, for example: service number + Slice number + Port number.
After receiving the message, the inter-group routing module and the intra-group routing module respectively establish the inter-group routing table and the intra-group routing table.
In addition, the service layer of the host may further configure an aging switch and an aging time for the inter-group routing table and for the intra-group routing table respectively. The aging time and the aging switch are configured by the service layer through the register channel, and the information of the register channel is carried in the Reg_cmd of the message.
When the aging switch of the inter-group routing table is on and the aging time is reached, the inter-group routing module reports to the host to request the host to configure a new inter-group routing table.
Similarly, when the aging switch of the intra-group routing table is on and the aging time is reached, the intra-group routing module reports to the host to request the host to configure a new intra-group routing table.
The inter-group routing module and the intra-group routing module each maintain the aging switch and aging time of their own routing tables. In some scenarios, if an entry of the inter-group or intra-group routing table has been aged out while new service data is still being delivered, the inter-group or intra-group routing module counts this abnormal scenario and returns the abnormal condition to the service layer, requesting the service layer to deliver the configuration message again.
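The aging behaviour could be approximated as in the following C sketch; the tick-based timer, the report_to_host() hook, and the per-table flag layout are assumptions introduced purely for illustration.

```c
#include <stdint.h>
#include <stdbool.h>

/* Aging state attached to a routing table (inter-group or intra-group). */
struct aging_ctl {
    bool     aging_on;       /* aging switch, configured via the Reg_cmd channel */
    uint32_t aging_time;     /* aging period, in ticks                           */
    uint32_t elapsed;        /* ticks since the table was last configured        */
    bool     table_valid;    /* false once the table has been aged out           */
};

/* Hypothetical hook: ask the host to deliver a new configuration message. */
void report_to_host(const char *reason);

/* Called once per tick by the routing module. */
void aging_tick(struct aging_ctl *ctl)
{
    if (!ctl->aging_on || !ctl->table_valid)
        return;
    if (++ctl->elapsed >= ctl->aging_time) {
        ctl->table_valid = false;                 /* age the table out       */
        report_to_host("routing table aged out"); /* request a new table     */
    }
}

/* Called when new service data arrives but the table is already aged out. */
void on_lookup_miss(struct aging_ctl *ctl)
{
    if (!ctl->table_valid)
        report_to_host("data received with no valid routing table");
}
```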
Embodiment 8
Based on all the above embodiments, in this embodiment the interconnection between the host and the hardware processing unit may use a PCIe interface or another interface protocol; the FPGA of the inter-group routing module is interconnected with the acceleration processing groups through an interconnect interface (which may be PCIe or another interconnect interface), and the intra-group routing module and the acceleration processing modules may be interconnected through common interface resources. Specific structures are shown, for example, in FIG. 8 and FIG. 9.
In the application scenario shown in FIG. 8, a server chassis contains a backplane on which multiple groups of CPU resources, memory, southbridge, and other chips are installed. The host is interconnected with an FPGA resource serving as the inter-group routing module (the interconnection may be a PCIe interface or another interface protocol); the FPGA of the inter-group routing module is interconnected with the FPGAs of the intra-group routing modules through an interconnect interface (which may be a PCIe interface or another interconnect interface); and each intra-group routing module is interconnected with multiple FPGA acceleration resources, that is, multiple acceleration processing modules, through common interface resources. Each intra-group routing module and its acceleration processing modules may form an acceleration processing group.
It can be seen that this structure cascades the intra-group routing modules under the inter-group routing module: new acceleration processing modules can be integrated and extended under each intra-group routing module, and new acceleration processing groups consisting of an intra-group routing module and acceleration processing modules can be integrated and extended under each inter-group routing module, so the continuous integration capability and scalability of the whole system are enhanced.
The structure shown in FIG. 9 differs from the structure shown in FIG. 8 only in that the host in this example is interconnected with the inter-group routing module through a network, a virtual cloud, or another network element or intermediate device. As can be seen from the implementation, unlike the local acceleration solution of FIG. 8, in this example there is, in front of the inter-group routing module, a device that is the peer of the sending end of the host; the device may run a peer protocol stack or another self-defined interaction protocol, and the message received by this peer device is passed to the inter-group routing module through an internal bus structure. After the message reaches the inter-group routing module, the subsequent processing is exactly the same. In this embodiment, the host is completely separated from the overall structure formed by the inter-group routing module and the acceleration processing groups, so multiple server groups in the network may share the acceleration processing groups; the hierarchy of inter-group routing modules can also be extended, and the type of extension can be defined in the service header of the message.
The above embodiments can shield the service layer of the host from its dependence on a specific underlying driver and improve the dynamism and flexibility of service processing.
Moreover, with the structure of intra-group routing modules cascaded under the inter-group routing module, not only can the acceleration processing modules under an intra-group routing module be extended, but new acceleration processing groups can also be extended under the inter-group routing module, so the continuous integration capability and scalability of the whole system are enhanced.
Furthermore, for services with limited resources, the service layer of the host can organize the acceleration processing groups according to the requirements of the services and the urgency of acceleration, forming acceleration processing groups with different functions; by allocating different resources to different services, fine-grained acceleration of services can be achieved.
Embodiment 9
Based on the above embodiments, an embodiment of the present invention discloses a data processing apparatus. Referring to FIG. 10, a structural block diagram of the data processing apparatus in this embodiment is shown. The data processing apparatus in this embodiment is applied to a scenario of accelerating service data sent by a host, and the apparatus includes:
a receiving module 1001, configured to receive a request message that is sent by the service layer of the host and transparently passed through the driver layer of the host, where the request message includes at least one acceleration type identifier and service data to be accelerated, and each acceleration type identifier corresponds to one type of acceleration processing; and
a processing module 1002, configured to perform, on the service data received by the receiving module 1001, at least one type of acceleration processing in one-to-one correspondence with the at least one acceleration type identifier.
By agreeing on a message structure between the host and the data processing apparatus, this embodiment of the present invention allows the host to send a message directly to the data processing apparatus, which parses the message and processes the data through the above units. The interaction between the data processing apparatus and the host requires no dedicated driver, so the service layer's dependence on a specific underlying driver is shielded. The data processing apparatus can run on different service platforms, the heterogeneous capability of the logic is enhanced, and the dynamism and flexibility of service processing are improved.
In another embodiment, the message may include multiple acceleration type identifiers, and the message further includes an acceleration order identifier in one-to-one correspondence with each acceleration type identifier, where the acceleration order identifier indicates the order of acceleration processing; and
the processing module 1002 is further configured to perform, on the service data and in the order indicated by the multiple acceleration order identifiers, the acceleration processing respectively corresponding to the multiple acceleration type identifiers.
In another embodiment, as shown in FIG. 11, the processing module 1002 may include:
an inter-group routing module 1101 and at least one acceleration processing group 1102.
The inter-group routing module 1101 includes an inter-group routing table, and the inter-group routing table includes correspondences between acceleration type identifiers and the acceleration processing groups 1102; the inter-group routing module 1101 is configured to receive the request message sent by the receiving module 1001, parse out the acceleration type identifier in the request message, and forward the request message to a destination acceleration processing group 1102 according to the parsed-out acceleration type identifier and the inter-group routing table; and
the acceleration processing group 1102 is configured to perform acceleration processing on the service data.
In another embodiment, as shown in FIG. 12, each acceleration processing group 1102 includes a parsing module 1120, an intra-group routing module 1121, and at least one acceleration processing module 1122, where the acceleration processing modules 1122 are configured to perform different types of acceleration processing on a same service;
the parsing module 1120 is configured to parse the request message sent by the inter-group routing module 1101, buffer the service data, generate an internal forwarding message according to the parsing result, where the internal forwarding message includes the acceleration type identifier and the buffer address of the service data, and send the internal forwarding message to the intra-group routing module 1121;
the intra-group routing module 1121 includes an intra-group routing table, and the intra-group routing table includes correspondences between acceleration type identifiers and acceleration processing modules; the intra-group routing module 1121 is configured to send the internal forwarding message received from the parsing module to a destination acceleration processing module 1122 according to the acceleration type identifier and the intra-group routing table; and
the acceleration processing module 1122 is configured to obtain the service data according to the buffer address contained in the internal forwarding message received from the intra-group routing module 1121, and perform acceleration processing on the service data.
In another embodiment, the acceleration processing module 1122 is further configured to: when the internal forwarding message contains an acceleration order identifier, buffer the service data after acceleration processing and notify the intra-group routing module 1121; and
the intra-group routing module 1121 is further configured to: after receiving the notification sent by the acceleration processing module, send the internal forwarding message to the next destination acceleration processing module according to the acceleration order identifier, so that the next destination acceleration processing module performs acceleration processing on the data buffered by the destination acceleration processing module, until the acceleration order indicated by the acceleration order identifier ends.
In another embodiment, the acceleration processing module 1122 is further configured to buffer the processed service data; and
the intra-group routing module 1121 is further configured to: when all the service data has been accelerated, read the processed service data buffered by the acceleration processing module 1122; generate, according to the processed service data, a feedback message corresponding to the request message; and send the feedback message to the inter-group routing module 1101, so that the inter-group routing module 1101 sends the feedback message to the host.
In another embodiment, the inter-group routing table is further configured with an aging switch and an aging time; and
the inter-group routing module 1101 is further configured to: when the aging switch of the inter-group routing table is on and the aging time is reached, report to the host, to request the host to configure a new inter-group routing table.
The intra-group routing table is further configured with an aging switch and an aging time; and
the intra-group routing module 1121 is further configured to: when the aging switch of the intra-group routing table is on and the aging time is reached, report to the host, to request the host to configure a new intra-group routing table.
In another embodiment, the receiving module 1001 may specifically be configured to receive the message sent by the host through a PCIe interface, or to receive the message sent by the host through a network, a virtual cloud, or an intermediate device.
In another embodiment, the feedback message and the request message have the same message structure, where the message structure includes a message type field used to distinguish the feedback message from the request message.
In another embodiment, the request message is provided with a field area and a data area, the field area contains the fields of a service header and of a control header, the field of the service header contains the acceleration type identifier, and the data area is used to carry the service data.
Embodiment 10
Based on the above embodiments, an embodiment of the present invention discloses a data processing apparatus. As shown in FIG. 13, the apparatus includes a CPU 131, a memory 132, and a hardware processing unit 133, where the CPU is connected to the memory and to the hardware processing unit respectively; the hardware processing unit may be an FPGA, an ASIC, or the like.
The CPU and the memory may be regarded as constituting a host. The CPU reads code in the memory to execute programs, and at the software level the executed programs include the driver layer of the host and the service layer of the host; these software and hardware architectures are known to those skilled in the art and are not described here again.
The hardware processing unit is configured to receive a request message that is sent by the service layer of the host and transparently passed through the driver layer of the host, where the request message includes at least one acceleration type identifier and service data to be accelerated, and each acceleration type identifier corresponds to one type of acceleration processing.
The hardware processing unit is further configured to perform, on the service data, at least one type of acceleration processing in one-to-one correspondence with the at least one acceleration type identifier.
By agreeing on a message structure between the CPU and the hardware processing unit, this apparatus allows the CPU to send the request message directly to the hardware processing unit, which parses the request message and processes the data. The interaction between the CPU and the hardware processing unit in this apparatus requires no dedicated driver, so the CPU's dependence on a specific underlying driver is shielded.
Since the apparatus embodiments are basically similar to the method embodiments, their description is relatively simple; for relevant parts, reference may be made to the description of the method embodiments.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for the same or similar parts among the embodiments, reference may be made to one another.
Those of ordinary skill in the art may realize that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and the design constraints of the technical solution. Skilled persons may use different methods to implement the described functions for each specific application, but such implementation shall not be regarded as going beyond the scope of the present invention.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative: the division of the units is only a logical functional division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
Although preferred embodiments of the embodiments of the present invention have been described, those skilled in the art, once they learn of the basic inventive concept, can make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as including the preferred embodiments and all changes and modifications falling within the scope of the embodiments of the present invention.
Finally, it should also be noted that in this document, relational terms such as first and second are only used to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or terminal device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or terminal device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or terminal device that includes the element.
The data processing method and the data processing apparatus provided by the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the description of the above embodiments is only intended to help understand the method of the present invention and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementations and the application scope according to the idea of the present invention. In summary, the content of this specification shall not be construed as limiting the present invention.

Claims (20)

  1. A data processing method, applied to a scenario in which a hardware processing unit accelerates service data sent by a host, characterized in that the method is performed by the hardware processing unit and comprises:
    receiving a request message that is sent by the service layer of the host and transparently passed through the driver layer of the host, wherein the request message comprises at least one acceleration type identifier and service data to be accelerated, and each acceleration type identifier corresponds to one type of acceleration processing; and
    performing, on the service data, at least one type of acceleration processing in one-to-one correspondence with the at least one acceleration type identifier.
  2. The method according to claim 1, characterized in that the request message comprises multiple acceleration type identifiers, and the request message further comprises an acceleration order identifier in one-to-one correspondence with each acceleration type identifier, wherein the acceleration order identifier indicates the order of acceleration processing; and
    the performing, on the service data, at least one type of acceleration processing respectively corresponding to the at least one acceleration type identifier comprises:
    performing, on the service data and in the order indicated by the multiple acceleration order identifiers, multiple types of acceleration processing in one-to-one correspondence with the multiple acceleration type identifiers.
  3. The method according to claim 1 or 2, characterized in that the hardware processing unit comprises a receiving module, an inter-group routing module, and at least one acceleration processing group;
    the inter-group routing module comprises an inter-group routing table, and the inter-group routing table comprises correspondences between acceleration type identifiers and acceleration processing groups;
    the receiving the request message that is sent by the service layer of the host and transparently passed through the driver layer of the host comprises: receiving, by the receiving module, the request message that is sent by the service layer of the host and transparently passed through the driver layer of the host; and
    the performing, on the service data, at least one type of acceleration processing in one-to-one correspondence with the at least one acceleration type identifier comprises:
    receiving, by the inter-group routing module, the request message sent by the receiving module;
    parsing out, by the inter-group routing module, the acceleration type identifier in the request message;
    forwarding, by the inter-group routing module, the request message to a destination acceleration processing group according to the parsed-out acceleration type identifier and the inter-group routing table; and
    performing, by the destination acceleration processing group, acceleration processing on the service data.
  4. The method according to claim 3, characterized in that the acceleration processing group comprises a parsing module, an intra-group routing module, and at least one acceleration processing module, wherein the acceleration processing modules are configured to perform different types of acceleration processing on a same service;
    the intra-group routing module comprises an intra-group routing table, and the intra-group routing table comprises correspondences between acceleration type identifiers and acceleration processing modules; and
    the performing, by the destination acceleration processing group, acceleration processing on the service data comprises:
    parsing, by the parsing module of the destination acceleration processing group, the request message, buffering the service data, and generating an internal forwarding message according to the parsing result, wherein the internal forwarding message comprises the acceleration type identifier and the buffer address of the service data;
    sending, by the parsing module, the internal forwarding message to the intra-group routing module of the destination acceleration processing group;
    sending, by the intra-group routing module, the internal forwarding message to a destination acceleration processing module according to the acceleration type identifier and the intra-group routing table; and
    obtaining, by the destination acceleration processing module, the service data according to the buffer address contained in the internal forwarding message, and performing acceleration processing on the service data.
  5. The method according to claim 4, characterized in that when the internal forwarding message contains an acceleration order identifier, the performing, by the destination acceleration processing group, acceleration processing on the service data further comprises:
    buffering, by the destination acceleration processing module, the service data after acceleration processing, and notifying the intra-group routing module; and
    sending, by the intra-group routing module, the internal forwarding message to a next destination acceleration processing module according to the acceleration order identifier, so that the next destination acceleration processing module performs acceleration processing on the data buffered by the destination acceleration processing module, until the acceleration order indicated by the acceleration order identifier ends.
  6. The method according to claim 4, characterized in that the method further comprises:
    buffering, by the destination acceleration processing module, the processed service data;
    when all the service data has been accelerated, reading, by the destination intra-group routing module, the buffered processed service data;
    generating, by the destination intra-group routing module, a feedback message corresponding to the request message according to the processed service data; and
    sending, by the destination intra-group routing module, the feedback message to the inter-group routing module, so that the inter-group routing module sends the feedback message to the host.
  7. The method according to claim 6, characterized in that the feedback message and the request message have the same message structure, wherein the message structure comprises a message type field used to distinguish the feedback message from the request message.
  8. The method according to any one of claims 1 to 7, characterized in that the request message is provided with a field area and a data area, the field area contains the fields of a service header and of a control header, the field of the service header contains the acceleration type identifier, and the data area is used to carry the service data.
  9. The method according to any one of claims 1 to 8, characterized in that the inter-group routing table is further configured with an aging switch and an aging time; and the method further comprises:
    when the aging switch of the inter-group routing table is on and the aging time is reached, reporting, by the inter-group routing module, to the host, to request the host to configure a new inter-group routing table.
  10. The method according to any one of claims 1 to 9, characterized in that the intra-group routing table is further configured with an aging switch and an aging time; and the method further comprises:
    when the aging switch of the intra-group routing table is on and the aging time is reached, reporting, by the intra-group routing module, to the host, to request the host to configure a new intra-group routing table.
  11. A data processing apparatus, applied to a scenario of accelerating service data sent by a host, characterized in that the apparatus comprises:
    a receiving module, configured to receive a request message that is sent by the service layer of the host and transparently passed through the driver layer of the host, wherein the request message comprises at least one acceleration type identifier and service data to be accelerated, and each acceleration type identifier corresponds to one type of acceleration processing; and
    a processing module, configured to perform, on the service data received by the receiving module, at least one type of acceleration processing in one-to-one correspondence with the at least one acceleration type identifier.
  12. The apparatus according to claim 11, characterized in that the request message comprises multiple acceleration type identifiers, and the request message further comprises an acceleration order identifier in one-to-one correspondence with each acceleration type identifier, wherein the acceleration order identifier indicates the order of acceleration processing; and
    the processing module is further configured to perform, on the service data and in the order indicated by the multiple acceleration order identifiers, the acceleration processing respectively corresponding to the multiple acceleration type identifiers.
  13. The apparatus according to claim 11 or 12, characterized in that the processing module comprises an inter-group routing module and at least one acceleration processing group;
    the inter-group routing module comprises an inter-group routing table, and the inter-group routing table comprises correspondences between acceleration type identifiers and acceleration processing groups; the inter-group routing module is configured to receive the request message sent by the receiving module, parse out the acceleration type identifier in the request message, and forward the request message to a destination acceleration processing group according to the parsed-out acceleration type identifier and the inter-group routing table; and
    the acceleration processing group is configured to perform acceleration processing on the service data.
  14. The apparatus according to claim 13, characterized in that the acceleration processing group comprises a parsing module, an intra-group routing module, and at least one acceleration processing module, wherein the acceleration processing modules are configured to perform different types of acceleration processing on a same service;
    the parsing module is configured to parse the request message sent by the inter-group routing module, buffer the service data, generate an internal forwarding message according to the parsing result, wherein the internal forwarding message comprises the acceleration type identifier and the buffer address of the service data, and send the internal forwarding message to the intra-group routing module;
    the intra-group routing module comprises an intra-group routing table, and the intra-group routing table comprises correspondences between acceleration type identifiers and acceleration processing modules; the intra-group routing module is configured to send the internal forwarding message received from the parsing module to a destination acceleration processing module according to the acceleration type identifier and the intra-group routing table; and
    the acceleration processing module is configured to obtain the service data according to the buffer address contained in the internal forwarding message received from the intra-group routing module, and perform acceleration processing on the service data.
  15. The apparatus according to claim 14, characterized in that
    the acceleration processing module is further configured to: when the internal forwarding message contains an acceleration order identifier, buffer the service data after acceleration processing and notify the intra-group routing module; and
    the intra-group routing module is further configured to: after receiving the notification sent by the acceleration processing module, send the internal forwarding message to a next destination acceleration processing module according to the acceleration order identifier, so that the next destination acceleration processing module performs acceleration processing on the data buffered by the destination acceleration processing module, until the acceleration order indicated by the acceleration order identifier ends.
  16. The apparatus according to claim 14, characterized in that
    the acceleration processing module is further configured to buffer the processed service data; and
    the intra-group routing module is further configured to: when all the service data has been accelerated, read the processed service data buffered by the acceleration processing module; generate, according to the processed service data, a feedback message corresponding to the request message; and send the feedback message to the inter-group routing module, so that the inter-group routing module sends the feedback message to the host.
  17. The apparatus according to claim 16, characterized in that the feedback message and the request message have the same message structure, wherein the message structure comprises a message type field used to distinguish the feedback message from the request message.
  18. The apparatus according to any one of claims 11 to 17, characterized in that the request message is provided with a field area and a data area, the field area contains the fields of a service header and of a control header, the field of the service header contains the acceleration type identifier, and the data area is used to carry the service data.
  19. The apparatus according to any one of claims 11 to 18, characterized in that the inter-group routing table is further configured with an aging switch and an aging time; and
    the inter-group routing module is further configured to: when the aging switch of the inter-group routing table is on and the aging time is reached, report to the host, to request the host to configure a new inter-group routing table.
  20. The apparatus according to any one of claims 11 to 19, characterized in that the intra-group routing table is further configured with an aging switch and an aging time; and
    the intra-group routing module is further configured to: when the aging switch of the intra-group routing table is on and the aging time is reached, report to the host, to request the host to configure a new intra-group routing table.
PCT/CN2016/083471 2015-05-29 2016-05-26 Data processing method and apparatus WO2016192573A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP16802496.6A EP3291089B1 (en) 2015-05-29 2016-05-26 Data processing method and apparatus
US15/824,032 US10432506B2 (en) 2015-05-29 2017-11-28 Data processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510288092.2 2015-05-29
CN201510288092.2A CN104899085B (zh) Data processing method and apparatus

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/824,032 Continuation US10432506B2 (en) 2015-05-29 2017-11-28 Data processing method and apparatus

Publications (1)

Publication Number Publication Date
WO2016192573A1 true WO2016192573A1 (zh) 2016-12-08

Family

ID=54031763

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/083471 WO2016192573A1 (zh) 2015-05-29 2016-05-26 一种数据处理方法和装置

Country Status (4)

Country Link
US (1) US10432506B2 (zh)
EP (1) EP3291089B1 (zh)
CN (1) CN104899085B (zh)
WO (1) WO2016192573A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104899085B (zh) * 2015-05-29 2018-06-26 华为技术有限公司 一种数据处理方法和装置
CN111865657B (zh) 2015-09-28 2022-01-11 华为技术有限公司 Acceleration management node, acceleration node, client, and method
CN105373498B (zh) * 2015-10-09 2018-04-06 上海瀚之友信息技术服务有限公司 一种数据处理系统及方法
CN107835125B (zh) * 2017-10-24 2020-12-01 郑州市公安局 一种用于住宅小区安全防范系统的网关
CN109005448B (zh) * 2018-06-28 2021-06-15 武汉斗鱼网络科技有限公司 弹幕消息分发方法、装置、设备及存储介质
CN111404979B (zh) * 2019-09-29 2023-04-07 杭州海康威视系统技术有限公司 业务请求处理的方法、装置及计算机可读存储介质
CN111143078B (zh) * 2019-12-31 2023-05-12 深圳云天励飞技术有限公司 一种数据处理方法、装置及计算机可读存储介质
CN111160546B (zh) * 2019-12-31 2023-06-13 深圳云天励飞技术有限公司 一种数据处理系统
WO2023215960A1 (en) * 2022-05-09 2023-11-16 Eidetic Communications Inc. System and method for performing data processing
US20240095104A1 (en) * 2022-09-15 2024-03-21 Red Hat, Inc. Asynchronous communication in cluster infrastructures

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101262352A (zh) * 2008-03-04 2008-09-10 浙江大学 Unified accelerated data processing method in integrated security management
US7925863B2 (en) * 2003-03-28 2011-04-12 Lsi Corporation Hardware device comprising multiple accelerators for performing multiple independent hardware acceleration operations
CN102932458A (zh) * 2012-11-02 2013-02-13 上海电机学院 Hardware acceleration system for the PPP protocol and implementation method thereof
CN104899085A (zh) * 2015-05-29 2015-09-09 华为技术有限公司 Data processing method and apparatus

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8488446B1 (en) * 2010-10-27 2013-07-16 Amazon Technologies, Inc. Managing failure behavior for computing nodes of provided computer networks
CN103399758B (zh) * 2011-12-31 2016-11-23 华为数字技术(成都)有限公司 Hardware acceleration method, apparatus, and system
US9286472B2 (en) * 2012-05-22 2016-03-15 Xockets, Inc. Efficient packet handling, redirection, and inspection using offload processors
CN102769574B (zh) * 2012-08-06 2015-04-08 华为技术有限公司 Apparatus capable of performing service hardware acceleration and method thereof
US10037222B2 (en) * 2013-09-24 2018-07-31 University Of Ottawa Virtualization of hardware accelerator allowing simultaneous reading and writing
US9584482B2 (en) * 2014-03-03 2017-02-28 Qualcomm Connected Experiences, Inc. Access control lists for private networks of system agnostic connected devices

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7925863B2 (en) * 2003-03-28 2011-04-12 Lsi Corporation Hardware device comprising multiple accelerators for performing multiple independent hardware acceleration operations
CN101262352A (zh) * 2008-03-04 2008-09-10 浙江大学 Unified accelerated data processing method in integrated security management
CN102932458A (zh) * 2012-11-02 2013-02-13 上海电机学院 Hardware acceleration system for the PPP protocol and implementation method thereof
CN104899085A (zh) * 2015-05-29 2015-09-09 华为技术有限公司 Data processing method and apparatus

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3291089A4 *

Also Published As

Publication number Publication date
CN104899085B (zh) 2018-06-26
EP3291089B1 (en) 2021-03-17
CN104899085A (zh) 2015-09-09
EP3291089A1 (en) 2018-03-07
EP3291089A4 (en) 2018-04-04
US10432506B2 (en) 2019-10-01
US20180083864A1 (en) 2018-03-22

Similar Documents

Publication Publication Date Title
WO2016192573A1 (zh) Data processing method and apparatus
CN109074330B (zh) 网络接口卡、计算设备以及数据包处理方法
US10397120B2 (en) Service link selection control method and device
CN109479028B (zh) 网络接口卡、计算设备以及数据包处理方法
CN111131037B (zh) 基于虚拟网关的数据传输方法、装置、介质与电子设备
JP6881861B2 (ja) パケット処理方法および装置
CN108768667B (zh) 一种用于多核处理器片内核间网络通信的方法
US20190324930A1 (en) Method, device and computer program product for enabling sr-iov functions in endpoint device
US20160149806A1 (en) Software-defined network (sdn) system using host abstraction, and method for implementing the same
US20190158627A1 (en) Method and device for generating forwarding information
US11206216B2 (en) Flexible ethernet frame forwarding method and apparatus
CN106713183B (zh) 网络设备的接口板以及该网络设备和报文转发方法
WO2022121707A1 (zh) 报文传输方法、设备及系统
WO2016095571A1 (zh) 一种建立组播隧道的方法及装置
US10339091B2 (en) Packet data processing method, apparatus, and system
WO2023179457A1 (zh) 业务连接的标识方法、装置、系统及存储介质
CN105704023B (zh) 一种堆叠系统的报文转发方法、装置及堆叠设备
CN103475561B (zh) 虚拟通信链路动态开关方法和装置
WO2023093065A1 (zh) 数据传输方法、计算设备及计算系统
WO2021110056A1 (zh) 数据处理方法、装置、设备及存储介质
CN116016030A (zh) 以太虚拟专用网的数据处理方法、装置、交换机及存储介质
CN116346719A (zh) 一种mac表项的同步方法及装置
CN118113131A (zh) 芯片功耗的管理方法、装置、系统及计算机可读存储介质
CN112866180A (zh) 数据处理电路、装置以及方法
EP2860920A1 (en) Method and device for generating forwarding table

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16802496

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2016802496

Country of ref document: EP