WO2016107598A1 - Service acceleration method and apparatus - Google Patents

Service acceleration method and apparatus

Info

Publication number
WO2016107598A1
Authority
WO
WIPO (PCT)
Prior art keywords
acceleration
processing unit
service processing
service
accelerated
Prior art date
Application number
PCT/CN2015/100116
Other languages
English (en)
French (fr)
Inventor
石仔良
王琦
刘涛
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to EP15875274.1A (EP3226468B1)
Publication of WO2016107598A1
Priority to US15/639,274 (US10545896B2)

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/50 Network services
    • H04L 67/51 Discovery or management thereof, e.g. service location protocol [SLP] or web services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/36 Handling requests for interconnection or transfer for access to common bus or bus system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/38 Information transfer, e.g. on bus
    • G06F 13/42 Bus transfer protocol, e.g. handshake; Synchronisation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 30/00 Reducing energy consumption in communication networks
    • Y02D 30/50 Reducing energy consumption in communication networks in wire-line communication networks, e.g. low power modes or reduced link rate

Definitions

  • the present invention relates to the field of electronic technologies, and in particular, to a service acceleration method and apparatus.
  • In communication networks, different services are usually implemented on different dedicated hardware, such as firewalls, load balancing devices, switches, routers, and network management systems. Such dedicated equipment suffers from strong hardware-software coupling, high maintenance costs, and slow service deployment, and it is increasingly difficult to find deployment space and provide power for it.
  • Hardware-based dedicated equipment also reaches the end of its life cycle quickly, which forces operators to repeat the "design-integrate-deploy" cycle continuously; costs keep rising while revenue keeps falling, so operators face enormous challenges.
  • To address this, operators proposed the concept of Network Functions Virtualisation (NFV), which leverages IT virtualization technology to implement network functions such as routers, switches, firewalls, and network storage devices in software on standard IT servers, standardizing and simplifying communication network hardware so as to reduce cost and enable rapid deployment and innovation of services.
  • However, software running on standard IT servers often cannot meet the performance and delay targets of communication networks, so hardware acceleration devices are required to accelerate the services.
  • In one existing approach, the hardware acceleration device is connected to the service processing unit in the form of a plug-in card, or a dedicated acceleration chip is connected to the service processing unit through PCB routing, so that the service processing unit exclusively occupies the accelerator resource.
  • In another approach, accelerators are connected to the service processing units through a network, so that the service processing units share the accelerator resources in a time-sharing manner.
  • The prior art provides a dynamic method and an information processing system for managing accelerator resources.
  • The system initially associates accelerator resources with service processing units one by one, for example assigning accelerator resource A to service processing unit A, accelerator resource B to service processing unit B, and accelerator resource C to service processing unit C.
  • An allocation manager monitors the performance statistics of the services running on all service processing units.
  • When a service processing unit, say unit A, cannot meet its performance target, the allocation manager analyzes the workload of the other service processing units. If reassigning the accelerator resource of another unit, say unit C, to unit A would make C's performance degradation smaller than A's performance gain, or if A's performance gain exceeds a set threshold, the accelerator resource corresponding to unit C is allocated to unit A, and unit C loses that accelerator resource.
  • This prior art collects the working conditions of all service processing units through the system allocation manager and dynamically rebinds accelerator resources to service processing units, which improves accelerator utilization to some extent. However, at any moment one accelerator resource can be used by only one service processing unit: increasing the accelerator resources of one unit to improve its performance necessarily degrades the performance of another unit. The accelerator resources therefore cannot be rationally utilized, resulting in waste of resources.
  • In view of this, the present invention provides a service acceleration method and apparatus, which solve the prior-art problem that accelerator resources cannot be rationally utilized, causing waste of resources.
  • A service acceleration device is provided, the device being connected to a plurality of service processing units and a plurality of acceleration engines, the device comprising:
  • a resource pool forming module configured to query acceleration type and idle acceleration capability information of the multiple acceleration engines, and form an accelerator resource pool according to the acceleration type and idle acceleration capability information;
  • a determining module configured to: after receiving an acceleration request of a first service processing unit of the plurality of service processing units, determine, according to the acceleration request, a first acceleration capability and a first acceleration type requested by the first service processing unit, and determine whether a first quantity of idle acceleration capability corresponding to the first acceleration type in the accelerator resource pool is greater than a second quantity required by the first acceleration capability;
  • an acceleration capability allocation module configured to: when the first quantity is greater than the second quantity, allocate, from the accelerator resource pool according to a preset allocation granularity, a first idle acceleration capability corresponding to the first acceleration type and of the first acceleration capability, together with a connection number; wherein the allocation granularity is the minimum allocation unit by which idle acceleration capability in the accelerator resource pool is allocated;
  • a chain building module configured to send the connection number to the first service processing unit, so that the first service processing unit establishes a link with the service acceleration device according to the connection number;
  • a forwarding module configured to send the to-be-accelerated message received through the link to at least one acceleration engine, among the multiple acceleration engines, that provides the idle acceleration capability, for acceleration processing, and to feed back the result message obtained after the acceleration processing to the first service processing unit.
  • the apparatus further includes:
  • an indication information adding module configured to: when the acceleration request requests acceleration processing of multiple acceleration types on the to-be-accelerated message, determine, from the multiple acceleration engines, multiple target acceleration engines corresponding to the multiple acceleration types, generate routing information according to the identification information of the multiple target acceleration engines, and add the routing information to the to-be-accelerated message, so that an acceleration engine that receives the to-be-accelerated message forwards it, according to the routing information, to the target acceleration engine indicated by the routing information for acceleration processing.
  • the forwarding module is further configured to: add, in each to-be-accelerated message, a sequence number identifying that message; and, after receiving the result messages obtained after the acceleration processing, determine, according to whether the sequence numbers are consecutive, whether the acceleration processing of the to-be-accelerated messages is abnormal, and if abnormal, send a retransmission indication to the first service processing unit.
  • the forwarding module is further configured to: after receiving the to-be-accelerated message, obtain a storage address carried in the to-be-accelerated message, where the storage address corresponds to a first storage area in the first service processing unit; when the result message obtained after the acceleration processing is fed back to the first service processing unit, the result message is written into the first storage area by means of RDMA.
  • the apparatus further includes:
  • a recovery module configured to: after receiving an acceleration resource release request sent by the first service processing unit, delete the link between the first service processing unit and the service acceleration device, release the first idle acceleration capability, and then update the accelerator resource pool.
  • a service acceleration method is provided, the method being applied to a service acceleration device, the device being connected to a plurality of service processing units and a plurality of acceleration engines, the method comprising:
  • wherein the allocation granularity is the minimum allocation unit by which idle acceleration capability in the accelerator resource pool is allocated;
  • the to-be-accelerated message received through the link is sent to at least one acceleration engine, among the plurality of acceleration engines, that provides the idle acceleration capability, for acceleration processing.
  • the method further includes:
  • when it is determined that the acceleration request requests acceleration processing of multiple acceleration types on the to-be-accelerated message, determining, from the plurality of acceleration engines, multiple target acceleration engines corresponding to the multiple acceleration types, generating routing information according to the identification information of the multiple target acceleration engines, and adding the routing information to the to-be-accelerated message, so that an acceleration engine that receives the to-be-accelerated message forwards it, according to the routing information, to the target acceleration engine indicated by the routing information for acceleration processing.
  • before the to-be-accelerated message received through the link is distributed to the at least one acceleration engine, among the multiple acceleration engines, that provides the idle acceleration capability for acceleration processing, the method further includes:
  • the method further includes:
  • obtaining a storage address carried in the to-be-accelerated message, where the storage address corresponds to a first storage area in the first service processing unit;
  • the feeding back the result message obtained after the acceleration processing to the first service processing unit includes:
  • the result message is written to the first storage area by the RDMA method.
  • the method further comprises:
  • The method and apparatus provided by the embodiments of the present invention integrate the acceleration resources provided by the multiple acceleration engines into one accelerator resource pool, manage the acceleration resources in the pool in a unified manner, and quantitatively allocate acceleration resources from the pool to the individual service processing units that apply for service acceleration.
  • The acceleration engines and the service processing units are connected through a network; the link between a service processing unit and the accelerator resource pool is established in real time according to the unit's requirements, and when the link is established, the service processing unit applies to the accelerator resource pool for a quantified acceleration capability.
  • The device provided by the present invention completes the quantitative allocation of the accelerator resource pool's acceleration capability by means of flow control, and the service processing unit releases its link to the accelerator after the acceleration is completed, so that all service processing units fully share the accelerator resources.
  • FIG. 1 is a schematic structural diagram of a service acceleration system for dynamically allocating accelerator resources provided by the prior art
  • FIG. 2 is a schematic structural diagram of a service acceleration apparatus according to an embodiment of the present invention.
  • FIG. 3 is a schematic flowchart of a service acceleration method according to an embodiment of the present disclosure
  • FIG. 4 is a schematic structural diagram of an acceleration management apparatus according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of another acceleration management apparatus according to an embodiment of the present disclosure.
  • FIG. 6 is a schematic structural diagram of an acceleration system according to an embodiment of the present invention.
  • Specifically, the service acceleration device integrates the acceleration resources provided by the multiple acceleration engines into one accelerator resource pool, manages the acceleration resources in the pool in a unified manner, and quantitatively allocates acceleration resources from the pool to the individual service processing units that apply for service acceleration.
  • an embodiment of the present invention provides a service acceleration device, where the device is connected to multiple service processing units and multiple acceleration engines, and the device includes:
  • The service acceleration device provided in the embodiment of the present invention may be disposed on the same hardware device as the multiple acceleration engines.
  • Alternatively, the service acceleration device and the multiple acceleration engines are disposed on different hardware devices, in which case the hardware device carrying the service acceleration device and the hardware devices carrying the acceleration engines are connected through some interconnection. When the service acceleration device needs to use one or several acceleration engines, it establishes a connection with the acceleration engine to be used and then invokes that engine.
  • the resource pool forming module 201 is configured to query the acceleration type and the idle acceleration capability information of the multiple acceleration engines, and form an accelerator resource pool according to the acceleration type and the idle acceleration capability information;
  • The accelerator resource pool is composed of the acceleration resources of multiple acceleration engines.
  • Different acceleration engines may correspond to different types (in this embodiment, the type of an acceleration engine refers to the kind of acceleration processing the engine performs on data messages; for example, an acceleration engine that performs encoding acceleration is of the encoding acceleration type). In the accelerator resource pool, the acceleration resources can therefore be classified, and the acceleration resources provided by acceleration engines of the same type are counted together.
  • For example, if there are three acceleration engines of the encoding type, then in the accelerator resource pool the encoding acceleration resource is counted as the sum of the idle acceleration capabilities of those three engines.
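  • The pooling described above can be sketched as follows (illustrative Python, not part of the patent; engine identifiers, types, and capability units are hypothetical):

```python
def build_resource_pool(engines):
    """Group engines by acceleration type and sum their idle capability."""
    pool = {}
    for engine in engines:
        pool[engine["type"]] = pool.get(engine["type"], 0) + engine["idle_capability"]
    return pool

# Three encoding engines and one encryption engine (hypothetical figures).
engines = [
    {"id": "AE1", "type": "encoding", "idle_capability": 2},
    {"id": "AE2", "type": "encoding", "idle_capability": 1},
    {"id": "AE3", "type": "encoding", "idle_capability": 3},
    {"id": "AE4", "type": "encryption", "idle_capability": 4},
]

pool = build_resource_pool(engines)
print(pool["encoding"])  # the pooled encoding resource is the sum: 6
```

The point of the sketch is that a request is matched against the per-type sum, not against any single engine.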
  • The determining module 202 is configured to: after receiving the acceleration request of the first service processing unit of the plurality of service processing units, determine, according to the acceleration request, the first acceleration capability and the first acceleration type requested by the first service processing unit, and determine whether the first quantity of idle acceleration capability corresponding to the first acceleration type in the accelerator resource pool is greater than the second quantity required by the first acceleration capability.
  • In the prior art, each service processing unit corresponds to an individual acceleration engine; if no single acceleration engine in the system can meet the requirements of a service processing unit, the unit's application cannot be served.
  • In this embodiment, the acceleration engines are integrated into a resource pool, so that if the combined idle resources of several acceleration engines of the same type can meet the requirements of the service processing unit, the unit's application can be served.
  • The acceleration capability allocation module 203 is configured to: when the first quantity is greater than the second quantity, allocate, from the accelerator resource pool according to a preset allocation granularity, a first idle acceleration capability corresponding to the first acceleration type and of the first acceleration capability, together with a connection number; wherein the allocation granularity is the minimum allocation unit by which idle acceleration capability in the accelerator resource pool is allocated.
  • The allocation granularity corresponds to the acceleration type of the actual acceleration engine.
  • For example, if the acceleration capability of acceleration engine 1 is the codec acceleration processing capability of two channels of H.265, the corresponding allocation granularity may be the codec processing capability of one channel of H.265.
  • The allocation granularity can take a default value or a system-configured value.
  • The allocation granularity of other acceleration types may be, for example: the codec capability of one channel of video, a compression/decompression capability of 1 Gbps, an encryption/decryption processing capability of 1 Gbps, and the like.
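  • A minimal sketch of the quantitative allocation, assuming the request is rounded up to whole granularity units and a connection number is issued when the pooled idle capability suffices (all names and numbers are hypothetical):

```python
import itertools
import math

_conn_numbers = itertools.count(1)  # connection numbers handed to requesters

def allocate(pool, acc_type, requested, granularity):
    """Round the request up to whole granularity units, check the pooled idle
    capability (the 'first quantity') against the need (the 'second quantity'),
    and reserve the capability together with a fresh connection number."""
    amount = math.ceil(requested / granularity) * granularity
    if pool.get(acc_type, 0) < amount:
        return None  # the pool cannot serve this application
    pool[acc_type] -= amount
    return {"conn": next(_conn_numbers), "amount": amount}

pool = {"h265_codec": 2.0}                      # e.g. 2 channels of H.265
grant = allocate(pool, "h265_codec", 1.0, 1.0)  # apply for 1 channel
print(grant["amount"], pool["h265_codec"])      # → 1.0 1.0
```

The returned connection number is what the service processing unit would then use to establish its link.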
  • a chain building module 204 configured to send the connection number to the first service processing unit, so that the first service processing unit establishes a link with the service acceleration device according to the connection number;
  • The forwarding module 205 is configured to send the to-be-accelerated message received through the link to at least one acceleration engine, among the multiple acceleration engines, that provides the idle acceleration capability, for acceleration processing, and to feed back the result message obtained after the acceleration processing to the first service processing unit.
  • The forwarding module 205 is further configured to: add, in each to-be-accelerated message, a sequence number identifying that message; and, after receiving the result messages obtained after the acceleration processing, determine, according to whether the sequence numbers are consecutive, whether the acceleration processing of the to-be-accelerated messages is abnormal, and if abnormal, send a retransmission indication to the first service processing unit.
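  • The sequence-number check can be sketched as follows (a hedged illustration, not the patent's implementation; any gap in the returned sequence numbers signals an abnormality that triggers a retransmission indication):

```python
def find_missing(received_seqs, first, last):
    """Return the sequence numbers absent from the contiguous range [first, last]."""
    return sorted(set(range(first, last + 1)) - set(received_seqs))

returned = [1, 2, 4, 5]                # result messages that came back
missing = find_missing(returned, 1, 5)
print(missing)                         # → [3]; non-empty means request a retransmission
```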
  • In order to ensure that each acceleration engine can process the to-be-accelerated messages in time after receiving them, the device performs Quality of Service (QoS) control on each stream according to the acceleration capability allocated to the service processing unit.
  • If the traffic sent by a service processing unit exceeds the acceleration capability it applied for when the link was established, the device returns NAK information indicating that the service processing unit should wait for a period of time and then retransmit the message; that is, traffic of the corresponding service processing unit that exceeds its allocated share is throttled according to the allocation.
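  • A simple per-link policer illustrates this flow control (an assumption-laden sketch; the refill model and NAK handling are hypothetical, not taken from the patent):

```python
class LinkPolicer:
    """Police one link against the capability applied for at link setup."""

    def __init__(self, granted_per_tick):
        self.granted = granted_per_tick
        self.used = 0

    def new_tick(self):
        self.used = 0  # the budget refills every scheduling tick

    def admit(self, size):
        if self.used + size > self.granted:
            return "NAK"  # over the applied-for share: wait and retransmit
        self.used += size
        return "ACK"

link = LinkPolicer(granted_per_tick=3)
replies = [link.admit(1) for _ in range(4)]
print(replies)  # → ['ACK', 'ACK', 'ACK', 'NAK']
```

Policing at admission is what lets the device avoid buffering messages inside the accelerator.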
  • In order to realize real-time sharing of the acceleration resources and reduce the idleness and waste of accelerator resources, the device provided by this embodiment of the present invention releases the corresponding acceleration resources after completing the acceleration task applied for by each service processing unit. The device therefore further comprises:
  • a recovery module configured to: after receiving an acceleration resource release request sent by the first service processing unit, delete the link between the first service processing unit and the service acceleration device, release the first idle acceleration capability, and then update the accelerator resource pool.
  • The service acceleration device only completes the acceleration processing of the service messages and does not need to cache the messages; resource allocation is completed in the link-establishment request phase in which the service processing unit applies for acceleration resources, and the service acceleration device controls the traffic of the messages between the service processing unit and the acceleration engine, so that the messages entering the acceleration engine can be processed by the engine in time and returned to the service processing unit. The service acceleration device therefore does not need to first transmit a cache address to the service processing unit.
  • Accordingly, the service processing unit does not need to obtain the cache address of the service acceleration device according to the standard RDMA protocol; the storage address to which the service acceleration device should feed back the result message can be supplied by the service processing unit before sending the message, avoiding later address acquisition through multiple interactions. When the link established between the first service processing unit and the service acceleration device is an RDMA connection, in the device:
  • The forwarding module 205 is further configured to: after receiving the to-be-accelerated message, obtain the storage address carried in the to-be-accelerated message, where the storage address corresponds to a first storage area in the first service processing unit; correspondingly, when the result message obtained after the acceleration processing is fed back to the first service processing unit, the result message is written into the first storage area by means of RDMA.
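  • The address-carrying scheme can be illustrated as follows (a minimal sketch in which a plain dictionary stands in for the first storage area and a direct write stands in for the RDMA write; the message fields are hypothetical):

```python
def handle(message, host_memory, accelerate):
    """Accelerate the payload and write the result straight to the storage
    address carried in the message (standing in for an RDMA write)."""
    result = accelerate(message["payload"])
    host_memory[message["storage_addr"]] = result
    return result

host_memory = {}  # stands in for the first storage area of the service unit
msg = {"storage_addr": 0x1000, "payload": b"abc"}
handle(msg, host_memory, accelerate=lambda p: p.upper())
print(host_memory[0x1000])  # → b'ABC'
```

Because the destination address travels with the request, no separate address-exchange round trip is needed.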
  • When the to-be-accelerated message requires multiple types of acceleration processing, routing information is carried in the header of the to-be-accelerated message, and the message is transmitted directly between the acceleration engines in turn, undergoing the various acceleration processing steps before being returned to the service processing unit, so as to avoid multiple transmissions of the message between the service processing unit and the accelerator resource pool.
  • the device further comprises:
  • an indication information adding module configured to: when the acceleration request requests acceleration processing of multiple acceleration types on the to-be-accelerated message, determine, from the multiple acceleration engines, the multiple target acceleration engines corresponding to the multiple acceleration types, generate routing information according to the identification information of the multiple target acceleration engines, and add the routing information to the to-be-accelerated message;
  • the acceleration engine that receives the to-be-accelerated message forwards it, according to the routing information, to the target acceleration engine indicated by the routing information for acceleration processing.
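  • The routing information can be sketched as a list of target engine identifiers carried in the message header, with each engine forwarding to the next hop (illustrative only; engine names and the header format are hypothetical):

```python
def run_chain(message, engines):
    """Each engine processes the payload and forwards to the next hop named
    in the routing information, until the route is exhausted."""
    while message["route"]:
        next_hop = message["route"].pop(0)
        message["payload"] = engines[next_hop](message["payload"])
    return message["payload"]

engines = {
    "AE_compress": lambda p: p + "|compressed",
    "AE_encrypt":  lambda p: p + "|encrypted",
}
msg = {"route": ["AE_compress", "AE_encrypt"], "payload": "data"}
out = run_chain(msg, engines)
print(out)  # → data|compressed|encrypted
```

The message visits every required acceleration type without returning to the service processing unit in between.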
  • All the acceleration engines connected to the service acceleration device provided by this embodiment of the present invention can be allocated to multiple users for simultaneous shared use, realizing virtualization and quantitative allocation of accelerator resources.
  • Each service processing unit applies for acceleration resources in real time and on demand, and releases them after the acceleration of its service is completed; the traffic of the to-be-accelerated service sent by the service processing unit is controlled according to the amount applied for, realizing real-time sharing of the acceleration resources and reducing the idleness and waste of accelerator resources.
  • When the connection between the service processing unit and the service acceleration device is an RDMA connection, based on the characteristics of the service processing between them in the present invention, the interaction process of the RDMA protocol is simplified, the message transmission delay between the service processing unit and the service acceleration device is reduced, the load on the CPU of the service processing unit is reduced, and the performance of the system is improved.
  • The routing information added to the to-be-accelerated message allows the message to be forwarded between the multiple acceleration engines according to that information, so that a service message requiring multiple acceleration steps needs neither multiple transmissions between the service processing unit and the accelerator nor multiple forwardings through an accelerator management module inside the accelerator, which shortens the processing delay of multi-step acceleration services and improves the performance of the system.
  • an embodiment of the present invention further provides a service acceleration method, where the method is applied to a service acceleration device, where the device is connected to multiple service processing units and multiple acceleration engines, and the method includes:
  • Step 301 Query an acceleration type and idle acceleration capability information of the multiple acceleration engines, and form an accelerator resource pool according to the acceleration type and idle acceleration capability information;
  • The accelerator resource pool is composed of the acceleration resources of multiple acceleration engines.
  • Different acceleration engines may correspond to different types (in this embodiment, the type of an acceleration engine refers to the kind of acceleration processing the engine performs on data messages; for example, an acceleration engine that performs encoding acceleration is of the encoding acceleration type). In the accelerator resource pool, the acceleration resources can therefore be classified, and the acceleration resources provided by acceleration engines of the same type are counted together.
  • For example, if there are three acceleration engines of the encoding type, then in the accelerator resource pool the encoding acceleration resource is counted as the sum of the idle acceleration capabilities of those three engines.
  • The acceleration capability and the acceleration type of each acceleration engine can be recorded in the form of a table, as shown in Table 1.
  • Table 1 is formed from the acceleration capability information and acceleration type information acquired from each acceleration engine, where each row represents the current state of one acceleration engine. For example, the first row corresponds to acceleration engine 1 (acceleration engine number AE1); its acceleration type (Type1) may specifically be encoding, decoding, compression, decompression, and so on; its acceleration capability (Availability1) is the total acceleration capability of AE1; its allocation granularity (Cfg1) is the minimum allocation unit corresponding to the acceleration type of the engine, for example the decoding acceleration processing capability of one channel of H.262, or an encryption/decryption processing capability of 2 Gbps.
  • The idle acceleration capability is the currently idle acceleration capability of the acceleration engine; if the value is 0, the acceleration engine is fully busy.
  • Step 302 After receiving the acceleration request of the first service processing unit of the multiple service processing units, determine, according to the acceleration request, the first acceleration capability and the first acceleration type requested by the first service processing unit, and determine whether the first quantity of the idle acceleration capability corresponding to the first acceleration type in the accelerator resource pool is greater than the second quantity required by the first acceleration capability.
  • In the prior art, each service processing unit corresponds to an individual acceleration engine; if no single acceleration engine in the system can meet the requirements of a service processing unit, the unit's application cannot be served.
  • In this embodiment, the acceleration engines are integrated into a resource pool, so that if the combined idle resources of several acceleration engines of the same type can meet the requirements of the service processing unit, the unit's application can be served.
  • Step 303: When the first quantity is greater than the second quantity, allocate, from the accelerator resource pool according to a preset allocation granularity, a first idle acceleration capability corresponding to the first acceleration type of the first acceleration capability, together with a connection number;
  • where the allocation granularity is the preset minimum allocation unit of idle acceleration capability in the accelerator resource pool.
  • For an actual acceleration engine, the allocation granularity corresponds to its acceleration type.
  • For example, if the acceleration capability of acceleration engine 1 is 2-channel H265 codec acceleration processing capability, the corresponding allocation granularity may be 1-channel H265 codec acceleration processing capability.
  • The allocation granularity can take a default value or a system-configured value.
  • Allocation granularities for other acceleration types may be: the codec capability of 1 channel of video, 1 Gbps of compression/decompression capability, 1 Gbps of encryption/decryption processing capability, and the like.
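Steps 302–303 can be sketched as a pooled-capability check followed by carving the request out of individual engines in whole granularity units. The data layout and the `allocate` helper are illustrative assumptions (capability is expressed in abstract units, and the request is assumed to be a multiple of each engine's granularity):

```python
# Hypothetical pool rows: engine id, acceleration type, granularity (Cfg), idle units.
pool = [
    {"id": "AE1", "type": "encode", "cfg": 1, "idle": 2},
    {"id": "AE2", "type": "encode", "cfg": 1, "idle": 2},
    {"id": "AE3", "type": "encode", "cfg": 1, "idle": 0},  # fully busy
]

def allocate(pool, accel_type, amount):
    """Check the pooled idle capability (the 'first quantity') against the
    requested amount (the 'second quantity'); if sufficient, grant capability
    from one or more engines in whole granularity units and refresh the table."""
    idle_total = sum(e["idle"] for e in pool if e["type"] == accel_type)
    if idle_total < amount:
        return None                      # the pool cannot satisfy the application
    grants, need = [], amount
    for e in pool:
        if need == 0:
            break
        if e["type"] != accel_type or e["idle"] == 0:
            continue
        units = min(e["idle"], need)
        units -= units % e["cfg"]        # allocate only in whole granularity units
        if units == 0:
            continue
        e["idle"] -= units               # refresh the table; idle may drop to 0 (Table 2)
        grants.append((e["id"], units))
        need -= units
    return grants
```

A request for 3 encode units against this pool would be satisfied by drawing 2 units from AE1 and 1 unit from AE2, leaving AE2 with 1 idle unit, which mirrors how the first unit's application can empty several engines at once.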
  • Step 304 Send the connection number to the first service processing unit, so that the first service processing unit establishes a link with the service acceleration device according to the connection number.
  • After the acceleration resources are allocated, the content of Table 1 can also be refreshed to facilitate acceleration-resource applications from other service processing units.
  • If the application of the first service processing unit uses up all the idle acceleration capability of acceleration engine 2 and acceleration engine 3, the idle acceleration capability of acceleration engine 2 (identifier AE2) and acceleration engine 3 (identifier AE3) in Table 1 is marked as 0.
  • The specific parameters are shown in Table 2:
  • Step 305: Distribute the to-be-accelerated packets received over the link to at least one acceleration engine, among the multiple acceleration engines, that provides the idle acceleration capability for acceleration processing, and feed back the result packets obtained after acceleration processing to the first service processing unit.
  • Because the acceleration resource granted to a service processing unit may be assembled from the idle resources of multiple acceleration engines, after a to-be-accelerated packet is received from the service processing unit, the packet may need to be sent to the corresponding acceleration engine for processing according to the mapping between the granted acceleration capability and the acceleration engines. Therefore, in the method provided by this embodiment, before the to-be-accelerated packets received over the link are distributed to the at least one acceleration engine providing the idle acceleration capability, the method further includes:
  • adding, to each to-be-accelerated packet, a sequence number identifying the packet; in a specific embodiment, the packets of the same service processing unit are additionally marked with a stream number.
  • When forwarding a packet, the destination AE is identified from the stream number: stream numbers AE11 and AE12 indicate packet 1 and packet 2 that need to be sent to AE1 for acceleration processing, while stream numbers AE33 and AE34 indicate packets that need to be sent to AE3 for processing.
  • Correspondingly, after the result packets obtained after acceleration processing are received, whether the acceleration of the to-be-accelerated packets is abnormal is determined according to whether the sequence numbers are consecutive; if abnormal, a retransmission indication is sent to the first service processing unit.
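The stream-number convention above can be pictured as a tiny dispatch step that recovers the destination engine from the tag carried by each packet. The tag layout (`AE<engine digit><packet digit>`) is an illustrative reading of the AE11/AE12 → AE1 and AE33/AE34 → AE3 examples, not a format defined by the patent:

```python
def destination_engine(stream_tag):
    """Map a stream tag such as 'AE12' or 'AE34' to its destination engine.

    Assumed layout: the digit right after 'AE' names the engine, and the
    remaining digits number the packet within that stream.
    """
    assert stream_tag.startswith("AE") and len(stream_tag) >= 4
    return "AE" + stream_tag[2]

# Forwarding: group one unit's packets by the engine that must process them.
packets = [("AE11", b"p1"), ("AE12", b"p2"), ("AE33", b"p3"), ("AE34", b"p4")]
by_engine = {}
for tag, payload in packets:
    by_engine.setdefault(destination_engine(tag), []).append(payload)
print(by_engine)  # {'AE1': [b'p1', b'p2'], 'AE3': [b'p3', b'p4']}
```

The per-packet sequence numbers would then let the apparatus detect a gap in the returned results and signal a retransmission.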
  • For a service that requires multiple types of acceleration, in order to transmit the to-be-accelerated packet to the accelerator resource pool only once, the solution provided by this embodiment can carry acceleration routing information in the header of the to-be-accelerated packet.
  • With the routing information, the packet is transmitted directly from one acceleration engine to the next in turn, undergoes the various acceleration processes, and is then returned to the service processing unit, avoiding multiple transfers of the packet between the service processing unit and the accelerator resource pool.
  • Correspondingly, before the to-be-accelerated packets are distributed for acceleration processing, the method further includes:
  • when it is determined that the acceleration application requests multiple types of acceleration processing on the to-be-accelerated packet, determining, from the multiple acceleration engines, multiple target acceleration engines corresponding to the multiple acceleration types; generating routing information from the identification information of the multiple target acceleration engines, and adding the routing information to the to-be-accelerated packet, so that an acceleration engine that receives the to-be-accelerated packet forwards it, according to the routing information, to the target acceleration engine indicated by the routing information for acceleration processing.
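The routing information can be thought of as an ordered list of target engine identifiers that each engine consumes before forwarding. The toy model below is a sketch under that assumption; the engine "work" is a string tag standing in for real acceleration processing:

```python
def build_routing_info(target_engines):
    """Routing information: the ordered identifiers of the target engines."""
    return list(target_engines)

def process_at_engine(engine_id, packet):
    """Each engine applies its own acceleration, then forwards along the route."""
    route, payload = packet
    payload = payload + f"+{engine_id}"  # stand-in for real acceleration work
    next_hops = route[1:]                # this engine consumes the first hop
    return (next_hops, payload)

# A packet that needs, say, decoding on AE1 followed by encryption on AE4.
packet = (build_routing_info(["AE1", "AE4"]), "payload")
while packet[0]:                         # forward engine-to-engine until done
    packet = process_at_engine(packet[0][0], packet)
print(packet[1])  # payload+AE1+AE4 -> returned to the service processing unit
```

The packet thus enters the pool once, hops directly between engines, and only the final result crosses back to the service processing unit.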
  • In this embodiment, the service acceleration apparatus only performs the acceleration processing on the packets and does not need to buffer them, and resource allocation is already completed in the link-setup request phase in which the service processing unit applies for acceleration resources. The service acceleration apparatus controls the traffic of packets between the service processing unit and the acceleration engines, so that every packet entering an acceleration engine can be processed by it in time and returned to the service processing unit; the service acceleration apparatus therefore does not need to send a buffer address to the service processing unit first.
  • Consequently, if the connection between the service acceleration apparatus and the service processing unit is a remote direct memory access (RDMA) connection, the service processing unit does not need to obtain the buffer address of the service acceleration apparatus according to the standard RDMA protocol; moreover, the storage address into which the service acceleration apparatus writes the result packets can be applied for by the service processing unit before it sends the packets, avoiding later address acquisition through multiple interactions.
  • When the link established between the first service processing unit and the service acceleration apparatus is an RDMA connection, before the to-be-accelerated packets received over the link are distributed to the at least one acceleration engine providing the idle acceleration capability for acceleration processing, the method further includes:
  • after receiving the to-be-accelerated packet, obtaining the storage address carried in the to-be-accelerated packet, where the storage address corresponds to a first storage area in the first service processing unit;
  • correspondingly, feeding back the result packets obtained after acceleration processing to the first service processing unit includes:
  • writing the result packets into the first storage area in RDMA mode.
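Because the result's destination address travels inside the to-be-accelerated packet, the write-back can be pictured as a direct write into the unit's registered memory keyed by that address. The dictionary-as-memory model below is purely illustrative; real RDMA verbs and memory registration are not shown:

```python
# Hypothetical view of the first service processing unit's registered memory.
unit_memory = {0x1000: None}   # first storage area, keyed by its storage address

def accelerate(payload):
    return payload.upper()     # stand-in for the engine's acceleration work

def handle(packet):
    """The to-be-accelerated packet carries the storage address; the result
    packet is written straight into that area, RDMA-style, with no further
    address negotiation between the two sides."""
    addr, payload = packet["storage_address"], packet["payload"]
    unit_memory[addr] = accelerate(payload)  # direct write into the first storage area

handle({"storage_address": 0x1000, "payload": "frame-data"})
print(unit_memory[0x1000])  # FRAME-DATA
```

The point of the simplification is visible here: the address exchange happens once, up front, instead of per-transfer as in the standard RDMA flow.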
  • As shown in FIG. 4, the present invention further provides another acceleration management apparatus, connected to multiple service processing units and multiple acceleration engines, which includes at least one processor 401 (for example, a CPU), at least one network interface 402 or another communication interface, a memory 403, and at least one communication bus 404 used to implement connection and communication between these components.
  • The processor 401 is configured to execute executable modules, such as a computer program, stored in the memory 403.
  • The memory 403 may include a high-speed random access memory (RAM) and may also include a non-volatile memory, for example at least one disk memory.
  • A communication connection between the system gateway and at least one other network element is implemented through the at least one network interface 402 (which may be wired or wireless), using the Internet, a wide area network, a local area network, a metropolitan area network, or the like. The server is further provided with a file system for managing directories and files; each directory corresponds to a directory storage object, which includes an attribute list of the files or directories contained in the corresponding directory, and the attribute list includes the names and attribute information of those files or directories.
  • In some implementations, the memory stores a program 4031 that can be executed by the processor; the program performs the steps of the method described above: forming the accelerator resource pool, determining whether an acceleration application can be satisfied, allocating the first idle acceleration capability and the connection number at the preset allocation granularity, sending the connection number so that the link is established, and distributing the to-be-accelerated packets and feeding back the result packets.
  • In another embodiment, the acceleration management apparatus may also be implemented based on an FPGA.
  • Referring to FIG. 5, a schematic diagram of an FPGA-based acceleration management apparatus 50 according to an embodiment of the present invention, the apparatus includes an FPGA chip 51 and other auxiliary circuits 52 (such as a power supply circuit); the FPGA chip is programmed so that it has the functions mentioned in the foregoing embodiments (such as the function of performing the method shown in FIG. 3).
  • an embodiment of the present invention discloses an acceleration system 60, including an acceleration management device 61 and a plurality of acceleration engines 62.
  • The acceleration management device 61 can be implemented in the manners of FIG. 4 and FIG. 5, and the acceleration engine 62 is preferably implemented based on an FPGA chip, which can improve the processing speed compared with a general-purpose CPU chip and thereby accelerate services better.
  • For the interaction between the acceleration management device 61 and the individual acceleration engines 62, refer to the descriptions in the foregoing embodiments; details are not repeated here.
  • All the acceleration engines connected to the service acceleration apparatus provided by the embodiments of the present invention can be allocated on demand to multiple users for simultaneous shared use, realizing virtualization and quantitative allocation of accelerator resources.
  • In addition, each service processing unit applies to the accelerator for acceleration resources on demand in real time and releases them after the acceleration of its service is completed, and the traffic of the to-be-accelerated services sent by the service processing unit is controlled according to the amount applied for; this realizes real-time sharing of the acceleration resources and reduces the idleness and waste of accelerator resources.
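The per-unit traffic control mentioned here can be sketched as a simple credit check against the capability granted at link setup: packets beyond the applied-for amount are rejected so that every admitted packet can be processed in time. The credit accounting below is an assumed illustration, not the patent's concrete mechanism:

```python
class LinkTrafficControl:
    """Admit at most the granted amount of in-flight work per connection."""

    def __init__(self, granted_units):
        self.granted = granted_units  # capability applied for at link setup
        self.in_flight = 0

    def admit(self, units=1):
        if self.in_flight + units > self.granted:
            return False              # over the applied-for capability: drop / NAK
        self.in_flight += units
        return True

    def complete(self, units=1):
        self.in_flight -= units       # result fed back to the service processing unit

link = LinkTrafficControl(granted_units=2)
print([link.admit() for _ in range(3)])  # [True, True, False]
```

A rejected packet would be retried by the service processing unit once earlier work completes and credit is returned.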
  • If the connection between the service processing unit and the service acceleration apparatus is an RDMA connection, then, based on the characteristics of the service processing between the service processing unit and the service acceleration apparatus of the present invention, the interaction procedure of the RDMA protocol is also simplified, which reduces the packet transmission delay between the service processing unit and the service acceleration apparatus, lowers the load on the CPU of the service processing unit, and improves the performance of the system.
  • Further, if a to-be-accelerated packet requires multiple accelerations, routing information is added to the packet so that it can be forwarded among the multiple acceleration engines according to that information. A service packet requiring multiple accelerations thus needs neither multiple transfers between the service processing unit and the accelerator nor multiple forwardings by the accelerator management module inside the accelerator, which shortens the processing delay of multi-acceleration services and improves the performance of the system.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present invention discloses a service acceleration method and apparatus. The method is applied in a service acceleration apparatus connected to multiple service processing units and multiple acceleration engines, and includes: querying the acceleration types and idle-acceleration-capability information of the multiple acceleration engines, and forming an accelerator resource pool according to that information; after receiving an acceleration application from a first service processing unit among the multiple service processing units, allocating, from the accelerator resource pool according to a preset allocation granularity, a first idle acceleration capability corresponding to the first acceleration type of the first acceleration capability, together with a connection number; and feeding back the result packets obtained after acceleration processing to the first service processing unit. The method and apparatus provided by the present invention solve the prior-art problem that accelerator resources cannot be used rationally, resulting in wasted resources.

Description

一种业务加速方法及装置
本申请要求于2015年12月26日提交中国专利局、申请号为CN201510999854.X、发明名称为“一种业务加速方法及装置”的中国专利申请的优先权,其中,申请号为201510999854.X的中国专利申请要求于2014年12月31号提交中国专利局、申请号为CN201410856653.X、发明名称为“一种业务加速方法及装置”的中国专利申请的优先权,这两个专利申请的全部内容通过引用结合在本申请中。
技术领域
本发明涉及电子技术领域,尤其涉及一种业务加速方法及装置。
背景技术
目前通信网络中，不同的业务通常部署在不同的专用硬件上实现，如防火墙、负载均衡设备、交换机、路由器、网管等。繁多而复杂的专用硬件设备导致软硬件耦合强、维护成本高、业务部署慢等问题，而为这些专用硬件寻找部署空间、提供电源也变得更加困难。同时，随着业务多样性的快速创新发展，基于硬件的专用设备很快就走到生命周期的尽头，这需要运营商不断地“设计-集成-部署”，成本越来越高，而收益越来越少，使得运营商面临巨大的挑战。在这种背景下，运营商阵营提出了网络功能虚拟化（Network Functions Virtualisation，NFV）的概念：通过借用IT的虚拟化技术，在标准的IT服务器上通过软件实现诸如路由器、交换机、防火墙和网络存储设备等网络功能，来实现通信网络硬件设备的标准化和简单化，以实现降成本和业务的快速部署、创新。然而，标准的IT服务器运行的软件在许多场景下难以满足通信网络的性能和延时目标，因此需要硬件加速设备对业务进行加速。
除了NFV场景的加速需求,目前业界也有很多已有的硬件加速设备,比如图形加速卡、加解密加速卡、编解码加速卡以及其它业务加速芯片等。这种硬件加速设备,通过插卡的形式和业务处理连接在一起,或者专用的加速芯片通过PCB走线的方式和业务处理单元相连,业务处理单元独享加速器资源。也有加速器通过网络和业务单元相连,实现各业务处理单元分时共享加速器资源的方式。
如图1所示,现有技术提供了一种动态的管理加速器资源的方法和信息处理系统。该系统初始化完成后,系统初始地将加速器资源的集合和业务处理单元一一指定的对应起来,如初始化地将加速器资源A指派给业务处理单元A,将加速器资源B指派给业务 处理单元B,将加速器资源C指派给业务处理单元C。分配管理器负责监视所有业务单元上业务运行的性能统计信息,当某一业务处理单元A无法满足工作性能目标时,分配管理器会分析其它业务处理单元的工作负载情况,如果将其它另一业务处理单元C对应的加速器资源重新指派给某一无法满足工作性能目标的业务处理单元A,使得业务处理单元C性能下降值大于业务处理单元A的增加值,或者业务处理单元A的性能增加值大于设定的阀值,则将业务处理单元C对应的加速器资源分配给业务处理单元A,业务处理单元C将失去加速器资源。
上述现有技术通过系统分配管理器收集所有业务处理单元的工作情况,动态的分配加速器资源和业务处理单元的绑定关系,一定程度上提高了加速器资源的利用率,但是在任一时刻,某一个加速器资源只能被一个业务处理单元使用,增加一个业务处理单元的加速器资源,提升其工作性能,同时也会使得另一业务处理单元的工作性能的下降,所以加速器资源不能合理利用,从而出现资源浪费的问题。
发明内容
本发明提供一种业务加速方法及装置,本发明所提供的方法和装置解决现有技术中加速器资源不能合理利用,从而出现资源浪费的问题。
第一方面,提供一种业务加速装置,该装置与多个业务处理单元和多个加速引擎相连,该装置包括:
资源池形成模块,用于查询所述多个加速引擎的加速类型和空闲加速能力信息,根据所述加速类型和空闲加速能力信息形成加速器资源池;
判断模块,用于在接收到所述多个业务处理单元中的第一业务处理单元的加速申请后,根据该加速申请确定所述第一业务处理单元请求的第一加速能力和第一加速类型,判断所述加速器资源池中所述第一加速类型对应的空闲加速能力的第一数量是否大于所述第一加速能力所需求的第二数量;
加速能力分配模块,用于当所述第一数量大于第二数量,则按照预设的分配粒度从所述加速器资源池中分配与所述第一加速能力的第一加速类型对应的第一空闲加速能力和连接号;其中,所述分配粒度为预设地分配所述加速器资源池中空闲加速能力的最小分配单位;
建链模块,用于将所述连接号发送给所述第一业务处理单元,使得第一业务处理单元根据所述连接号与业务加速装置建立链接;
转发模块,用于将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理,并将加速处理后得到的结果报文反馈给所述第一业务处理单元。
结合第一方面,在第一种可能的实现方式中,该装置还包括:
指示信息添加模块,用于在确定所述加速申请请求对所述待加速报文进行多种加速类型的加速处理时,则从所述多个加速引擎中确定所述多种加速类型对应的多个目标加速引擎,根据所述多个目标加速引擎的标识信息生成路由信息,并将所述路由信息添加到所述待加速报文中;使得接收到所述待加速报文的加速引擎根据该路由信息将该待加速报文转发到所述路由信息所指示的目标加速引擎进行加速处理。
结合第一方面,或者第一方面的第一种可能的实现方式,在第二种可能的实现方式中,所述转发模块还用于在所述待加速报文中添加标示每个待加速报文的序列号;并在接收到加速处理后得到的结果报文后,根据所述序列号是否连续确定所述待加速报文的加速处理是否异常,如果异常则给所述第一业务处理单元发送重传指示。
结合第一方面,或者第一方面的第一至二种可能的实现方式,在第三种可能的实现方式中,当第一所述业务处理单元与业务加速装置建立的链接为远程直接存储器存取RDMA连接,则所述转发模块还用于在接收到所述待加速报文后,获取所述待加速报文中携带的存储地址,所述存储地址对应第一业务处理单元中的第一存储区域;
则对应的,将加速处理后得到的结果报文反馈给所述第一业务处理单元时,通过RDMA方式将所述结果报文写入所述第一存储区域。
结合第一方面,或者第一方面的第一至三种可能的实现方式,在第四种可能的实现方式中,该装置还包括:
恢复模块,用于在接收到所述第一业务处理单元发送的加速资源释放请求后,删除所述第一业务处理单元与业务加速装置之间的链接,并在所述第一空闲加速能力释放后,更新所述加速器资源池。
第二方面,提供一种业务加速方法,该方法应用于业务加速装置中,该装置与多个业务处理单元和多个加速引擎相连,该方法包括:
查询所述多个加速引擎的加速类型和空闲加速能力信息,根据所述加速类型和空闲加速能力信息形成加速器资源池;
在接收到所述多个业务处理单元中的第一业务处理单元的加速申请后,根据该加速申请确定所述第一业务处理单元请求的第一加速能力和第一加速类型,判断所述加速器 资源池中所述第一加速类型对应的空闲加速能力的第一数量是否大于所述第一加速能力所需求的第二数量;
当所述第一数量大于第二数量,则按照预设的分配粒度从所述加速器资源池中分配与所述第一加速能力的第一加速类型对应的第一空闲加速能力和连接号;其中,所述分配粒度为预设地分配所述加速器资源池中空闲加速能力的最小分配单位;
将所述连接号发送给所述第一业务处理单元,使得第一业务处理单元根据所述连接号与业务加速装置建立链接;
将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理,并将加速处理后得到的结果报文反馈给所述第一业务处理单元。
结合第二方面,在第一种可能的实现方式中,将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理之前,该方法还进一步包括:
在确定所述加速申请请求对所述待加速报文进行多种加速类型的加速处理时,则从所述多个加速引擎中确定所述多种加速类型对应的多个目标加速引擎,根据所述多个目标加速引擎的标识信息生成路由信息,并将所述路由信息添加到所述待加速报文中;使得接收到所述待加速报文的加速引擎根据该路由信息将该待加速报文转发到所述路由信息所指示的目标加速引擎进行加速处理。
结合第二方面,或者第二方面的第一种可能的实现方式,在第二种可能的实现方式中,将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理之前,该方法还进一步包括
在所述待加速报文中添加标示每个待加速报文的序列号;
对应的,在接收到加速处理后得到的结果报文后,根据所述序列号是否连续确定所述待加速报文的加速处理是否异常,如果异常则给所述第一业务处理单元发送重传指示。
结合第二方面,或者第二方面的第一至二种可能的实现方式,在第三种可能的实现方式中,当所述第一所述业务处理单元与业务加速装置建立的链接为RDMA连接,将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理之前,该方法还包括:
在接收到所述待加速报文后,获取所述待加速报文中携带的存储地址,所述存储地址对应第一业务处理单元中的第一存储区域;
则对应的,所述将加速处理后得到的结果报文反馈给所述第一业务处理单元时包括:
通过RDMA方式将所述结果报文写入所述第一存储区域。
结合第二方面,或者第二方面的第一至三种可能的实现方式,在第四种可能的实现方式中,所述将加速处理后得到的结果报文反馈给所述第一业务处理单元之后,该方法进一步包括:
在接收到所述第一业务处理单元发送的加速资源释放请求后,删除所述第一业务处理单元与业务加速装置之间的链接,并在所述第一空闲加速能力释放后,更新所述加速器资源池。
上述技术方案中的一个或两个,至少具有如下技术效果:
本发明实施例所提供的方法和装置,将多个加速引擎所提供的加速资源整合成一个加速资源池,然后在统一的对加速资源池中的加速资源进行管理,并从加速资源池中量化的分配加速资源给申请对业务进行加速的各个业务处理单元。在本发明实施例中加速引擎和业务处理单元之间通过网络相连,根据业务处理单元的需求实时建立业务处理单元和加速器资源池的链路连接关系,申请建立连接时,业务处理单元定量的向加速器资源池申请加速能力,本发明提供的装置通过流量控制的方式,完成加速器资源池加速能力的量化分配,业务处理单位完成加速后释放加速器之间的连接,从而充分使所有业务处理单元充分共享加速器资源。
附图说明
图1为现有技术提供的动态分配加速器资源的业务加速系统结构示意图;
图2为本发明实施例提供的一种业务加速装置的结构示意图;
图3为本发明实施例提供的一种业务加速方法的流程示意图;
图4为本发明实施例提供的一种加速管理装置的结构示意图;
图5为本发明实施例提供的另一种加速管理装置的结构示意图;
图6为本发明实施例提供的一种加速系统架构示意图。
具体实施方式
为使本发明实施例的目的、技术方案和优点更加清楚,下面将结合本发明实施例中的附图,对本发明实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例是本发明一部分实施例,而不是全部的实施例。基于本发明中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本发明保护的范围。
本发明实施例提供的一种业务加速装置,将多个加速引擎所提供的加速资源整合成一个加速资源池,然后在统一的对加速资源池中的加速资源进行管理,并从加速资源池中量化的分配加速资源给申请对业务进行加速的各个业务处理单元。在本发明实施例中加速引擎和业务处理单元之间通过网络相连,根据业务处理单元的需求实时建立业务处理单元和加速器资源池的链路连接关系,申请建立连接时,业务处理单元定量的向加速器资源池申请加速能力,本发明提供的装置通过流量控制的方式,完成加速器资源池加速能力的量化分配,业务处理单位完成加速后释放加速器之间的连接,从而充分使所有业务处理单元充分共享加速器资源。
下面结合说明书附图对本发明实施例作进一步详细描述。
如图2所示,本发明实施例提供一种业务加速装置,该装置与多个业务处理单元和多个加速引擎相连,该装置包括:
本发明实施例中所提供业务加速装置可以与所述多个加速引擎设置在同一个硬件设备上;另外,该业务加速装置与所述多个加速引擎设置在不同的硬件设备上,设置有业务加速装置的硬件设备和设置有加速引擎的硬件设备可以通过一定的互联方式进行连接,当业务加速装置需要使用某一个或几个加速引擎时,可以与需要使用的加速引擎建立连接然后调用加速引擎。
资源池形成模块201,用于查询所述多个加速引擎的加速类型和空闲加速能力信息,根据所述加速类型和空闲加速能力信息形成加速器资源池;
在本发明实施例中,因为加速资源池是由多个加速引擎组成的,不同的加速引擎可能会对应不同的类型(在该实施例中加速引擎的类型是指加速引擎对数据报文进行那种类型的加速处理,例如:加速引擎进行编码加速处理,则该加速引擎的类型为编码加速类型),所以在该加速器资源池中,可以对加速资源进行分类,同一类型的加速引擎所提供的加速资源汇总在一起。例如,编码类型的加速引擎包括三个,则在该加速资源池中,统计的编码加速资源则为这个加速引擎所有空闲加速能力的总和。
判断模块202,用于在接收到所述多个业务处理单元中的第一业务处理单元的加速 申请后,根据该加速申请确定所述第一业务处理单元请求的第一加速能力和第一加速类型,判断所述加速器资源池中所述第一加速类型对应的空闲加速能力的第一数量是否大于所述第一加速能力所需求的第二数量;
在现有技术中，业务处理单元都是与每个单独的加速引擎对应，如果系统中的单个加速引擎不能满足业务处理单元的需求，则不能响应业务处理单元的申请。但在本发明实施例所提的方案中，将所有加速引擎综合成一个资源池，如果同类型的几个加速引擎的综合空闲资源能够满足业务处理单元的需求，则可以响应业务处理单元的申请。
加速能力分配模块203,用于当所述第一数量大于第二数量,则按照预设的分配粒度从所述加速器资源池中分配与所述第一加速能力的第一加速类型对应的第一空闲加速能力和连接号;其中,所述分配粒度为预设地分配所述加速器资源池中空闲加速能力的最小分配单位;
在本发明实施例中,该分配粒度对应实际加速引擎而言是与加速类型对应的,例如:加速引擎1对应的加速能力为2路H265的编解码加速处理能力,则对应的分配粒度可以是1路H265的编解码加速处理能力。该分配粒度可以采用默认值或者系统配置值。其他加速类型的分配粒度可以是:1路视频的编解码能力,1Gbps的压缩解压缩能力,1Gbps的加解密处理能力等。
建链模块204,用于将所述连接号发送给所述第一业务处理单元,使得第一业务处理单元根据所述连接号与业务加速装置建立链接;
转发模块205,用于将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理,并将加速处理后得到的结果报文反馈给所述第一业务处理单元。
因为,业务处理单元申请到的加速资源有可能是多个加速引擎的空闲资源综合形成的,所以接收到业务处理单元发送的待加速报文后,可能需要根据申请到的加速能力与加速引擎的对应关系将待加速的报文发送到对应的加速引擎进行处理。所以所述转发模块205还用于在所述待加速报文中添加标示每个待加速报文的序列号;并在接收到加速处理后得到的结果报文后,根据所述序列号是否连续确定所述待加速报文的加速处理是否异常,如果异常则给所述第一业务处理单元发送重传指示。
在本发明实施例中,为了保证每个加速引擎在接收到待加速的报文后能够及时进行处理,则该装置根据业务处理单元分配到的加速能力信息,对各条流进行服务质量(Quality of Service,Qos)控制,超过加速业务单元建链时申请的加速能力的报文 则丢弃,同时返回给业务处理单元NAK信息,指示业务处理单元加速器拥塞和指示业务处理单元等待一段时间后重传该报文,即表示对应的加速单元的业务流超流量,根据业务处理单元根据指示,满足Qos后重传该加速报文。
另外,为了实现加速资源的实时共享,减小了加速器资源的闲置和浪费,本发明实施例提供的装置在完成每个业务处理单元申请的加速任务之后,会释放对应的加速资源。则该装置还包括:
恢复模块,用于在接收到所述第一业务处理单元发送的加速资源释放请求后,删除所述第一业务处理单元与业务加速装置之间的链接,并在所述第一空闲加速能力释放后,更新所述加速器资源池。
在本发明实施例中,业务加速装置只是对报文完成业务的加速处理,并不需要缓存报文,并且在业务处理单元申请加速资源的建链请求阶段已经完成资源的分配,业务加速装置会控制业务处理单元和加速引擎之间的报文的流量,这样进入加速引擎的报文都可以被加速引擎及时处理并返回给业务处理单元,所以业务加速装置并不需要先传地址给业务处理单元。所以业务加速装置和业务处理单元之间如果是远程直接存储器存取(Remote direct memory access,RDMA)连接,业务处理单元不用按照标准的RDMA协议获取业务加速装置的缓存地址,而且业务加速装置反馈报文的存储地址业务处理单元也可以在发送报文之前申请,避免后期再通过多次交互获取地址信息,当第一所述业务处理单元与业务加速装置建立的链接为RDMA连接,则该装置中:
所述转发模块205还用于在接收到所述待加速报文后,获取所述待加速报文中携带的存储地址,所述存储地址对应第一业务处理单元中的第一存储区域;则对应的,将加速处理后得到的结果报文反馈给所述第一业务处理单元时,通过RDMA方式将所述结果报文写入所述第一存储区域。
在本发明实施例中,对于需要多种加速的业务,为了实现待加速报文一次传输到加速器资源池,本发明实施例提供的方案中可以通过在待加速报文的报文头中携带加速路由信息,实现报文依次在各个加速引擎之间直接传输,完成各种加速处理,然后返回给业务处理单元,避免报文在业务处理单元和加速器资源池之间的多次传输。对应的,该装置还包括:
指示信息添加模块,用于在确定所述加速申请请求对所述待加速报文进行多种加速类型的加速处理时,则从所述多个加速引擎中确定所述多种加速类型对应的多个目标加速引擎,根据所述多个目标加速引擎的标识信息生成路由信息,并将所述路由信息添 加到所述待加速报文中;使得接收到所述待加速报文的加速引擎根据该路由信息将该待加速报文转发到所述路由信息所指示的目标加速引擎进行加速处理。
本发明实施例提供的业务加速装置所连接的所有加速引擎可以按需分配给多个用户同时共享使用,实现加速器资源的虚拟化和量化分配;另外,通过每个业务处理单元实时按需向加速器申请加速资源,并在业务加速处理完成后释放加速资源,并根据申请量对业务处理单元发送的待加速业务的流量控制,实现了加速资源的实时共享,减小了加速器资源的闲置和浪费。
如果业务处理单元与业务加速装置之间是RDMA连接,本基于本发明业务处理单元和业务加速装置之间业务处理的特性,还对RDMA协议的交互流程进行了简化,降低了业务处理单元和业务加速装置之间的报文传输时延,降低业务处理单元CPU的负荷,提升了系统的性能。
进一步,如果一个待加速报文需要多次加速,则在待加速报文中添加路由信息使得报文可以通过路由信息在多个加速引擎之间转发,使需要多次加速的业务报文不需要在业务处理单元和加速器之间的多次传输,也不需要报文在加速器内部的加速器管理模块的多次转发,简化了多次加速业务的处理延时,提升了系统的性能。
如图3所示,本发明实施例还提供一种业务加速方法,该方法应用于业务加速装置中,该装置与多个业务处理单元和多个加速引擎相连,该方法包括:
步骤301,查询所述多个加速引擎的加速类型和空闲加速能力信息,根据所述加速类型和空闲加速能力信息形成加速器资源池;
在本发明实施例中,因为加速资源池是由多个加速引擎组成的,不同的加速引擎可能会对应不同的类型(在该实施例中加速引擎的类型是指加速引擎对数据报文进行那种类型的加速处理,例如:加速引擎进行编码加速处理,则该加速引擎的类型为编码加速类型),所以在该加速器资源池中,可以对加速资源进行分类,同一类型的加速引擎所提供的加速资源汇总在一起。例如,编码类型的加速引擎包括三个,则在该加速资源池中,统计的编码加速资源则为这个加速引擎所有空闲加速能力的总和。在该实施例中可以通过表格的形式对各加速引擎的加速能力和加速类型进行统计。如表1所示:
加速引擎编号 加速能力类型 加速能力 分配粒度 空闲加速能力
AE1 Type1 Ability1 Cfg1 Ability1’
AE2 Type2 Ability2 Cfg2 Ability2
AE3 Type3 Ability3 Cfg3 Ability3
AE4 Type4 Ability4 Cfg4 Ability4’
…… …… …… …… ……
AEn Typen Abilityn Cfgn Abilityn’
表1
表1根据获取到的各个加速引擎中的加速能力信息以及加速类型信息形成，其中每行代表一个加速引擎当前的状态，例如第一行对应加速引擎1（加速引擎编号AE1），加速类型是（Type1，具体可以是：编码、解码、压缩、解压缩等等）；加速能力（Ability1），是指AE1总的加速能力；分配粒度（Cfg1），是指基于加速引擎的加速类型对应的最小资源分配单位（例如：1路H262的解码加速处理能力，或者需要2Gbps的加解密处理能力）；空闲加速能力是指当前加速引擎空闲的加速能力，如果该值为0则是指该加速引擎正处于忙的状态。
步骤302,在接收到所述多个业务处理单元中的第一业务处理单元的加速申请后,根据该加速申请确定所述第一业务处理单元请求的第一加速能力和第一加速类型,判断所述加速器资源池中所述第一加速类型对应的空闲加速能力的第一数量是否大于所述第一加速能力所需求的第二数量;
在现有技术中，业务处理单元都是与每个单独的加速引擎对应，如果系统中的单个加速引擎不能满足业务处理单元的需求，则不能响应业务处理单元的申请。但在本发明实施例所提的方案中，将所有加速引擎综合成一个资源池，如果同类型的几个加速引擎的综合空闲资源能够满足业务处理单元的需求，则可以响应业务处理单元的申请。
步骤303,当所述第一数量大于第二数量,则按照预设的分配粒度从所述加速器资源池中分配与所述第一加速能力的第一加速类型对应的第一空闲加速能力和连接号;其中,所述分配粒度为预设地分配所述加速器资源池中空闲加速能力的最小分配单位;
在本发明实施例中,该分配粒度对应实际加速引擎而言是与加速类型对应的,例如:加速引擎1对应的加速能力为2路H265的编解码加速处理能力,则对应的分配粒度可以是1路H265的编解码加速处理能力。该分配粒度可以采用默认值或者系统配置值。其他加速类型的分配粒度可以是:1路视频的编解码能力,1Gbps的压缩解压缩能力,1Gbps的加解密处理能力等。
步骤304,将所述连接号发送给所述第一业务处理单元,使得第一业务处理单元根据所述连接号与业务加速装置建立链接;
在进行加速资源分配之后,还可以对表1中内容进行刷新,便于其他业务处理单 元对加速资源的申请,如果第一业务处理单元的申请将加速引擎2和加速引擎3的空闲加速能力都申请完了,则可以将表1中加速引擎2(标识AE2)和加速引擎3(标识AE3)的空闲加速能力标示为0,具体的参数如表2所示:
加速引擎编号 加速能力类型 加速能力 分配粒度 空闲加速能力
AE1 Type1 Ability1 Cfg1 Ability1’
AE2 Type2 Ability2 Cfg2 0
AE3 Type3 Ability3 Cfg3 0
AE4 Type4 Ability4 Cfg4 Ability4’
…… …… …… …… ……
AEn Typen Abilityn Cfgn Abilityn’
表2
步骤305,将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理,并将加速处理后得到的结果报文反馈给所述第一业务处理单元。
业务处理单元申请到的加速资源有可能是多个加速引擎的空闲资源综合形成的,所以接收到业务处理单元发送的待加速报文后,可能需要根据申请到的加速能力与加速引擎的对应关系将待加速的报文发送到对应的加速引擎进行处理。所以在本发明实施例提供的方法中,将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理之前,该方法还进一步包括
在所述待加速报文中添加标示每个待加速报文的序列号;
对应的,在接收到加速处理后得到的结果报文后,根据所述序列号是否连续确定所述待加速报文的加速处理是否异常,如果异常则给所述第一业务处理单元发送重传指示。
在具体的实施例中,针对同一业务处理单元的报文还需要打上流号和报文序列号,在转发报文的时候,则识别报文发送的目的AE,流号AE11、AE12表示需要发送到AE1进行加速处理的报文1和报文2,流号AE33、AE34表示需要发送到AE3处理。
另外,为了实现加速资源的实时共享,减小了加速器资源的闲置和浪费,本发明实施例提供的装置在完成每个业务处理单元申请的加速任务之后,会释放对应的加速资源。则该实施例中,所述将加速处理后得到的结果报文反馈给所述第一业务处理单元之后,该方法进一步包括:
在接收到所述第一业务处理单元发送的加速资源释放请求后,删除所述第一业务处理单元与业务加速装置之间的链接,并在所述第一空闲加速能力释放后,更新所述加速器资源池。
在本发明实施例中,对于需要多种加速的业务,为了实现待加速报文一次传输到加速器资源池,本发明实施例提供的方案中可以通过在待加速报文的报文头中携带加速路由信息,实现报文依次在各个加速引擎之间直接传输,完成各种加速处理,然后返回给业务处理单元,避免报文在业务处理单元和加速器资源池之间的多次传输。对应的,将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理之前,该方法还进一步包括:
在确定所述加速申请请求对所述待加速报文进行多种加速类型的加速处理时,则从所述多个加速引擎中确定所述多种加速类型对应的多个目标加速引擎,根据所述多个目标加速引擎的标识信息生成路由信息,并将所述路由信息添加到所述待加速报文中;使得接收到所述待加速报文的加速引擎根据该路由信息将该待加速报文转发到所述路由信息所指示的目标加速引擎进行加速处理。
在本发明实施例中,业务加速装置只是对报文完成业务的加速处理,并不需要缓存报文,并且在业务处理单元申请加速资源的建链请求阶段已经完成资源的分配,业务加速装置会控制业务处理单元和加速引擎之间的报文的流量,这样进入加速引擎的报文都可以被加速引擎及时处理并返回给业务处理单元,所以业务加速装置并不需要先传地址给业务处理单元。所以业务加速装置和业务处理单元之间如果是远程直接存储器存取(Remote direct memory access,RDMA)连接,业务处理单元不用按照标准的RDMA协议获取业务加速装置的缓存地址,而且业务加速装置反馈报文的存储地址业务处理单元也可以在发送报文之前申请,避免后期再通过多次交互获取地址信息,当第一所述业务处理单元与业务加速装置建立的链接为RDMA连接,则将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理之前,该方法还包括:
在接收到所述待加速报文后,获取所述待加速报文中携带的存储地址,所述存储地址对应第一业务处理单元中的第一存储区域;
则对应的,所述将加速处理后得到的结果报文反馈给所述第一业务处理单元时包括:
通过RDMA方式将所述结果报文写入所述第一存储区域。
如图4所示,本发明还提供另一种加速管理装置,该加速管理装置与多个业务处理单元和多个加速引擎相连,该装置包括至少一个处理器401(例如CPU),至少一个网络接口402或者其他通信接口,存储器403,和至少一个通信总线404,用于实现这些装置之间的连接通信。处理器401用于执行存储器403中存储的可执行模块,例如计算机程序。存储器403可能包含高速随机存取存储器(RAM:Random Access Memory),也可能还包括非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。通过至少一个网络接口402(可以是有线或者无线)实现该系统网关与至少一个其他网元之间的通信连接,可以使用互联网,广域网,本地网,城域网等;该服务器中还设置有文件系统;该文件系统用于管理目录和文件,并且每个目录对应一个目录存储对象,该目录存储对象中包括对应的目录中所包括的文件或目录的属性列表,所述属性列表中包括所述文件或目录的名称和属性信息。
在一些实施方式中,存储器存储了程序4031,程序可以被处理器执行,这个程序包括:
查询所述多个加速引擎的加速类型和空闲加速能力信息,根据所述加速类型和空闲加速能力信息形成加速器资源池;
在接收到所述多个业务处理单元中的第一业务处理单元的加速申请后,根据该加速申请确定所述第一业务处理单元请求的第一加速能力和第一加速类型,判断所述加速器资源池中所述第一加速类型对应的空闲加速能力的第一数量是否大于所述第一加速能力所需求的第二数量;
当所述第一数量大于第二数量,则按照预设的分配粒度从所述加速器资源池中分配与所述第一加速能力的第一加速类型对应的第一空闲加速能力和连接号;其中,所述分配粒度为预设地分配所述加速器资源池中空闲加速能力的最小分配单位;
将所述连接号发送给所述第一业务处理单元,使得第一业务处理单元根据所述连接号与业务加速装置建立链接;
将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理,并将加速处理后得到的结果报文反馈给所述第一业务处理单元。
在另一实施例中,加速管理装置也可以基于FPGA来实现,参见图5,为本发明实施例基于FPGA来实现加速管理装置50的示意图,加速管理装置50包括FPGA芯片51以及其他附属电路52(如电源电路),通过对FPGA芯片进行编程,使之具有上述各实施 例提到的功能(如具有执行图3所示的方法的功能)。
参见图6,本发明实施例公开了一种加速系统60,包括加速管理装置61以及多个加速引擎62,其中,加速管理装置61可以基于图4以及图5的方式实现,加速引擎62优先地基于FPGA芯片进行实现,这样相比于通用的CPU芯片能够提升处理速度,从而更好地对业务进行加速。加速管理装置61以及各个加速引擎62的交互可参见上述实施例中的描述,这里不再赘述。
本申请实施例中的上述一个或多个技术方案,至少具有如下的技术效果:
本发明实施例提供的业务加速装置所连接的所有加速引擎可以按需分配给多个用户同时共享使用,实现加速器资源的虚拟化和量化分配;另外,通过每个业务处理单元实时按需向加速器申请加速资源,并在业务加速处理完成后释放加速资源,并根据申请量对业务处理单元发送的待加速业务的流量控制,实现了加速资源的实时共享,减小了加速器资源的闲置和浪费。
如果业务处理单元与业务加速装置之间是RDMA连接,本基于本发明业务处理单元和业务加速装置之间业务处理的特性,还对RDMA协议的交互流程进行了简化,降低了业务处理单元和业务加速装置之间的报文传输时延,降低业务处理单元CPU的负荷,提升了系统的性能。
进一步,如果一个待加速报文需要多次加速,则在待加速报文中添加路由信息使得报文可以通过路由信息在多个加速引擎之间转发,使需要多次加速的业务报文不需要在业务处理单元和加速器之间的多次传输,也不需要报文在加速器内部的加速器管理模块的多次转发,简化了多次加速业务的处理延时,提升了系统的性能。
本发明所述的方法并不限于具体实施方式中所述的实施例,本领域技术人员根据本发明的技术方案得出其它的实施方式,同样属于本发明的技术创新范围。
显然,本领域的技术人员可以对本发明进行各种改动和变型而不脱离本发明的精神和范围。这样,倘若本发明的这些修改和变型属于本发明权利要求及其等同技术的范围之内,则本发明也意图包含这些改动和变型在内。

Claims (10)

  1. 一种业务加速装置,其特征在于,该装置与多个业务处理单元和多个加速引擎相连,该装置包括:
    资源池形成模块,用于查询所述多个加速引擎的加速类型和空闲加速能力信息,根据所述加速类型和空闲加速能力信息形成加速器资源池;
    判断模块,用于在接收到所述多个业务处理单元中的第一业务处理单元的加速申请后,根据该加速申请确定所述第一业务处理单元请求的第一加速能力和第一加速类型,判断所述加速器资源池中所述第一加速类型对应的空闲加速能力的第一数量是否大于所述第一加速能力所需求的第二数量;
    加速能力分配模块,用于当所述第一数量大于第二数量,则按照预设的分配粒度从所述加速器资源池中分配与所述第一加速能力的第一加速类型对应的第一空闲加速能力和连接号;其中,所述分配粒度为预设地分配所述加速器资源池中空闲加速能力的最小分配单位;
    建链模块,用于将所述连接号发送给所述第一业务处理单元,使得第一业务处理单元根据所述连接号与业务加速装置建立链接;
    转发模块,用于将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理,并将加速处理后得到的结果报文反馈给所述第一业务处理单元。
  2. 如权利要求1所述的装置,其特征在于,该装置还包括:
    指示信息添加模块,用于在确定所述加速申请请求对所述待加速报文进行多种加速类型的加速处理时,则从所述多个加速引擎中确定所述多种加速类型对应的多个目标加速引擎,根据所述多个目标加速引擎的标识信息生成路由信息,并将所述路由信息添加到所述待加速报文中;使得接收到所述待加速报文的加速引擎根据该路由信息将该待加速报文转发到所述路由信息所指示的目标加速引擎进行加速处理。
  3. 如权利要求1或2任一所述的装置,其特征在于,所述转发模块还用于在所述待加速报文中添加标示每个待加速报文的序列号;并在接收到加速处理后得到的结果报文后,根据所述序列号是否连续确定所述待加速报文的加速处理是否异常,如果异常则给所述第一业务处理单元发送重传指示。
  4. 如权利要求1~3任一所述的装置,其特征在于,当第一所述业务处理单元与业务加速装置建立的链接为远程直接存储器存取RDMA连接,则所述转发模块还用于在接收 到所述待加速报文后,获取所述待加速报文中携带的存储地址,所述存储地址对应第一业务处理单元中的第一存储区域;
    则对应的,将加速处理后得到的结果报文反馈给所述第一业务处理单元时,通过RDMA方式将所述结果报文写入所述第一存储区域。
  5. 如权利要求1~4任一所述的装置,其特征在于,该装置还包括:
    恢复模块,用于在接收到所述第一业务处理单元发送的加速资源释放请求后,删除所述第一业务处理单元与业务加速装置之间的链接,并在所述第一空闲加速能力释放后,更新所述加速器资源池。
  6. 一种业务加速方法,其特征在于,该方法应用于业务加速装置中,该装置与多个业务处理单元和多个加速引擎相连,该方法包括:
    查询所述多个加速引擎的加速类型和空闲加速能力信息,根据所述加速类型和空闲加速能力信息形成加速器资源池;
    在接收到所述多个业务处理单元中的第一业务处理单元的加速申请后,根据该加速申请确定所述第一业务处理单元请求的第一加速能力和第一加速类型,判断所述加速器资源池中所述第一加速类型对应的空闲加速能力的第一数量是否大于所述第一加速能力所需求的第二数量;
    当所述第一数量大于第二数量,则按照预设的分配粒度从所述加速器资源池中分配与所述第一加速能力的第一加速类型对应的第一空闲加速能力和连接号;其中,所述分配粒度为预设地分配所述加速器资源池中空闲加速能力的最小分配单位;
    将所述连接号发送给所述第一业务处理单元,使得第一业务处理单元根据所述连接号与业务加速装置建立链接;
    将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理,并将加速处理后得到的结果报文反馈给所述第一业务处理单元。
  7. 如权利要求6所述的方法,其特征在于,将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理之前,该方法还进一步包括:
    在确定所述加速申请请求对所述待加速报文进行多种加速类型的加速处理时,则从所述多个加速引擎中确定所述多种加速类型对应的多个目标加速引擎,根据所述多个目标加速引擎的标识信息生成路由信息,并将所述路由信息添加到所述待加速报文中;使 得接收到所述待加速报文的加速引擎根据该路由信息将该待加速报文转发到所述路由信息所指示的目标加速引擎进行加速处理。
  8. 如权利要求6或7任一所述的方法,其特征在于,将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理之前,该方法还进一步包括
    在所述待加速报文中添加标示每个待加速报文的序列号;
    对应的,在接收到加速处理后得到的结果报文后,根据所述序列号是否连续确定所述待加速报文的加速处理是否异常,如果异常则给所述第一业务处理单元发送重传指示。
  9. 如权利要求6~8任一所述的方法,其特征在于,当所述第一所述业务处理单元与业务加速装置建立的链接为RDMA连接,将通过所述链接接收到的待加速报文分发送到所述多个加速引擎中提供所述空间加速能力的至少一个加速引擎进行加速处理之前,该方法还包括:
    在接收到所述待加速报文后,获取所述待加速报文中携带的存储地址,所述存储地址对应第一业务处理单元中的第一存储区域;
    则对应的,所述将加速处理后得到的结果报文反馈给所述第一业务处理单元时包括:
    通过RDMA方式将所述结果报文写入所述第一存储区域。
  10. 如权利要求6~9任一所述的方法,其特征在于,所述将加速处理后得到的结果报文反馈给所述第一业务处理单元之后,该方法进一步包括:
    在接收到所述第一业务处理单元发送的加速资源释放请求后,删除所述第一业务处理单元与业务加速装置之间的链接,并在所述第一空闲加速能力释放后,更新所述加速器资源池。
PCT/CN2015/100116 2014-12-31 2015-12-31 一种业务加速方法及装置 WO2016107598A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP15875274.1A EP3226468B1 (en) 2014-12-31 2015-12-31 Service acceleration method and apparatus
US15/639,274 US10545896B2 (en) 2014-12-31 2017-06-30 Service acceleration method and apparatus

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201410856653 2014-12-31
CN201410856653.X 2014-12-31
CN201510999854.XA CN105577801B (zh) 2014-12-31 2015-12-26 一种业务加速方法及装置
CN201510999854.X 2015-12-26

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/639,274 Continuation US10545896B2 (en) 2014-12-31 2017-06-30 Service acceleration method and apparatus

Publications (1)

Publication Number Publication Date
WO2016107598A1 true WO2016107598A1 (zh) 2016-07-07

Family

ID=55887445

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/100116 WO2016107598A1 (zh) 2014-12-31 2015-12-31 一种业务加速方法及装置

Country Status (4)

Country Link
US (1) US10545896B2 (zh)
EP (1) EP3226468B1 (zh)
CN (1) CN105577801B (zh)
WO (1) WO2016107598A1 (zh)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105979007B (zh) * 2016-07-04 2020-06-02 华为技术有限公司 加速资源处理方法、装置及网络功能虚拟化系统
CN111813459A (zh) 2016-11-09 2020-10-23 华为技术有限公司 一种加速器加载方法、系统和加速器加载装置
CN108073423B (zh) * 2016-11-09 2020-01-17 华为技术有限公司 一种加速器加载方法、系统和加速器加载装置
CN108009912A (zh) * 2017-11-30 2018-05-08 中国银行股份有限公司 一种释放他行票可用额的释放方法及装置
CN109981480A (zh) 2017-12-27 2019-07-05 华为技术有限公司 一种数据传输方法及第一设备
EP3811210B1 (en) * 2018-06-20 2024-05-01 Telefonaktiebolaget Lm Ericsson (Publ) Method and supporting node for supporting process scheduling in a cloud system
CN109039711B (zh) * 2018-07-12 2021-01-15 联想(北京)有限公司 一种硬件加速器的更换方法、装置及服务器
CN111352735A (zh) * 2020-02-27 2020-06-30 上海上大鼎正软件股份有限公司 数据加速方法、装置、存储介质及设备
CN113742028A (zh) * 2020-05-28 2021-12-03 伊姆西Ip控股有限责任公司 资源使用方法、电子设备和计算机程序产品
CN112822051B (zh) * 2021-01-06 2022-09-16 贵阳迅游网络科技有限公司 基于业务感知的业务加速方法
CN113596085A (zh) * 2021-06-24 2021-11-02 阿里云计算有限公司 数据处理方法系统及装置

Citations (4)

Publication number Priority date Publication date Assignee Title
CN101388785A (zh) * 2007-09-11 2009-03-18 中国电信股份有限公司 内容分发网络的资源抽象方法和业务开通方法
CN103281251A (zh) * 2013-06-18 2013-09-04 北京百度网讯科技有限公司 数据中心间的数据传输方法、系统及其子系统
US20130290541A1 (en) * 2012-04-25 2013-10-31 Hitachi ,Ltd. Resource management system and resource managing method
CN103473117A (zh) * 2013-09-18 2013-12-25 北京思特奇信息技术股份有限公司 云模式下的虚拟化方法

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US8713295B2 (en) * 2004-07-12 2014-04-29 Oracle International Corporation Fabric-backplane enterprise servers with pluggable I/O sub-system
US8745266B2 (en) * 2011-06-30 2014-06-03 Citrix Systems, Inc. Transparent layer 2 redirection of request to single sign in service based on applying policy to content of request
CN103399758B (zh) * 2011-12-31 2016-11-23 华为数字技术(成都)有限公司 硬件加速方法、装置和系统
US10222926B2 (en) * 2012-03-19 2019-03-05 Citrix Systems, Inc. Systems and methods for providing user interfaces for management applications
CN103686852B (zh) 2012-09-07 2016-12-21 中国移动通信集团贵州有限公司 一种对交互数据进行处理的方法、设备及无线加速系统
CN103269280B (zh) 2013-04-23 2017-12-15 华为技术有限公司 网络中开展业务的方法、装置及系统

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN101388785A (zh) * 2007-09-11 2009-03-18 中国电信股份有限公司 内容分发网络的资源抽象方法和业务开通方法
US20130290541A1 (en) * 2012-04-25 2013-10-31 Hitachi ,Ltd. Resource management system and resource managing method
CN103281251A (zh) * 2013-06-18 2013-09-04 北京百度网讯科技有限公司 数据中心间的数据传输方法、系统及其子系统
CN103473117A (zh) * 2013-09-18 2013-12-25 北京思特奇信息技术股份有限公司 云模式下的虚拟化方法

Non-Patent Citations (1)

Title
See also references of EP3226468A4 *

Also Published As

Publication number Publication date
CN105577801A (zh) 2016-05-11
EP3226468A4 (en) 2017-12-20
CN105577801B (zh) 2019-01-11
US20170300437A1 (en) 2017-10-19
EP3226468B1 (en) 2019-08-14
EP3226468A1 (en) 2017-10-04
US10545896B2 (en) 2020-01-28

Similar Documents

Publication Publication Date Title
WO2016107598A1 (zh) 一种业务加速方法及装置
US10698717B2 (en) Accelerator virtualization method and apparatus, and centralized resource manager
JP6503575B2 (ja) ソフトウェアデファインドネットワークに基づいてコンテンツディストリビューションネットワークを実現する方法及びシステム
CN113722077B (zh) 数据处理方法、系统、相关设备、存储介质及产品
US11848981B2 (en) Secure multi-directional data pipeline for data distribution systems
WO2016015559A1 (zh) 云化数据中心网络的承载资源分配方法、装置及系统
US10148565B2 (en) OPENFLOW communication method and system, controller, and service gateway
WO2015149604A1 (zh) 一种负载均衡方法、装置及系统
WO2018166111A1 (zh) 基于集中控制器及dci设备的负载均衡的方法、系统、电子装置及计算机可读存储介质
US9602331B2 (en) Shared interface among multiple compute units
US20160021005A1 (en) Communication system, control apparatus and communication apparatus
WO2016050109A1 (zh) 一种通信方法、云管理服务器及虚拟交换机
US20140025800A1 (en) Systems and methods for multi-blade load balancing
KR20110083084A (ko) 가상화를 이용한 서버 운영 장치 및 방법
US20140229586A1 (en) Dynamically allocating network resources for communication session
US20050169309A1 (en) System and method for vertical perimeter protection
CN111800441A (zh) 数据处理方法、系统、装置、用户端服务器、用户端及管控服务器
US20160205063A1 (en) Method, device and system for implementing address sharing
WO2015078220A1 (zh) 媒体复用协商的方法和装置
CN115412527B (zh) 虚拟私有网络之间单向通信的方法及通信装置
WO2018127013A1 (zh) 一种流数据的并发传输方法和装置
CN114785790B (zh) 跨域分析系统、跨域资源调度方法、装置及存储介质
KR102234089B1 (ko) 컨퍼런스 접속 방법 및 이를 수행하기 위한 단말
WO2024016801A1 (zh) 基站算力编排方法、装置、电子设备及存储介质
CN107360104B (zh) 一种隧道端点网络的实现方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15875274

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2015875274

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE