WO2023050663A1 - Virtual network performance acceleration method, apparatus, device, and storage medium - Google Patents

Virtual network performance acceleration method, apparatus, device, and storage medium

Info

Publication number
WO2023050663A1
WO2023050663A1 (PCT/CN2022/074069)
Authority
WO
WIPO (PCT)
Prior art keywords
message
header information
virtual network
network performance
information
Prior art date
Application number
PCT/CN2022/074069
Other languages
English (en)
French (fr)
Inventor
李丰启
Original Assignee
苏州浪潮智能科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 苏州浪潮智能科技有限公司
Priority to US18/279,159 (published as US20240089171A1)
Publication of WO2023050663A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0813 Configuration setting characterised by the conditions triggering a change of settings
    • H04L 41/0816 Configuration setting characterised by the conditions triggering a change of settings, the condition being an adaptation, e.g. in response to network events
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 49/00 Packet switching elements
    • H04L 49/70 Virtual switches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/02 Standardisation; Integration
    • H04L 41/0226 Mapping or translating multiple network management protocols
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0823 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability
    • H04L 41/083 Configuration setting characterised by the purposes of a change of settings, e.g. optimising configuration for enhancing reliability, for increasing network speed
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0895 Configuration of virtualised networks or elements, e.g. virtualised network function or OpenFlow elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 Arrangements for executing specific programs
    • G06F 9/455 Emulation; Interpretation; Software simulation, e.g. virtualisation or emulation of application or operating system execution engines
    • G06F 9/45533 Hypervisors; Virtual machine monitors
    • G06F 9/45558 Hypervisor-specific management and integration aspects
    • G06F 2009/45595 Network integration; Enabling network access in virtual machine instances
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 41/00 Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
    • H04L 41/08 Configuration management of networks or network elements
    • H04L 41/0803 Configuration setting
    • H04L 41/0813 Configuration setting characterised by the conditions triggering a change of settings
    • H04L 41/082 Configuration setting characterised by the conditions triggering a change of settings, the condition being updates or upgrades of network functionality
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 43/00 Arrangements for monitoring or testing data switching networks
    • H04L 43/16 Threshold monitoring

Definitions

  • The present application relates to the field of communications technology, and in particular to a virtual network performance acceleration method, apparatus, device, and storage medium.
  • OVS: virtual switch software (Open vSwitch)
  • CT: connection tracking
  • The CT module of the kernel is designed for a generic packet-processing flow; its internal states and processing path are long and complex, which incurs a significant performance loss. Improving the network performance of an OVS virtual network without upgrading hardware has therefore become a key challenge in the field of communications technology.
  • A virtual network performance acceleration method, comprising: step S1, monitoring whether the OVS invokes the CT mechanism; step S2, triggering a translation rule when it is detected that the OVS invokes the CT mechanism; and step S3, forwarding the translated message produced by the translation rule. The translation rule comprises: obtaining the header information of a first message already processed by the CT mechanism, a universally unique identifier (UUID), and the header information of a second message that needs to be processed by the CT mechanism; and translating the header information of the second message based on the header information of the first message and the UUID.
  • Translating the header information of the second message includes: generating an information correspondence table based on the header information of the first message and the UUID; determining, based on the information correspondence table, the data structure corresponding to the UUID; and replacing the header information of the second message based on that data structure to generate the translated message.
  • The step of generating the information correspondence table includes: obtaining the header information of the first message processed by the CT mechanism; storing the header information of the first message as a data structure and determining the UUID corresponding to each data structure; and generating the information correspondence table from the data structures and their UUIDs.
  • The header information includes at least: the network address translation (NAT) type, IP address, port information, and link type. Which fields of the second message's header are replaced is determined by the NAT-type field in the second message's header.
  • Step S0: when virtual network performance acceleration is enabled and it is confirmed that acceleration is enabled, step S1 is performed; when it is confirmed that acceleration is not enabled, step S1 is not performed. Step S2 further includes: when it is not detected that the OVS invokes the CT mechanism, the translation rule is not triggered.
  • After replacement, the header information of the second message is stored as a second data structure; the information correspondence table is updated based on the second data structure and its corresponding UUID.
  • Updating the information correspondence table includes: traversing the table and identifying the degree of overlap between the header information of the second message and the header information of any first message; when the overlap between the header information of the second message and the header information of any first message is lower than a first threshold, storing the second data structure and its corresponding UUID into the table, thereby updating it.
  • A virtual network performance acceleration apparatus, comprising: a control module for enabling/disabling the acceleration function of the apparatus; a monitoring module for monitoring whether the OVS invokes the CT mechanism; a translation module for translating header information, which obtains the header information of the first message processed by the CT mechanism, the UUID, and the header information of the second message that needs to be processed by the CT mechanism, generates an information correspondence table based on the first message's header information and the UUID, determines the data structure corresponding to the UUID from the table, and replaces the header information of the second message based on that data structure to generate a translated message; and a kernel configured to receive the translated message and forward it.
  • The embodiments of the present application also provide a virtual network performance acceleration device, comprising a memory and one or more processors. Computer-readable instructions are stored in the memory and, when executed by the one or more processors, cause the one or more processors to perform the steps of any one of the virtual network performance acceleration methods described above.
  • The embodiments of the present application also provide one or more non-volatile computer-readable storage media storing computer-readable instructions. When the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps of any one of the virtual network performance acceleration methods described above.
  • FIG. 1 shows a schematic flowchart of a virtual network performance acceleration method described in one or more embodiments of the present application
  • FIG. 2 shows a comparison diagram between the message forwarding process in the prior art and the accelerated message forwarding process described in one or more embodiments of the present application;
  • FIG. 3 shows a structural block diagram of a virtual network performance acceleration device described in one or more embodiments of the present application
  • FIG. 4 is a schematic structural diagram of a computer device provided by one or more embodiments of the present application.
  • FIG. 5 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided by one or more embodiments of the present application.
  • This application provides a virtual network performance acceleration method. As shown in FIG. 1, users can choose whether to enable virtual network performance acceleration according to their own needs.
  • Specifically, when it is determined to enable virtual network performance acceleration and it is confirmed that acceleration is enabled, the step of monitoring whether the OVS invokes the CT mechanism is performed; when it is confirmed that acceleration is not enabled, that step is not performed.
  • The Linux kernel includes functional modules such as CT, packet filtering, network address translation (NAT), transparent proxy, packet rate limiting, and packet modification.
  • Step S1: monitoring and determining whether the OVS invokes the CT mechanism, then proceeding to step S2.
  • Step S2: if it is detected that the OVS invokes the CT mechanism, the translation rule is triggered and step S3 is performed. If it is detected that the OVS does not invoke the CT mechanism, the message is processed according to the original processing flow used when the CT mechanism is not invoked, for example packet filtering, network address translation (NAT), transparent proxy, packet rate limiting, or packet modification.
  • NAT: network address translation
  • Step S3: forwarding the translated message produced by the translation rule. The generated translated message is re-injected into the Linux kernel, which performs the subsequent flow-table matching and forwarding operations.
  • the virtual network performance acceleration method is applicable to the processing flow of the request message as well as the processing flow of the response message.
  • Although the steps in the flowchart of FIG. 1 are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless otherwise specified herein, there is no strict ordering restriction on these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 1 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with at least part of other steps or of the sub-steps or stages of other steps.
  • A method for accelerating virtual network performance, comprising: step S1, monitoring whether the OVS invokes the CT mechanism; step S2, triggering a translation rule if it is detected that the OVS invokes the CT mechanism; and step S3, forwarding the translated message produced by the translation rule. The translation rule comprises: obtaining the header information of a first message processed by the CT mechanism, the UUID, and the header information of a second message that needs to be processed by the CT mechanism; and translating the header information of the second message based on the header information of the first message and the UUID.
  • Specifically, when it is detected that the OVS invokes the CT mechanism, the translation rule is triggered.
  • Translating the header information of the second message includes: generating an information correspondence table based on the header information of the first message and the UUID; determining, based on the information correspondence table, the data structure corresponding to the UUID; and replacing the header information of the second message based on the data structure to generate a translated message.
  • The step of generating the information correspondence table includes: obtaining the header information of the first message processed by the CT mechanism; storing the header information of the first message as a data structure and determining the UUID corresponding to each data structure; and generating the information correspondence table from the data structures and the UUIDs.
  • The header information includes at least: the network address translation type, IP address, port information, and link type; the replacement of the second message's header information is determined by the NAT-type field in the second message's header.
  • Step S0: determining whether to enable virtual network performance acceleration; if it is confirmed that acceleration is enabled, step S1 is performed, otherwise step S1 is not performed. Step S2 further includes: if it is not detected that the OVS invokes the CT mechanism, the translation rule is not triggered.
  • Specifically, when it is determined to enable virtual network performance acceleration and it is confirmed that acceleration is enabled, step S1 is executed; when it is confirmed that acceleration is not enabled, step S1 is not executed. When it is determined that the OVS does not invoke the CT mechanism, the translation rule is not triggered.
  • After replacement, the header information of the second message is stored as a second data structure; the information correspondence table is updated based on the second data structure and its corresponding UUID.
  • Updating the information correspondence table includes: traversing the table and determining the degree of overlap between the header information of the second message and the header information of any first message; if the overlap is lower than a first threshold, the second data structure and its corresponding UUID are stored into the table to update it.
  • In a specific embodiment, the first threshold is 95%.
  • Specifically, the degree of overlap between the header information of the second message and the header information of any first message is identified; when the overlap is determined to be lower than the first threshold, the second data structure and its corresponding UUID are stored into the information correspondence table to update it.
  • The control module described in this application allows users to enable or disable virtual network performance acceleration according to their own needs, which helps improve the user experience and avoids wasting performance resources.
  • The translation rules described in this application optimize the processing logic, which further improves the message forwarding rate, reduces forwarding latency, and improves network performance; that is, the virtual network performance of OVS is greatly improved without increasing hardware cost.
  • Through the generated translation rules, the virtual network performance acceleration method described in this application bypasses the long and complex CT processing flow originally designed for general purposes in the Linux kernel, shortens the processing path, and improves network performance. Compared with the forwarding performance of the original OVS, forwarding performance can be increased by 40%-60%, while forwarding latency can be reduced by 30%.
  • The virtual network performance acceleration method described in this application makes full use of the bandwidth resources of data center network links without upgrading hardware, and provides users with an excellent experience.
  • The monitoring module is used to monitor whether the OVS invokes the CT mechanism; the translation module translates the header information according to the preset translation rule to generate the translated message.
  • The translation rule includes: obtaining the header information of the first message processed by the CT mechanism, the UUID, and the header information of the second message that needs to be processed by the CT mechanism; generating an information correspondence table based on the first message's header information and the UUID; determining, based on the table, the data structure corresponding to the UUID; and replacing the header information of the second message based on that data structure to generate a translated message. The kernel is configured to receive the translated message and forward it.
  • The virtual network performance acceleration apparatus may further include: a control module for enabling/disabling the acceleration function of the apparatus; a monitoring module for monitoring whether the OVS invokes the CT mechanism; a translation module that translates the header information according to the preset translation rule to generate a translated message; and a kernel for receiving the translated message and forwarding it.
  • The translation rule includes: obtaining the header information of the first message processed by the CT mechanism, the UUID, and the header information of the second message that needs to be processed by the CT mechanism; generating an information correspondence table based on the first message's header information and the UUID; determining, based on the table, the data structure corresponding to the UUID; and replacing the header information of the second message based on that data structure to generate a translated message.
  • The step of generating the information correspondence table includes: obtaining the header information of the first message processed by the CT mechanism; storing the header information of the first message as a data structure and determining the UUID corresponding to each data structure; and generating the information correspondence table from the data structures and the UUIDs.
  • The header information includes at least: the network address translation type, IP address, port information, and link type.
  • The modules included in the virtual network performance acceleration apparatus may include, but are not limited to, the functional modules described above. Those skilled in the art may combine the above modules according to actual scenario requirements, or choose other modules or units capable of realizing the above functions.
  • FIG. 3 is only a block diagram of a partial structure related to the solution of this application and does not constitute a limitation on the virtual network performance acceleration apparatus to which the solution is applied.
  • The specific computer device may include more or fewer components than shown in the figures, or combine certain components, or have a different arrangement of components.
  • A virtual network performance acceleration device is provided, whose internal structure may be as shown in FIG. 4.
  • The virtual network performance acceleration device includes a processor, a memory, a network interface, and an input apparatus connected through a system bus. The processor provides computing and control capabilities.
  • the memory includes a non-volatile storage medium and an internal memory.
  • the non-volatile storage medium stores an operating system and computer readable instructions.
  • the internal memory provides an environment for the execution of the operating system and computer readable instructions in the non-volatile storage medium.
  • the network interface of the virtual network performance acceleration device is used to communicate with an external terminal or server through a network connection.
  • The input apparatus may be a touch layer covering the display screen, a key, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
  • FIG. 4 is only a block diagram of a partial structure related to the solution of the present application and does not constitute a limitation on the device to which the solution is applied.
  • The specific device may include more or fewer components than shown in the figures, or combine certain components, or have a different arrangement of components.
  • the memory of the virtual network performance acceleration device includes a non-volatile storage medium and an internal memory.
  • The above-described virtual network performance acceleration device is only a partial structure related to the solution of this application and does not constitute a limitation on the virtual network performance acceleration device to which this solution is applied.
  • The specific virtual network performance acceleration device may include more or fewer components than shown in the above structure, or combine certain components, or have different components.
  • The embodiments of the present application also provide a non-volatile readable storage medium 50 in which computer-readable instructions 510 are stored; when the computer-readable instructions 510 are executed by one or more processors, the steps of the virtual network performance acceleration method of any one of the above embodiments can be implemented.
  • Nonvolatile memory can include read only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory.
  • Volatile memory can include random access memory (RAM) or external cache memory.
  • RAM: random access memory
  • RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The present application discloses a virtual network performance acceleration method, apparatus, device, and storage medium, relating to the field of communications technology. The virtual network performance acceleration method comprises: step S1, monitoring whether the OVS invokes the CT mechanism; step S2, triggering a translation rule if it is detected that the OVS invokes the CT mechanism; and step S3, forwarding the translated message translated by the translation rule.

Description

Virtual network performance acceleration method, apparatus, device, and storage medium
CROSS-REFERENCE TO RELATED APPLICATIONS
This application claims priority to the Chinese patent application No. 202111147642.0, filed with the China Patent Office on September 29, 2021 and entitled "Virtual network performance acceleration method, apparatus, device, and storage medium", the entire contents of which are incorporated herein by reference.
TECHNICAL FIELD
The present application relates to the field of communications technology, and in particular to a virtual network performance acceleration method, apparatus, device, and storage medium.
BACKGROUND
Through continuous development and refinement, cloud computing has become mainstream. The traditional model of deploying services on bare metal has nearly disappeared from data centers; instead, services run as virtual machines or containers. With the continuous development of virtualization and containerization, hardware resources that used to be exclusively occupied by a single bare-metal machine are now shared by dozens or even hundreds of virtual machines (VMs) or containers. This has driven up the utilization of hardware resources in data centers and steadily increased the service density carried per unit of hardware, posing higher challenges and requirements for improving the performance obtained from each unit of hardware. For network hardware in particular, how to use fixed network hardware resources efficiently to provide users with the best possible service is a problem that today's data centers urgently need to solve.
At present, the vast majority of virtual networks in data centers use virtual switch software (OVS) to implement switching, routing, and other network functions that traditionally required dedicated hardware. When processing stateful traffic, the OVS invokes the connection tracking (CT) module of the Linux kernel. The kernel's CT module is designed for a generic packet-processing flow; its internal states and processing path are long and complex, which incurs a significant performance loss. Improving the network performance of an OVS virtual network without upgrading hardware has become a key challenge in the field of communications technology.
Therefore, there is an urgent need for a method, apparatus, device, and storage medium that can bypass the CT module so as to accelerate the performance of an OVS-based virtual network.
SUMMARY
The specific technical solutions provided by the embodiments of the present application are as follows:
A virtual network performance acceleration method, comprising: step S1, monitoring whether the OVS invokes the CT mechanism; step S2, triggering a translation rule when it is detected that the OVS invokes the CT mechanism; and step S3, forwarding the translated message translated by the translation rule; wherein the translation rule comprises: obtaining the header information of a first message processed by the CT mechanism, a universally unique identifier, and the header information of a second message that needs to be processed by the CT mechanism; and translating the header information of the second message based on the header information of the first message and the universally unique identifier.
Further, translating the header information of the second message comprises: generating an information correspondence table based on the header information of the first message and the universally unique identifier; determining, based on the information correspondence table, the data structure corresponding to the universally unique identifier; and replacing the header information of the second message based on the data structure to generate a translated message.
Further, the step of generating the information correspondence table comprises: obtaining the header information of the first message processed by the CT mechanism; storing the header information of the first message in the form of a data structure and determining the universally unique identifier corresponding to each data structure; and generating the information correspondence table based on the data structures and the universally unique identifiers.
Further, the header information includes at least: the network address translation type, IP address, port information, and link type; wherein the replacement of the second message's header information is determined by the network address translation type field in the header of the second message.
Further, the method comprises: step S0, when virtual network performance acceleration is enabled and it is confirmed that virtual network performance acceleration is enabled, performing step S1; when it is confirmed that it is not enabled, not performing step S1; and step S2 further comprises: not triggering the translation rule when it is not detected that the OVS invokes the CT mechanism.
Further, after replacement, the header information of the second message is stored in the form of a second data structure; the information correspondence table is updated based on the second data structure and its corresponding universally unique identifier.
Further, updating the information correspondence table comprises: traversing the information correspondence table and identifying the degree of overlap between the header information of the second message and the header information of any of the first messages; and, when the overlap between the header information of the second message and the header information of any of the first messages is lower than a first threshold, storing the second data structure and its corresponding universally unique identifier into the information correspondence table to update it.
A virtual network performance acceleration apparatus, comprising: a control module for enabling/disabling the acceleration function of the virtual network performance acceleration apparatus; a monitoring module for monitoring whether the OVS invokes the CT mechanism; a translation module for translating header information, which obtains the header information of a first message processed by the CT mechanism, the universally unique identifier, and the header information of a second message that needs to be processed by the CT mechanism, generates an information correspondence table based on the header information of the first message and the universally unique identifier, determines, based on the information correspondence table, the data structure corresponding to the universally unique identifier, and replaces the header information of the second message based on the data structure to generate a translated message; and a kernel for receiving the translated message and forwarding it.
The embodiments of the present application also provide a virtual network performance acceleration device, comprising a memory and one or more processors, wherein computer-readable instructions are stored in the memory and, when executed by the one or more processors, cause the one or more processors to perform the steps of any one of the virtual network performance acceleration methods above.
Finally, the embodiments of the present application also provide one or more non-volatile computer-readable storage media storing computer-readable instructions; when the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps of any one of the virtual network performance acceleration methods above.
The details of one or more embodiments of the present application are set forth in the drawings and description below. Other features and advantages of the present application will become apparent from the description, the drawings, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
To describe the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and those of ordinary skill in the art may derive other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of a virtual network performance acceleration method according to one or more embodiments of the present application;
FIG. 2 is a comparison between the message forwarding flow in the prior art and the accelerated message forwarding flow according to one or more embodiments of the present application;
FIG. 3 is a structural block diagram of a virtual network performance acceleration apparatus according to one or more embodiments of the present application;
FIG. 4 is a schematic structural diagram of a computer device provided by one or more embodiments of the present application;
FIG. 5 is a schematic structural diagram of an embodiment of a computer-readable storage medium provided by one or more embodiments of the present application.
DETAILED DESCRIPTION
To make the objectives, technical solutions, and advantages of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.
Embodiment 1
The present application provides a virtual network performance acceleration method. As shown in FIG. 1, users can choose whether to enable virtual network performance acceleration according to their own needs.
Specifically, when it is determined to enable virtual network performance acceleration and it is confirmed that acceleration is enabled, the step of monitoring whether the OVS invokes the CT mechanism is performed. When it is confirmed that acceleration is not enabled, the step of monitoring whether the OVS invokes the CT mechanism is not performed.
If network performance acceleration is not enabled, messages are forwarded according to the original functional module design inside the Linux kernel, which includes modules such as CT, packet filtering, network address translation (NAT), transparent proxy, packet rate limiting, and packet modification.
If network performance acceleration is enabled, the following steps are performed. Step S1: monitoring and determining whether the OVS invokes the CT mechanism, then proceeding to step S2. Step S2: if it is detected that the OVS invokes the CT mechanism, triggering the translation rule, and then performing step S3; if it is detected that the OVS does not invoke the CT mechanism, processing the message according to the original processing flow used when the CT mechanism is not invoked, for example packet filtering, network address translation (NAT), transparent proxy, packet rate limiting, or packet modification. Step S3: forwarding the translated message produced by the translation rule; the generated translated message is re-injected into the Linux kernel, which performs the subsequent flow-table matching and forwarding operations. It should be noted that the virtual network performance acceleration method applies to the processing flow of request messages as well as that of response messages. The goal is to reduce the original processing logic while achieving the same functional effect as the original message processing when the CT mechanism is invoked, thereby achieving high-speed message processing and improving forwarding performance. FIG. 2 compares the original message processing flow when the CT mechanism is invoked with the message processing flow of the virtual network performance acceleration method described in this application; the comparison clearly shows that, relative to a flow that does not use the acceleration method, the processing logic is greatly reduced and message forwarding performance is greatly improved.
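A minimal, non-normative sketch of the S0-S3 dispatch described above, written in Python purely for illustration. The Packet type and the injected callables (ovs_invokes_ct, apply_translation_rule, reinject_into_kernel, original_kernel_path) are hypothetical placeholders for the corresponding OVS/kernel components, not part of the patent; the real logic runs in the OVS datapath and Linux kernel rather than in user space.

```python
from dataclasses import dataclass, field
from typing import Callable


@dataclass
class Packet:
    """Hypothetical stand-in for a message and its header fields."""
    headers: dict = field(default_factory=dict)
    payload: bytes = b""


def process_packet(
    packet: Packet,
    acceleration_enabled: bool,
    ovs_invokes_ct: Callable[[Packet], bool],
    apply_translation_rule: Callable[[Packet], Packet],
    reinject_into_kernel: Callable[[Packet], None],
    original_kernel_path: Callable[[Packet], None],
) -> None:
    # Step S0: without acceleration, keep the unmodified kernel pipeline
    # (CT, packet filtering, NAT, transparent proxy, rate limiting, ...).
    if not acceleration_enabled:
        original_kernel_path(packet)
        return

    # Steps S1/S2: only packets for which OVS would invoke the CT mechanism
    # trigger the translation rule; everything else keeps the original path.
    if not ovs_invokes_ct(packet):
        original_kernel_path(packet)
        return

    # Steps S2/S3: translate the header, then re-inject the translated message
    # so the kernel performs the subsequent flow-table matching and forwarding.
    reinject_into_kernel(apply_translation_rule(packet))
```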
It should be understood that although the steps in the flowchart of FIG. 1 are shown sequentially as indicated by the arrows, they are not necessarily executed in that order. Unless explicitly stated herein, there is no strict ordering restriction on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in FIG. 1 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; their execution order is not necessarily sequential, and they may be performed in turn or alternately with at least part of other steps or of the sub-steps or stages of other steps.
Embodiment 2
A virtual network performance acceleration method, comprising: step S1, monitoring whether the OVS invokes the CT mechanism; step S2, triggering a translation rule if it is detected that the OVS invokes the CT mechanism; and step S3, forwarding the translated message translated by the translation rule; wherein the translation rule comprises: obtaining the header information of a first message processed by the CT mechanism, a universally unique identifier, and the header information of a second message that needs to be processed by the CT mechanism; and translating the header information of the second message based on the header information of the first message and the universally unique identifier.
Specifically, when it is detected that the OVS invokes the CT mechanism, the translation rule is triggered.
In this embodiment, translating the header information of the second message comprises: generating an information correspondence table based on the header information of the first message and the universally unique identifier; determining, based on the information correspondence table, the data structure corresponding to the universally unique identifier; and replacing the header information of the second message based on the data structure to generate a translated message.
In this embodiment, the step of generating the information correspondence table comprises: obtaining the header information of the first message processed by the CT mechanism; storing the header information of the first message in the form of a data structure and determining the universally unique identifier corresponding to each data structure; and generating the information correspondence table based on the data structures and the universally unique identifiers.
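One way such a table could be represented is sketched below, assuming a simple in-memory mapping from a freshly generated UUID to a per-message header record. The HeaderInfo fields mirror the fields the application lists (NAT type, IP address, port information, link type), but the concrete layout is an illustrative assumption rather than the patented data structure.

```python
import uuid
from dataclasses import dataclass


@dataclass(frozen=True)
class HeaderInfo:
    """Illustrative record of one message's header fields."""
    nat_type: str   # e.g. "SNAT" or "DNAT"
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    link_type: str  # e.g. "tcp" or "udp"


def build_correspondence_table(
    first_messages: list[HeaderInfo],
) -> dict[str, HeaderInfo]:
    """Store each CT-processed first message's header information as a data
    structure and key it by a universally unique identifier, producing the
    information correspondence table."""
    table: dict[str, HeaderInfo] = {}
    for info in first_messages:
        table[str(uuid.uuid4())] = info
    return table
```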
In this embodiment, the header information includes at least: the network address translation type, IP address, port information, and link type; wherein the replacement of the second message's header information is determined by the network address translation type field in the header of the second message.
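A hedged sketch of such NAT-type-driven replacement is given below; it reuses the HeaderInfo record from the previous sketch. Treating "SNAT" as rewriting the source address/port and "DNAT" as rewriting the destination address/port is a common convention assumed here for illustration, not a rule stated in the application.

```python
def translate_second_message(headers: dict, entry: "HeaderInfo") -> dict:
    """Replace the second message's header fields using a stored entry;
    which fields are rewritten is decided by the NAT type carried in the
    second message's own header."""
    translated = dict(headers)
    if headers.get("nat_type") == "SNAT":
        # Source NAT: rewrite the source address and port from the entry.
        translated["src_ip"] = entry.src_ip
        translated["src_port"] = entry.src_port
    elif headers.get("nat_type") == "DNAT":
        # Destination NAT: rewrite the destination address and port instead.
        translated["dst_ip"] = entry.dst_ip
        translated["dst_port"] = entry.dst_port
    return translated
```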
In this embodiment, the method comprises: step S0, determining whether to enable virtual network performance acceleration; if it is confirmed that virtual network performance acceleration is enabled, performing step S1; if it is confirmed that it is not enabled, not performing step S1. Step S2 further comprises: if it is not detected that the OVS invokes the CT mechanism, not triggering the translation rule.
Specifically, when it is determined to enable virtual network performance acceleration and it is confirmed that acceleration is enabled, step S1 is executed; when it is confirmed that acceleration is not enabled, step S1 is not executed. When it is not detected that the OVS invokes the CT mechanism, the translation rule is not triggered.
In this embodiment, after replacement, the header information of the second message is stored in the form of a second data structure; the information correspondence table is updated based on the second data structure and its corresponding universally unique identifier.
In this embodiment, updating the information correspondence table comprises: traversing the information correspondence table and determining the degree of overlap between the header information of the second message and the header information of any of the first messages; and, if the overlap between the header information of the second message and the header information of any of the first messages is lower than a first threshold, storing the second data structure and its corresponding universally unique identifier into the information correspondence table to update it. Those skilled in the art can choose a reasonable value for the first threshold according to the actual situation; in a specific embodiment, the first threshold is 95%.
Specifically, the degree of overlap between the header information of the second message and the header information of any of the first messages is identified. When it is determined that this overlap is lower than the first threshold, the second data structure and its corresponding universally unique identifier are stored into the information correspondence table to update it.
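The sketch below illustrates one plausible reading of this update step, again reusing HeaderInfo and the UUID-keyed table from the earlier sketches. The overlap metric (fraction of equal header fields) and the reading of "any first message" as "every stored entry" (i.e. the second data structure is inserted only when it does not nearly duplicate an existing entry) are assumptions made for illustration; the application does not prescribe a specific metric.

```python
import uuid
from dataclasses import asdict


def header_overlap(a: "HeaderInfo", b: "HeaderInfo") -> float:
    """Fraction of header fields that are identical between two records;
    one simple way to quantify the 'degree of overlap'."""
    fields_a, fields_b = asdict(a), asdict(b)
    matches = sum(1 for key in fields_a if fields_a[key] == fields_b[key])
    return matches / len(fields_a)


def update_table(
    table: dict[str, "HeaderInfo"],
    second_entry: "HeaderInfo",
    threshold: float = 0.95,  # the 95% first threshold of this embodiment
) -> None:
    """Traverse the table and insert the second data structure under a new
    UUID only when its overlap with every stored entry is below the threshold."""
    if all(header_overlap(second_entry, existing) < threshold
           for existing in table.values()):
        table[str(uuid.uuid4())] = second_entry
```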
The embodiments of the present application have the following beneficial effects:
1. The control module described in this application allows users to enable or disable virtual network performance acceleration according to their own needs, which helps improve the user experience and avoids wasting performance resources.
2. The translation rules described in this application optimize the processing logic, which further improves the message forwarding rate, reduces forwarding latency, and improves network performance; that is, the virtual network performance of the OVS is greatly improved without increasing hardware cost.
3. Through the generated translation rules, the virtual network performance acceleration method described in this application bypasses the long and complex CT processing flow originally designed for general purposes in the Linux kernel, shortens the processing path, and improves network performance. Compared with the forwarding performance of the original OVS, forwarding performance can be increased by 40%-60%, while forwarding latency can be reduced by 30%.
4. The virtual network performance acceleration method described in this application makes full use of the bandwidth resources of data center network links without upgrading hardware, and provides users with an excellent experience.
Embodiment 3
A virtual network performance acceleration apparatus, as shown in FIG. 3, comprising: a monitoring module for monitoring whether the OVS invokes the CT mechanism; a translation module that translates the header information according to a preset translation rule to generate a translated message, wherein the translation rule comprises: obtaining the header information of the first message processed by the CT mechanism, the universally unique identifier, and the header information of the second message that needs to be processed by the CT mechanism; generating an information correspondence table based on the header information of the first message and the universally unique identifier; determining, based on the information correspondence table, the data structure corresponding to the universally unique identifier; and replacing the header information of the second message based on the data structure to generate a translated message; and a kernel for receiving the translated message and forwarding it.
In one embodiment, the virtual network performance acceleration apparatus may further comprise: a control module for enabling/disabling the acceleration function of the apparatus; a monitoring module for monitoring whether the OVS invokes the CT mechanism; a translation module that translates the header information according to the preset translation rule to generate a translated message; and a kernel for receiving the translated message and forwarding it. The translation rule comprises: obtaining the header information of the first message processed by the CT mechanism, the universally unique identifier, and the header information of the second message that needs to be processed by the CT mechanism; generating an information correspondence table based on the header information of the first message and the universally unique identifier; determining, based on the information correspondence table, the data structure corresponding to the universally unique identifier; and replacing the header information of the second message based on the data structure to generate a translated message.
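Purely to make the division of labour between the four modules concrete, the sketch below composes them as a small Python object. The module boundaries follow the description above, while the dict-based packet, the injected callables, and the class itself are illustrative assumptions; the actual apparatus operates in the OVS/Linux-kernel datapath, not in user-space Python.

```python
from dataclasses import dataclass
from typing import Callable


@dataclass
class AccelerationApparatus:
    """Illustrative composition of the control, monitoring, translation,
    and kernel roles described in this embodiment."""
    monitor: Callable[[dict], bool]         # monitoring module: does OVS invoke CT?
    translate: Callable[[dict], dict]       # translation module: preset translation rule
    kernel_forward: Callable[[dict], None]  # kernel: flow-table matching and forwarding
    enabled: bool = True                    # state toggled by the control module

    def enable(self) -> None:               # control module
        self.enabled = True

    def disable(self) -> None:
        self.enabled = False

    def handle(self, packet: dict) -> None:
        # Translate only when acceleration is on and the monitor reports that
        # the OVS would invoke the CT mechanism; then hand off to the kernel.
        if self.enabled and self.monitor(packet):
            packet = self.translate(packet)
        self.kernel_forward(packet)
```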
In this embodiment, the step of generating the information correspondence table comprises: obtaining the header information of the first message processed by the CT mechanism; storing the header information of the first message in the form of a data structure and determining the universally unique identifier corresponding to each data structure; and generating the information correspondence table based on the data structures and the universally unique identifiers. The header information includes at least: the network address translation type, IP address, port information, and link type.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as such combinations are not contradictory, they should be regarded as falling within the scope of this specification. It should be understood that the modules included in the virtual network performance acceleration apparatus may include, but are not limited to, the functional modules described above; those skilled in the art may combine the above modules according to actual scenario requirements, or choose other modules or units capable of realizing the above functions.
Those skilled in the art will appreciate that the structure shown in FIG. 3 is only a block diagram of a partial structure related to the solution of this application and does not constitute a limitation on the virtual network performance acceleration apparatus to which the solution is applied; the specific computer device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
Embodiment 4
In some embodiments, a virtual network performance acceleration device is provided, whose internal structure may be as shown in FIG. 4. The device comprises a processor, a memory, a network interface, and an input apparatus connected through a system bus. The processor provides computing and control capabilities. The memory comprises a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and computer-readable instructions. The internal memory provides an environment for running the operating system and the computer-readable instructions in the non-volatile storage medium. The network interface of the device is used to communicate with an external terminal or server over a network connection. The computer-readable instructions, when executed by the processor, implement a virtual network performance acceleration method. The input apparatus may be a touch layer covering the display screen, a key, trackball, or touchpad provided on the housing of the computer device, or an external keyboard, touchpad, or mouse.
Those skilled in the art will appreciate that the structure shown in FIG. 4 is only a block diagram of a partial structure related to the solution of the present application and does not constitute a limitation on the device to which the solution is applied; the specific device may include more or fewer components than shown in the figure, or combine certain components, or have a different arrangement of components.
It should be understood that the memory of the virtual network performance acceleration device comprises a non-volatile storage medium and an internal memory. Those skilled in the art will appreciate that the above virtual network performance acceleration device is only a partial structure related to the solution of this application and does not constitute a limitation on the virtual network performance acceleration device to which the solution is applied; the specific device may include more or fewer components than shown in the above structure, or combine certain components, or have different components.
Embodiment 5
Based on the same inventive concept, and according to another aspect of the present application, as shown in FIG. 5, the embodiments of the present application further provide a non-volatile readable storage medium 50 storing computer-readable instructions 510; when executed by one or more processors, the computer-readable instructions 510 can implement the steps of the virtual network performance acceleration method of any one of the above embodiments.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments may be implemented by computer-readable instructions instructing the relevant hardware. The computer-readable instructions may be stored in a non-volatile computer-readable storage medium and, when executed, may include the processes of the embodiments of the above methods. Any reference to memory, storage, a database, or other media used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
Although the preferred embodiments of the embodiments of the present application have been described, those skilled in the art, once aware of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be construed as covering the preferred embodiments and all changes and modifications that fall within the scope of the embodiments of the present application.
Obviously, those skilled in the art may make various changes and variations to the present application without departing from its spirit and scope. If these modifications and variations of the present application fall within the scope of the claims of the present application and their technical equivalents, the present application is also intended to encompass them.

Claims (10)

  1. A virtual network performance acceleration method, characterized by comprising:
    step S1, monitoring whether the OVS invokes the CT mechanism;
    step S2, triggering a translation rule when it is detected that the OVS invokes the CT mechanism; and
    step S3, forwarding the translated message translated by the translation rule;
    wherein the translation rule comprises:
    obtaining the header information of a first message processed by the CT mechanism, a universally unique identifier, and the header information of a second message that needs to be processed by the CT mechanism; and
    translating the header information of the second message based on the header information of the first message and the universally unique identifier.
  2. The virtual network performance acceleration method according to claim 1, characterized in that translating the header information of the second message comprises:
    generating an information correspondence table based on the header information of the first message and the universally unique identifier;
    determining, based on the information correspondence table, the data structure corresponding to the universally unique identifier; and
    replacing the header information of the second message based on the data structure to generate a translated message.
  3. The virtual network performance acceleration method according to claim 2, characterized in that the step of generating the information correspondence table comprises:
    obtaining the header information of the first message processed by the CT mechanism;
    storing the header information of the first message in the form of a data structure and determining the universally unique identifier corresponding to each data structure; and
    generating the information correspondence table based on the data structures and the universally unique identifiers.
  4. The virtual network performance acceleration method according to claim 3, characterized in that the header information comprises at least: the network address translation type, IP address, port information, and link type; wherein the replacement of the header information of the second message is determined by the network address translation type field in the header of the second message.
  5. The virtual network performance acceleration method according to claim 1, characterized by comprising: step S0, when virtual network performance acceleration is enabled and it is confirmed that virtual network performance acceleration is enabled, performing step S1; when it is confirmed that virtual network performance acceleration is not enabled, not performing step S1; and
    step S2 further comprises: not triggering the translation rule when it is not detected that the OVS invokes the CT mechanism.
  6. The virtual network performance acceleration method according to claim 3, characterized in that after replacement the header information of the second message is stored in the form of a second data structure; and the information correspondence table is updated based on the second data structure and its corresponding universally unique identifier.
  7. The virtual network performance acceleration method according to claim 6, characterized in that updating the information correspondence table comprises:
    traversing the information correspondence table and identifying the degree of overlap between the header information of the second message and the header information of any of the first messages; and
    when the degree of overlap between the header information of the second message and the header information of any of the first messages is lower than a first threshold, storing the second data structure and its corresponding universally unique identifier into the information correspondence table to update the information correspondence table.
  8. A virtual network performance acceleration apparatus, characterized by comprising:
    a control module, configured to enable/disable the acceleration function of the virtual network performance acceleration apparatus;
    a monitoring module, configured to monitor whether the OVS invokes the CT mechanism;
    a translation module, configured to translate header information, obtain the header information of a first message processed by the CT mechanism, a universally unique identifier, and the header information of a second message that needs to be processed by the CT mechanism, generate an information correspondence table based on the header information of the first message and the universally unique identifier, determine, based on the information correspondence table, the data structure corresponding to the universally unique identifier, and replace the header information of the second message based on the data structure to generate a translated message; and
    a kernel, configured to receive the translated message and forward the translated message.
  9. A virtual network performance acceleration device, characterized by comprising a memory and one or more processors, wherein computer-readable instructions are stored in the memory and, when executed by the one or more processors, cause the one or more processors to perform the steps of the method according to any one of claims 1-7.
  10. One or more non-volatile computer-readable storage media storing computer-readable instructions, characterized in that, when the computer-readable instructions are executed by one or more processors, the one or more processors perform the steps of the method according to any one of claims 1-7.
PCT/CN2022/074069 2021-09-29 2022-01-26 Virtual network performance acceleration method, apparatus, device, and storage medium WO2023050663A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/279,159 US20240089171A1 (en) 2021-09-29 2022-01-26 Virtual network performance acceleration method, apparatus and device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111147642.0A CN113595938B (zh) 2021-09-29 2021-09-29 Virtual network performance acceleration method, apparatus, device, and storage medium
CN202111147642.0 2021-09-29

Publications (1)

Publication Number Publication Date
WO2023050663A1 true WO2023050663A1 (zh) 2023-04-06

Family

ID=78242719

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/074069 WO2023050663A1 (zh) 2021-09-29 2022-01-26 一种虚拟网络性能加速方法、装置、设备及存储介质

Country Status (3)

Country Link
US (1) US20240089171A1 (zh)
CN (1) CN113595938B (zh)
WO (1) WO2023050663A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113595938B (zh) * 2021-09-29 2021-12-17 苏州浪潮智能科技有限公司 一种虚拟网络性能加速方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170371692A1 (en) * 2016-06-22 2017-12-28 Ciena Corporation Optimized virtual network function service chaining with hardware acceleration
CN109962832A * 2017-12-26 2019-07-02 华为技术有限公司 Message processing method and apparatus
CN110391993A * 2019-07-12 2019-10-29 苏州浪潮智能科技有限公司 Data processing method and system
CN110636036A * 2018-06-22 2019-12-31 复旦大学 SDN-based OpenStack cloud host network access control method
CN113595938A * 2021-09-29 2021-11-02 苏州浪潮智能科技有限公司 Virtual network performance acceleration method, apparatus, device, and storage medium

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7818729B1 (en) * 2003-09-15 2010-10-19 Thomas Plum Automated safe secure techniques for eliminating undefined behavior in computer software
CN108234359B (zh) * 2016-12-13 2020-12-04 华为技术有限公司 System and method for transmitting messages
US10547553B2 (en) * 2017-09-17 2020-01-28 Mellanox Technologies, Ltd. Stateful connection tracking
CN107872545B (zh) * 2017-09-26 2022-12-06 中兴通讯股份有限公司 Message transmission method and apparatus, and computer-readable storage medium
US10708229B2 (en) * 2017-11-15 2020-07-07 Nicira, Inc. Packet induced revalidation of connection tracker
US10757077B2 (en) * 2017-11-15 2020-08-25 Nicira, Inc. Stateful connection policy filtering
CN110708393B (zh) * 2019-10-21 2023-11-21 北京百度网讯科技有限公司 Method, apparatus, and system for transmitting data

Also Published As

Publication number Publication date
CN113595938B (zh) 2021-12-17
CN113595938A (zh) 2021-11-02
US20240089171A1 (en) 2024-03-14

Similar Documents

Publication Publication Date Title
US20200195511A1 (en) Network management method and related device
TWI766893B (zh) 虛擬專有網路及規則表生成方法、裝置及路由方法
EP3373518B1 (en) Service configuration method and device for network service
US11055159B2 (en) System and method for self-healing of application centric infrastructure fabric memory
CN110808857B (zh) 实现Kubernetes集群的网络互通方法、装置、设备以及存储介质
WO2021109750A1 (zh) 节点管理方法、装置、设备、存储介质和系统
CN115134315B (zh) 报文转发方法及相关装置
WO2013152565A1 (zh) 能力聚合开放的方法和系统
WO2023050663A1 (zh) 一种虚拟网络性能加速方法、装置、设备及存储介质
CN115174474B (zh) 一种私有云内基于SRv6的SFC实现方法及装置
EP4391448A1 (en) Method and apparatus for determining lost host
CN114157633B (zh) 一种报文转发方法及装置
CN114567481A (zh) 一种数据传输方法、装置、电子设备及存储介质
US20230081696A1 (en) Methods for Shunting Clustered Gateways
CN111988154B (zh) 一种网络传输加速的方法、装置及计算机可读存储介质
US9668082B2 (en) Virtual machine based on a mobile device
CN111404705B (zh) 一种sdn的优化方法、装置及计算机可读存储介质
CN113746802B (zh) 网络功能虚拟化中的方法以及本地状态和远程状态全存储的vnf装置
WO2024066503A1 (zh) 服务调用方法及装置
WO2024104021A1 (zh) 建立会话的方法、装置、电子设备及存储介质
US20240244080A1 (en) Method and apparatus for determining compromised host
US20240223504A1 (en) Packet processing method, flow specification transmission method, device, system, and storage medium
WO2023207278A1 (zh) 一种报文处理方法及装置
WO2024027398A1 (zh) 一种通信方法和装置
WO2024033882A1 (en) Near real time request handling in flexible api router

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22874066

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 18279159

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE