WO2020134153A1 - Traffic distribution method, system and processing device - Google Patents

Traffic distribution method, system and processing device Download PDF

Info

Publication number
WO2020134153A1
WO2020134153A1 (PCT/CN2019/103682)
Authority
WO
WIPO (PCT)
Prior art keywords
network card
processed message
processed
target network
software
Prior art date
Application number
PCT/CN2019/103682
Other languages
English (en)
French (fr)
Inventor
明义波
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2020134153A1 publication Critical patent/WO2020134153A1/zh

Links

Images

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L49/00 - Packet switching elements
    • H04L49/35 - Switches specially adapted for specific applications
    • H04L49/351 - Switches specially adapted for specific applications for local area network [LAN], e.g. Ethernet switches
    • H04L49/352 - Gigabit ethernet switching [GBPS]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/12 - Avoiding congestion; Recovering from congestion
    • H04L47/125 - Avoiding congestion; Recovering from congestion by balancing the load, e.g. traffic engineering
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/24 - Traffic characterised by specific attributes, e.g. priority or QoS

Definitions

  • This application relates to the field of mobile communications, and in particular to a traffic distribution method, system and processing device.
  • With the introduction of 10-gigabit and smart NICs and the continuing development of hardware virtualization, 40G and 100G optical ports, driven by the DPDK (Data Plane Development Kit) suite, can achieve low-latency linear forwarding on general-purpose servers, with performance close to that of traditional dedicated equipment.
  • Although NIC hardware distribution delivers very high performance, it generally cannot satisfy every scenario: the distribution strategy cannot be customized, and some NICs do not support multiple queues, which imposes significant limitations.
  • the present application provides a traffic distribution method, apparatus and system, a processing device, and a storage medium.
  • the present application provides a distribution method, including:
  • a processing device which includes a processor and a memory for storing processor-executable instructions.
  • when the processor executes the instructions, the following steps are implemented:
  • a distribution system including: a network card, the above processing device, and a processing thread.
  • Compared with the prior art, the technical solution provided by the embodiments of the present application has the following advantage: packets to be processed from the network are distributed by software to multiple processing threads, that is, to a multi-core processing device for processing. This solves the poor distribution that results when traffic is split only according to the number of transmit/receive queues supported by the NIC itself, which cannot satisfy the need to exploit the NIC's IO performance while fully using the CPU's multiple cores, and achieves the technical effect of utilizing the multi-core advantage while exploiting network IO performance.
  • FIG. 1 is a schematic diagram of the NIC hardware distribution model;
  • FIG. 2 is a flowchart of a traffic distribution method provided by an embodiment of the present application;
  • FIG. 3 is a schematic diagram of the SR-IOV architecture;
  • FIG. 4 is a schematic diagram of the single-RSS distribution model provided by an embodiment of the present application;
  • FIG. 5 is a schematic diagram of the multi-RSS distribution model provided by an embodiment of the present application;
  • FIG. 6 is a diagram of the pass-through NIC configuration code in a virtual machine configuration file provided by an embodiment of the present application;
  • FIG. 7 is a schematic diagram of the interface-mode setting commands provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of the software distribution strategy provided by an embodiment of the present application;
  • FIG. 9 is a schematic diagram of viewing, through a diagnostic command, the packets sent and received by each forwarding instance, provided by an embodiment of the present application;
  • FIG. 10 is a schematic diagram of the SR-IOV NIC configuration code in a virtual machine configuration file provided by an embodiment of the present application;
  • FIG. 11 is a schematic diagram of the commands for setting a VF VLAN provided by an embodiment of the present application;
  • FIG. 12 is a structural block diagram of a traffic distribution apparatus provided by an embodiment of the present application.
  • In this example, a traffic distribution method is provided, including the following steps:
  • Step 201: Receive packets to be processed from the target network card.
  • If the target network card supports multiple transmit/receive queues, the NIC first distributes traffic to different distribution devices (each distribution device may be an RSS instance), and each distribution device then distributes by software to different processing threads (i.e., WORK threads).
  • If the target network card does not support multiple transmit/receive queues, the distribution device distributes directly by software to different processing threads.
  • Accordingly, the packets received from the target network card may or may not have already been distributed by the NIC, which is mainly determined by the NIC type.
  • Step 202: Distribute the packets to be processed to multiple processing threads by software.
  • For example, packets from network card 1 can be distributed by software to processing thread 1, processing thread 2 and processing thread 3,
  • and packets from network card 2 can be distributed by software to processing thread 5 and processing thread 6.
  • Processing threads 1, 2, 3, 5 and 6 may correspond to different processing cores.
  • The specific software distribution strategy is pre-configured, for example by a developer through a strategy configuration interface; that is, distribution of the packets to be processed is implemented in software.
  • Step 203: Receive the packets processed by the multiple processing threads, and send the processed packets out through the target network card.
  • In the above example, packets to be processed from the network are distributed by software to multiple processing threads, that is, to a multi-core processing device for processing. This solves the poor distribution that results when traffic is split only according to the number of transmit/receive queues supported by the NIC itself, and achieves the technical effect of fully utilizing the CPU's multiple cores while exploiting the NIC's IO performance.
  • Combining software distribution with hardware distribution (i.e., distribution based on the number of transmit/receive queues supported by the NIC) can improve distribution efficiency and make the distribution more reasonable. Therefore, receiving the packets to be processed from the target network card in step 201 above may mean receiving packets that the target network card has already distributed in hardware, where the target network card distributes the packets according to the number of receive queues it supports; that is, hardware distribution is performed by the NIC.
  • For example, if the NIC supports two receive queues, it can distribute the packets to be processed to two distribution devices, and each distribution device then distributes the packets by software to the processing threads.
  • Sending the processed packets through the target network card may include: mounting the processed packets on a send queue, and sending the processed packets out through the target network card.
  • The send queue here is the send queue on the distribution device; that is, after a processing thread finishes, the processed packet is first mounted on the distribution device's send queue and then sent through the NIC, so that packets are processed and sent in an orderly way.
  • The packets to be processed may be distributed by software to multiple processing threads according to a load balancing strategy.
  • By adopting load-balanced distribution, each processing thread receives an approximately equal workload, which both uses resources effectively and improves packet-processing efficiency.
  • Specifically, a characteristic value of each packet to be processed may be computed according to a preset distribution strategy, and the packet is then distributed to one of the multiple processing threads according to that characteristic value. That is, distribution is performed by computing a feature value based on the content and characteristics of the packet itself, which is more accurate.
  • The preset distribution strategy may be configured by a developer: for example, a configuration instruction for the preset distribution strategy may be received, and in response to the configuration instruction the preset distribution strategy is configured.
  • the specific configuration method can be selected according to the actual situation, which is not limited in this application.
  • the above target network card may include, but is not limited to, at least one of the following: a network card that supports multiple queues, and a network card that does not support multiple queues.
  • The devices involved may include virtualization devices in the wired and wireless communications fields, such as routers, telecom gateways, subscriber access, address translation and firewalls. Generalized X86 server equipment enables rapid iteration and delivery of requirements, and is widely used in SDN/NFV networks, cloud computing and data centers.
  • Multi-queue NICs were originally introduced to solve network IO QoS (quality of service) problems. Later, as network IO bandwidth kept increasing, a single-core CPU (Central Processing Unit) could no longer fully keep up with the NIC. Therefore, with support from multi-queue NIC drivers, each queue is bound to a different core through its interrupt, satisfying the NIC's needs while also reducing the load on any single CPU and improving the computing power of the system.
  • When the service chain is complex, hardware distribution encounters bottlenecks: the distribution strategies supported by the hardware cannot be customized, and some scenarios cannot achieve load balancing. The forwarding threads are also strongly tied to the number of NIC queues, so multi-core resources cannot be fully used.
  • In addition, hardware distribution strategies apply only to outer packets; distribution based on inner packets is impossible. For example, if the outer layer is a tunnel, the wish to distribute traffic based on inner-layer packets cannot be satisfied.
  • Command configuration is used to implement customizable distribution. Taking software-hardware coordination as an example, the following steps may be included:
  • The server specifications can be determined from the traffic bandwidth and service characteristics, for example the server's NIC model, the number of CPU cores and the memory size;
  • Services can be deployed on bare metal or on virtual machines; appropriate hardware resources can be allocated according to service characteristics, and an operating system or virtualization management software installed.
  • The forwarding threads need to be bound to cores exclusively; in a virtualized environment, CPUs and vCPUs also need to be bound, and the CPUs should be placed on the same NUMA (Non-Uniform Memory Access) node to improve performance.
  • In a virtualized environment, NUMA nodes must be assigned for the virtual machine memory and huge pages allocated to improve performance; packets are sent and received with the DPDK suite, so the operating system must reserve huge pages for DPDK.
  • The network card may be a kernel virtual NIC, an exclusive NIC in pass-through mode, or a NIC shared among multiple virtual machines through IO virtualization.
  • On a cloud platform, cloud hosts can share the NIC through SR-IOV (single-root I/O virtualization) to make full use of the NIC's hardware capabilities.
  • FIG. 3 is a schematic diagram of SR-IOV, where a VF (Virtual Function) is a function associated with a physical function: a lightweight PCIe (Peripheral Component Interconnect Express, a high-speed serial computer expansion bus standard) function that can share one or more physical resources with the physical function and with other VFs associated with the same physical function.
  • Each VF has a PCI memory space used to map its register set.
  • The VF device driver operates on the register set to enable its functionality, and the VF appears as a PCI device that actually exists.
  • A PF (Physical Function) is a PCI function used to support SR-IOV.
  • The PF contains the SR-IOV capability structure and manages the SR-IOV functionality.
  • A PF is a full-featured PCIe function that can be discovered, managed and handled like any other PCIe device; it owns fully configurable resources and can be used to configure or control the PCIe device.
  • A VF (Virtual Function) is a function associated with a physical function.
  • A VF is a lightweight PCIe function that can share one or more physical resources with the physical function and with other VFs associated with the same physical function.
  • A VF is only allowed to own the configuration resources for its own behavior.
  • Each SR-IOV device can have a physical function (PF), and each PF can have up to 64,000 virtual functions (VFs) associated with it. The PF creates VFs through registers that are designed with attributes dedicated to this purpose. Once SR-IOV is enabled in the PF, the PCI configuration space of each VF can be accessed through the PF's bus, device and function number (routing ID). Each VF has a PCI memory space used to map its register set.
  • This method is flexible, the command configuration is simple, and deployment is easy; by selecting different distribution models and configuring the distribution strategy according to the performance requirements, it can satisfy most application scenarios.
  • Software-only distribution, shown in FIG. 4, is a special case of software-hardware distribution that requires no hardware distribution by the NIC.
  • In FIG. 4, the two NICs do not support multiple queues: after traffic is received by the RSS instance, software distributes it to different WORK threads; after a WORK thread finishes, the packet is hung on the RSS send queue, and RSS then sends it out through the NIC.
  • Software distribution is suitable when the NIC does not support multiple queues and core resources are plentiful.
  • However, the limitation of a single RSS instance is obvious: software distribution becomes the bottleneck. A single RSS instance peaks at about 8 Mpps, corresponding to roughly 20G of NIC bandwidth; beyond this limit a single RSS instance cannot keep up.
  • Software-hardware distribution can proceed as shown in FIG. 5.
  • The NIC hardware first distributes to different RSS instances, then each RSS instance distributes in software to different WORK threads; after a WORK thread finishes, the packet is hung on the send queue, and the RSS instances send the packets out through the different NICs.
  • When NIC bandwidth is large and core resources are plentiful, this software-hardware distribution method exploits the hardware's potential to the greatest extent.
  • Linear forwarding above 100G can easily be achieved.
  • In the multi-RSS model of FIG. 5, two NICs each have two transmit/receive queues: packets are first distributed by hardware to different RSS instances, each RSS instance distributes by software to different WORK threads, and after processing the packets are hung on the RSS send queues, from which each RSS instance sends them out through the NIC.
  • For the virtual machine NIC pass-through implementation, taking two 20G NICs with VM pass-through as an example, distribution is realized in software and hardware: the DPDK suite sends and receives the packets, and RSS distributes in software. The implementation model is shown in FIG. 5.
  • S2: When DPDK is initialized, set the number of queues and the distribution strategy, where the number of transmit/receive queues equals twice the number of RSS instances.
  • The distribution strategy is NIC-dependent and is controlled by a macro switch.
  • S3: Set the software distribution strategy in the CLI interface: first choose distribution based on inner or outer packets, then select the specific strategy. The commands can be configured as shown in FIG. 7 and FIG. 8.
  • In interface mode, choose whether to distribute based on inner packets; the default is distribution based on outer packets. After inner-packet distribution is enabled, the distribution rules take effect on the inner packets. By default, software distribution is based on SIP (Session Initiation Protocol); this can be set flexibly by command and configured on each interface.
  • For the virtual machine SR-IOV implementation, take two 100G NICs and VM VF NICs as an example. SR-IOV uses a distribution method combining hardware and software. Since the NIC bandwidth is shared, assume the virtual machine needs to support 40G of performance; the main differences from pass-through mode are:
  • the VM configuration file contains the SR-IOV settings; FIG. 10 shows the SR-IOV NIC configuration in the VM configuration file, where the upper line is the VF's PCI address and the lower line is the mapped NIC MAC address;
  • the PF NIC forwards traffic according to the VF's MAC address and VLAN.
  • FIG. 12 is a structural block diagram of a traffic distribution apparatus according to an embodiment of the present invention. As shown in FIG. 12, it includes a receiving module 1201, a distribution module 1202 and a sending module 1203, described below.
  • The receiving module 1201 is configured to receive packets to be processed from the target network card;
  • the distribution module 1202 is configured to distribute the packets to be processed to multiple processing threads by software;
  • the sending module 1203 is configured to receive the packets processed by the multiple processing threads and send the processed packets out through the target network card.
  • The distribution module 1202 may specifically receive packets that the target network card has already distributed in hardware, where the target network card distributes the packets to be processed according to the number of receive queues it supports.
  • The sending module 1203 may specifically mount the processed packets on a send queue and send them out through the target network card.
  • The distribution module 1202 may specifically distribute the packets to be processed to multiple processing threads by software according to a load balancing strategy.
  • The distribution module 1202 may specifically compute the characteristic value of each packet to be processed according to a preset distribution strategy, and distribute the packets to multiple processing threads according to the characteristic value and the load balancing strategy.
  • A configuration instruction for the preset distribution strategy may also be received; in response to the configuration instruction, the preset distribution strategy is configured.
  • the target network card may include but is not limited to at least one of the following: a network card that supports multiple queues, and a network card that does not support multiple queues.
  • The embodiments of the present application also provide a specific implementation of an electronic device capable of implementing all the steps of the distribution method in the foregoing embodiments,
  • the electronic device specifically including:
  • a processor, a memory, a communication interface and a bus;
  • the processor, the memory and the communication interface communicate with one another through the bus;
  • the processor is configured to call a computer program in the memory; when the processor executes the computer program, all steps of the distribution method in the above embodiments are implemented, for example the following steps:
  • Step 1: Receive packets to be processed from the target network card;
  • Step 2: Distribute the packets to be processed to multiple processing threads by software;
  • Step 3: Receive the packets processed by the multiple processing threads, and send the processed packets out through the target network card.
  • The embodiment of the present application distributes packets to be processed from the network to multiple processing threads by software, that is, to a multi-core processing device for processing, which solves the poor distribution that results when traffic is split only according to the number of transmit/receive queues supported by the NIC itself, and achieves the technical effect of fully utilizing the CPU's multiple cores while exploiting network IO performance.
  • The embodiments of the present application also provide a computer-readable storage medium capable of implementing all the steps of the distribution method in the foregoing embodiments.
  • The computer-readable storage medium stores a computer program which, when executed by a processor, implements all the steps of the distribution method in the above embodiments, for example the following steps:
  • Step 1: Receive packets to be processed from the target network card;
  • Step 2: Distribute the packets to be processed to multiple processing threads by software;
  • Step 3: Receive the packets processed by the multiple processing threads, and send the processed packets out through the target network card.
  • The embodiment of the present application distributes packets to be processed from the network to multiple processing threads by software, that is, to a multi-core processing device for processing, which solves the poor distribution that results when traffic is split only according to the number of transmit/receive queues supported by the NIC itself, and achieves the technical effect of fully utilizing the CPU's multiple cores while exploiting network IO performance.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

This application relates to a traffic distribution method, system and processing device. The method includes: receiving packets to be processed from a target network card; distributing the packets to be processed to multiple processing threads by software; and receiving the packets processed by the multiple processing threads and sending the processed packets out through the target network card. The above solution achieves the technical effect of fully utilizing the multi-core advantage while exploiting network IO performance.

Description

Traffic distribution method, system and processing device
This application claims priority to Chinese patent application CN201811602922.4, filed on December 26, 2018 and entitled "Traffic distribution method, apparatus and system, processing device and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of mobile communications, and in particular to a traffic distribution method, system and processing device.
Background
With the introduction of 10-gigabit and smart NICs and the continuing development of hardware virtualization technology, 40G, 100G and other optical ports, driven by the DPDK (Data Plane Development Kit) suite, can achieve low-latency linear forwarding on general-purpose servers, with performance close to that of traditional dedicated equipment.
Although NIC hardware distribution delivers very high performance, it generally cannot satisfy every scenario: the distribution strategy cannot be customized, and some NICs do not support multiple queues, which imposes significant limitations.
No effective solution has yet been proposed for fully utilizing the CPU's multi-core advantage while fully exploiting the NIC's IO performance.
Summary
To solve the above technical problem, or at least partially solve it, this application provides a traffic distribution method, apparatus and system, a processing device and a storage medium.
In a first aspect, this application provides a traffic distribution method, including:
receiving packets to be processed from a target network card;
distributing the packets to be processed to multiple processing threads by software;
receiving the packets processed by the multiple processing threads, and sending the processed packets out through the target network card.
In a second aspect, a processing device is provided, including a processor and a memory for storing processor-executable instructions, where the processor, when executing the instructions, implements the following steps:
receiving packets to be processed from a target network card;
distributing the packets to be processed to multiple processing threads by software;
receiving the packets processed by the multiple processing threads, and sending the processed packets out through the target network card.
In a third aspect, a traffic distribution system is provided, including: a network card, the above processing device, and processing threads.
Compared with the prior art, the technical solution provided by the embodiments of this application has the following advantage: packets to be processed from the network are distributed by software to multiple processing threads, that is, to a multi-core processing device for processing. This solves the poor distribution that results when traffic is split only according to the number of transmit/receive queues supported by the NIC itself, which cannot satisfy the need to exploit the NIC's IO performance while fully using the CPU's multiple cores, and achieves the technical effect of utilizing the multi-core advantage while exploiting network IO performance.
Brief Description of the Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present invention and, together with the description, serve to explain the principles of the invention.
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, a person of ordinary skill in the art can derive other drawings from these drawings without creative effort.
FIG. 1 is a schematic diagram of the NIC hardware distribution model;
FIG. 2 is a flowchart of a traffic distribution method provided by an embodiment of this application;
FIG. 3 is a schematic diagram of the SR-IOV architecture;
FIG. 4 is a schematic diagram of the single-RSS distribution model provided by an embodiment of this application;
FIG. 5 is a schematic diagram of the multi-RSS distribution model provided by an embodiment of this application;
FIG. 6 is a diagram of the pass-through NIC configuration code in a virtual machine configuration file provided by an embodiment of this application;
FIG. 7 is a schematic diagram of the interface-mode setting commands provided by an embodiment of this application;
FIG. 8 is a schematic diagram of the software distribution strategy provided by an embodiment of this application;
FIG. 9 is a schematic diagram of viewing, through a diagnostic command, the packets sent and received by each forwarding instance, provided by an embodiment of this application;
FIG. 10 is a schematic diagram of the SR-IOV NIC configuration code in a virtual machine configuration file provided by an embodiment of this application;
FIG. 11 is a schematic diagram of the commands for setting a VF VLAN provided by an embodiment of this application;
FIG. 12 is a structural block diagram of a traffic distribution apparatus provided by an embodiment of this application.
Detailed Description
To make the objectives, technical solutions and advantages of the embodiments of this application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of this application, not all of them. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of this application without creative effort fall within the protection scope of this application.
FIG. 1 shows an example of the existing NIC hardware distribution model: two NICs each create three receive queues and four transmit queues, hardware distributes traffic to different WORK threads, and each WORK thread sends the packet out after processing. This model is limited by the hardware's distribution strategy and by the number of queues the NIC supports. Therefore, this example proposes a distribution strategy configured by command: a packet feature value is computed according to the strategy, and the packet is then distributed to different forwarding threads. This approach scales better; by coordinating software and hardware distribution, it can satisfy more scenarios and achieve high-performance NIC packet IO and load balancing in different application scenarios.
Specifically, as shown in FIG. 2, this example provides a traffic distribution method including the following steps:
Step 201: Receive packets to be processed from a target network card.
If the target network card supports multiple transmit/receive queues, the NIC first distributes traffic to different distribution devices (each distribution device may be an RSS instance), and each distribution device then distributes by software to different processing threads (i.e., WORK threads). If the target network card does not support multiple transmit/receive queues, the distribution device distributes directly by software to different processing threads.
Accordingly, the packets received from the target network card may or may not have already been distributed by the NIC, which is mainly determined by the NIC type.
Step 202: Distribute the packets to be processed to multiple processing threads by software.
For example, packets from NIC 1 can be distributed by software to processing thread 1, processing thread 2 and processing thread 3, and packets from NIC 2 to processing thread 5 and processing thread 6. Processing threads 1, 2, 3, 5 and 6 may correspond to different processing cores. The specific software distribution strategy is pre-configured, for example by a developer through a strategy configuration interface; that is, distribution of the packets to be processed is implemented in software.
Step 203: Receive the packets processed by the multiple processing threads, and send the processed packets out through the target network card.
In the above example, packets to be processed from the network are distributed by software to multiple processing threads, that is, to a multi-core processing device for processing. This solves the poor distribution that results when traffic is split only according to the number of transmit/receive queues supported by the NIC itself, and achieves the technical effect of fully utilizing the CPU's multiple cores while exploiting network IO performance.
Combining software distribution (i.e., distribution performed in software) with hardware distribution (i.e., distribution performed in hardware according to the number of transmit/receive queues supported by the NIC) can improve distribution efficiency and make the distribution more reasonable. Therefore, receiving the packets to be processed from the target network card in step 201 above may mean receiving packets that the target network card has already distributed in hardware, where the target network card distributes the packets according to the number of receive queues it supports. That is, hardware distribution is performed by the NIC: for example, if the NIC supports two receive queues, it can distribute the packets to be processed to two distribution devices, and each distribution device then distributes the packets by software to the processing threads.
To send the processed packets effectively: because a combined software-hardware distribution path is used on the way into the processing threads, a processed packet must return along the original path to the NIC to be sent out. Therefore, sending the processed packets through the target network card may include: mounting the processed packets on a send queue, and sending them out through the target network card. The send queue is the send queue on the distribution device; that is, after a processing thread finishes, the processed packet is first mounted on the distribution device's send queue and then sent through the NIC, so that packets are processed and sent in an orderly way, as sketched below.
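In a DPDK-based realization (DPDK is the packet IO suite the description already assumes), this send queue can be sketched as a lock-free ring on the distribution device: a WORK thread mounts each finished packet on the ring, and the RSS instance later drains the ring out through the NIC. The function names and the burst size below are illustrative assumptions, not the patent's code:

```c
#include <rte_ethdev.h>
#include <rte_ring.h>
#include <rte_mbuf.h>

#define BURST 32

/* Sketch: a WORK thread mounts a processed packet on the RSS send ring... */
static inline void worker_send(struct rte_ring *tx_ring, struct rte_mbuf *m)
{
    if (rte_ring_enqueue(tx_ring, m) != 0)
        rte_pktmbuf_free(m);                 /* drop if the ring is full */
}

/* ...and the RSS instance later drains the ring out through the NIC. */
static void rss_drain(struct rte_ring *tx_ring, uint16_t port, uint16_t txq)
{
    struct rte_mbuf *pkts[BURST];
    unsigned n = rte_ring_dequeue_burst(tx_ring, (void **)pkts, BURST, NULL);
    uint16_t sent = rte_eth_tx_burst(port, txq, pkts, n);
    while (sent < n)
        rte_pktmbuf_free(pkts[sent++]);      /* free what the NIC refused */
}
```

Freeing whatever the NIC refuses keeps the sketch leak-free under back-pressure; a production design might instead retry or count the drops.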
When the packets to be processed are distributed to multiple processing threads by software, they may be distributed according to a load balancing strategy. By adopting load-balanced distribution, each processing thread receives an approximately equal workload, which both uses resources effectively and improves packet-processing efficiency.
Specifically, when the packets are distributed to multiple processing threads according to the load balancing strategy, a characteristic value of each packet may be computed according to a preset distribution strategy, and the packet is then distributed to one of the processing threads according to that characteristic value and the load balancing strategy. That is, distribution is performed by computing a feature value based on the content and characteristics of the packet itself, which is more accurate.
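As a concrete illustration, such a feature value can be a CRC hash over whichever header fields the configured strategy selects. The strategy enum, the assumption of a plain IPv4 outer header, and the DPDK 19.x-era struct names below are all illustrative; the patent does not fix a particular hash:

```c
#include <rte_hash_crc.h>
#include <rte_ip.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

enum dist_strategy { BY_SIP, BY_SIP_DPORT };   /* illustrative strategies */

/* Sketch: derive a packet feature value from the fields chosen by the
 * preconfigured strategy; assumes a plain IPv4/TCP-or-UDP outer header. */
static uint32_t feature_value(struct rte_mbuf *m, enum dist_strategy s)
{
    struct rte_ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m,
            struct rte_ipv4_hdr *, sizeof(struct rte_ether_hdr));
    uint32_t h = rte_hash_crc_4byte(ip->src_addr, 0);
    if (s == BY_SIP_DPORT) {
        /* ports sit at the start of the L4 header for both TCP and UDP */
        uint16_t *ports = (uint16_t *)((uint8_t *)ip +
                          (ip->version_ihl & 0x0f) * 4);
        h = rte_hash_crc_4byte(ports[1], h);   /* destination port */
    }
    return h;
}
```

A WORK thread could then be chosen as, e.g., feature_value(m, s) % nb_workers, which yields the approximately equal per-thread workload described above.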
The preset distribution strategy may be configured by a developer: for example, a configuration instruction for the preset distribution strategy may be received, and in response to the configuration instruction the preset distribution strategy is configured. Specifically, the software distribution strategy may be set in a CLI (Command-Line Interface): first choose distribution based on inner or outer packets, then select the specific strategy, and so on. The specific configuration method can be chosen according to the actual situation and is not limited in this application.
The target network card may include, but is not limited to, at least one of the following: a NIC that supports multiple queues, and a NIC that does not support multiple queues.
The above method is described below with reference to a specific embodiment; note, however, that this specific embodiment is only intended to better illustrate this application and does not constitute an undue limitation on it.
The devices involved in this example may include virtualization devices in the wired and wireless communications fields, such as routers, telecom gateways, subscriber access, address translation and firewalls. Generalized X86 server equipment enables rapid iteration and delivery of requirements, and is widely used in SDN/NFV networks, cloud computing and data centers.
Multi-queue NICs were originally introduced to solve network IO QoS (quality of service) problems. Later, as network IO bandwidth kept increasing, a single-core CPU (Central Processing Unit) could no longer fully keep up with the NIC. Therefore, with support from multi-queue NIC drivers, each queue is bound to a different core through its interrupt, satisfying the NIC's needs while also reducing the load on any single CPU and improving the computing power of the system.
When the service chain is complex, hardware distribution encounters bottlenecks: the distribution strategies supported by the hardware cannot be customized, and some scenarios cannot achieve load balancing. The forwarding threads are also strongly tied to the number of NIC queues, so multi-core resources cannot be fully used. In addition, hardware distribution strategies apply only to outer packets; if the outer layer is a tunnel, the wish to distribute based on inner-layer packets cannot be satisfied. In this example, adding a layer of RSS (Receive Side Scaling, a NIC-driver technique that distributes received packets efficiently across multiple CPUs in a multi-processor system) between the forwarding threads and the NIC (Network Interface Card), with traffic distributed and sent in software, adapts the NIC bandwidth to the available cores and can satisfy user needs to the greatest extent.
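One plausible shape for such an RSS software layer, under the DPDK model this example already uses, is a dispatch loop that polls an RX queue and pushes each packet onto a per-WORK ring chosen from its feature value. compute_feature() stands in for the configurable strategy and is an assumed helper, not the patent's code:

```c
#include <rte_ethdev.h>
#include <rte_ring.h>
#include <rte_mbuf.h>

#define BURST 32

extern uint32_t compute_feature(struct rte_mbuf *m);   /* assumed helper */

/* Sketch of one RSS instance: poll an RX queue and distribute each
 * packet to a WORK ring selected from its feature value. */
static void rss_dispatch(uint16_t port, uint16_t rxq,
                         struct rte_ring **work_rings, unsigned nb_workers)
{
    struct rte_mbuf *pkts[BURST];
    for (;;) {
        uint16_t n = rte_eth_rx_burst(port, rxq, pkts, BURST);
        for (uint16_t i = 0; i < n; i++) {
            unsigned w = compute_feature(pkts[i]) % nb_workers;
            if (rte_ring_enqueue(work_rings[w], pkts[i]) != 0)
                rte_pktmbuf_free(pkts[i]);   /* drop on ring overflow */
        }
    }
}
```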
Specifically, this example uses command configuration to implement customizable distribution. Taking software-hardware coordination as an example, the following steps may be included:
S1: Determine the performance requirements according to the application scenario.
Specifically, the server specifications can be determined from the traffic bandwidth and service characteristics, for example the server's NIC model, the number of CPU cores and the memory size.
S2: Allocate suitable hardware and install software.
Services can be deployed on bare metal or on virtual machines. In implementation, appropriate hardware resources can be allocated according to service characteristics, and an operating system or virtualization management software installed.
S3: Allocate CPU resources.
Because the forwarding domain has strict latency requirements, the forwarding threads need to be bound to cores exclusively to reduce switching between cores. In a virtualized environment, CPUs and vCPUs also need to be bound, and the CPUs should be placed on the same NUMA (Non-Uniform Memory Access) node to improve performance.
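Beyond DPDK's own lcore-to-core mapping, the exclusive binding described here can be sketched with the Linux pthread affinity API; the core id is a placeholder to be taken from the deployment plan:

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Sketch: pin the calling forwarding thread exclusively to one core,
 * as the forwarding domain's latency requirement demands. */
static int bind_to_core(unsigned core_id)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core_id, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}
```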
S4: Allocate memory resources.
In a virtualized environment, NUMA nodes must be assigned for the virtual machine memory and huge pages allocated to improve performance. Packets are sent and received with the DPDK suite, so the operating system must reserve huge pages for DPDK.
S5: Allocate NIC resources.
There are many NIC types: some support multiple queues and some do not; drivers and firmware also differ, as does the command presentation. On bare metal the NIC is generally used exclusively. On a virtual machine, the NIC may be a kernel virtual NIC, an exclusive NIC in pass-through mode, or a NIC shared among multiple virtual machines through IO virtualization. On a cloud platform, cloud hosts can share the NIC through SR-IOV (single-root I/O virtualization) to make full use of the NIC's hardware capabilities.
FIG. 3 is a schematic diagram of SR-IOV, where a VF (Virtual Function) is a function associated with a physical function: a lightweight PCIe (Peripheral Component Interconnect Express, a high-speed serial computer expansion bus standard) function that can share one or more physical resources with the physical function and with other VFs associated with the same physical function. Once SR-IOV is enabled in the PF (Physical Function), the PCI configuration space of each VF can be accessed through the PF's bus, device and function number (routing ID). Each VF has a PCI memory space used to map its register set. The VF device driver operates on the register set to enable its functionality, and the VF appears as a PCI device that actually exists.
Specifically, a PF (Physical Function) is a PCI function used to support SR-IOV; it contains the SR-IOV capability structure and manages the SR-IOV functionality. A PF is a full-featured PCIe function that can be discovered, managed and handled like any other PCIe device, and it owns fully configurable resources that can be used to configure or control the PCIe device.
A VF (Virtual Function) is a function associated with a physical function. It is a lightweight PCIe function that can share one or more physical resources with the physical function and with other VFs associated with the same physical function, and it is only allowed to own the configuration resources for its own behavior.
Each SR-IOV device can have a physical function (PF), and each PF can have up to 64,000 virtual functions (VFs) associated with it. The PF creates VFs through registers that are designed with attributes dedicated to this purpose. Once SR-IOV is enabled in the PF, the PCI configuration space of each VF can be accessed through the PF's bus, device and function number (routing ID). Each VF has a PCI memory space used to map its register set.
S6: Set the NIC queue count and the distribution strategy.
On a Linux system, the ethtool utility can show the number of NIC queues and the NIC's hardware distribution strategy. With the DPDK suite, the queue count, distribution strategy and related settings are passed to the NIC's DPDK driver at initialization, when the NIC is bound, to complete the setup.
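For the DPDK path, passing the queue count and an RSS-style hardware strategy to the driver happens when the port is configured. The sketch below uses pre-21.11 DPDK macro names (ETH_MQ_RX_RSS and the ETH_RSS_* flags); the queue count, descriptor count and hash-field choice are illustrative assumptions:

```c
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Sketch: configure one port with nb_queues RX/TX queues and an RSS
 * policy hashing on IP and TCP/UDP fields (an example field mask). */
static int setup_port(uint16_t port_id, uint16_t nb_queues,
                      struct rte_mempool *pool)
{
    struct rte_eth_conf conf = {
        .rxmode = { .mq_mode = ETH_MQ_RX_RSS },
        .rx_adv_conf.rss_conf = {
            .rss_key = NULL,                  /* keep the NIC default key */
            .rss_hf  = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
        },
    };
    int ret = rte_eth_dev_configure(port_id, nb_queues, nb_queues, &conf);
    if (ret < 0)
        return ret;
    for (uint16_t q = 0; q < nb_queues; q++) {
        ret = rte_eth_rx_queue_setup(port_id, q, 1024,
                rte_eth_dev_socket_id(port_id), NULL, pool);
        if (ret < 0)
            return ret;
        ret = rte_eth_tx_queue_setup(port_id, q, 1024,
                rte_eth_dev_socket_id(port_id), NULL);
        if (ret < 0)
            return ret;
    }
    return rte_eth_dev_start(port_id);
}
```

Leaving rss_key as NULL keeps the NIC's default hash key; a custom key or a different rss_hf mask is where the hardware side of the strategy would be customized, within what the NIC supports.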
S7: Check whether distribution succeeds and whether it is even.
On a Linux system, ethtool shows how many queues there are and the packets sent and received on each queue. With the DPDK suite, the system's CLI shows the packets sent and received on each queue and whether the distribution is even.
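With DPDK, one hedged way to perform this check programmatically is to read the per-queue counters kept in the standard port statistics (DPDK maintains them only for queues below RTE_ETHDEV_QUEUE_STAT_CNTRS):

```c
#include <stdio.h>
#include <inttypes.h>
#include <rte_ethdev.h>

/* Sketch: print per-RX-queue packet counts for one port so that an
 * uneven hardware distribution is visible at a glance. */
static void dump_queue_stats(uint16_t port_id, uint16_t nb_queues)
{
    struct rte_eth_stats st;
    if (rte_eth_stats_get(port_id, &st) != 0)
        return;
    for (uint16_t q = 0; q < nb_queues && q < RTE_ETHDEV_QUEUE_STAT_CNTRS; q++)
        printf("port %u rxq %u: %" PRIu64 " packets\n",
               port_id, q, st.q_ipackets[q]);
}
```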
The S1-S7 flow above is flexible, its command configuration is simple and deployment is easy; by selecting different distribution models and configuring the distribution strategy according to the performance requirements, it can satisfy the vast majority of application scenarios.
Specifically, software-only distribution, shown in FIG. 4, is a special case of software-hardware distribution that requires no hardware distribution by the NIC. As shown in FIG. 4, the two NICs do not support multiple queues: after traffic is received by the RSS instance, software distributes it to different WORK threads; after a WORK thread finishes, the packet is hung on the RSS send queue, and RSS then sends it out through the NIC. Software distribution is suitable when the NIC does not support multiple queues and core resources are plentiful. However, the limitation of a single RSS instance is obvious: software distribution becomes the bottleneck. A single RSS instance peaks at about 8 Mpps, corresponding to roughly 20G of NIC bandwidth (for example, 8 Mpps at an average packet size of around 300 bytes is about 19 Gbit/s); beyond this limit a single RSS instance cannot keep up.
Software-hardware distribution can proceed as shown in FIG. 5: the NIC hardware first distributes to different RSS instances, then each RSS instance distributes in software to different WORK threads; after a WORK thread finishes, the packet is hung on the send queue, and the RSS instances send the packets out through the different NICs. When NIC bandwidth is large and core resources are plentiful, this software-hardware distribution method exploits the hardware's potential to the greatest extent; through software-hardware coordination, linear forwarding above 100G can easily be achieved. In the multi-RSS distribution model of FIG. 5, two NICs each have two transmit/receive queues: packets are first distributed by hardware to different RSS instances, each RSS instance distributes by software to different WORK threads, and after processing the packets are hung on the RSS send queues, from which each RSS instance sends them out through the NIC.
For the virtual machine NIC pass-through implementation, take two 20G NICs with VM pass-through as an example; distribution is realized in software and hardware, the DPDK suite sends and receives the packets, and RSS distributes in software. The implementation model is shown in FIG. 5.
S1: As shown in FIG. 6, set pass-through mode in the virtual machine configuration file.
S2: When DPDK is initialized, set the number of queues and the distribution strategy, where the number of transmit/receive queues equals twice the number of RSS instances; the distribution strategy is NIC-dependent and is controlled by a macro switch.
S3: Set the software distribution strategy in the CLI: first choose distribution based on inner or outer packets, then select the specific strategy. As shown in FIG. 7 and FIG. 8, the commands can be configured in the following steps:
# enter virtualization configuration mode
# enter interface configuration mode
# enable distribution based on GRE inner packets
# select distribution based on source IP and destination port.
Specifically, in interface mode choose whether to distribute based on inner packets; the default is distribution based on outer packets. After inner-packet distribution is enabled, the distribution rules take effect on the inner packets. In software distribution, the default is to distribute based on SIP (Session Initiation Protocol); this can be set flexibly by command and configured on each interface.
S4: Check whether the forwarding instances' distribution is even. Specifically, a diagnostic command shows the distribution situation and where the performance bottleneck lies; that is, as shown in FIG. 9, a diagnostic command shows the packets sent and received by each forwarding instance, from which it can be judged whether the distribution is even.
For the virtual machine SR-IOV implementation, take two 100G NICs and VM VF NICs as an example. SR-IOV uses a distribution method combining software and hardware. Since the NIC bandwidth is shared, assume the virtual machine needs to support 40G of performance; the main differences from pass-through mode are:
the virtual machine configuration file contains the SR-IOV settings; FIG. 10 shows the SR-IOV NIC configuration in the virtual machine configuration file, where the upper line is the VF's PCI address and the lower line is the mapped NIC MAC address;
the VF's VLAN is set by command; as shown in FIG. 11, it can be set and viewed on the host with the ip link command;
the PF NIC forwards traffic according to the VF's MAC address and VLAN;
the number of queues and the distribution strategy are set when DPDK is initialized.
Based on the same inventive concept, an embodiment of the present invention further provides a traffic distribution apparatus, as described in the following embodiment. Since the principle by which the apparatus solves the problem is similar to that of the distribution method, the implementation of the apparatus can refer to the implementation of the method, and repeated parts are not described again. As used below, the term "unit" or "module" may be a combination of software and/or hardware that implements a predetermined function. Although the apparatus described in the following embodiment is preferably implemented in software, an implementation in hardware, or in a combination of software and hardware, is also possible and conceivable. FIG. 12 is a structural block diagram of a traffic distribution apparatus according to an embodiment of the present invention; as shown in FIG. 12, it includes a receiving module 1201, a distribution module 1202 and a sending module 1203, described below.
The receiving module 1201 is configured to receive packets to be processed from a target network card;
the distribution module 1202 is configured to distribute the packets to be processed to multiple processing threads by software;
the sending module 1203 is configured to receive the packets processed by the multiple processing threads and send the processed packets out through the target network card.
In one embodiment, the distribution module 1202 may specifically receive packets that the target network card has already distributed in hardware, where the target network card distributes the packets to be processed according to the number of receive queues it supports.
In one embodiment, the sending module 1203 may specifically mount the processed packets on a send queue and send them out through the target network card.
In one embodiment, the distribution module 1202 may specifically distribute the packets to be processed to multiple processing threads by software according to a load balancing strategy.
In one embodiment, the distribution module 1202 may specifically compute the characteristic value of each packet to be processed according to a preset distribution strategy, and distribute the packets to multiple processing threads according to the characteristic value and the load balancing strategy.
In one embodiment, a configuration instruction for the preset distribution strategy may also be received, and in response to the configuration instruction the preset distribution strategy is configured.
In one embodiment, the target network card may include, but is not limited to, at least one of the following: a NIC that supports multiple queues, and a NIC that does not support multiple queues.
The embodiments of this application also provide a specific implementation of an electronic device capable of implementing all the steps of the distribution method in the foregoing embodiments, the electronic device specifically including:
a processor, a memory, a communication interface and a bus;
the processor, the memory and the communication interface communicate with one another through the bus; the processor is configured to call a computer program in the memory, and when the processor executes the computer program, all steps of the distribution method in the above embodiments are implemented; for example, the processor implements the following steps when executing the computer program:
Step 1: Receive packets to be processed from the target network card;
Step 2: Distribute the packets to be processed to multiple processing threads by software;
Step 3: Receive the packets processed by the multiple processing threads, and send the processed packets out through the target network card.
As can be seen from the above description, the embodiment of this application distributes packets to be processed from the network to multiple processing threads by software, that is, to a multi-core processing device for processing, which solves the poor distribution that results when traffic is split only according to the number of transmit/receive queues supported by the NIC itself, and achieves the technical effect of fully utilizing the CPU's multiple cores while exploiting network IO performance.
The embodiments of this application also provide a computer-readable storage medium capable of implementing all the steps of the distribution method in the foregoing embodiments. The computer-readable storage medium stores a computer program which, when executed by a processor, implements all the steps of the distribution method in the above embodiments; for example, the processor implements the following steps when executing the computer program:
Step 1: Receive packets to be processed from the target network card;
Step 2: Distribute the packets to be processed to multiple processing threads by software;
Step 3: Receive the packets processed by the multiple processing threads, and send the processed packets out through the target network card.
As can be seen from the above description, the embodiment of this application distributes packets to be processed from the network to multiple processing threads by software, that is, to a multi-core processing device for processing, which solves the poor distribution that results when traffic is split only according to the number of transmit/receive queues supported by the NIC itself, and achieves the technical effect of fully utilizing the CPU's multiple cores while exploiting network IO performance.
It should be noted that, in this document, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between those entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element qualified by the phrase "comprising a ..." does not exclude the existence of additional identical elements in the process, method, article or device that includes the element.
The above are only specific embodiments of the present invention, enabling those skilled in the art to understand or implement the present invention. Various modifications to these embodiments will be obvious to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features claimed herein.

Claims (9)

  1. A traffic distribution method, comprising:
    receiving packets to be processed from a target network card;
    distributing the packets to be processed to multiple processing threads by software;
    receiving the packets processed by the multiple processing threads, and sending the processed packets out through the target network card.
  2. The method according to claim 1, wherein receiving the packets to be processed from the target network card comprises:
    receiving packets to be processed that the target network card has distributed in hardware, wherein the target network card distributes the packets to be processed according to the number of receive queues it supports.
  3. The method according to claim 1, wherein sending the processed packets out through the target network card comprises:
    mounting the processed packets on a send queue;
    sending the processed packets out through the target network card.
  4. The method according to claim 1, wherein distributing the packets to be processed to multiple processing threads by software comprises:
    distributing, by software, the packets to be processed to multiple processing threads according to a load balancing strategy.
  5. The method according to claim 4, wherein distributing the packets to be processed to multiple processing threads according to the load balancing strategy comprises:
    computing a characteristic value of each packet to be processed according to a preset distribution strategy;
    distributing the packets to be processed to multiple processing threads according to the characteristic value and the load balancing strategy.
  6. The method according to claim 5, further comprising:
    receiving a configuration instruction for the preset distribution strategy;
    configuring the preset distribution strategy in response to the configuration instruction.
  7. The method according to claim 1, wherein the target network card comprises at least one of the following: a network card that supports multiple queues, and a network card that does not support multiple queues.
  8. A processing device, comprising a processor and a memory for storing processor-executable instructions, wherein the processor, when executing the instructions, implements the steps of the method according to any one of claims 1 to 7.
  9. A traffic distribution system, comprising: a network card, processing threads, and the processing device according to claim 8.
PCT/CN2019/103682 2018-12-26 2019-08-30 Traffic distribution method, system and processing device WO2020134153A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811602922.4A CN111371694B (zh) 2018-12-26 2018-12-26 Traffic distribution method, apparatus and system, processing device and storage medium
CN201811602922.4 2018-12-26

Publications (1)

Publication Number Publication Date
WO2020134153A1 true WO2020134153A1 (zh) 2020-07-02

Family

ID=71129611

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/103682 WO2020134153A1 (zh) 2018-12-26 2019-08-30 Traffic distribution method, system and processing device

Country Status (2)

Country Link
CN (1) CN111371694B (zh)
WO (1) WO2020134153A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114095251A (zh) * 2021-11-19 2022-02-25 南瑞集团有限公司 SSL VPN implementation method based on DPDK and VPP

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112073327B (zh) * 2020-08-19 2023-02-24 广东省新一代通信与网络创新研究院 Anti-congestion software traffic distribution method and apparatus, and storage medium
CN112235213B (zh) * 2020-12-16 2021-04-06 金锐同创(北京)科技股份有限公司 SDN switch traffic distribution method, system, terminal and storage medium
CN115002046B (zh) * 2022-05-26 2024-01-23 北京天融信网络安全技术有限公司 Packet processing method, NUMA node, electronic device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104753814A (zh) * 2013-12-31 2015-07-01 国家计算机网络与信息安全管理中心 NIC-based packet distribution processing method
WO2016177191A1 (zh) * 2015-08-27 2016-11-10 中兴通讯股份有限公司 Packet processing method and apparatus
CN108092913A (zh) * 2017-12-27 2018-05-29 杭州迪普科技股份有限公司 Packet distribution method and multi-core CPU network device
CN108667733A (zh) * 2018-03-29 2018-10-16 新华三信息安全技术有限公司 Network device and packet processing method
CN108984327A (zh) * 2018-07-27 2018-12-11 新华三技术有限公司 Packet forwarding method, multi-core CPU and network device

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102497322A (zh) * 2011-12-19 2012-06-13 曙光信息产业(北京)有限公司 High-speed packet filtering device and method based on a distribution NIC and a multi-core CPU
CN104811400B (zh) * 2014-01-26 2018-04-06 杭州迪普科技股份有限公司 Distributed network device
CN105871741B (zh) * 2015-01-23 2018-12-25 阿里巴巴集团控股有限公司 Packet distribution method and apparatus
CN105577567B (zh) * 2016-01-29 2018-11-02 国家电网公司 Parallel network packet processing method based on Intel DPDK
US10432531B2 (en) * 2016-06-28 2019-10-01 Paypal, Inc. Tapping network data to perform load balancing
CN108123888A (zh) * 2016-11-29 2018-06-05 中兴通讯股份有限公司 Packet load balancing method, apparatus and system
CN106713185B (zh) * 2016-12-06 2019-09-13 瑞斯康达科技发展股份有限公司 Load balancing method and apparatus for a multi-core CPU
US10469386B2 (en) * 2017-05-17 2019-11-05 General Electric Company Network shunt with bypass

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104753814A (zh) * 2013-12-31 2015-07-01 国家计算机网络与信息安全管理中心 NIC-based packet distribution processing method
WO2016177191A1 (zh) * 2015-08-27 2016-11-10 中兴通讯股份有限公司 Packet processing method and apparatus
CN108092913A (zh) * 2017-12-27 2018-05-29 杭州迪普科技股份有限公司 Packet distribution method and multi-core CPU network device
CN108667733A (zh) * 2018-03-29 2018-10-16 新华三信息安全技术有限公司 Network device and packet processing method
CN108984327A (zh) * 2018-07-27 2018-12-11 新华三技术有限公司 Packet forwarding method, multi-core CPU and network device

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114095251A (zh) * 2021-11-19 2022-02-25 南瑞集团有限公司 SSL VPN implementation method based on DPDK and VPP
CN114095251B (zh) * 2021-11-19 2024-02-13 南瑞集团有限公司 SSL VPN implementation method based on DPDK and VPP

Also Published As

Publication number Publication date
CN111371694B (zh) 2022-10-04
CN111371694A (zh) 2020-07-03

Similar Documents

Publication Publication Date Title
WO2020134153A1 (zh) 一种分流方法、系统和处理设备
AU2016414390B2 (en) Packet processing method in cloud computing system, host, and system
EP3629162B1 (en) Technologies for control plane separation at a network interface controller
US11736402B2 (en) Fast data center congestion response based on QoS of VL
EP3654620B1 (en) Packet processing method in cloud computing system, host, and system
CN109076029B (zh) 用于非统一网络输入/输出访问加速的方法和装置
US9548890B2 (en) Flexible remote direct memory access resource configuration in a network environment
US20180357086A1 (en) Container virtual switching
US11593140B2 (en) Smart network interface card for smart I/O
JP2015039166A (ja) 仮想スイッチを有するネットワークインタフェースカードおよびトラフィックフローポリシの適用
US11669468B2 (en) Interconnect module for smart I/O
WO2018113622A1 (zh) 基于虚拟机的数据包发送和接收方法及装置
Kaufmann et al. {FlexNIC}: Rethinking Network {DMA}
WO2021254001A1 (zh) 会话建立方法、装置、系统及计算机存储介质
Abbasi et al. A performance comparison of container networking alternatives
US11412059B2 (en) Technologies for paravirtual network device queue and memory management
Qi et al. Middlenet: A unified, high-performance nfv and middlebox framework with ebpf and dpdk
CN108881065B (zh) 基于流的速率限制
WO2018057165A1 (en) Technologies for dynamically transitioning network traffic host buffer queues
KR20240041500A (ko) 네트워크 종단 단말에서의 트래픽 송수신 제어 장치 및 방법
US11271897B2 (en) Electronic apparatus for providing fast packet forwarding with reference to additional network address translation table
Zeng et al. Middlenet: A high-performance, lightweight, unified nfv and middlebox framework
US20240134706A1 (en) A non-intrusive method for resource and energy efficient user plane implementations
Ginka et al. Optimization of Packet Throughput in Docker Containers
Miao et al. Renovate high performance user-level stacks' innovation utilizing commodity network adaptors

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19903297

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19903297

Country of ref document: EP

Kind code of ref document: A1