CN102970244A - Network message processing method of multi-CPU (Central Processing Unit) inter-core load balance - Google Patents

Network message processing method of multi-CPU (Central Processing Unit) inter-core load balance

Info

Publication number
CN102970244A
CN102970244A, CN2012104846532A, CN201210484653A
Authority
CN
China
Prior art keywords
cpu
message
processing method
receiving queue
core
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012104846532A
Other languages
Chinese (zh)
Other versions
CN102970244B (en)
Inventor
裴建成
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Huanchuang Communication Technology Co Ltd
Original Assignee
Shanghai Huanchuang Communication Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Huanchuang Communication Technology Co Ltd filed Critical Shanghai Huanchuang Communication Technology Co Ltd
Priority to CN201210484653.2A priority Critical patent/CN102970244B/en
Publication of CN102970244A publication Critical patent/CN102970244A/en
Application granted granted Critical
Publication of CN102970244B publication Critical patent/CN102970244B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)
  • Computer And Data Communications (AREA)

Abstract

The invention relates to a network message processing method for load balancing among multiple CPU (Central Processing Unit) cores. The network message processing method comprises the following steps: first, a designated CPU core collects messages from the network card receiving queue and distributes them to the message receiving queues of the other CPU cores, until a message receiving queue reaches its maximum threshold; the other CPU cores receive the messages from their corresponding message receiving queues; finally, protocol stack processing is performed on the messages. Compared with the prior art, the network message processing method provided by the invention has the advantages of making full use of CPU resources, achieving automatic balancing, and the like.

Description

Network message processing method for load balancing among multiple CPU cores
Technical field
The present invention relates to a network data processing method, and in particular to a network message processing method for load balancing among multiple CPU cores.
Background technology
In the prior art, for network card chips with a single receiving queue, message reception is usually handled in a hardware-interrupt-triggered polling mode. Constrained by the single-queue characteristic, messages are generally delivered to one CPU core, so the other CPU cores cannot fetch messages from the single receiving queue in parallel; when the message load received by the network card exceeds the processing capability of that CPU core, the result is that one CPU core is busy while all the other CPU cores are idle. Because a single-receiving-queue network card chip can only deliver received messages to one CPU core for processing, the overall processing capability cannot be utilized effectively.
Summary of the invention
The purpose of the present invention is to overcome the defects of the above prior art and provide a network message processing method for load balancing among multiple CPU cores that makes full use of CPU resources and achieves automatic balancing.
The purpose of the present invention can be achieved by the following technical solution:
A network message processing method for load balancing among multiple CPU cores: the method first designates one CPU core to collect messages from the network card receiving queue and distribute them to the message receiving queues of the other CPU cores, until a message receiving queue reaches its maximum threshold; the other CPU cores collect messages from their corresponding message receiving queues and then perform protocol stack processing on them.
Each CPU core is assigned a corresponding ID, whose value is an integer in the range [0, CPU_CORE_NUMBERS-1], where CPU_CORE_NUMBERS is the total number of CPU cores.
Specifically, the network message processing method comprises the following steps:
1) the CPU core whose ID is CURRENT_CPU_ID is designated as the current core that collects messages from the network card receiving queue;
2) the current core collects a message from the network card receiving queue and increments the variable recv_packet_count by one, where recv_packet_count denotes the number of messages collected by the current core;
3) determine whether hash_cpu is greater than or equal to CURRENT_CPU_ID; if so, increment hash_cpu by one and go to step 4); if not, go directly to step 4);
where hash_cpu = recv_packet_count % (CPU_CORE_NUMBERS - 1);
4) determine whether the number of messages in the message receiving queue of the CPU core whose ID is hash_cpu has reached the maximum threshold; if so, go to step 5); if not, go to step 6);
5) the current core directly performs protocol stack processing on the collected message, then returns to step 1);
6) the current core delivers the collected message to the message receiving queue of the CPU core whose ID is hash_cpu, notifies that CPU core to process the message, and returns to step 1).
recv_packet_count and hash_cpu are static unsigned integer variables.
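The distribution procedure in steps 1) to 6) can be illustrated with the following C sketch. This is only an illustrative sketch, not the patent's implementation: the helper functions (nic_rx_queue_poll, core_rx_queue_len, core_rx_queue_push, core_notify, protocol_stack_process), the queue depth limit QUEUE_MAX_THRESHOLD, and the example values of CPU_CORE_NUMBERS and CURRENT_CPU_ID are all assumed placeholders.

    #include <stddef.h>

    #define CPU_CORE_NUMBERS     4     /* assumed total number of CPU cores */
    #define CURRENT_CPU_ID       0     /* assumed ID of the distributing core */
    #define QUEUE_MAX_THRESHOLD  1024  /* assumed per-core receiving-queue limit */

    /* Static unsigned integer variables, as described in the text. */
    static unsigned int recv_packet_count;  /* messages collected by the current core */
    static unsigned int hash_cpu;           /* ID of the core to distribute to */

    /* Assumed helper interfaces; the patent does not name these functions. */
    struct msg;
    struct msg  *nic_rx_queue_poll(void);                      /* NULL when the NIC queue is empty */
    unsigned int core_rx_queue_len(unsigned int cpu_id);
    void         core_rx_queue_push(unsigned int cpu_id, struct msg *m);
    void         core_notify(unsigned int cpu_id);
    void         protocol_stack_process(struct msg *m);

    /* One round of steps 2) to 6), executed on the distributing core. */
    void distribute_one_message(void)
    {
        struct msg *m = nic_rx_queue_poll();       /* step 2: collect from the NIC queue */
        if (m == NULL)
            return;
        recv_packet_count++;

        /* Remainder over (core count - 1) spreads messages over the other cores. */
        hash_cpu = recv_packet_count % (CPU_CORE_NUMBERS - 1);
        if (hash_cpu >= CURRENT_CPU_ID)            /* step 3: skip the distributing core's ID */
            hash_cpu++;

        if (core_rx_queue_len(hash_cpu) >= QUEUE_MAX_THRESHOLD) {
            /* step 5: target queue is full, so process locally; this slows NIC polling
             * and lets the other cores catch up (automatic balancing). */
            protocol_stack_process(m);
        } else {
            /* step 6: hand the message to the target core and notify it. */
            core_rx_queue_push(hash_cpu, m);
            core_notify(hash_cpu);
        }
    }

Processing the message locally when the target queue is full (step 5) is what makes the scheme self-balancing: the further the other cores fall behind, the more the distributing core processes itself and the less often it polls the network card receiving queue.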
Compared with the prior art, the present invention has the following advantages:
1) on a hardware architecture combining a single-queue network card with a multi-core CPU, the method of the invention can make full use of every CPU core, thereby achieving the maximum network message processing capability without wasting CPU resources;
2) the invention achieves automatic balancing of the pipeline between single-core distribution and the protocol stack processing capability of the other cores.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to the drawings and specific embodiments.
Embodiment
As shown in Fig. 1, a network message processing method for load balancing among multiple CPU cores can be summarized as follows: first, one CPU core is designated to collect messages from the network card receiving queue and distribute them to the message receiving queues of the other CPU cores, until a message receiving queue reaches its maximum threshold; the other CPU cores collect messages from their corresponding message receiving queues and then perform protocol stack processing on them. Each CPU core is assigned a corresponding ID, whose value is an integer in the range [0, CPU_CORE_NUMBERS-1], where CPU_CORE_NUMBERS is the total number of CPU cores.
A static unsigned integer variable recv_packet_count is defined to record the number of messages that have been received, and a static unsigned integer variable hash_cpu is defined to indicate the ID of the CPU core to which a message is to be distributed.
Specifically, the network message processing method comprises the following steps:
1) the CPU core whose ID is CURRENT_CPU_ID is designated as the current core that collects messages from the network card receiving queue;
2) the current core collects a message from the network card receiving queue and increments the variable recv_packet_count by one, where recv_packet_count denotes the number of messages collected by the current core;
3) determine whether hash_cpu is greater than or equal to CURRENT_CPU_ID; if so, increment hash_cpu by one and go to step 4); if not, go directly to step 4);
where hash_cpu = recv_packet_count % (CPU_CORE_NUMBERS - 1), i.e., the messages are evenly distributed by taking the received message count modulo the total number of CPU cores minus one (a worked numerical example is given after the steps);
4) determine whether the number of messages in the message receiving queue of the CPU core whose ID is hash_cpu has reached the maximum threshold; if so, go to step 5); if not, go to step 6);
5) the current core directly performs protocol stack processing on the collected message, which reduces the current core's opportunities to fetch messages from the network card receiving queue, relieves the message processing pressure on the other cores, and achieves automatic balancing between distribution and pipelined protocol stack processing; then return to step 1);
6) the current core delivers the collected message to the message receiving queue of the CPU core whose ID is hash_cpu, notifies that CPU core to process the message, and returns to step 1) to start the next round of message reception from the network card receiving queue.
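As a worked numerical example (values assumed purely for illustration), take CPU_CORE_NUMBERS = 4 and CURRENT_CPU_ID = 0. The short C program below prints the core chosen for the first six collected messages; successive messages go to cores 2, 3, 1, 2, 3, 1, so the three non-distributing cores each receive an equal share and the distributing core is always skipped.

    #include <stdio.h>

    int main(void)
    {
        const unsigned int CPU_CORE_NUMBERS = 4;  /* assumed total number of CPU cores */
        const unsigned int CURRENT_CPU_ID   = 0;  /* assumed ID of the distributing core */

        for (unsigned int recv_packet_count = 1; recv_packet_count <= 6; recv_packet_count++) {
            unsigned int hash_cpu = recv_packet_count % (CPU_CORE_NUMBERS - 1);
            if (hash_cpu >= CURRENT_CPU_ID)       /* skip over the distributing core's ID */
                hash_cpu++;
            printf("message %u -> core %u\n", recv_packet_count, hash_cpu);
        }
        return 0;
    }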
On a hardware architecture combining a single-queue network card with a multi-core CPU, the above network message processing method for load balancing among multiple CPU cores can make full use of every CPU core, thereby achieving the maximum network message processing capability without wasting CPU resources; it also achieves automatic balancing of the pipeline between single-core distribution and the protocol stack processing capability of the other cores.
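For completeness, the receiving side can be sketched as follows, again with assumed placeholder interfaces (core_rx_queue_pop, protocol_stack_process, and this_cpu_id are not named in the patent): when notified by the distributing core, a non-distributing core collects the messages from its own message receiving queue and performs protocol stack processing on each of them.

    #include <stddef.h>

    struct msg;
    struct msg  *core_rx_queue_pop(unsigned int cpu_id);  /* NULL when this core's queue is empty */
    void         protocol_stack_process(struct msg *m);
    unsigned int this_cpu_id(void);

    /* Runs on a non-distributing core after it has been notified that
     * messages are waiting in its message receiving queue. */
    void worker_core_handle_notification(void)
    {
        unsigned int id = this_cpu_id();
        struct msg *m;

        while ((m = core_rx_queue_pop(id)) != NULL)
            protocol_stack_process(m);
    }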

Claims (4)

1. A network message processing method for load balancing among multiple CPU cores, characterized in that the method first designates one CPU core to collect messages from the network card receiving queue and distribute them to the message receiving queues of the other CPU cores, until a message receiving queue reaches its maximum threshold; the other CPU cores collect messages from their corresponding message receiving queues and then perform protocol stack processing on them.
2. The network message processing method for load balancing among multiple CPU cores according to claim 1, characterized in that each CPU core is assigned a corresponding ID, whose value is an integer in the range [0, CPU_CORE_NUMBERS-1], where CPU_CORE_NUMBERS is the total number of CPU cores.
3. The network message processing method for load balancing among multiple CPU cores according to claim 2, characterized in that the network message processing method specifically comprises the following steps:
1) the CPU core whose ID is CURRENT_CPU_ID is designated as the current core that collects messages from the network card receiving queue;
2) the current core collects a message from the network card receiving queue and increments the variable recv_packet_count by one, where recv_packet_count denotes the number of messages collected by the current core;
3) determine whether hash_cpu is greater than or equal to CURRENT_CPU_ID; if so, increment hash_cpu by one and go to step 4); if not, go directly to step 4);
where hash_cpu = recv_packet_count % (CPU_CORE_NUMBERS - 1);
4) determine whether the number of messages in the message receiving queue of the CPU core whose ID is hash_cpu has reached the maximum threshold; if so, go to step 5); if not, go to step 6);
5) the current core directly performs protocol stack processing on the collected message, then returns to step 1);
6) the current core delivers the collected message to the message receiving queue of the CPU core whose ID is hash_cpu, notifies that CPU core to process the message, and returns to step 1).
4. The network message processing method for load balancing among multiple CPU cores according to claim 3, characterized in that recv_packet_count and hash_cpu are static unsigned integer variables.
CN201210484653.2A 2012-11-23 2012-11-23 A kind of network message processing method of multi -CPU inter-core load equilibrium Active CN102970244B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210484653.2A CN102970244B (en) 2012-11-23 2012-11-23 A kind of network message processing method of multi -CPU inter-core load equilibrium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210484653.2A CN102970244B (en) 2012-11-23 2012-11-23 A kind of network message processing method of multi -CPU inter-core load equilibrium

Publications (2)

Publication Number Publication Date
CN102970244A true CN102970244A (en) 2013-03-13
CN102970244B CN102970244B (en) 2018-04-13

Family

ID=47800131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210484653.2A Active CN102970244B (en) 2012-11-23 2012-11-23 A kind of network message processing method of multi -CPU inter-core load equilibrium

Country Status (1)

Country Link
CN (1) CN102970244B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015067118A1 (en) * 2013-11-08 2015-05-14 华为技术有限公司 Multiple protocol stack load balancing method and apparatus
CN104969533A (en) * 2013-12-25 2015-10-07 华为技术有限公司 Data packet processing method and device
CN105630731A (en) * 2015-12-24 2016-06-01 曙光信息产业(北京)有限公司 Network card data processing method and device in multi-CPU (Central Processing Unit) environment
CN106533978A (en) * 2016-11-24 2017-03-22 东软集团股份有限公司 Network load balancing method and system
CN107888626A (en) * 2017-12-25 2018-04-06 新华三信息安全技术有限公司 A kind of message detecting method and device
CN108259369A (en) * 2018-01-26 2018-07-06 迈普通信技术股份有限公司 The retransmission method and device of a kind of data message
CN109218226A (en) * 2017-07-03 2019-01-15 迈普通信技术股份有限公司 Message processing method and the network equipment
CN110166373A (en) * 2019-05-21 2019-08-23 优刻得科技股份有限公司 Method, apparatus, medium and system of the source physical machine to purpose physical machine hair data
CN111277514A (en) * 2020-01-21 2020-06-12 新华三技术有限公司合肥分公司 Message queue distribution method, message forwarding method and related device
CN111314249A (en) * 2020-05-08 2020-06-19 深圳震有科技股份有限公司 Method and server for avoiding data packet loss of 5G data forwarding plane
CN112073332A (en) * 2020-08-10 2020-12-11 烽火通信科技股份有限公司 Message distribution method, multi-core processor and readable storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101599028A (en) * 2009-07-08 2009-12-09 成都市华为赛门铁克科技有限公司 URL(uniform resource locator) is filtered in a kind of multi-core CPU method and device
CN101719872A (en) * 2009-12-11 2010-06-02 曙光信息产业(北京)有限公司 Zero-copy mode based method and device for sending and receiving multi-queue messages
CN101877666A (en) * 2009-11-13 2010-11-03 曙光信息产业(北京)有限公司 Method and device for receiving multi-application program message based on zero copy mode
CN102004673A (en) * 2010-11-29 2011-04-06 中兴通讯股份有限公司 Processing method and system of multi-core processor load balancing
US20110179416A1 (en) * 2010-01-21 2011-07-21 Vmware, Inc. Virtual Machine Access to Storage Via a Multi-Queue IO Storage Adapter With Optimized Cache Affinity and PCPU Load Balancing
CN102364455A (en) * 2011-10-31 2012-02-29 杭州华三通信技术有限公司 Balanced share control method and device for virtual central processing units (VCPUs) among cascaded multi-core central processing units (CPUs)
CN102571580A (en) * 2011-12-31 2012-07-11 曙光信息产业股份有限公司 Data receiving method and computer

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101599028A (en) * 2009-07-08 2009-12-09 成都市华为赛门铁克科技有限公司 URL(uniform resource locator) is filtered in a kind of multi-core CPU method and device
CN101877666A (en) * 2009-11-13 2010-11-03 曙光信息产业(北京)有限公司 Method and device for receiving multi-application program message based on zero copy mode
CN101719872A (en) * 2009-12-11 2010-06-02 曙光信息产业(北京)有限公司 Zero-copy mode based method and device for sending and receiving multi-queue messages
US20110179416A1 (en) * 2010-01-21 2011-07-21 Vmware, Inc. Virtual Machine Access to Storage Via a Multi-Queue IO Storage Adapter With Optimized Cache Affinity and PCPU Load Balancing
CN102004673A (en) * 2010-11-29 2011-04-06 中兴通讯股份有限公司 Processing method and system of multi-core processor load balancing
CN102364455A (en) * 2011-10-31 2012-02-29 杭州华三通信技术有限公司 Balanced share control method and device for virtual central processing units (VCPUs) among cascaded multi-core central processing units (CPUs)
CN102571580A (en) * 2011-12-31 2012-07-11 曙光信息产业股份有限公司 Data receiving method and computer

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104639578B (en) * 2013-11-08 2018-05-11 华为技术有限公司 Multi-protocol stack load-balancing method and device
CN104639578A (en) * 2013-11-08 2015-05-20 华为技术有限公司 Multi-protocol-stack load balancing method and multi-protocol-stack load balancing device
WO2015067118A1 (en) * 2013-11-08 2015-05-14 华为技术有限公司 Multiple protocol stack load balancing method and apparatus
CN104969533A (en) * 2013-12-25 2015-10-07 华为技术有限公司 Data packet processing method and device
CN104969533B (en) * 2013-12-25 2018-11-06 华为技术有限公司 A kind of data package processing method and device
CN105630731A (en) * 2015-12-24 2016-06-01 曙光信息产业(北京)有限公司 Network card data processing method and device in multi-CPU (Central Processing Unit) environment
CN106533978A (en) * 2016-11-24 2017-03-22 东软集团股份有限公司 Network load balancing method and system
CN106533978B (en) * 2016-11-24 2019-09-10 东软集团股份有限公司 A kind of network load balancing method and system
CN109218226A (en) * 2017-07-03 2019-01-15 迈普通信技术股份有限公司 Message processing method and the network equipment
CN107888626A (en) * 2017-12-25 2018-04-06 新华三信息安全技术有限公司 A kind of message detecting method and device
CN107888626B (en) * 2017-12-25 2020-11-06 新华三信息安全技术有限公司 Message detection method and device
CN108259369A (en) * 2018-01-26 2018-07-06 迈普通信技术股份有限公司 The retransmission method and device of a kind of data message
CN108259369B (en) * 2018-01-26 2022-04-05 迈普通信技术股份有限公司 Method and device for forwarding data message
CN110166373A (en) * 2019-05-21 2019-08-23 优刻得科技股份有限公司 Method, apparatus, medium and system of the source physical machine to purpose physical machine hair data
CN110166373B (en) * 2019-05-21 2022-12-27 优刻得科技股份有限公司 Method, device, medium and system for sending data from source physical machine to destination physical machine
CN111277514A (en) * 2020-01-21 2020-06-12 新华三技术有限公司合肥分公司 Message queue distribution method, message forwarding method and related device
CN111314249A (en) * 2020-05-08 2020-06-19 深圳震有科技股份有限公司 Method and server for avoiding data packet loss of 5G data forwarding plane
CN111314249B (en) * 2020-05-08 2021-04-20 深圳震有科技股份有限公司 Method and server for avoiding data packet loss of 5G data forwarding plane
CN112073332A (en) * 2020-08-10 2020-12-11 烽火通信科技股份有限公司 Message distribution method, multi-core processor and readable storage medium

Also Published As

Publication number Publication date
CN102970244B (en) 2018-04-13

Similar Documents

Publication Publication Date Title
CN102970244A (en) Network message processing method of multi-CPU (Central Processing Unit) inter-core load balance
EP2701074B1 (en) Method, device, and system for performing scheduling in multi-processor core system
CN102868635B (en) The message order-preserving method of Multi-core and system
CN111459665A (en) Distributed edge computing system and distributed edge computing method
CN105007337A (en) Cluster system load balancing method and system thereof
CN101547150A (en) Method and device for scheduling data communication input port
CN111163018B (en) Network equipment and method for reducing transmission delay thereof
CN102110022B (en) Sensor network embedded operation system based on priority scheduling
CN101345652A (en) Data acquisition method and data acquisition equipment
CN102821164A (en) Efficient parallel-distribution type data processing system
CN102945185B (en) Task scheduling method and device
CN105430030A (en) OSG-based parallel extendable application server
CN102202094A (en) Method and device for processing service request based on HTTP (hyper text transfer protocol)
CN112383585A (en) Message processing system and method and electronic equipment
CN105978821B (en) The method and device that network congestion avoids
CN103532955B (en) Embedded multi-protocol mobile network data acquisition probe equipment
CN107479966B (en) Signaling acquisition method based on multi-core CPU
CN104821958B (en) Electricity consumption data packet interactive interface method based on WebService
CN103544098B (en) A kind of method and apparatus of pressure test
CN106570011B (en) Distributed crawler URL seed distribution method, scheduling node and capturing node
CN101272334B (en) Method, device and equipment for processing QoS service by multi-core CPU
CN1984119A (en) Method for controlling flow by time-division technology
CN102073548A (en) Method for executing task, and system thereof
CN105516276B (en) Message processing method and system based on bionic hierarchical communication
CN112019589B (en) Multi-level load balancing data packet processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant