CN104394096A - Multi-core processor based message processing method and multi-core processor - Google Patents


Info

Publication number
CN104394096A
CN104394096A (application CN201410764673.4A)
Authority
CN
China
Prior art keywords
message
arbitrary
processing stage
pipeline
index number
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410764673.4A
Other languages
Chinese (zh)
Other versions
CN104394096B (en)
Inventor
李蒙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ruijie Networks Co Ltd
Original Assignee
Fujian Star Net Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujian Star Net Communication Co Ltd
Priority to CN201410764673.4A
Publication of CN104394096A
Application granted
Publication of CN104394096B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention provides a multi-core processor based message processing method and a multi-core processor. The method comprises the following steps: while creating pipelines, the multi-core processor allocates one cache subpool to each pipeline, the cache subpool storing message pointers that point to message buffers; received messages are hashed to the cache subpools corresponding to different pipelines and processed on the corresponding pipelines. Because each pipeline has its own cache subpool, a pipeline that needs resources acquires them directly from its own subpool. This avoids the lock conflicts that arise between pipelines in the prior art, realizes lock-free message forwarding, and improves the parallel processing capability of the multi-core processor to a certain extent.

Description

Message processing method based on a multi-core processor, and multi-core processor
Technical field
The present invention relates to the field of network communication technology, and in particular to a message processing method based on a multi-core processor, and to a multi-core processor.
Background technology
At present, multi-core processors are increasingly popular because they meet the demand for high-speed data forwarding: they offer good scalability and strong parallel-computation capability. A multi-core processor integrates multiple processing cores on a single hardware chip, normally in a shared-memory architecture. Because the cores are independent of one another, a multi-core processor can execute multiple tasks concurrently and thereby increase system throughput. The schemes currently used to process messages on a multi-core processor are as follows:
(1) The processing of a message is divided into multiple processing stages; each processing core is bound to a different stage and is responsible for the messages in that stage. Messages are thus processed in a pipelined fashion, which enhances message processing capability.
(2) Messages are classified, and different types of messages are processed by different cores, so that messages are processed by multiple cores in parallel, which enhances message processing capability.
(3) The processing of a message is divided into several processing stages, and messages are hashed onto multiple pipelines according to some rule, so that the pipelines process messages in parallel, which enhances message processing capability.
Whichever scheme is adopted, before a pipeline of the multi-core processor processes a message it must obtain cache space from a shared cache pool. While any one pipeline is obtaining cache space from the pool, the pool is locked: the other pipelines must wait until that pipeline has finished obtaining its space before they can again compete for cache space. This is the lock-conflict phenomenon of a resource mutex. A pipeline that fails to get cache space cannot process messages further until it finally obtains the space, which reduces the parallel processing capability of the multi-core processor and wastes processing-core resources.
Summary of the invention
The invention provides a message processing method based on a multi-core processor, and a multi-core processor, in order to solve the lock-conflict phenomenon of the prior art and to improve the parallel processing capability of the multi-core processor.
A message processing method based on a multi-core processor comprises:
while creating pipelines, the multi-core processor allocating one cache subpool to each pipeline, the cache subpool storing message pointers, each of which points to a message buffer;
hashing the received messages to the cache subpools corresponding to different pipelines, and processing the messages on the corresponding pipelines.
In the method, the cache subpool is an index array, and the messages hashed to the cache subpool of any pipeline correspond one-to-one with the index numbers in the index array.
By presenting the cache subpool as an index array, the embodiments of the invention also avoid lock conflicts between the message processing stages inside a pipeline, further improving the parallel processing capability of the multi-core processor to a certain extent.
In the method, for any pipeline comprising several message processing stages, the method further comprises, for any message processing stage:
the stage obtains a set number of index numbers from the index array each time;
the stage looks up, according to those index numbers, the corresponding set number of messages, and applies its processing to them;
wherein the index numbers obtained by any message processing stage at a given moment differ from those obtained by the other stages, and the index numbers obtained by any stage correspond to messages already processed by the preceding stage.
With this arrangement, different stages obtain different index numbers at any given moment, which avoids lock conflicts between the message processing stages inside a pipeline.
In the method, each pipeline comprises four message processing stages: a receive and Layer-2 parsing stage, a Layer-3 service processing stage, a message-distribution and Layer-2 encapsulation stage, and a message transmission stage.
In the method, for any message, the message descriptor and the message content of the message are stored in a contiguous memory region, and that contiguous region is the message buffer pointed to by the message pointer of the message.
Storing the descriptor and the content of a message in contiguous memory improves the cache hit rate.
In the method, for any message, the corresponding buffer length is not equal to an integral multiple of the cache line length.
Setting the buffer length of a single message to a value that is not an integral multiple of the cache line length likewise improves the cache hit rate.
An embodiment of the invention further provides a multi-core processor, comprising:
an allocation unit, configured to allocate, while creating pipelines, one cache subpool to each pipeline, the cache subpool storing message pointers, each of which points to a message buffer; and
a processing unit, configured to hash received messages to the cache subpools corresponding to different pipelines and to process the messages on the corresponding pipelines.
In the multi-core processor, the cache subpool is an index array, and the messages hashed to the cache subpool of any pipeline correspond one-to-one with the index numbers in the index array.
In the multi-core processor, for any pipeline comprising several message processing stages, the processing unit is specifically configured to:
for any message processing stage, obtain a set number of index numbers from the index array each time;
look up, according to those index numbers, the corresponding set number of messages, and apply the processing of that stage to them;
wherein the index numbers obtained by any message processing stage at a given moment differ from those obtained by the other stages, and the index numbers obtained by any stage correspond to messages already processed by the preceding stage.
In the multi-core processor, each pipeline comprises four message processing stages: a receive and Layer-2 parsing stage, a Layer-3 service processing stage, a message-distribution and Layer-2 encapsulation stage, and a message transmission stage.
In the multi-core processor, for any message, the message descriptor and the message content are stored in a contiguous memory region, and that contiguous region is the message buffer pointed to by the message pointer of the message.
In the multi-core processor, for any message, the corresponding buffer length is not equal to an integral multiple of the cache line length.
The message processing method based on a multi-core processor and the multi-core processor provided by the embodiments of the invention have the following beneficial effect: because each pipeline has its own cache subpool, a pipeline that needs resources acquires them directly from its own subpool. This avoids the lock conflicts that arise between pipelines in the prior art, realizes lock-free message forwarding, and improves the parallel processing capability of the multi-core processor to a certain extent.
Brief description of the drawings
Fig. 1 is a flow chart of the message processing method based on a multi-core processor provided by an embodiment of the invention;
Fig. 2 is a schematic diagram of the index array provided by an embodiment of the invention;
Fig. 3 is a flow chart of message processing by any message processing stage of a pipeline, provided by an embodiment of the invention;
Fig. 4 is a schematic diagram of a prior-art message structure;
Fig. 5 is a schematic diagram of the message structure provided by an embodiment of the invention;
Fig. 6 is a diagram of the mapping between the cache and main memory in the prior art;
Fig. 7 is a schematic diagram of the multi-core processor provided by an embodiment of the invention.
Detailed description
The message processing method based on a multi-core processor and the multi-core processor provided by the invention are described in more detail below with reference to the drawings and embodiments.
An embodiment of the invention provides a message processing method based on a multi-core processor which, as shown in Fig. 1, comprises:
Step 101: while creating pipelines, the multi-core processor allocates one cache subpool to each pipeline, the cache subpool storing message pointers, each of which points to a message buffer.
Here each message pointer corresponds to one message, and each message corresponds to one message buffer: the message buffer is the physical region in which the message is actually stored, and the message pointer points to that buffer. Specifically, the message pointer holds the address of the physical region in which the message is stored.
Step 102: hash the received messages to the cache subpools corresponding to different pipelines, and process the messages on the corresponding pipelines.
A pipeline may correspond to one processing core or to multiple processing cores.
Specifically, a cache subpool is a region of physical memory holding the pointers, and the descriptors, of the messages hashed to the pipeline that the subpool corresponds to. The subpool may be a linked list or an index array.
In the prior art, multiple pipelines share one large buffer area: at any moment only one pipeline is allowed to acquire resources from the buffer, and multiple pipelines may not acquire resources from it simultaneously, so resource-mutex lock conflicts occur and the parallel processing capability of the multi-core processor declines. The embodiments of the invention allocate one cache subpool per pipeline, so that a pipeline that needs resources acquires them directly from its own subpool. This avoids the lock conflicts between pipelines of the prior art, realizes lock-free message forwarding, and improves the parallel processing capability of the multi-core processor to a certain extent.
Based on the above embodiment, preferably, the cache subpool is an index array, and the messages hashed to the cache subpool of a pipeline correspond one-to-one with the index numbers in the index array.
Because the subpool stores the message pointers, and each pointer corresponds to one message, putting the index numbers of the array in one-to-one correspondence with the message pointers puts them in one-to-one correspondence with the messages hashed to the pipeline.
Specifically, every pipeline in the embodiments comprises multiple message processing stages. If the subpool were a linked list, mutex locking would also appear between the stages inside a pipeline, because a linked list allows only one stage to interact with it at a time. By presenting the subpool as an index array, the embodiments avoid lock conflicts between the stages inside a pipeline, further improving the parallel processing capability of the multi-core processor to a certain extent.
In the embodiments, the cache subpool is in essence an index array: a data structure for storing message pointers. When a message is received, no memory need be requested from the operating system; instead a message pointer is taken out of the subpool and the corresponding entry is marked unavailable. Likewise, after the message has been processed, the memory of its buffer is not released back to the operating system; the entry is simply marked available again in the subpool.
The structure of the subpool is shown in Fig. 2, where 0, 1, ..., n-1 are the index numbers. When the subpool receives a request for message buffers, it returns two integers, head and tail, both index numbers in the subpool: the messages corresponding to the index numbers between head and tail are available, and the messages corresponding to the other index numbers are not.
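The head/tail protocol just described can be sketched in C. This is a minimal, single-threaded illustration under assumed names (`struct subpool`, `SUBPOOL_SIZE`, and the function names are all hypothetical); the patent itself gives no implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define SUBPOOL_SIZE 8 /* n: number of index entries (hypothetical value) */

/* One cache subpool: an index array whose entries flag whether the
 * message buffer for that index number is currently available. */
struct subpool {
    bool available[SUBPOOL_SIZE];
};

static void subpool_init(struct subpool *p) {
    for (size_t i = 0; i < SUBPOOL_SIZE; i++)
        p->available[i] = true;
}

/* Request up to `want` buffers; on success store the index range
 * [*head, *tail) and mark those entries unavailable, as Fig. 2
 * describes. Returns false when nothing is available. */
static bool subpool_request(struct subpool *p, size_t want,
                            size_t *head, size_t *tail) {
    size_t start = 0;
    while (start < SUBPOOL_SIZE && !p->available[start])
        start++;
    size_t end = start;
    while (end < SUBPOOL_SIZE && end - start < want && p->available[end])
        end++;
    if (end == start)
        return false;
    for (size_t i = start; i < end; i++)
        p->available[i] = false; /* taken entries become unavailable */
    *head = start;
    *tail = end;
    return true;
}

/* After processing, the index numbers are handed back; the memory
 * itself is never released to the operating system. */
static void subpool_release(struct subpool *p, size_t head, size_t tail) {
    for (size_t i = head; i < tail; i++)
        p->available[i] = true;
}
```

A real subpool would also track the per-index message pointers and use a circular head/tail rather than a linear scan; the sketch keeps only the availability-marking behaviour described in the text.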
Further preferably, the index numbers in the index array are sorted in descending or ascending order.
Based on this preferred implementation, for any pipeline comprising several message processing stages, the method further comprises, for any message processing stage, as shown in Fig. 3:
Step 301: the stage obtains a set number of index numbers from the index array each time.
Specifically, the set number may be chosen according to actual conditions and is not limited here.
Step 302: the stage looks up, according to those index numbers, the corresponding set number of messages and applies its processing to them, wherein the index numbers obtained by any stage at a given moment differ from those obtained by the other stages, and the index numbers obtained by any stage are the index numbers that were obtained, and then returned, by the preceding stage.
Specifically, since each index number corresponds to one message, the set number of messages can be found from the obtained index numbers; the memory regions of those messages are not necessarily contiguous. When a stage takes index numbers from the array, those numbers are marked unavailable to the other stages; once the stage has finished processing the corresponding messages, the numbers are marked available to the next stage. This prevents multiple stages from simultaneously obtaining the message (resource) of the same index number, and thereby avoids lock conflicts inside a pipeline.
Because messages are processed in a pipelined manner, when a pipeline comprises multiple message processing stages each message must pass through the stages in their processing order, being handled by each stage in turn until the last stage completes.
As the pipeline processes messages stage by stage, once any stage has finished processing the set number of messages it obtained, it returns their index numbers to the index array so that the following stage can process those messages. From the second stage through the last, the index numbers a stage obtains from the array are always those obtained and returned by the preceding stage. This guarantees that no two stages obtain the same index number from the array at the same time, and so avoids lock conflicts inside the pipeline.
Specifically, the number of index numbers each stage obtains per fetch may differ, but the number obtained by any stage must be less than or equal to the number obtained by the preceding stage.
Preferably, when the index numbers in the array are sorted in descending order, each stage obtains index numbers from the array in descending order; when they are sorted in ascending order, each stage obtains them in ascending order.
Further preferably, the index numbers in the array are consecutive natural numbers, and each stage obtains a set number of consecutive index numbers per fetch. This improves message processing efficiency.
Based on the above embodiments, each pipeline comprises four message processing stages: a receive and Layer-2 parsing stage, a Layer-3 service processing stage, a message-distribution and Layer-2 encapsulation stage, and a message transmission stage.
A message passes in turn through the receive and Layer-2 parsing stage (RX), the Layer-3 service processing stage (IPSRV), the message-distribution and Layer-2 encapsulation stage (DISP), and the message transmission stage (TX), which completes its processing.
Specifically, when a pipeline with these four stages starts processing messages, RX first obtains a set number of index numbers from the index array, looks up and processes the corresponding messages, and returns the index numbers to the array. IPSRV can then obtain from the array index numbers that RX has returned, the number it obtains being less than or equal to the number RX returned; and so on, down to the final stage TX. That is, the index numbers IPSRV obtains from the array must be those obtained by RX; the index numbers DISP obtains must be those obtained by IPSRV; and the index numbers TX obtains must be those obtained by DISP. In this way no two stages ever obtain the same index number at the same time, and lock conflicts between the stages inside a pipeline are avoided.
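The RX → IPSRV → DISP → TX handoff rule can be illustrated with a small C sketch. All names here are hypothetical; the point is only the invariant the text states, namely that an index number is held by at most one stage at any moment:

```c
#include <assert.h>
#include <stddef.h>

/* The four stages named in the text, in processing order. */
enum stage { RX, IPSRV, DISP, TX, NSTAGES };

#define NMSG 4 /* number of in-flight index numbers (hypothetical) */

/* owner[i] records which stage currently holds index number i;
 * NSTAGES means the index is back in the subpool. */
static enum stage owner[NMSG];

static void pipeline_init(void) {
    for (size_t i = 0; i < NMSG; i++)
        owner[i] = RX; /* RX fetched them from the index array */
}

/* Hand index i from stage s to the next stage; TX returns it to
 * the subpool. A stage may only release an index it holds, so two
 * stages can never process the same index simultaneously. */
static void stage_done(enum stage s, size_t i) {
    assert(owner[i] == s); /* only the current holder may release */
    owner[i] = (s == TX) ? NSTAGES : (enum stage)(s + 1);
}
```

A real pipeline would run the stages on separate cores and move the index numbers through the shared index array; the sketch compresses that into a single ownership table to make the no-overlap invariant explicit.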
Based on the above embodiments, for any message, the message descriptor and the message content are stored in a contiguous memory region, and that contiguous region is the message buffer pointed to by the message pointer. The message descriptor includes the length and protocol type of the message; the message content includes all data of the message. The descriptor is stored in the message description area of the buffer.
In the prior art, the descriptor and the content of a message are stored in separate, discontiguous memory regions. The message content includes the header information, and processing a message generally means filling in the descriptor from the header, or reading or modifying the header according to the descriptor. Because the prior art maps the descriptor and the content of the same message to different memory regions, and therefore to different cache lines, the cache hit rate is low.
One prior-art message structure is shown in Fig. 4: the message description area holds the message pointer and the descriptor, and the message pointer (*pkt_ptr) points to the address of the message content. As Fig. 4 shows, the descriptor and the content occupy discontiguous memory.
The embodiments of the invention redefine the message structure: the descriptor and the content are stored in contiguous memory, so that they can be mapped to the same cache line region, which effectively raises the cache hit rate. The structure is shown in Fig. 5: the message description area holds the descriptor, followed contiguously by the message content; the reserved area of the description area is used to hold the excess when the description area is too small for the descriptor.
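A contiguous layout of the kind Fig. 5 describes might look as follows in C; the field names and sizes are illustrative assumptions, not taken from the patent:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Message descriptor, as the text characterizes it: length and
 * protocol type (fields assumed for illustration). */
struct msg_desc {
    uint16_t len;   /* message length */
    uint16_t proto; /* protocol type */
};

#define DESC_RESERVED 12   /* reserved overflow area (hypothetical) */
#define MSG_DATA_MAX  1500 /* content capacity (hypothetical) */

/* One message buffer: descriptor, reserved area, then the message
 * content in a single contiguous allocation, so the descriptor and
 * the frequently accessed header share cache lines. */
struct msg_buf {
    struct msg_desc desc;
    uint8_t reserved[DESC_RESERVED]; /* overflow for a large descriptor */
    uint8_t content[MSG_DATA_MAX];   /* header + payload follow directly */
};
```

The contrast with the Fig. 4 prior art is that there the description area held only a pointer to content allocated elsewhere; here a single `struct msg_buf *` is the message pointer of the text, covering descriptor and content together.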
Based on the above embodiments, for any message, the corresponding buffer length is not equal to an integral multiple of the cache line length.
Specifically, each message corresponds to one message buffer (message BUF). In the prior art, main memory is divided into M regions, as shown in Fig. 6: the first block of region 1, region 2, ..., region M maps to the first cache block; the second block of each region maps to the second cache block; and so on. When the cache set length is an integral multiple of the size of a single message BUF, the same part of different messages maps to the same cache location. For example, if the cache set size is 2 KB and a single message BUF is also 2 KB, then for messages 1 to M stored in regions 0 to M-1 of memory, the message header, which the processor accesses most frequently, always maps to cache set 0, so that at any moment only one of the M headers can reside in the cache. This causes frequent cache misses and cache replacement, lowering the cache hit rate. In the embodiments of the invention, the buffer length of a single message is not an integral multiple of the cache line length, which raises the cache hit rate.
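The sizing rule can be sketched as a small helper. The cache-line constant and the half-line padding are assumptions for illustration; the patent only requires that the buffer length not be an exact multiple of the cache line length:

```c
#include <assert.h>
#include <stddef.h>

#define CACHE_LINE 64u /* a typical cache line size; assumed here */

/* Pick a message-buffer length for a requested payload size. If the
 * requested size is an exact multiple of the cache line length, pad
 * it by half a line so that the same offset (e.g. the header) in
 * successive buffers no longer maps to the same cache set, avoiding
 * the 2 KB-buffer contention example described in the text. */
static size_t msg_buf_len(size_t want) {
    if (want % CACHE_LINE == 0)
        want += CACHE_LINE / 2;
    return want;
}
```

Other stagger amounts would serve equally well; the only property the text relies on is `msg_buf_len(n) % CACHE_LINE != 0`.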
An embodiment of the invention further provides a multi-core processor which, as shown in Fig. 7, comprises:
Allocation unit 701: configured to allocate, while creating pipelines, one cache subpool to each pipeline, the cache subpool storing message pointers, each of which points to a message buffer; and
Processing unit 702: configured to hash received messages to the cache subpools corresponding to different pipelines and to process the messages on the corresponding pipelines.
In the multi-core processor, the cache subpool is an index array, and the messages hashed to the cache subpool of any pipeline correspond one-to-one with the index numbers in the index array.
In the multi-core processor, for any pipeline comprising several message processing stages, the processing unit is specifically configured to:
for any message processing stage, obtain a set number of index numbers from the index array each time;
look up, according to those index numbers, the corresponding set number of messages, and apply the processing of that stage to them;
wherein the index numbers obtained by any stage at a given moment differ from those obtained by the other stages, and the index numbers obtained by any stage correspond to messages already processed by the preceding stage.
Preferably, each pipeline comprises four message processing stages: a receive and Layer-2 parsing stage, a Layer-3 service processing stage, a message-distribution and Layer-2 encapsulation stage, and a message transmission stage.
Preferably, for any message, the message descriptor and the message content are stored in a contiguous memory region, and that contiguous region is the message buffer pointed to by the message pointer.
Preferably, for any message, the corresponding buffer length is not equal to an integral multiple of the cache line length.
The message processing method based on a multi-core processor and the multi-core processor provided by the embodiments of the invention have the following beneficial effect: because each pipeline has its own cache subpool, a pipeline that needs resources acquires them directly from its own subpool. This avoids the lock conflicts that arise between pipelines in the prior art, realizes lock-free message forwarding, and improves the parallel processing capability of the multi-core processor to a certain extent.
The present invention describes with reference to according to the flow chart of the method for the embodiment of the present invention, equipment (system) and computer program and/or block diagram.Should understand can by the combination of the flow process in each flow process in computer program instructions realization flow figure and/or block diagram and/or square frame and flow chart and/or block diagram and/or square frame.These computer program instructions can being provided to the processor of all-purpose computer, special-purpose computer, Embedded Processor or other programmable data processing device to produce a machine, making the instruction performed by the processor of computer or other programmable data processing device produce device for realizing the function of specifying in flow chart flow process or multiple flow process and/or block diagram square frame or multiple square frame.
These computer program instructions also can be stored in can in the computer-readable memory that works in a specific way of vectoring computer or other programmable data processing device, the instruction making to be stored in this computer-readable memory produces the manufacture comprising command device, and this command device realizes the function of specifying in flow chart flow process or multiple flow process and/or block diagram square frame or multiple square frame.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, so that a series of operation steps is performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for realizing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, those skilled in the art can make further changes and modifications to these embodiments once they learn of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as covering the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and variations to the present invention without departing from its spirit and scope. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their technical equivalents, the present invention is also intended to include these changes and variations.
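The staged, index-driven processing described in the claims below — each message processing stage fetches a set number of index numbers from the pipeline's index array, and a stage may only take index numbers its predecessor has already finished — can be sketched as follows. This Python sketch is illustrative only; the cursor/`done` bookkeeping, the `BATCH` constant, and the stage names are assumptions about one way to realize the claimed behavior:

```python
# Illustrative sketch of one pipeline's index array shared by its four
# message processing stages (message reception + L2 parsing, L3 service
# processing, message distribution + L2 encapsulation, transmission).
# Each stage advances its own cursor and only takes index numbers that
# the preceding stage has finished, so no two stages ever hold the same
# index number at the same time.

BATCH = 4  # the "set number" of index numbers fetched per round

class Pipeline:
    def __init__(self, size):
        self.index_array = list(range(size))  # one index per buffered message
        self.order = ["rx_l2", "l3", "l2_encap", "tx"]
        self.cursors = {s: 0 for s in self.order}
        self.done = {s: 0 for s in self.order}  # count finished per stage

    def fetch(self, stage):
        """Return up to BATCH index numbers currently available to this stage."""
        pos = self.order.index(stage)
        # The first stage may run to the end of the array; each later
        # stage may not pass what the previous stage has completed.
        limit = len(self.index_array) if pos == 0 else self.done[self.order[pos - 1]]
        start = self.cursors[stage]
        batch = self.index_array[start:min(start + BATCH, limit)]
        self.cursors[stage] = start + len(batch)
        self.done[stage] = self.cursors[stage]
        return batch

p = Pipeline(8)
assert p.fetch("rx_l2") == [0, 1, 2, 3]  # first stage runs ahead
assert p.fetch("tx") == []               # transmit cannot pass encapsulation
assert p.fetch("l3") == [0, 1, 2, 3]     # L3 consumes what rx/L2 finished
```

The single-writer cursors mean each stage can claim its batch without locks, matching the claim that the index numbers obtained by any stage at any one time differ from those obtained by the other stages.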

Claims (12)

1. A message processing method based on a multi-core processor, characterized by comprising:
during the process of creating pipelines, the multi-core processor allocates one buffer sub-pool to each pipeline, the buffer sub-pool being used to store message pointers, each of which points to a message buffer;
hash-distributing received messages to the buffer sub-pools corresponding to different pipelines, and processing the messages on the corresponding pipelines.
2. the method for claim 1, is characterized in that, described buffer memory subpool is an array of indexes, to be hashed on described arbitrary streamline the call number one_to_one corresponding in the message of corresponding buffer memory subpool and described array of indexes.
3. The method of claim 2, characterized in that, for any pipeline comprising several message processing stages, the method further comprises, for any message processing stage:
the message processing stage obtains a set number of index numbers from the index array each time;
looking up, according to the set number of index numbers, the set number of messages corresponding to them, and performing the processing of this message processing stage on the set number of messages;
wherein the index numbers obtained by any message processing stage at any one time are different from those obtained by the other message processing stages, and the index numbers obtained by any stage correspond to messages already processed by the stage preceding it.
4. The method of claim 3, characterized in that any pipeline comprises four message processing stages, respectively: a message reception and layer-2 parsing stage, a layer-3 service processing stage, a message distribution and layer-2 encapsulation stage, and a message transmission stage.
5. The method of any one of claims 1-4, characterized in that, for any message, the message descriptor and the message content of the message are stored in a contiguous memory space, and this contiguous memory space is the message buffer pointed to by the message pointer of the message.
6. The method of any one of claims 1-4, characterized in that, for any message, the buffer length corresponding to the message is not an integral multiple of the cache line length.
7. A multi-core processor, characterized by comprising:
an allocation unit, configured to allocate one buffer sub-pool to each pipeline during the process of creating pipelines, the buffer sub-pool being used to store message pointers, each of which points to a message buffer;
a processing unit, configured to hash-distribute received messages to the buffer sub-pools corresponding to different pipelines, and to process the messages on the corresponding pipelines.
8. The multi-core processor of claim 7, characterized in that the buffer sub-pool is an index array, and the messages hash-distributed to the buffer sub-pool corresponding to any pipeline correspond one-to-one with the index numbers in the index array.
9. The multi-core processor of claim 8, characterized in that, for any pipeline comprising several message processing stages, the processing unit is specifically configured to:
for any message processing stage, obtain a set number of index numbers from the index array each time;
look up, according to the set number of index numbers, the set number of messages corresponding to them, and perform the processing of this message processing stage on the set number of messages;
wherein the index numbers obtained by any message processing stage at any one time are different from those obtained by the other message processing stages, and the index numbers obtained by any stage correspond to messages already processed by the stage preceding it.
10. The multi-core processor of claim 9, characterized in that any pipeline comprises four message processing stages, respectively: a message reception and layer-2 parsing stage, a layer-3 service processing stage, a message distribution and layer-2 encapsulation stage, and a message transmission stage.
11. The multi-core processor of any one of claims 7-10, characterized in that, for any message, the message descriptor and the message content of the message are stored in a contiguous memory space, and this contiguous memory space is the message buffer pointed to by the message pointer of the message.
12. The multi-core processor of any one of claims 7-10, characterized in that, for any message, the buffer length corresponding to the message is not an integral multiple of the cache line length.
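Claims 6 and 12 prefer a buffer length that is not an integral multiple of the cache line length; one plausible motivation (an assumption on our part, not stated in the patent text shown here) is to stagger successive contiguous buffers across cache sets and so reduce conflict misses on the per-buffer descriptors. A minimal Python sketch, with a hypothetical function name and an assumed 64-byte cache line and 32-byte descriptor:

```python
CACHE_LINE = 64  # assumed cache line length in bytes

def pick_buffer_length(payload_len, descriptor_len=32):
    """Choose a buffer length that is NOT an integral multiple of the
    cache line length, as claims 6 and 12 prefer.  The descriptor and
    message content share one contiguous buffer (claims 5 and 11)."""
    length = descriptor_len + payload_len
    if length % CACHE_LINE == 0:
        length += 1  # break the multiple; real code might add a pad word
    return length

assert pick_buffer_length(32) == 65    # 64 would be a multiple, so bumped
assert pick_buffer_length(100) == 132  # 132 is already a non-multiple
```

With such lengths, consecutive buffers in a contiguous region begin at shifting offsets within cache lines rather than all at the same alignment.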
CN201410764673.4A 2014-12-11 2014-12-11 Multi-core processor based message processing method and multi-core processor Active CN104394096B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410764673.4A CN104394096B (en) 2014-12-11 2014-12-11 Multi-core processor based message processing method and multi-core processor

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410764673.4A CN104394096B (en) 2014-12-11 2014-12-11 Multi-core processor based message processing method and multi-core processor

Publications (2)

Publication Number Publication Date
CN104394096A true CN104394096A (en) 2015-03-04
CN104394096B CN104394096B (en) 2017-11-03

Family

ID=52611932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410764673.4A Active CN104394096B (en) Multi-core processor based message processing method and multi-core processor

Country Status (1)

Country Link
CN (1) CN104394096B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017067215A1 (en) * 2015-10-21 2017-04-27 深圳市中兴微电子技术有限公司 Method and system for packet scheduling using many-core network processor and micro-engine thereof, and storage medium
CN107896199A (en) * 2017-10-20 2018-04-10 深圳市风云实业有限公司 The method and apparatus of transmitting message
CN109495401A (en) * 2018-12-13 2019-03-19 迈普通信技术股份有限公司 The management method and device of caching
CN110147254A (en) * 2019-05-23 2019-08-20 苏州浪潮智能科技有限公司 A kind of data buffer storage processing method, device, equipment and readable storage medium storing program for executing
CN111107536A (en) * 2019-12-30 2020-05-05 联想(北京)有限公司 User plane function forwarding method, device, system and storage medium
CN111651373A (en) * 2020-05-15 2020-09-11 南京南瑞继保电气有限公司 Message receiving method, device, terminal and storage medium
CN112286679A (en) * 2020-10-20 2021-01-29 烽火通信科技股份有限公司 DPDK-based inter-multi-core buffer dynamic migration method and device
CN112511460A (en) * 2020-12-29 2021-03-16 安徽皖通邮电股份有限公司 Lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment
CN112702275A (en) * 2020-12-29 2021-04-23 迈普通信技术股份有限公司 Method, device, network equipment and computer storage medium based on packet-by-packet forwarding
WO2021082969A1 (en) * 2019-10-29 2021-05-06 Oppo广东移动通信有限公司 Inter-core data processing method and system, system on chip and electronic device
CN115292023A (en) * 2022-10-08 2022-11-04 北京中科网威信息技术有限公司 Timing task processing method and device
WO2022252590A1 (en) * 2021-06-04 2022-12-08 展讯通信(上海)有限公司 Data packet processing method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080209176A1 (en) * 2007-02-28 2008-08-28 Advanced Micro Devices, Inc. Time stamping transactions to validate atomic operations in multiprocessor systems
EP2267940A1 (en) * 2009-06-22 2010-12-29 Citrix Systems, Inc. Systems and methods for N-Core tracing
CN102662761A (en) * 2012-03-27 2012-09-12 福建星网锐捷网络有限公司 Method and device for scheduling memory pool in multi-core central processing unit system
CN102685002A (en) * 2012-04-26 2012-09-19 汉柏科技有限公司 Multicore multi-threaded packet forwarding method and system
CN102752198A (en) * 2012-06-21 2012-10-24 北京星网锐捷网络技术有限公司 Multi-core message forwarding method, multi-core processor and network equipment

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080209176A1 (en) * 2007-02-28 2008-08-28 Advanced Micro Devices, Inc. Time stamping transactions to validate atomic operations in multiprocessor systems
EP2267940A1 (en) * 2009-06-22 2010-12-29 Citrix Systems, Inc. Systems and methods for N-Core tracing
CN102662761A (en) * 2012-03-27 2012-09-12 福建星网锐捷网络有限公司 Method and device for scheduling memory pool in multi-core central processing unit system
CN102685002A (en) * 2012-04-26 2012-09-19 汉柏科技有限公司 Multicore multi-threaded packet forwarding method and system
CN102752198A (en) * 2012-06-21 2012-10-24 北京星网锐捷网络技术有限公司 Multi-core message forwarding method, multi-core processor and network equipment

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106612236A (en) * 2015-10-21 2017-05-03 深圳市中兴微电子技术有限公司 Many-core network processor and micro engine message scheduling method and micro engine message scheduling system thereof
CN106612236B (en) * 2015-10-21 2020-02-07 深圳市中兴微电子技术有限公司 Many-core network processor and message scheduling method and system of micro-engine thereof
WO2017067215A1 (en) * 2015-10-21 2017-04-27 深圳市中兴微电子技术有限公司 Method and system for packet scheduling using many-core network processor and micro-engine thereof, and storage medium
CN107896199B (en) * 2017-10-20 2021-03-16 深圳市风云实业有限公司 Method and device for transmitting message
CN107896199A (en) * 2017-10-20 2018-04-10 深圳市风云实业有限公司 The method and apparatus of transmitting message
CN109495401A (en) * 2018-12-13 2019-03-19 迈普通信技术股份有限公司 The management method and device of caching
CN109495401B (en) * 2018-12-13 2022-06-24 迈普通信技术股份有限公司 Cache management method and device
CN110147254A (en) * 2019-05-23 2019-08-20 苏州浪潮智能科技有限公司 A kind of data buffer storage processing method, device, equipment and readable storage medium storing program for executing
WO2021082969A1 (en) * 2019-10-29 2021-05-06 Oppo广东移动通信有限公司 Inter-core data processing method and system, system on chip and electronic device
CN111107536A (en) * 2019-12-30 2020-05-05 联想(北京)有限公司 User plane function forwarding method, device, system and storage medium
CN111651373A (en) * 2020-05-15 2020-09-11 南京南瑞继保电气有限公司 Message receiving method, device, terminal and storage medium
CN112286679A (en) * 2020-10-20 2021-01-29 烽火通信科技股份有限公司 DPDK-based inter-multi-core buffer dynamic migration method and device
CN112511460A (en) * 2020-12-29 2021-03-16 安徽皖通邮电股份有限公司 Lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment
CN112702275A (en) * 2020-12-29 2021-04-23 迈普通信技术股份有限公司 Method, device, network equipment and computer storage medium based on packet-by-packet forwarding
CN112511460B (en) * 2020-12-29 2022-09-09 安徽皖通邮电股份有限公司 Lock-free shared message forwarding method for single-transceiving-channel multi-core network communication equipment
WO2022252590A1 (en) * 2021-06-04 2022-12-08 展讯通信(上海)有限公司 Data packet processing method and apparatus
CN115292023A (en) * 2022-10-08 2022-11-04 北京中科网威信息技术有限公司 Timing task processing method and device
CN115292023B (en) * 2022-10-08 2023-01-17 北京中科网威信息技术有限公司 Timing task processing method and device

Also Published As

Publication number Publication date
CN104394096B (en) 2017-11-03

Similar Documents

Publication Publication Date Title
CN104394096A (en) Multi-core processor based message processing method and multi-core processor
US8381230B2 (en) Message passing with queues and channels
CN109388590B (en) Dynamic cache block management method and device for improving multichannel DMA (direct memory access) access performance
US20150127880A1 (en) Efficient implementations for mapreduce systems
CN104731569B (en) A kind of data processing method and relevant device
CN101222510B (en) Method for implementing CANopen main station
WO2016011811A1 (en) Memory management method and apparatus, and storage medium
CN102857414A (en) Forwarding table writing method and device and message forwarding method and device
CN103312720A (en) Data transmission method, equipment and system
EP3077914B1 (en) System and method for managing and supporting virtual host bus adaptor (vhba) over infiniband (ib) and for supporting efficient buffer usage with a single external memory interface
CN111177017B (en) Memory allocation method and device
CN103227826A (en) Method and device for transferring file
US20130290667A1 (en) Systems and methods for s-list partitioning
US11385900B2 (en) Accessing queue data
CN112698959A (en) Multi-core communication method and device
CN116501657B (en) Processing method, equipment and system for cache data
CN112995261A (en) Configuration method and device of service table, network equipment and storage medium
CN101604261A (en) The method for scheduling task of supercomputer
CN104572498A (en) Cache management method for message and device
US8543722B2 (en) Message passing with queues and channels
CN103312614A (en) Multicast message processing method, line card and communication device
CN106254270A (en) A kind of queue management method and device
CN108632166B (en) DPDK-based packet receiving secondary caching method and system
US20150121376A1 (en) Managing data transfer
CN104052831A (en) Data transmission method and device based on queues and communication system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 19th floor, Garden State Industrial Park, No. 618 Jinshan Road, Cangshan District, Fuzhou City, Fujian Province 350002

Patentee after: RUIJIE NETWORKS Co.,Ltd.

Address before: 19th floor, Garden State Industrial Park, No. 618 Jinshan Road, Cangshan District, Fuzhou City, Fujian Province 350002

Patentee before: Beijing Star-Net Ruijie Networks Co.,Ltd.

CP01 Change in the name or title of a patent holder