CN109508237A - A kind of processing method and processing device of long term evolution LTE protocol stack data interaction - Google Patents
- Publication number
- CN109508237A (application number CN201811548228.9A)
- Authority
- CN
- China
- Prior art keywords
- processing
- core
- processing core
- task
- communication equipment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/50—Allocation of resources, e.g. of the central processing unit [CPU]
- G06F9/5005—Allocation of resources, e.g. of the central processing unit [CPU] to service a request
- G06F9/5027—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
- G06F9/505—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W80/00—Wireless network protocols or protocol adaptations to wireless operation
Abstract
The invention discloses a processing method and device for long term evolution (LTE) protocol stack data interaction. The method comprises: the communication device determines the load factor of each of N processing cores of the communication device, N being an integer greater than 2; the communication device receives a processing task and distributes the processing task to the processing core with the smallest load factor among the N processing cores for execution.
Description
Technical field
The present invention relates to the field of communication data processing, and in particular to a processing method and device for long term evolution (LTE) protocol stack data interaction.
Background technique
In the long term evolution (Long Term Evolution, LTE) protocol stack architecture, when protocol layer entities communicate with each other, the communication device must process a large number of data packets. In the prior art, a data packet is processed entirely by a single processing core of the communication device. However, as the data volume grows, the LTE protocol stack structure lacks flexibility and a single processing core can no longer satisfy the quality-of-service requirements. Specifically, the peak uplink processing rate of the communication device's protocol stack must reach 150 megabits per second and the peak downlink rate 75 megabits per second, and the protocol stack must complete both uplink and downlink data processing within 1 millisecond.
In addition, in the prior-art dual-core architecture, each processing core handles only the data of a few specific protocol layers. When the data volume of certain protocol layers is large, the load of the two cores becomes unbalanced and the data processing time increases.
Therefore, the poor flexibility, unbalanced load, and long data processing time of prior-art processing cores are urgent problems to be solved.
Summary of the invention
The embodiments of the present application provide a processing method and device for long term evolution (LTE) protocol stack data interaction, which solve the prior-art problems of poor flexibility of processing cores, unbalanced load, and long data processing time.
An embodiment of the present invention provides a processing method for LTE protocol stack data interaction, the method comprising:
the communication device determines the load factor of each processing core among the N processing cores of the communication device, N being an integer greater than 2;
the communication device receives a processing task and distributes the processing task to the processing core with the smallest load factor among the N processing cores for execution.
Optionally, the processing core with the smallest load factor is the processing core with the shortest remaining handling duration among the N processing cores; the remaining handling duration is the time the processing core still needs to finish its current task.
Optionally, the method comprises:
the communication device determines the processing linked list of the processing task and associates the processing linked list with one thread, the processing linked list storing the execution parameters of the processing task;
the communication device reads the execution parameters from the processing linked list;
the processing core with the smallest load factor executes the processing task according to the execution parameters by running the thread.
Optionally, the communication device reads the execution parameters from the processing linked list as follows:
the communication device reads the execution parameters once per first predetermined period;
if the processing linked list is occupied when the communication device attempts the read, the communication device re-reads the execution parameters in the next first predetermined period.
Optionally, the method further comprises: the communication device determines the load balancing degree of the N processing cores by the following formula:
Bq = Ts / (N × Tm)
where Bq is the load balancing degree, Tm is the duration in which the N processing cores have handled all current processing tasks, and Ts is the sum of the handling durations that the individual processing cores have spent on their current tasks.
Optionally, the method further comprises: the communication device reports the status information of the N processing cores to the control software of the communication device, the status information comprising: idle, running, and scaled down.
An embodiment of the present invention provides a processing device for LTE protocol stack data interaction, the device comprising:
a determining module, configured to determine the load factor of each processing core among N processing cores, N being an integer greater than 2;
a processing module, configured to receive a processing task and distribute the processing task to the processing core with the smallest load factor among the N processing cores for execution.
Optionally, the processing core with the smallest load factor is the processing core with the shortest remaining handling duration among the N processing cores; the remaining handling duration is the time the processing core still needs to finish its current task.
Optionally, the processing module is specifically configured to: determine the processing linked list of the processing task and associate the processing linked list with one thread, the processing linked list storing the execution parameters of the processing task; read the execution parameters from the processing linked list; and cause the processing core with the smallest load factor to execute the processing task according to the execution parameters by running the thread.
Optionally, the processing module is specifically configured to: read the execution parameters once per first predetermined period; and, if the processing linked list is occupied during the read, re-read the execution parameters in the next first predetermined period.
Optionally, the processing module is further configured to determine the load balancing degree of the N processing cores by the formula
Bq = Ts / (N × Tm)
where Bq is the load balancing degree, Tm is the duration in which the N processing cores have handled all current processing tasks, and Ts is the sum of the handling durations that the individual processing cores have spent on their current tasks.
Optionally, the processing module is further configured to report the status information of the N processing cores to the control software, the status information comprising: idle, running, and scaled down.
In the embodiments of the present invention, the communication device comprises N processing cores, so its processing capacity is greater than that of single-core or dual-core architectures; processing tasks are not bound to particular processing cores, and a core is chosen according to its load factor, which makes the scheme more flexible. In addition, the communication device distributes each received processing task to the processing core with the smallest load factor among the N processing cores, which relieves the pressure on heavily loaded cores while keeping lightly loaded cores busy, improves the load balancing capability of the N processing cores, shortens the waiting time of processing tasks, and raises execution efficiency.
Detailed description of the invention
Fig. 1 is an architecture diagram of the user-plane protocol stack in the LTE protocol stack corresponding to a processing method for LTE protocol stack data interaction in an embodiment of the present application;
Fig. 2 is an architecture diagram of the control-plane protocol stack in the LTE protocol stack corresponding to a processing method for LTE protocol stack data interaction in an embodiment of the present application;
Fig. 3 is an architecture diagram of the interaction between the modules corresponding to a processing method for LTE protocol stack data interaction in an embodiment of the present application;
Fig. 4 is a flow chart of the steps of a processing method for LTE protocol stack data interaction in an embodiment of the present application;
Fig. 5 is a graph of the test results for the downlink data processing rate corresponding to a processing method for LTE protocol stack data interaction proposed by an embodiment of the present invention;
Fig. 6 is a graph of the test results for the data transmission delay corresponding to a processing method for LTE protocol stack data interaction proposed by an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a processing device for LTE protocol stack data interaction in an embodiment of the present application.
Specific embodiment
To better understand the above technical solution, it is described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific features in the embodiments of the present application are detailed explanations of the technical solution of the present application rather than limitations on it, and that, where no conflict arises, the embodiments of the present application and the technical features therein may be combined with each other.
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings.
Communication between communication devices is in fact carried out by passing data packets through the protocol layers, and these protocol layers together constitute the protocol stack. For data to be transferred correctly between communication devices, the participants must follow the same communication rules; this set of rules is called a protocol. Depending on the type of message carried, the LTE protocol stack is divided into a user-plane protocol stack and a control-plane protocol stack: the control-plane protocol stack carries control signalling, while the user-plane protocol stack carries user data. Fig. 1 is an architecture diagram of the user-plane protocol stack in the LTE protocol stack in an embodiment of the present application, and Fig. 2 is an architecture diagram of the control-plane protocol stack in the LTE protocol stack. The communication devices in the user-plane protocol stack of Fig. 1 are: the user terminal (User Equipment, UE), the evolved base station (Evolved Node B, eNodeB), and the core network of the fourth-generation mobile communication standard (Evolved Packet Core, EPC). The communication devices in the control-plane protocol stack of Fig. 2 are: the UE and the eNodeB. Specifically, the user-plane protocol stack comprises: the physical layer (PHY), the medium access control layer (MAC), the radio link control layer (RLC), the packet data convergence protocol layer (PDCP), the radio resource control layer (RRC), and the non-access stratum (NAS); the control-plane protocol stack comprises: PHY, MAC, RLC, PDCP, and RRC.
LTE is applied in many network systems. For example, the cloud radio access network (Cloud-Radio Access Network, C-RAN) is a green radio access network architecture proposed by China Mobile, based on cooperative radio, centralized processing, and a real-time cloud infrastructure. This centralized C-RAN architecture emerged to address serious problems such as the waste of resources and the inter-cell interference brought about by high-density base station deployment.
In a C-RAN-oriented protocol stack architecture, when protocol layer entities communicate with each other, the communication device must process a large number of data packets. In the prior art, a data packet is processed entirely by a single processing core of the communication device; however, as the data volume grows, a single processing core can no longer satisfy the quality-of-service requirements. Specifically, the peak uplink processing rate of the protocol stack must reach 150 megabits per second and the peak downlink rate 75 megabits per second, and the protocol stack must complete both uplink and downlink data processing within 1 millisecond. Hence, as data volumes increase, the inability of a single processing core to meet the quality-of-service requirements is an urgent problem to be solved.
An embodiment of the present invention provides a processing method for LTE protocol stack data interaction. Fig. 3 is an architecture diagram of the interaction between the modules corresponding to this method.
The modules in this architecture are described as follows:
Task initiator 301: generates processing tasks. Concretely, the task initiator may be the eNodeB, the mobility management entity (Mobility Management Entity, MME) of the LTE access network, or the serving gateway (Serving GateWay, SGW).
Data distribution module 302: distributes processing tasks to the corresponding protocol processing threads.
Protocol processing module 303: the entity that executes processing tasks. It comprises N threads, each thread being executed by exactly one processing core, so the N threads are executed by N processing cores in total; N is an integer greater than 2.
Time management module 304: generates management notifications according to the processing times of the N threads and sends them to the data distribution module 302.
Management module 305: comprises a management main module and a management process. The management process controls the protocol stack state, generates the management instructions of processing tasks, and sends the processing times of the N threads to the time management module; the management main module stores the data the management process requires.
Transmission time interval (TTI) scheduling thread 306: supervises and records the processing times of the N threads.
Control software 307: receives the control information reported by the management main module.
The steps for deploying this architecture are as follows:
(1) Configure the base station software protocol stack parameters. In addition to compile-time settings, the code can read an external configuration file at run time. The aim of this design is to make it easy to change the base station's internal parameters, avoiding the need to modify and recompile the source code every time a parameter changes. The parameters in the configuration file include the Internet Protocol (Internet Protocol, IP) address of the eNodeB, the IP addresses of the MME and the SGW, the log level of the protocol layer entities, and so on. This is especially convenient for a system as large as C-RAN.
(2) Decouple the protocol layer entities, so that a change in one protocol layer entity does not affect the operation of another and code reuse remains high when code is corrected or further developed. Because the C-RAN system as a whole is large, units such as the protocol stack software, the control software 307, the MME, and the SGW run simultaneously and must keep their interactions consistent. From research and implementation through later testing and joint debugging, the system is operated by dividing it into modules: the interface messages between the modules are defined first, so that the code and functions of the individual modules do not interfere with each other. As long as a change to a module does not touch its external interface information, it remains, for the whole system, an internal change of that module. In particular, the C-RAN protocol stack software likewise turns each protocol layer into a module, and the interactions between the modules are transmitted by a dedicated data distribution system. This approach makes the messages between protocol layer modules more general and easier to manage. Global common variables, structures, message signalling, and the like can also be managed centrally, giving a more comfortable engineering environment.
(3) The timers, memory management, global variables, and so on of the whole LTE protocol stack software are managed centrally by the management module 305, which guarantees the basic 1-millisecond scheduling time requirement of the LTE protocol stack data distribution system. Both memory overhead and processing delay overhead are controlled by higher-level management software: the control software monitors the number of accessing users, the physical layer bandwidth, the running time, and the CPU usage of each C-RAN protocol stack to ensure performance.
(4) Finally, to solve the problem of unbalanced load between cores, resources are allocated reasonably to raise resource utilization. At busy times, the management module can use more central processing unit (Central Processing Unit, CPU) resources to handle more processing flows.
(5) To improve protocol handling capacity, the LTE protocol software must raise the degree of parallel processing of each protocol layer. Each layer therefore maintains its own entity information and memory information independently, and inter-layer interaction is decoupled so that asynchronous message processing is supported. Moreover, to improve processing efficiency, the protocol software internally supports multi-threaded parallel processing, so a dedicated data distribution system is needed to handle inter-layer data interaction.
To realize the statistical multiplexing of computing resources and make the operating state and computing resources of the protocol software controllable, a management module for protocol stack state control is added inside the protocol software. This module stands outside the individual layers and is responsible for managing the protocol stack's operating state and resource allocation. In the initialization phase it allocates CPU resources to the internal LTE threads and allocates the initial memory address and size for each LTE protocol module. During protocol migration it collects the memory migration data of each layer, selects a proper time to migrate, and restores the protocol state information in the target protocol entity, guaranteeing that protocol entities can be migrated correctly and seamlessly.
The computational resource allocation of the protocol stack is controlled by a control agent, so the protocol stack entities must provide corresponding interaction interfaces to the control agent, both to dynamically apply for resources and upload state information to it, and to support the control operations it performs, such as protocol stack creation, protocol stack migration, and protocol stack deletion. In addition, the protocol stack software manages its own computing resources (such as memory) so that it can dynamically expand and reduce capacity, supporting a scalable protocol stack structure.
The architecture deployed by the above steps has the following advantages:
(1) Improved protocol stack flexibility. Each module of the protocol stack has good flexibility and independence. An important foundation of multi-core parallelism for protocol stack entities is modularization: each layer entity of the protocol stack runs as a separate unit and can process data packets concurrently; the protocol layer entities are fully decoupled and communicate through a unified inter-layer interface. In this way the parallel processing granularity is reduced from the level of the whole protocol stack to the level of individual data packets, which greatly increases the data processing capacity of the entire protocol stack when the amount of user data grows.
(2) More balanced inter-core load. The LTE protocol stack multi-core architecture has a good basis for inter-core balance: by adding processing cores and using time-based statistics, processing cores are allocated reasonably.
With reference to Fig. 3, Fig. 4 shows the flow chart of the steps of a processing method for LTE protocol stack data interaction in an embodiment of the present application.
Step 401: the communication device determines the load factor of each processing core among the N processing cores of the communication device; N is an integer greater than 2.
Step 402: the communication device receives a processing task and distributes the processing task to the processing core with the smallest load factor among the N processing cores for execution.
In step 401, the communication device may be any one of the UE, the eNodeB, or the EPC.
The load factor of a processing core is an index that characterizes the load on that core among the N processing cores of the communication device — for example, the ratio of the number of data packets the core is currently processing to the maximum number of data packets it can process.
In step 402, the communication device receives the processing task sent by the task initiator and distributes it to the processing core with the smallest load factor among the N processing cores for execution.
For example, suppose the communication device receives task 1 and has 10 processing cores in total: cores 1 to 10, whose load factors are, in order, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, and 0.95. Core 1 is then chosen as the processing core that executes task 1.
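The selection in this example can be sketched minimally as follows; the function name and the list layout (index 0 for core 1) are illustrative, not from the patent.

```python
def pick_core(load_factors):
    """Return the index of the processing core with the smallest load factor.

    Load factor here follows the example above: the ratio of packets a core
    is currently processing to the maximum it can handle.
    """
    return min(range(len(load_factors)), key=lambda i: load_factors[i])

# The ten cores of the example, cores 1..10 stored at indices 0..9:
loads = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
```

With these loads, `pick_core(loads)` selects index 0, i.e. core 1, matching the example.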
One way to determine the processing core with the smallest load factor is to take the core with the shortest remaining handling duration among the N processing cores; the remaining handling duration is the time the core still needs to finish its current task. Specifically, the remaining processing time is calculated from the processing time of each individual task the core executes.
For example, suppose processing core 1 takes 10 milliseconds in total to handle processing task 1. When processing task 2 is to be distributed and core 1 still needs 2 milliseconds to finish task 1, the remaining processing time of core 1 at that moment is 2 milliseconds.
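Assuming each core records the start time and the estimated total duration of its current task — an assumption, since the patent does not specify how the remaining time is tracked — the remaining-duration selection might be sketched like this (class and field names are illustrative):

```python
class ProcessingCore:
    """Tracks when a core started its current task and the task's estimated
    total duration, so that the remaining time can be derived."""

    def __init__(self, core_id):
        self.core_id = core_id
        self.started_at = None   # None means the core is idle
        self.estimate_s = 0.0    # estimated total duration of current task

    def remaining_s(self, now):
        if self.started_at is None:
            return 0.0           # idle core: nothing left to finish
        return max(self.estimate_s - (now - self.started_at), 0.0)

def pick_core_by_remaining(cores, now):
    """Choose the core whose current task will finish soonest."""
    return min(cores, key=lambda c: c.remaining_s(now))
```

In the example above, a core with a 10-second (or 10-millisecond) estimate that has already run for 8 reports a remaining time of 2, and the dispatcher prefers any core reporting less.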
An optional implementation is as follows: the communication device determines the processing linked list of the processing task and associates the processing linked list with one thread; the processing linked list stores the execution parameters of the processing task. The communication device reads the execution parameters from the processing linked list, and the processing core with the smallest load factor executes the processing task according to the execution parameters by running the thread.
When a processing core executes a processing task, it needs certain data; these data are stored in the processing linked list, which corresponds to one thread. The processing linked list includes, but is not limited to, the execution parameters of the processing task; the execution parameters include, for example, the maximum execution time and the action to take on failure.
Before a processing core executes a processing task, the communication device reads the execution parameters from the processing linked list, and the processing core with the smallest load factor then executes the processing task according to the parameters read, by running the thread.
When the communication device reads the execution parameters from the processing linked list, an optional implementation is as follows:
the communication device reads the execution parameters once per first predetermined period;
if the processing linked list is occupied when the communication device attempts the read, the communication device re-reads the execution parameters in the next first predetermined period.
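One plausible reading of this occupied/retry behaviour can be sketched as follows. Modelling "occupied" with a non-blocking lock is an assumption — the patent does not say how occupancy is detected — and all names are illustrative.

```python
import threading
import time

class ProcessingList:
    """Stores the execution parameters of one task; paired with one thread."""

    def __init__(self, params):
        self.params = params           # e.g. max run time, action on failure
        self._lock = threading.Lock()  # held while the list is "occupied"

    def try_read(self):
        """Return a copy of the parameters, or None if the list is occupied."""
        if self._lock.acquire(blocking=False):
            try:
                return dict(self.params)
            finally:
                self._lock.release()
        return None

def read_execution_params(plist, period_s, max_periods):
    """Attempt one read per predetermined period; on finding the list
    occupied, wait for the next period and try again."""
    for _ in range(max_periods):
        params = plist.try_read()
        if params is not None:
            return params
        time.sleep(period_s)           # defer to the next period
    raise TimeoutError("processing list stayed occupied")
```

The reader never blocks on an occupied list; it simply yields the current period and retries in the next one, as the text describes.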
In steps 401 and 402, an optional embodiment determines the load balancing degree of the N processing cores by the following formula:
Bq = Ts / (N × Tm)
where Bq is the load balancing degree, Tm is the duration in which the N processing cores have handled all current processing tasks, and Ts is the sum of the handling durations that the individual processing cores have spent on their current tasks.
For example, suppose there are 3 processing cores and they took 9 milliseconds to finish 3 tasks, the per-core handling durations being 4 milliseconds, 6 milliseconds, and 8 milliseconds respectively; the load balancing degree is then (4 + 6 + 8) / (3 × 9) = 2/3.
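A hedged sketch of the load-balancing computation follows. It takes Bq = Ts / (N × Tm) — a reconstruction, since the formula image is not reproduced in the source text — which matches the variable definitions and reproduces the 2/3 of the worked example.

```python
def load_balance_degree(per_core_ms, makespan_ms):
    """Bq = Ts / (N * Tm): the fraction of total available core-time
    (N cores over the overall duration Tm) spent doing useful work (Ts).
    Bq = 1 would mean perfectly balanced, fully busy cores."""
    n = len(per_core_ms)
    return sum(per_core_ms) / (n * makespan_ms)

# Worked example from the text: 3 cores, 9 ms overall, 4/6/8 ms per core.
bq = load_balance_degree([4.0, 6.0, 8.0], 9.0)   # 18 / 27 = 2/3
```

A low Bq signals that some cores sat idle while others worked, which is exactly the condition the min-load dispatching of step 402 is meant to avoid.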
An optional embodiment is that the communication device reports the status information of the N processing cores to the control software of the communication device. The status information includes, but is not limited to: idle, running, and scaled down; further custom statuses can be added as required.
Here, idle means the processing core is available and is not currently executing a processing task; running means the core is available and is currently executing a processing task; scaled down means the core is unavailable, so no processing task will be distributed to it.
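The status set can be sketched as an enumeration; the enum values and helper names are illustrative, not from the patent.

```python
from enum import Enum

class CoreStatus(Enum):
    IDLE = "idle"                  # available, no task currently running
    RUNNING = "running"            # available, currently executing a task
    SCALED_DOWN = "scaled_down"    # unavailable; receives no tasks

def status_report(core_status):
    """Build the report the device sends to the control software."""
    return {core_id: s.value for core_id, s in core_status.items()}

def dispatchable_cores(core_status):
    """Scaled-down cores must not be assigned processing tasks."""
    return [cid for cid, s in core_status.items()
            if s is not CoreStatus.SCALED_DOWN]
```

Using an enum keeps the report values consistent and makes it easy to add the custom statuses the text mentions.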
In the embodiments of the present invention, modularizing the protocol stack improves its flexibility: each module of the protocol stack runs as a separate unit and can process data packets concurrently, the layers are fully decoupled, and communication uses a unified inter-layer interface. In this way the parallel processing granularity is reduced from the level of the whole protocol stack to the level of individual data packets, which greatly increases the data processing capacity of the entire protocol stack when the amount of user data grows.
Furthermore, the communication device comprises N processing cores, so its processing capacity exceeds that of a single core; and because each received processing task is distributed to the core with the smallest load factor among the N processing cores, the waiting time of processing tasks is shorter and they are executed more efficiently.
Fig. 5 shows the test results for the downlink data processing rate corresponding to the processing method for LTE protocol stack data interaction proposed by an embodiment of the present invention.
The test process is as follows: the protocol stack data distribution system receives and forwards IP data packets with packet lengths of 64 to 1500 bytes; in the test, data packets with lengths of 200 bytes and 1000 bytes are used as the experimental data and sent to the processing threads, and the data are finally distributed evenly to the accessing users. During the test the number of users is increased in order to observe the peak downlink processing rate. On this basis, test results are obtained for six cases: the old single-core architecture, the new single-core architecture, the new dual-core architecture, the old dual-core architecture, the new quad-core architecture, and the old quad-core architecture. Here the old architecture is the prior-art LTE protocol stack architecture, and the new architecture is the architecture of Fig. 3 in the embodiment of the present invention.
The test results show that as the number of terminal users increases, the peak rate of the system falls. This is mainly because resource allocation becomes harder as more users access the system, so the peak declines; this is normal. When the number of CPU cores increases, the task processing efficiency of each core improves and the peak rises accordingly; hence the more CPU cores the multi-core parallel architecture has, the higher the peak downlink processing rate of the protocol stack. When the number of accessing users is very small, the old single-core architecture outperforms the new multi-core parallel architecture: with few users the performance demands on the system are low, and for the multi-core architecture problems such as inter-core communication prevent it from reaching its expected peak. The data scale also affects the peak downlink rate: for a given number of accessing users, the larger the data scale, the more the architecture processes in parallel, so the processing rate and hence the peak rate increase.
In addition, the data transmission delay also reflects system performance; the delay depends on the system's load balancing capability. If the transmission delay is small, problems such as packet loss and other data reception errors can largely be avoided, and system performance improves.
Fig. 6 shows the test results for the data transmission delay of the processing method of long term evolution LTE protocol stack data interaction proposed in the embodiment of the present invention.
Test process: a packet-sending program and a packet-receiving program are constructed, and 10000 numbered IP data packets, each 1000 bytes long, are constructed in advance; the specific configuration is shown in Table 1, where iperf is the packet-sending program.
Table 1
The packet-sending program first sends the data packets, recording the sending time, while the receiving program starts to receive data. When the receiving program receives a packet, it records the reception time and reads the packet's number; for packets with the same number, the difference between the sending and receiving times is counted, and this difference is exactly the transmission delay. Since a single delay measurement is subject to chance, the delays of the 10000 transmitted packets are averaged during the test to guarantee the accuracy of the results.
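The matching-and-averaging step above can be sketched as follows. This is a minimal sketch assuming send and receive timestamps are keyed by each packet's number; the `send_times`/`recv_times` dictionaries and the synthetic timestamps are illustrative, not part of the patent's test harness:

```python
def average_delay(send_times, recv_times):
    """Average transmission delay over packets matched by packet number.

    send_times / recv_times: dicts mapping a packet's number to the
    recorded sending / reception timestamp (seconds). Packets that were
    never received are simply excluded from the average.
    """
    delays = [recv_times[num] - send_times[num]
              for num in send_times if num in recv_times]
    if not delays:
        raise ValueError("no packets were received")
    return sum(delays) / len(delays)

# 10000 numbered 1000-byte packets, as in the test setup; here every
# packet is (synthetically) received 2 ms after it was sent.
send_times = {num: 0.0001 * num for num in range(10000)}
recv_times = {num: 0.0001 * num + 0.002 for num in range(10000)}
print(average_delay(send_times, recv_times))  # ~0.002 s
```

Averaging over all matched packets, as the text describes, smooths out the chance variation of any single measurement.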
The results in Fig. 6 show the following:
As the number of terminal users increases, the data transmission delay of the system increases.
When the number of accessing users is very small, the delay of the single-core architecture is better than that of the new multi-core parallel architecture: the single-core architecture only needs one function for serial data processing, whereas the multi-core architecture must split the data for parallel processing, so the delay of the single-core architecture is lower.
When the number of CPU cores increases, the task processing efficiency of each core improves and the delay improves accordingly. Therefore, the more CPU cores the multi-core parallel architecture has, the better the data processing delay of the protocol stack.
As shown in Fig. 7, which is a schematic structural diagram of a processing device of long term evolution LTE protocol stack data interaction provided by an embodiment of the present invention, the device includes:
a determining module 701, configured to determine the load factor of each processing core in N processing cores, N being an integer greater than 2; and
a processing module 702, configured to receive a processing task and assign the processing task to the processing core with the smallest load factor among the N processing cores for execution.
Optionally, the processing core with the smallest load factor is the processing core with the shortest remaining processing duration among the N processing cores, the remaining processing duration being the duration a processing core needs to finish its current tasks.
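Under this definition, a dispatcher tracks, per core, the total duration of queued tasks and picks the core whose remaining processing duration is shortest. A minimal sketch, assuming per-task durations are known in advance (the `Core` and `dispatch` names are illustrative, not from the patent):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Core:
    """One processing core; its load factor is its remaining processing duration."""
    core_id: int
    queue: List[float] = field(default_factory=list)  # durations of queued tasks

    @property
    def remaining_duration(self) -> float:
        return sum(self.queue)

def dispatch(cores: List[Core], task_duration: float) -> Core:
    """Assign the task to the core with the smallest load factor,
    i.e. the shortest remaining processing duration."""
    target = min(cores, key=lambda c: c.remaining_duration)
    target.queue.append(task_duration)
    return target

cores = [Core(0), Core(1), Core(2)]
for d in [5.0, 3.0, 2.0, 4.0]:
    dispatch(cores, d)
# remaining durations are now [5.0, 3.0, 6.0]
```

Each new task thus lands on the core that would otherwise be idle soonest, which is what keeps the per-core loads close to one another.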
Optionally, the processing module 702 is specifically configured to: determine the processing linked list of the processing task, the processing linked list corresponding to one thread and storing the execution parameters of the processing task; read the execution parameters from the processing linked list; and have the processing core with the smallest load factor execute the processing task according to the execution parameters by running the thread.
Optionally, the processing module 702 is specifically configured to:
read the execution parameters in a first predetermined period; and
if the processing linked list is occupied when the execution parameters are read, re-read the execution parameters in the next first predetermined period.
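The periodic read-with-retry can be sketched with a non-blocking lock standing in for the "occupied" state of the processing linked list; `ProcessingList`, `read_params`, and the period values are illustrative assumptions, not names from the patent:

```python
import threading
import time

class ProcessingList:
    """Toy stand-in for the processing linked list: execution parameters
    behind a lock that a writer may hold (the list is then 'occupied')."""
    def __init__(self, params):
        self.lock = threading.Lock()
        self.params = params

def read_params(plist, period=0.01, max_periods=100):
    """Try to read the execution parameters once per predetermined period;
    if the list is occupied, retry in the next period."""
    for _ in range(max_periods):
        if plist.lock.acquire(blocking=False):
            try:
                return plist.params
            finally:
                plist.lock.release()
        time.sleep(period)  # occupied: wait for the next period
    raise TimeoutError("processing list stayed occupied")

plist = ProcessingList({"task_id": 7, "payload_len": 1000})
print(read_params(plist))  # {'task_id': 7, 'payload_len': 1000}
```

Polling once per period, instead of blocking on the lock, keeps the reader from stalling a thread while a writer updates the list.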
Optionally, the processing module 702 is further configured to determine the load balancing degree of the N processing cores by the following formula:
wherein Bq is the load balancing degree, Tm is the processing duration for the N processing cores to finish all current processing tasks, and Ts is the sum of the processing durations for each processing core in the N processing cores to finish its current processing tasks.
Optionally, the processing module 702 is further configured to:
report the status information of the N processing cores to control software, the status information including: idle, normal operation, and capacity reduction.
It should be understood by those skilled in the art that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems) and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks in the flowcharts and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, so that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
Although preferred embodiments of the present invention have been described, those skilled in the art, once they learn of the basic inventive concept, may make additional changes and modifications to these embodiments. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all changes and modifications that fall within the scope of the present invention.
Obviously, those skilled in the art can make various changes and modifications to the present invention without departing from the spirit and scope of the present invention. Thus, if these modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalent technologies, the present invention is also intended to include them.
Claims (12)
1. A processing method of long term evolution LTE protocol stack data interaction, characterized by comprising:
determining, by a communication equipment, the load factor of each processing core in N processing cores of the communication equipment, N being an integer greater than 2; and
receiving, by the communication equipment, a processing task, and assigning the processing task to the processing core with the smallest load factor among the N processing cores for execution.
2. The method of claim 1, characterized in that:
the processing core with the smallest load factor is the processing core with the shortest remaining processing duration among the N processing cores, the remaining processing duration being the duration a processing core needs to finish its current tasks.
3. The method of claim 1, characterized by comprising:
determining, by the communication equipment, the processing linked list of the processing task, the processing linked list corresponding to one thread and storing the execution parameters of the processing task;
reading, by the communication equipment, the execution parameters from the processing linked list; and
executing, by the processing core with the smallest load factor of the communication equipment, the processing task according to the execution parameters by running the thread.
4. The method of claim 3, characterized in that the communication equipment reading the execution parameters from the processing linked list comprises:
reading, by the communication equipment, the execution parameters in a first predetermined period; and
if the processing linked list is occupied when the communication equipment reads the execution parameters, re-reading, by the communication equipment, the execution parameters in the next first predetermined period.
5. The method of any one of claims 1 to 4, characterized by further comprising:
determining, by the communication equipment, the load balancing degree of the N processing cores by the following formula:
wherein Bq is the load balancing degree, Tm is the processing duration for the N processing cores to finish all current processing tasks, and Ts is the sum of the processing durations for each processing core in the N processing cores to finish its current processing tasks.
6. The method of any one of claims 1 to 4, characterized by further comprising:
reporting, by the communication equipment, the status information of the N processing cores to control software of the communication equipment, the status information including: idle, normal operation, and capacity reduction.
7. A processing device of long term evolution LTE protocol stack data interaction, characterized by comprising:
a determining module, configured to determine the load factor of each processing core in N processing cores, N being an integer greater than 2; and
a processing module, configured to receive a processing task and assign the processing task to the processing core with the smallest load factor among the N processing cores for execution.
8. The device of claim 7, characterized in that:
the processing core with the smallest load factor is the processing core with the shortest remaining processing duration among the N processing cores, the remaining processing duration being the duration a processing core needs to finish its current tasks.
9. The device of claim 7, characterized in that the processing module is specifically configured to:
determine the processing linked list of the processing task, the processing linked list corresponding to one thread and storing the execution parameters of the processing task; read the execution parameters from the processing linked list; and have the processing core with the smallest load factor execute the processing task according to the execution parameters by running the thread.
10. The device of claim 9, characterized in that the processing module is specifically configured to:
read the execution parameters in a first predetermined period; and
if the processing linked list is occupied when the execution parameters are read, re-read the execution parameters in the next first predetermined period.
11. The device of any one of claims 7 to 10, characterized in that the processing module is further configured to:
determine the load balancing degree of the N processing cores by the following formula:
wherein Bq is the load balancing degree, Tm is the processing duration for the N processing cores to finish all current processing tasks, and Ts is the sum of the processing durations for each processing core in the N processing cores to finish its current processing tasks.
12. The device of any one of claims 7 to 10, characterized in that the processing module is further configured to:
report the status information of the N processing cores to control software, the status information including: idle, normal operation, and capacity reduction.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811548228.9A CN109508237A (en) | 2018-12-18 | 2018-12-18 | A kind of processing method and processing device of long term evolution LTE protocol stack data interaction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109508237A true CN109508237A (en) | 2019-03-22 |
Family
ID=65753419
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811548228.9A Pending CN109508237A (en) | 2018-12-18 | 2018-12-18 | A kind of processing method and processing device of long term evolution LTE protocol stack data interaction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109508237A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112243266A (en) * | 2019-07-18 | 2021-01-19 | 大唐联仪科技有限公司 | Data packaging method and device |
CN112243266B (en) * | 2019-07-18 | 2024-04-19 | 大唐联仪科技有限公司 | Data packaging method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110146803A1 (en) * | 2008-09-05 | 2011-06-23 | Zhirong Wu | Multifunctional offshore base with liquid displacement system |
CN102681902A (en) * | 2012-05-15 | 2012-09-19 | 浙江大学 | Load balancing method based on task distribution of multicore system |
CN105528330A (en) * | 2014-09-30 | 2016-04-27 | 杭州华为数字技术有限公司 | Load balancing method and device, cluster and many-core processor |
CN106598707A (en) * | 2015-10-19 | 2017-04-26 | 沈阳新松机器人自动化股份有限公司 | Task scheduling optimization method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3530037B1 (en) | System and method for network slice management in a management plane | |
Chien et al. | End-to-end slicing with optimized communication and computing resource allocation in multi-tenant 5G systems | |
CN107484183B (en) | Distributed base station system, CU, DU and data transmission method | |
CN102932920B (en) | Radio resource scheduling request (SR) configuration method and device | |
RU2744016C1 (en) | Realization of service quality for separation of the user's plane | |
CN102647263B (en) | A kind of transmission method of ACK/NACK information and equipment | |
KR20200108049A (en) | Default quality of service (QoS) control method and apparatus | |
Xia et al. | Mobile edge cloud-based industrial Internet of Things: Improving edge intelligence with hierarchical SDN controllers | |
CN103747274A (en) | Video data center with additionally-arranged cache cluster and cached resource scheduling method thereof | |
CN110521178B (en) | Method, device and system for distributing data | |
CN112019363B (en) | Method, device and system for determining service transmission requirement | |
CN109361762A (en) | A kind of document transmission method, apparatus and system | |
JP2012065314A (en) | Data delivery device and data delivery system | |
Li et al. | Multi-service resource allocation in future network with wireless virtualization | |
WO2020024771A1 (en) | Scheduling-free gf resource allocation method and associated device | |
WO2023278018A1 (en) | Systems and methods for application aware slicing in 5g layer 2 and layer 1 using fine grain scheduling | |
US20230376358A1 (en) | Method and apparatus for managing load of network node | |
CN109508237A (en) | A kind of processing method and processing device of long term evolution LTE protocol stack data interaction | |
CN101808117B (en) | Method for construction and service of time tag business data for communication | |
CN110121186A (en) | Data distributing method and equipment under a kind of dual link | |
CN104184643B (en) | A kind of data transmission system and method | |
CN105933383B (en) | Virtualization carrier wave emigration method based on L3 and L2 layer protocol | |
Tan et al. | A novel approach for bandwidth allocation among soft QoS traffic in wireless networks | |
CN110233803B (en) | Scheduling device and method for transmission network node | |
KR20220064806A (en) | Method and apparatus for assigning gpu of software package |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190322 |