CN112243266A - Data packaging method and device - Google Patents

Data packaging method and device

Info

Publication number
CN112243266A
Authority
CN
China
Prior art keywords
core
data
load
identification information
packing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910652004.0A
Other languages
Chinese (zh)
Other versions
CN112243266B (en)
Inventor
侯庆东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Datang Linktester Technology Co ltd
Original Assignee
Datang Linktester Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Datang Linktester Technology Co ltd filed Critical Datang Linktester Technology Co ltd
Priority to CN201910652004.0A
Publication of CN112243266A
Application granted
Publication of CN112243266B
Legal status: Active

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00: Network traffic management; Network resource management
    • H04W 28/02: Traffic management, e.g. flow control or congestion control
    • H04W 28/06: Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • H04W 28/065: Optimizing the usage of the radio link using assembly or disassembly of packets
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00: Network traffic management; Network resource management
    • H04W 28/02: Traffic management, e.g. flow control or congestion control
    • H04W 28/08: Load balancing or load distribution
    • H04W 28/082: Load balancing or load distribution among bearers or channels

Abstract

The invention discloses a data packing method and apparatus. When a data packing request message is received, a data packing policy is determined according to the load value of the current core, where the current core is one of the cores of the digital signal processor (DSP) on which the protocol stack runs, and the data is packed according to the determined policy. This exploits the advantage of multi-core parallel packing when the data volume is large, so that the packing task can be completed within the set time and the success rate of data transmission is improved.

Description

Data packaging method and device
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a data packing method and apparatus.
Background
To meet growing per-user peak rates and system capacity requirements, one of the most straightforward approaches is to increase the system transmission bandwidth. Long Term Evolution-Advanced (LTE-Advanced) therefore introduces a technology for increasing transmission bandwidth: Carrier Aggregation (CA).
Taking the aggregation of two cell carriers as an example, as shown in Fig. 1, data is packed for transmission as follows. First, the Radio Link Control (RLC) layer of the primary cell sends the size of the data to be sent to the Media Access Control (MAC) layer of the primary cell. When the primary-cell MAC receives the data size, it splits the data, schedules and queues the user, and sends the scheduling result to the MAC of the secondary cell. The secondary-cell MAC allocates resources according to the scheduling result and returns the allocation result to the primary-cell MAC. Finally, the primary-cell MAC forwards the received allocation result to the primary-cell RLC, which packs the data for both the primary and the secondary cell.
In the above process, if the data volume is small, the RLC of the primary cell can complete the packing operation within the set time; if the data volume is large, the primary-cell RLC may be overloaded and fail to finish packing in time, causing data transmission failure.
Disclosure of Invention
The invention aims to provide a data packing method and apparatus that solve the prior-art problem of data transmission failure caused by packing not being completed on time when the data volume is large.
The purpose of the invention is achieved by the following technical solutions:
in a first aspect, the present invention provides a data packing method, including:
acquiring a load value of a current core, where the current core is one of the cores of the digital signal processor (DSP) on which the protocol stack is located;
when a data packing request message is received, determining a data packing policy according to the load value of the current core;
and packing the data according to the determined data packing policy.
Optionally, determining the data packing policy according to the load value of the current core includes:
comparing the load value of the current core with a set threshold;
if the load value of the current core is greater than the set threshold, packing the data using a first data packing policy;
and if the load value of the current core is less than the set threshold, packing the data using a second data packing policy.
Optionally, packing the data using the first data packing policy includes:
determining identification information of a load sharing core;
sending a packing request message to the core corresponding to the identification information of the load sharing core; and packing the data using both the core corresponding to the determined identification information of the load sharing core and the current core.
Optionally, the load sharing core includes at least one core;
determining the identification information of the load sharing core includes:
acquiring the load values of all cores of the digital signal processor (DSP) on which the protocol stack is located;
sorting the load values of all cores according to a preset policy to obtain an ordered result;
and selecting, from the ordered result, the identification information of the load sharing core corresponding to a load value.
Optionally, before packing the data using the core corresponding to the determined identification information of the load sharing core and the current core, the method further includes:
determining that a response message indicating completion of data copying has been received from the core corresponding to the identification information of the load sharing core, where the data copying is performed by direct memory access (DMA).
In a second aspect, the present invention provides a data packing apparatus, including:
an acquisition unit, configured to acquire a load value of a current core, where the current core is one of the cores of the digital signal processor (DSP) on which the protocol stack is located;
a decision unit, configured to determine a data packing policy according to the load value of the current core acquired by the acquisition unit when a data packing request message is received;
and a processing unit, configured to pack the data according to the data packing policy determined by the decision unit.
Optionally, the decision unit is specifically configured to determine the data packing policy according to the load value of the current core as follows:
comparing the load value of the current core with a set threshold;
if the load value of the current core is greater than the set threshold, packing the data using a first data packing policy;
and if the load value of the current core is less than the set threshold, packing the data using a second data packing policy.
Optionally, the processing unit is specifically configured to pack the data using the first data packing policy as follows:
determining identification information of a load sharing core;
sending a packing request message to the core corresponding to the identification information of the load sharing core, and packing the data using both the core corresponding to the determined identification information of the load sharing core and the current core.
Optionally, the load sharing core includes at least one core;
the decision unit is specifically configured to determine the identification information of the load sharing core as follows:
acquiring the load values of all cores of the digital signal processor (DSP) on which the protocol stack is located;
sorting the load values of all cores according to a preset policy to obtain an ordered result;
and selecting, from the ordered result, the identification information of the load sharing core corresponding to a load value.
Optionally, the decision unit is further configured to: determine that a response message indicating completion of data copying has been received from the core corresponding to the identification information of the load sharing core, where the data copying is performed by direct memory access (DMA).
In a third aspect, the present invention further provides a data packing apparatus, including:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and executing, according to the obtained program, the method of any one of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium storing computer instructions that, when run on a computer, cause the computer to perform the method of any one of the first aspect.
The present application provides a data packing method and apparatus that acquire a load value of a current core, where the current core is one of the cores of the DSP on which the protocol stack is located; when a data packing request message is received, a data packing policy is determined according to the load value of the current core and the data is packed according to the determined policy, so that when the data volume is large the packing task can be completed within the set time and the success rate of data transmission is improved.
Drawings
Fig. 1 is a flowchart of a data packing method;
Fig. 2 is a software topology diagram of a communication test instrument according to an embodiment of the present application;
Fig. 3 is a flowchart of a data packing method according to an embodiment of the present application;
Fig. 4 is a flowchart of another data packing method according to an embodiment of the present application;
Fig. 5 is an interaction diagram of load sharing modules according to an embodiment of the present application;
Fig. 6 is a structural block diagram of a data packing apparatus according to an embodiment of the present application;
Fig. 7 is a schematic diagram of a data packing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 2 is a software topology diagram of a communication test instrument according to an embodiment of the present application; it shows three digital signal processors (DSPs). DSP0 and DSP1 are used by the physical layer (PL) to process the data of CELL0 and CELL1 respectively, and DSP2 hosts the protocol stack software: cores C0 to C2 process CELL0 and cores C3 to C5 process CELL1. The MAC uplink (UL) and downlink (DL) each occupy one core for uplink and downlink services, and the Packet Data Convergence Protocol (PDCP) and RLC tasks are placed on the same core because their load is small.
Under the current architecture, taking a single cell as an example, the physical layer uses only DSP0 and cores 0 to 2 on DSP2 implement the protocol stack functions. If two cells are used for carrier aggregation, the physical layer uses DSP0 and DSP1 to process the data of the two cells respectively, and cores 3 to 5 on DSP2 serve as the protocol stack cores of the second cell. In a single-cell scenario the packing of data is completed by the RLC task on core C1; in a dual-cell scenario the packing of the primary-cell and secondary-cell data of the same user is also completed by core C1, and core C4 is essentially an idle core.
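For readers who prefer code to prose, the core layout just described can be summarised in a small lookup table. This is an illustrative sketch only; the role names, types and the exact UL/DL assignment of the MAC cores are assumptions not stated in the text.

```c
/* Illustrative sketch of the DSP2 core layout described for Fig. 2.
 * All identifiers are hypothetical; only C1 (CELL0 RLC/PDCP) and
 * C4 (CELL1 RLC/PDCP) are fixed by the text, the UL/DL split of the
 * remaining MAC cores is an assumption. */
typedef enum { ROLE_PDCP_RLC, ROLE_MAC_UL, ROLE_MAC_DL } core_role_t;

typedef struct {
    int         core_id;   /* C0..C5 on DSP2 */
    int         cell_id;   /* 0 = CELL0, 1 = CELL1 */
    core_role_t role;
} core_desc_t;

static const core_desc_t dsp2_cores[6] = {
    { 0, 0, ROLE_MAC_UL },   /* assumed */
    { 1, 0, ROLE_PDCP_RLC }, /* C1 packs CELL0 data (per the text) */
    { 2, 0, ROLE_MAC_DL },   /* assumed */
    { 3, 1, ROLE_MAC_UL },   /* assumed */
    { 4, 1, ROLE_PDCP_RLC }, /* C4 packs CELL1 data (per the text) */
    { 5, 1, ROLE_MAC_DL },   /* assumed */
};
```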
The existing data packing process is described below with reference to Fig. 1:
Step 1: when the 0 us interrupt at the head of each subframe arrives, the RLC of the primary cell (PCELL RLC) sends the buffer occupancy (BO) of the data waiting in its queue to the MAC. To reduce the performance overhead of message interaction, the RLC and the MAC use shared memory.
Step 2: after receiving the BO, the MAC of the primary cell (PCELL MAC) performs scheduling and queuing. During queuing the BO is split according to a specific primary/secondary cell allocation strategy, UE scheduling and queuing are performed according to the modified BO, and the queuing information is copied, given a message header, and sent to the MAC of the secondary cell (SCELL MAC).
Step 3: after receiving the message of step 2, the MAC of the secondary cell directly allocates resources; when the allocation is finished it sends the resource allocation result of the secondary cell to the MAC of the primary cell.
Step 4: the MAC of the primary cell checks at the latest waiting time point. If the resource allocation result of the secondary cell has been received, the resource allocation results of the primary and secondary cells are assembled into one message and sent to the RLC of the primary cell; if not, only the resource allocation result of the primary cell is sent to the RLC. After receiving the message, the RLC of the primary cell completes the packing for the primary and secondary cells and sets the transmittable flag to TRUE when packing is done.
Step 5: when the sending time point arrives, the MACs of the primary and secondary cells check the sending flag; if it is TRUE, the data is sent to the PL and then transmitted over the air interface.
In this procedure the RLC of the primary cell completes the whole data packing process, the RLC of the secondary cell does not undertake any task, and the packing of the transport blocks (TBs) is done on core C1. When the data volume is small, the packing may finish before the MAC transmission time point; when the data volume is large it may not, causing data transmission failure.
In view of this, embodiments of the present application provide a data packing method and apparatus that monitor the load of each core in real time and select the cores that perform data packing according to those loads, so that the packing can be completed on time within the existing timing.
It is to be understood that the terms "first," "second," and the like in the description herein are used for descriptive purposes only and not for purposes of indicating or implying relative importance, nor order.
The embodiments of the application can be applied to a communication test instrument; the packing of large data volumes described below is based on this scenario.
Fig. 3 is a flowchart of a data packaging method according to an embodiment of the present application, and referring to fig. 3, the method includes:
s301: and acquiring the load value of the current core.
In the embodiment of the present application, the current core is one of the cores included in the digital signal processor DSP where the protocol stack is located. Corresponding to C0-C5 in the DSP shown in FIG. 2, namely, the current core is one of C0-C5.
S302: when a data packing request message is received, determine a data packing policy according to the load value of the current core.
In one possible implementation, the data packing policy may be determined as follows:
comparing the load value of the current core with a set threshold; if the load value of the current core is greater than the set threshold, packing the data using a first data packing policy; and if the load value of the current core is less than the set threshold, packing the data using a second data packing policy.
In some embodiments, packing the data using the first data packing policy includes:
determining identification information of a load sharing core;
sending a packing request message to the core corresponding to the identification information of the load sharing core;
and, when it is determined that a confirmation message sent by the core corresponding to the identification information of the load sharing core has been received, packing the data using both the core corresponding to the determined identification information of the load sharing core and the current core.
In this embodiment of the present application, the load value of the current core is compared with a set threshold. When the load value of the current core is greater than the set threshold, a load sharing core is selected, a packing request is sent to it, part of the packing task is shared to that core, and the current core and the sharing core then complete the packing task together. If the load value of the current core is less than the set threshold, the current core completes the packing task by itself.
S303: pack the data according to the determined data packing policy.
When the first packing policy is used, the packing task is completed jointly by the current core and the sharing core, as sketched below; when the second packing policy is used, the packing task is completed by the current core alone.
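As a minimal sketch of the flow in S301 to S303, under the assumption of a hypothetical load-monitoring and inter-core messaging API (get_core_load, send_packing_request and the threshold value are illustrative, not defined by the application):

```c
/* Sketch of S301-S303: pick the packing policy from the current core's load.
 * All identifiers and the threshold value are assumptions. */
#define LOAD_THRESHOLD 70                             /* assumed load threshold (%) */

extern unsigned get_core_load(int core_id);           /* hypothetical load monitor */
extern int  select_load_sharing_core(int self_id);    /* hypothetical core selector */
extern void send_packing_request(int core_id);        /* hypothetical inter-core message */
extern void pack_locally(void);                       /* second policy: pack alone */
extern void pack_with_sharing_core(int sharing_core); /* first policy: pack jointly */

void on_data_packing_request(int self_id)
{
    unsigned load = get_core_load(self_id);            /* S301: acquire load value */

    if (load > LOAD_THRESHOLD) {                       /* S302: first packing policy */
        int sharing_core = select_load_sharing_core(self_id);
        send_packing_request(sharing_core);            /* hand part of the work over */
        pack_with_sharing_core(sharing_core);          /* S303: pack in parallel */
    } else {                                           /* S302: second packing policy */
        pack_locally();                                /* S303: current core packs alone */
    }
}
```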
According to the data packing method provided by the embodiments of the application, the load value of the current core is acquired, the current core being one of the cores of the DSP on which the protocol stack is located; when a data packing request message is received, a data packing policy is determined according to the load value of the current core and the data is packed accordingly, so that when the data volume is large the packing task can be completed within the set time and the success rate of data transmission is improved.
Preferably, the load sharing cores in the embodiments of the present application include at least one core; one or two load sharing cores may be used, but their number cannot exceed 5 (there are only 6 cores in total, so at most 5 cores other than the current core can share the load).
In one possible embodiment, the identification information of the load sharing core may be determined as follows:
acquiring the load values of all cores of the digital signal processor (DSP) on which the protocol stack is located;
sorting the load values of all cores according to a preset policy to obtain an ordered result;
and selecting, from the ordered result, the identification information of the load sharing core corresponding to a load value.
In this embodiment, the load values of all cores may be sorted according to a preset policy, for example from large to small or from small to large, and the core(s) with the smaller load values are then selected. For example, assume the load values of the cores, ordered from large to small, are 60, 50, 45, 30, 20 and 10; the core with load value 10 and/or the core with load value 20 may then be selected.
Alternatively, the composite load of each core may be calculated according to certain weights and the composite loads sorted. It is understood that the preset policy is not limited to the examples given here; other policies may be used, and the application does not limit this.
Of course, it is understood that the magnitudes of the load values and the selected cores are not limited to the above examples.
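A minimal sketch of the selection step described above, assuming a six-core DSP and a hypothetical per-core load query; picking the single least-loaded core is one possible reading, and the text equally allows picking the two least-loaded cores.

```c
/* Sketch: gather all core loads, sort ascending, and choose the least-loaded
 * core other than the current one as the load sharing core. Identifiers are
 * hypothetical. */
#include <stdlib.h>

#define NUM_CORES 6                               /* C0..C5 on DSP2 */

extern unsigned get_core_load(int core_id);       /* hypothetical load monitor */

typedef struct { int core_id; unsigned load; } core_load_t;

static int by_load_ascending(const void *a, const void *b)
{
    const core_load_t *x = a, *y = b;
    return (int)x->load - (int)y->load;
}

int select_load_sharing_core(int self_id)
{
    core_load_t cores[NUM_CORES];
    for (int i = 0; i < NUM_CORES; i++) {
        cores[i].core_id = i;
        cores[i].load    = get_core_load(i);
    }
    qsort(cores, NUM_CORES, sizeof(cores[0]), by_load_ascending);

    /* With the example loads 60, 50, 45, 30, 20 and 10 this returns the core
     * whose load is 10 (unless that core is the caller itself). */
    for (int i = 0; i < NUM_CORES; i++)
        if (cores[i].core_id != self_id)
            return cores[i].core_id;
    return -1;   /* not reachable when NUM_CORES > 1 */
}
```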
Further, before packing the data using the core corresponding to the determined identification information of the load sharing core and the current core, the method further includes:
determining that a response message indicating completion of data copying has been received from the core corresponding to the identification information of the load sharing core.
The data copying is performed by direct memory access (DMA).
Compared with the prior-art approach of copying with the memory copy function (memcpy), the embodiments of the application copy data by direct memory access (DMA), which reduces the copy latency.
Based on the above embodiments, the aggregation of two cell carriers is taken as an example to describe the process in detail. Fig. 4 is a flowchart of another data packing method provided by an embodiment of the present application. In Fig. 4, steps 1 to 4 and step 5 are the same as in the prior art and are not repeated here; the description focuses on what happens between step 4 and step 5.
(1) The primary-cell RLC core C1 (Core 1) first performs the virtual packing of the secondary-cell TB0 and TB1 according to the resource allocation result reported by the MAC. After completing the virtual packing of one TB block, it obtains from the load monitoring module the number of the core that should take on the copy task; if the load threshold is exceeded and the sharing core selected from the priority list is C4, C1 sends a data packing request to the secondary-cell RLC core C4 to trigger the packing operation of core C4. After the virtual packing of the primary-cell TB is completed, if C1 determines that its own load does not exceed the threshold, the packing of the primary-cell TB is completed by C1 itself without sharing.
(2) The secondary-cell RLC core C4 completes the actual packing from the virtual packing result using DMA, and sends a packing-complete response to the primary-cell RLC core C1.
(3) After receiving the response message, the primary-cell RLC core C1 determines the source of the message (i.e., who sent the response) from the message header. Once it determines that the data of both the primary and secondary cells has been moved, it releases the RLC entity node and sends a data sending request to the MAC, so that the data is transmitted over the air interface.
In the embodiments of the present application, the C1 core can be understood as the decision module, and C0 and C2 to C5 as monitoring modules. Each monitoring module periodically reports a group of parameters of its core, for example CPU and IO load information, to the decision module; after receiving these parameters, the decision module calculates the composite load of each monitored core according to certain weights and sorts the loads.
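A minimal sketch of this report-and-weigh step, assuming the report carries CPU and IO utilisation and a fixed 70/30 weighting; both the message layout and the weights are assumptions, since the application only says "according to certain weights".

```c
/* Sketch of the periodic load report and the weighted composite load
 * computed by the decision core (C1). Layout and weights are assumptions. */
typedef struct {
    int      core_id;    /* reporting core, C0 or C2..C5 */
    unsigned cpu_load;   /* CPU utilisation in percent */
    unsigned io_load;    /* IO utilisation in percent */
} load_report_t;

#define W_CPU 70          /* assumed weight of CPU load */
#define W_IO  30          /* assumed weight of IO load  */

/* Composite load value the decision core sorts on. */
static unsigned composite_load(const load_report_t *r)
{
    return (r->cpu_load * W_CPU + r->io_load * W_IO) / (W_CPU + W_IO);
}
```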
Further, RLC downlink data processing mainly consists of two steps: data packing and data moving. The packing processing of the data is not independent and therefore cannot be split into independent parallel processing on two cores, but the packed TBs are independent of each other, so the data-moving step can be handled by two (or more) cores separately. In this solution a task is therefore added on each of the other cores; it is dedicated to receiving the data-moving request from the multi-core load module, completes the actual data copy according to the message content, and replies to the decision core once the copy is done. Meanwhile, to further reduce the copy latency, all copy tasks in this solution use DMA instead of the traditional memcpy, further improving the copy performance compared with the existing scheme. Fig. 5 is an interaction diagram of the load sharing modules provided in an embodiment of the present application.
In Fig. 5, the C1 decision core sends a copy request to one or more copy cores; a copy core performs the copy task after receiving the request and responds to the C1 core once the copy is complete. In Fig. 5, O_RLC_DMA_REQ indicates that the decision core sends a data copy DMA request to a load core, O_RLC_MEMORY_COPY_RSP indicates that the load core sends a response to the decision core after completing the copy task, O_DDRLC_DMA_RSP indicates that the driver reports copy completion to the local core, and OSP_STRU_TSM_DMA_TRANSFER_ARG indicates that the local core sends a DMA copy request to the driver. DD denotes the device driver.
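The following sketch shows how the copy-core task of Fig. 5 could look. The message names follow the text; their fields, the queue API (osp_receive/osp_send) and the DMA driver calls are assumptions, not an actual driver interface.

```c
/* Sketch of the copy-core task described for Fig. 5. Message names come from
 * the text; message fields, queue API and driver calls are hypothetical. */
#include <stddef.h>

enum msg_id {
    O_RLC_DMA_REQ,          /* decision core -> load core: copy this data   */
    O_RLC_MEMORY_COPY_RSP,  /* load core -> decision core: copy finished    */
    O_DDRLC_DMA_RSP,        /* driver -> local core: DMA transfer complete  */
};

typedef struct {
    enum msg_id id;
    int         src_core;   /* lets the receiver tell who sent the message  */
    void       *src;        /* assumed copy parameters                      */
    void       *dst;
    size_t      len;
} rlc_msg_t;

typedef struct { void *src; void *dst; size_t len; } OSP_STRU_TSM_DMA_TRANSFER_ARG;

extern int  osp_receive(rlc_msg_t *out);                 /* hypothetical message queue */
extern void osp_send(int core_id, const rlc_msg_t *msg);
extern void dd_dma_transfer(const OSP_STRU_TSM_DMA_TRANSFER_ARG *arg); /* hypothetical driver */
extern void dd_dma_wait_done(void);                      /* blocks until O_DDRLC_DMA_RSP */

/* Task added on every sharing-capable core: take a copy request, hand it to
 * the DMA driver instead of calling memcpy, then answer the decision core. */
void copy_core_task(int self_id)
{
    rlc_msg_t msg;
    while (osp_receive(&msg) == 0) {
        if (msg.id != O_RLC_DMA_REQ)
            continue;

        OSP_STRU_TSM_DMA_TRANSFER_ARG arg = { msg.src, msg.dst, msg.len };
        dd_dma_transfer(&arg);         /* local core -> driver: DMA copy request */
        dd_dma_wait_done();            /* driver -> local core: O_DDRLC_DMA_RSP  */

        rlc_msg_t rsp = { O_RLC_MEMORY_COPY_RSP, self_id, NULL, NULL, 0 };
        osp_send(msg.src_core, &rsp);  /* load core -> decision core (C1)        */
    }
}
```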
Based on the same concept as the data packing method above, an embodiment of the present invention further provides a data packing apparatus. Fig. 6 is a structural block diagram of the data packing apparatus according to an embodiment of the present application; it includes an obtaining unit 601, a decision unit 602 and a processing unit 603.
The obtaining unit 601 is configured to obtain a load value of a current core, where the current core is one of the cores of the digital signal processor (DSP) on which the protocol stack is located.
The decision unit 602 is configured to determine a data packing policy according to the load value of the current core obtained by the obtaining unit when a data packing request message is received.
The processing unit 603 is configured to pack the data according to the data packing policy determined by the decision unit.
Optionally, the decision unit 602 is specifically configured to determine the data packing policy according to the load value of the current core as follows:
comparing the load value of the current core with a set threshold;
if the load value of the current core is greater than the set threshold, packing the data using a first data packing policy;
and if the load value of the current core is less than the set threshold, packing the data using a second data packing policy.
Optionally, the processing unit 603 is specifically configured to pack the data using the first data packing policy as follows:
determining identification information of a load sharing core;
sending a packing request message to the core corresponding to the identification information of the load sharing core, and packing the data using both the core corresponding to the determined identification information of the load sharing core and the current core.
Optionally, the load sharing core includes at least one core;
the decision unit 602 is specifically configured to determine the identification information of the load sharing core as follows:
acquiring the load values of all cores of the digital signal processor (DSP) on which the protocol stack is located;
sorting the load values of all cores according to a preset policy to obtain an ordered result;
and selecting, from the ordered result, the identification information of the load sharing core corresponding to a load value.
Optionally, the decision unit 602 is further configured to: determine that a response message indicating completion of data copying has been received from the core corresponding to the identification information of the load sharing core, where the data copying is performed by direct memory access (DMA).
It should be noted that, for the implementation of the functions of the units of the data packing apparatus in the embodiments of the present invention, reference may be made to the description of the related method embodiments, which is not repeated here.
An embodiment of the present application further provides another data packing apparatus; as shown in Fig. 7, the apparatus includes:
a memory 702 for storing program instructions.
A transceiver 701 for receiving and transmitting data packet packaging instructions.
The processor 700 is configured to call the program instructions stored in the memory and, according to the instructions received by the transceiver 701, execute any of the method flows described in the embodiments of the present application according to the obtained program. The processor 700 is configured to implement the methods performed by the decision unit (602) and the processing unit (603) shown in Fig. 6.
Where in fig. 7, the bus architecture may include any number of interconnected buses and bridges, with one or more processors represented by processor 700 and various circuits of memory represented by memory 702 being linked together. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. The bus interface provides an interface.
The transceiver 701 may be a plurality of elements, i.e., it includes a transmitter and a receiver, providing a means for communicating with various other apparatuses over a transmission medium.
The processor 700 is responsible for managing the bus architecture and general processing, and the memory 702 may store data used by the processor 700 in performing operations.
The processor 700 may be a Central Processing Unit (CPU), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), or a Complex Programmable Logic Device (CPLD).
Embodiments of the present application also provide a computer storage medium storing the computer program instructions used by any apparatus described in the embodiments of the present application, including a program for executing any method provided in the embodiments of the present application.
The computer storage media may be any available media or data storage device that can be accessed by a computer, including, but not limited to, magnetic memory (e.g., floppy disks, hard disks, magnetic tape, magneto-optical disks (MOs), etc.), optical memory (e.g., CDs, DVDs, BDs, HVDs, etc.), and semiconductor memory (e.g., ROMs, EPROMs, EEPROMs, non-volatile memory (NAND FLASH), Solid State Disks (SSDs)), etc.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (12)

1. A data packing method, comprising:
acquiring a load value of a current core, wherein the current core is one of the cores of the digital signal processor (DSP) on which the protocol stack is located;
when a data packing request message is received, determining a data packing policy according to the load value of the current core;
and packing the data according to the determined data packing policy.
2. The method of claim 1, wherein determining a data packing policy according to the load value of the current core comprises:
comparing the load value of the current core with a set threshold;
if the load value of the current core is greater than the set threshold, packing the data using a first data packing policy;
and if the load value of the current core is less than the set threshold, packing the data using a second data packing policy.
3. The method of claim 2, wherein packing the data using the first data packing policy comprises:
determining identification information of a load sharing core;
sending a packing request message to the core corresponding to the identification information of the load sharing core, and packing the data using both the core corresponding to the determined identification information of the load sharing core and the current core.
4. The method of claim 3, wherein the load sharing core comprises at least one core;
determining the identification information of the load sharing core comprises:
acquiring the load values of all cores of the digital signal processor (DSP) on which the protocol stack is located;
sorting the load values of all cores according to a preset policy to obtain an ordered result;
and selecting, from the ordered result, the identification information of the load sharing core corresponding to a load value.
5. The method of claim 3, wherein before packing the data using the core corresponding to the determined identification information of the load sharing core and the current core, the method further comprises:
determining that a response message indicating completion of data copying has been received from the core corresponding to the identification information of the load sharing core, wherein the data copying is performed by direct memory access (DMA).
6. A data packing apparatus, comprising:
an acquisition unit, configured to acquire a load value of a current core, wherein the current core is one of the cores of the digital signal processor (DSP) on which the protocol stack is located;
a decision unit, configured to determine a data packing policy according to the load value of the current core acquired by the acquisition unit when a data packing request message is received;
and a processing unit, configured to pack the data according to the data packing policy determined by the decision unit.
7. The apparatus of claim 6, wherein the decision unit is specifically configured to determine the data packing policy according to the load value of the current core as follows:
comparing the load value of the current core with a set threshold;
if the load value of the current core is greater than the set threshold, packing the data using a first data packing policy;
and if the load value of the current core is less than the set threshold, packing the data using a second data packing policy.
8. The apparatus of claim 7, wherein the processing unit is specifically configured to pack the data using the first data packing policy as follows:
determining identification information of a load sharing core;
sending a packing request message to the core corresponding to the identification information of the load sharing core, and packing the data using both the core corresponding to the determined identification information of the load sharing core and the current core.
9. The apparatus of claim 7, wherein the load sharing core comprises at least one core;
the decision unit is specifically configured to determine the identification information of the load sharing core as follows:
acquiring the load values of all cores of the digital signal processor (DSP) on which the protocol stack is located;
sorting the load values of all cores according to a preset policy to obtain an ordered result;
and selecting, from the ordered result, the identification information of the load sharing core corresponding to a load value.
10. The apparatus of claim 7, wherein the decision unit is further configured to: determine that a response message indicating completion of data copying has been received from the core corresponding to the identification information of the load sharing core, wherein the data copying is performed by direct memory access (DMA).
11. A data packing apparatus, comprising:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and executing the method of any one of claims 1 to 5 according to the obtained program.
12. A computer-readable storage medium storing computer instructions which, when executed on a computer, cause the computer to perform the method of any one of claims 1 to 5.
CN201910652004.0A 2019-07-18 Data packet method and device Active CN112243266B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910652004.0A CN112243266B (en) 2019-07-18 Data packet method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910652004.0A CN112243266B (en) 2019-07-18 Data packet method and device

Publications (2)

Publication Number Publication Date
CN112243266A (en) 2021-01-19
CN112243266B (en) 2024-04-19

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6041400A (en) * 1998-10-26 2000-03-21 Sony Corporation Distributed extensible processing architecture for digital signal processing applications
US20020056030A1 (en) * 2000-11-08 2002-05-09 Kelly Kenneth C. Shared program memory for use in multicore DSP devices
CN101335603A (en) * 2008-07-17 2008-12-31 华为技术有限公司 Data transmission method and apparatus
US20100185766A1 (en) * 2009-01-16 2010-07-22 Fujitsu Limited Load distribution apparatus, load distribution method, and storage medium
CN102131239A (en) * 2010-10-27 2011-07-20 华为技术有限公司 Business processing unit and method, business control gateway and load balancing method
CN106171004A (en) * 2015-02-09 2016-11-30 华为技术有限公司 A kind of RLC packet shunt method and base station
US20170339599A1 (en) * 2015-02-09 2017-11-23 Huawei Technologies Co., Ltd. Rlc data packet offloading method and base station
CN105045658A (en) * 2015-07-02 2015-11-11 西安电子科技大学 Method for realizing dynamic dispatching distribution of task by multi-core embedded DSP (Data Structure Processor)
CN106937331A (en) * 2015-12-31 2017-07-07 大唐移动通信设备有限公司 A kind of baseband processing method and device
US20190045396A1 (en) * 2016-02-03 2019-02-07 Zte Corporation Data packet sending method, data packet receiving method, data packet sending device and data packet receiving device
CN109905898A (en) * 2017-12-07 2019-06-18 北京中科晶上科技股份有限公司 Baseband processing resource distribution method
CN109508237A (en) * 2018-12-18 2019-03-22 北京神州绿盟信息安全科技股份有限公司 A kind of processing method and processing device of long term evolution LTE protocol stack data interaction

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
魏智伟 (Wei Zhiwei): "Design and implementation of SRIO-based data transmission between multi-core DSPs" (多核DSP间基于SRIO数据传输的设计与实现), 微型机与应用 (Microcomputer & Its Applications), no. 04, 25 February 2017 (2017-02-25) *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113132061A (en) * 2021-04-23 2021-07-16 展讯通信(上海)有限公司 Uplink data transmission method, device, storage medium and terminal
CN113132061B (en) * 2021-04-23 2022-11-25 展讯通信(上海)有限公司 Uplink data transmission method, device, storage medium and terminal
CN114039989A (en) * 2021-10-28 2022-02-11 诺领科技(南京)有限公司 Low-cost and low-power-consumption data processing method based on NB-IoT system

Similar Documents

Publication Publication Date Title
WO2016119160A1 (en) Data transmission method and device
JP2019533395A (en) Method and system for sending and receiving data
CN110999385A (en) Method and apparatus relating to splitting bearers with an uplink in a new radio
CN102348292B (en) Data transmission method and device based on MAC (media access control) sublayer and RLC (radio link control) sublayer
WO2017049558A1 (en) Uplink data transmission method and device
WO2018233446A1 (en) Data transmission method, apparatus and system, network element, storage medium and processor
CN109842947B (en) Base station task oriented scheduling method and system
US20210067459A1 (en) Methods, systems and appratuses for optimizing time-triggered ethernet (tte) network scheduling by using a directional search for bin selection
EP2996417A1 (en) Timing adjustment method and device
EP3836735A1 (en) Method and apparatus for restoring data radio bearer, and storage medium and electronic apparatus
TW201836328A (en) Method and device for transmitting data using protocol data unit
CN113453315B (en) Terminal access method, device and storage medium
CN105992365B (en) A kind of resource allocation, service customizing method and device
KR20160036947A (en) Base station and control method thereof
CN109688606A (en) Data processing method, device, computer equipment and storage medium
WO2019062725A1 (en) Method and device for uplink data transmission
CN112243266B (en) Data packet method and device
CN112243266A (en) Data packaging method and device
CN108075870B (en) Method and device for inter-station carrier aggregation scheduling
CN112020099B (en) Uplink data distribution method and device
CN107566093B (en) RLC ARQ method of cross-base-station carrier aggregation system and base station
WO2021036621A1 (en) Information interaction method and related device
CN107889152A (en) More idle port communication method and apparatus
CN107204836B (en) Parallel scheduling method and device for carrier aggregation
CN111741534B (en) Resource scheduling method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant