CN116193508B - Multithreading acceleration processing control method for user plane data of core network - Google Patents


Info

Publication number
CN116193508B
CN116193508B
Authority
CN
China
Prior art keywords
data
service
module
data frame
service module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310473606.6A
Other languages
Chinese (zh)
Other versions
CN116193508A (en)
Inventor
康志杰
候春辉
马铁军
张治涛
白清
孙磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HEBEI FAREAST COMMUNICATION SYSTEM ENGINEERING CO LTD
Original Assignee
HEBEI FAREAST COMMUNICATION SYSTEM ENGINEERING CO LTD
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by HEBEI FAREAST COMMUNICATION SYSTEM ENGINEERING CO LTD
Priority to CN202310473606.6A
Publication of CN116193508A
Application granted
Publication of CN116193508B
Status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/0289: Congestion control
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W24/00: Supervisory, monitoring or testing arrangements
    • H04W24/02: Arrangements for optimising operational condition
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/0231: Traffic management based on communication conditions
    • H04W28/0236: Traffic management based on communication conditions: radio quality, e.g. interference, losses or delay
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/0278: Traffic management using buffer status reports
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/06: Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W56/00: Synchronisation arrangements
    • H04W56/003: Arrangements to increase tolerance to errors in transmission or reception timing
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention discloses a multithreading acceleration processing control method for core network user plane data, belonging to the field of computer network communication. The service modules of the invention exhibit high cohesion and low coupling, which simplifies the implementation of the core network user plane service logic. In line with the characteristics of the service domain, a single-control-thread, multiple-worker-thread model is adopted: barrier synchronization between the control thread and the worker threads replaces thread synchronization locks and realizes cross-thread access to context data without locking. Data frame queues provide zero-memory-copy data transfer between service modules, shortening the inter-module transfer delay of data frames. Finally, by batching the data stream, multiple data frames can be processed in one service processing cycle, improving the efficiency of service data processing.

Description

Multithreading acceleration processing control method for user plane data of core network
Technical Field
The invention relates to the field of computer network communication, and in particular to a multithreading acceleration processing control method for core network user plane data.
Background
LTE (Long Term Evolution) is the long term evolution technology of the 3GPP radio access network. As a fourth-generation mobile communication technology, it can rapidly transmit voice, text, video and image information and meet users' demands for wireless services. The LTE system consists of two parts. The radio access network (eNodeB) provides wireless connectivity and user access to the core network. The core network (EPC) is mainly responsible for user management and data link transmission; the network elements involved in the core network user plane function are the Serving Gateway (SGW) and the Packet Data Network Gateway (PGW). The SGW terminates the eNodeB user plane, serves as the local anchor point for handover between eNodeBs, and routes and forwards data packets. The PGW acts as the border gateway, forwarding data traffic between the SGW and the data network and providing functions such as user bearer control, charging and address allocation. User plane data is carried by the UDP-based GPRS tunnelling protocol (GTP) to meet the real-time requirements of the system.
The core network user plane has many functions and complex logic, and users in specialized fields place strict requirements on wireless network indexes such as throughput and delay. A control method for high-speed processing of core network user plane data is therefore needed to raise the data throughput of the core network and reduce the data exchange delay.
Disclosure of Invention
The invention provides a multithreading acceleration processing control method for core network user plane data, addressing the problem that the processing speed of core network user plane data can hardly meet indexes such as high throughput and low delay, so as to achieve high-speed processing of the user plane data.
The technical solution adopted by the invention is as follows:
a multithread acceleration processing control method for core network user plane data comprises the following steps:
(1) Establishing a control thread and a plurality of working threads;
(2) Seven business modules are established for each working thread: the system comprises a data receiving module, a data sending module, a user data module, a GTP module, a packet filter module, a service quality module and a charging control module, wherein each service module respectively realizes one data processing link in the uplink or downlink data processing process;
(3) Creating a data frame buffer pool and distributing a data frame queue for each service module; the control thread establishes user bearing information for each service module and maintains the topological relation between the service modules;
(4) The control thread initiates barrier synchronization and waits for all working threads to enter a blocking state;
(5) The control thread modifies the user bearing information of the service modules according to the service request and adjusts the topological relation among the service modules, and then destroys barrier synchronization to realize lock-free access of the user bearing information among the multiple threads;
(6) The working thread processes data according to the new user bearing information, and the specific mode is as follows:
(601) The data receiving module adopts a user-mode polling driving mechanism to realize high-speed data receiving;
(602) The data receiving module reads the data cache address, applies for data frame resources from the data frame cache pool, and stores the data cache address into the data frame;
(603) The data receiving module acquires the data frame queue information of the next service module A according to the topological relation of the service modules, and presses the data frames into the data frame queues of the service modules A;
(604) The data receiving module suspends a data frame queue of the service module A to mark that the queue has data to be processed;
(605) Repeating the steps (602) - (604) until the maximum batch processing times of the data receiving module are reached, and entering the next step of processing;
(606) The working thread polls the suspended queue and enters a service module A to which the current suspended queue belongs;
(607) The service module A reads a first data frame in a data frame queue of the service module A and processes the data frame according to processing logic of the service module A;
(608) The service module A acquires the data frame queue information of the next service module B according to the topological relation of the service module, and presses the processed data frame into the data frame queue of the service module B;
(609) The service module A suspends a data frame queue of the service module B to mark that the queue has data to be processed;
(610) Repeating the steps (607) - (609) until the maximum batch processing times of the service module A are reached, and entering the next step of processing;
(611) The working thread polls the suspension queues, repeats steps (606) - (610) until all suspension queues are processed, and returns to step (601);
when a new user accesses the network, the control thread establishes user bearing information for each service module again, updates the topological relation among the service modules, and repeats the steps (4) - (6).
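The frame-pool and queue mechanics of steps (3) and (602)-(604) can be sketched as follows. This is a minimal, single-threaded illustration, not the patented implementation; the names `DataFrame`, `FramePool` and `buf_addr` are invented for the sketch. The point it shows is that only a small frame descriptor holding the buffer address travels between module queues, so the packet payload itself is never copied.

```python
from collections import deque

class DataFrame:
    """Frame descriptor: carries the buffer address, not the payload itself."""
    __slots__ = ("buf_addr", "length")
    def __init__(self):
        self.buf_addr = None    # address of the receive (DMA) buffer
        self.length = 0

class FramePool:
    """Pre-allocated pool of frame descriptors, created once at startup (step (3))."""
    def __init__(self, size):
        self.free = deque(DataFrame() for _ in range(size))
    def alloc(self):            # step (602): apply for a data frame resource
        return self.free.popleft()
    def release(self, frame):   # return the descriptor once processing is done
        frame.buf_addr, frame.length = None, 0
        self.free.append(frame)

pool = FramePool(1024)
queue_a = deque()               # data frame queue of the next service module A

# Steps (602)-(603): store the buffer address in a frame and push the frame
# itself into module A's queue; only the reference moves, zero memory copy.
frame = pool.alloc()
frame.buf_addr, frame.length = 0x7F000000, 128
queue_a.append(frame)
assert queue_a[0] is frame      # the very same object: the payload was not copied
```

Releasing the descriptor back to the pool after the last module avoids per-packet allocation, which is the usual motivation for such pools in packet-processing pipelines.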
The beneficial effects of the invention are as follows:
(1) In line with the characteristics of the service domain, the embodiment of the invention adopts a single-control-thread, multiple-worker-thread model; barrier synchronization between the control thread and the worker threads avoids thread synchronization locks and realizes lock-free access among the threads.
(2) Data frame queues realize zero-memory-copy data transfer between service modules, shortening the inter-module transfer delay of data frames.
(3) A data stream batching algorithm allows multiple data frames to be processed in one service processing cycle, improving the efficiency of service data processing.
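The pending-queue batch loop behind effect (3) can be sketched single-threaded as follows. All names here (`pending`, `MAX_BATCH`, the reduced two-module chain) are invented for the sketch, and the batch limit value is arbitrary; the patent does not prescribe these details.

```python
from collections import deque

MAX_BATCH = 4                           # per-module batch limit (arbitrary value)

# A reduced two-module chain standing in for the seven service modules.
queues = {"gtp": deque(), "qos": deque()}
topology = {"gtp": "qos", "qos": None}  # None: last module, frame leaves the thread
pending = deque()                       # queues marked as having data to process
rx = deque(range(10))                   # frames waiting at the network interface
sent = []

def mark_pending(name):
    if name not in pending:             # hang the queue on the pending list once
        pending.append(name)

def receive_batch():
    """Pull up to MAX_BATCH received frames into the first module's queue."""
    for _ in range(MAX_BATCH):
        if not rx:
            break
        queues["gtp"].append(rx.popleft())
        mark_pending("gtp")

def run_cycle():
    """One service processing cycle: receive a batch, then drain pending queues."""
    receive_batch()
    while pending:
        name = pending.popleft()
        q, nxt = queues[name], topology[name]
        for _ in range(MAX_BATCH):      # honor this module's batch limit
            if not q:
                break
            frame = q.popleft()         # module-specific processing would go here
            if nxt is None:
                sent.append(frame)      # last module: frame is transmitted
            else:
                queues[nxt].append(frame)
                mark_pending(nxt)
        if q:                           # data left over: re-hang the queue
            mark_pending(name)

while rx or pending:
    run_cycle()

assert sent == list(range(10))          # every frame traversed the chain in order
```

Because each module handles up to `MAX_BATCH` frames before yielding, one pass over the pending list moves a whole batch through the chain, which is where the per-cycle efficiency gain comes from.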
Drawings
To illustrate the technical solution of the invention more clearly, the drawings needed in the description are briefly introduced below.
Fig. 1 is a diagram of a multithreading model of a user plane of a core network in an embodiment of the invention.
Fig. 2 is a schematic diagram of a core network user plane multithreaded lock-free access in an embodiment of the invention.
Fig. 3 is a flowchart of processing working thread data of a core network user plane in an embodiment of the present invention.
Detailed Description
To aid understanding of the technical solutions in the present application, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings, so that the advantages and features of the invention can be more easily understood by those skilled in the art and the protection scope of the invention is defined more clearly.
A multithreading acceleration processing control method for core network user plane data is shown in Fig. 1. The method adopts a single-control-thread, multiple-worker-thread model: the control thread handles the establishment of user bearer information and maintains the topological relation between the service modules, while the worker threads handle the processing of service data. Seven service modules are established for each worker thread: a data receiving module, a data sending module, a user data module, a GTP module, a packet filter module, a quality of service module and a charging control module. Each service module implements one link in the uplink or downlink data processing chain; service data is exchanged among the service modules within a worker thread through data frame queues; and, according to the user bearer information and the module topology, the service modules realize functions such as GTP encapsulation, packet bearer filtering, data flow control and charging control of the core network service data.
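One way to picture the topology the control thread maintains is a successor map: each service module maps to the next module, so a worker thread forwards a frame by a dictionary lookup rather than by copying data. The chain order and all names below (`topology`, `queues`, `forward`) are illustrative assumptions, not details taken from the patent.

```python
from collections import deque

# One data frame queue per service module (step (3)); the module names match
# the seven modules in the text, but the chain order is an invented example.
modules = ["recv", "gtp", "packet_filter", "qos", "charging", "user_data", "send"]
queues = {m: deque() for m in modules}

# Topology maintained by the control thread: module -> its successor.
topology = {
    "recv": "gtp",
    "gtp": "packet_filter",
    "packet_filter": "qos",
    "qos": "charging",
    "charging": "user_data",
    "user_data": "send",
}

def forward(current, frame):
    """Look up the next module and push the frame into its queue (steps (603)/(608))."""
    nxt = topology[current]
    queues[nxt].append(frame)
    return nxt

assert forward("recv", {"buf_addr": 0x1000}) == "gtp"
assert len(queues["gtp"]) == 1
```

When the control thread adjusts the topology for a new bearer, only this map changes; the worker threads' forwarding code stays the same.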
FIG. 2 is a schematic diagram of the multithreaded lock-free access. The user plane adopts a single-control-thread, multiple-worker-thread model: the control thread establishes the user bearer control information, and the worker threads process service data according to the user bearer information and the module topology. The multithreaded lock-free access specifically comprises the following steps:
On the control thread side:
S101: when a new user accesses the network, the control thread initiates barrier synchronization;
S102: the control thread waits for all worker threads to enter the blocked state;
S103: the control thread modifies the user bearer information of the service modules according to the service request and adjusts the topological relation among the service modules;
S104: after the modification, the control thread destroys the barrier;
S105: the control thread waits for the worker threads to exit the blocked state, completing the whole data synchronization operation.
On the worker thread side:
S106: before each logic processing cycle, the worker thread checks whether the control thread has initiated a barrier synchronization request; if not, it jumps to step S110 and continues with the subsequent logic;
S107: if the control thread has initiated a barrier synchronization request, the number of blocked threads is incremented by one;
S108: the worker thread enters the blocked state and waits for the control thread to finish the barrier synchronization;
S109: upon receiving the barrier destruction request, the worker thread decrements the number of blocked threads by one;
S110: the worker thread enters the service logic processing flow and processes and forwards data according to the new user bearer information and the module topology, realizing the lock-free access algorithm among the threads.
When a new user accesses the network, the control thread re-establishes user bearer information for each service module, updates the topological relation among the service modules, and repeats the above control thread steps and the related worker thread steps.
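The control-thread/worker-thread handshake of steps S101-S110 can be sketched with ordinary event objects and a blocked-thread counter. This Python sketch only demonstrates the control flow, not the performance benefit of the native design; the names `sync_req`, `sync_done` and `blocked` are invented here, and the small lock guards only the counter — the shared bearer table itself is read without any lock, which is the point of the scheme.

```python
import threading
import time

NUM_WORKERS = 2
shared = {"bearers": {}}            # user bearer information read by workers
sync_req = threading.Event()        # S101: control thread requests the barrier
sync_done = threading.Event()       # S104: control thread destroys the barrier
blocked = 0                         # number of workers currently blocked
counter_lock = threading.Lock()     # guards only the counter, never the data
stop = threading.Event()
seen = []                           # snapshots of what the workers observed

def worker():
    global blocked
    while not stop.is_set():
        if sync_req.is_set():               # S106: barrier requested?
            with counter_lock:
                blocked += 1                # S107
            sync_done.wait()                # S108: block until barrier destroyed
            with counter_lock:
                blocked -= 1                # S109
        # S110: lock-free read of bearer info during normal processing
        seen.append(dict(shared["bearers"]))
        time.sleep(0.001)

workers = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in workers:
    t.start()

sync_req.set()                              # S101: initiate barrier sync
while True:                                 # S102: wait until all workers block
    with counter_lock:
        if blocked == NUM_WORKERS:
            break
shared["bearers"]["ue1"] = {"teid": 0x42}   # S103: modify while workers blocked
sync_req.clear()
sync_done.set()                             # S104: destroy the barrier
time.sleep(0.05)                            # S105: let workers resume and run
stop.set()
for t in workers:
    t.join()
```

Because every worker is parked in `sync_done.wait()` while the control thread edits the bearer table, the workers never observe a half-updated table even though their normal read path takes no lock.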
Fig. 3 is a flowchart of the worker thread data processing on the core network user plane, which specifically comprises the following steps:
S201: each service module performs its initialization, creating the data frame buffer pool and allocating a data frame queue to each service module;
S202: the data receiving module adopts a user-mode polling driver to receive data at high speed;
S203: the data receiving module reads the data buffer address, applies for a data frame resource from the data frame buffer pool, and stores the buffer address into the data frame;
S204: the data receiving module obtains, from the module topology, the data frame queue of the next service module A and pushes the data frame into that queue;
S205: the data receiving module places the data frame queue of service module A on the pending list, marking that the queue has data to be processed;
S206: steps S202-S205 are repeated until the maximum batch size of the data receiving module is reached, and the next operation is executed;
S207: the worker thread judges whether the pending list is empty; if it is empty, the flow jumps to step S202 to receive network data;
S208: if the pending list is not empty, the worker thread polls it and enters the service module A to which the current pending queue belongs;
S209: service module A reads the first data frame in its data frame queue and processes it according to its own processing logic;
S210: service module A obtains, from the module topology, the data frame queue of the next service module B and pushes the processed data frame into that queue;
S211: service module A places the data frame queue of service module B on the pending list, marking that the queue has data to be processed;
S212: steps S208-S211 are repeated until the maximum batch size of service module A is reached, and the flow jumps back to step S207.
in summary, the invention provides a multithreading acceleration processing control method for core network user plane data, wherein the service modules have the characteristics of high cohesion and low coupling, and the realization of core network user plane service logic is simplified. According to the characteristics of the service field, the invention adopts a single control thread and multiple working threads model, utilizes barrier synchronization between the control thread and the working thread, avoids using a synchronous lock algorithm of the threads, and realizes a mutual access algorithm of context data between the multiple threads. The invention also adopts the data frame queue, realizes the data transmission of zero memory copy between service modules, and shortens the transmission time delay of the data frames between the modules. In addition, the invention adopts the data flow batch processing idea, can process a plurality of data frames in one service processing cycle, and improves the efficiency of service data processing.
The invention solves the problems that the processing speed of the user plane data of the core network is difficult to meet the indexes of high throughput rate, low time delay and the like, and achieves the purpose of high-speed processing of the user plane data.
The foregoing description is only illustrative of the present invention and is not intended to limit the scope of the invention, and all equivalent structures or equivalent processes or direct or indirect application in other related technical fields are included in the scope of the present invention.

Claims (1)

1. A multithreading acceleration processing control method for core network user plane data, characterized by comprising the following steps:
(1) Establishing one control thread and a plurality of worker threads;
(2) Establishing seven service modules for each worker thread: a data receiving module, a data sending module, a user data module, a GTP module, a packet filter module, a quality of service module and a charging control module, wherein each service module implements one link in the uplink or downlink data processing chain;
(3) Creating a data frame buffer pool and allocating a data frame queue to each service module, the control thread establishing user bearer information for each service module and maintaining the topological relation between the service modules;
(4) The control thread initiating barrier synchronization and waiting for all worker threads to enter the blocked state;
(5) The control thread modifying the user bearer information of the service modules according to the service request, adjusting the topological relation among the service modules, and then destroying the barrier, thereby realizing lock-free access to the user bearer information among the threads;
(6) The worker threads processing data according to the new user bearer information, specifically:
(601) the data receiving module adopting a user-mode polling driver to receive data at high speed;
(602) the data receiving module reading the data buffer address, applying for a data frame resource from the data frame buffer pool, and storing the buffer address into the data frame;
(603) the data receiving module obtaining, from the module topology, the data frame queue of the next service module A and pushing the data frame into that queue;
(604) the data receiving module placing the data frame queue of service module A on the pending list to mark that the queue has data to be processed;
(605) repeating steps (602)-(604) until the maximum batch size of the data receiving module is reached, then proceeding to the next step;
(606) the worker thread polling the pending list and entering the service module A to which the current pending queue belongs;
(607) service module A reading the first data frame in its data frame queue and processing it according to its own processing logic;
(608) service module A obtaining, from the module topology, the data frame queue of the next service module B and pushing the processed data frame into that queue;
(609) service module A placing the data frame queue of service module B on the pending list to mark that the queue has data to be processed;
(610) repeating steps (607)-(609) until the maximum batch size of service module A is reached, then proceeding to the next step;
(611) the worker thread polling the pending list and repeating steps (606)-(610) until all pending queues have been processed, then returning to step (601);
wherein, when a new user accesses the network, the control thread re-establishes user bearer information for each service module, updates the topological relation among the service modules, and repeats steps (4)-(6).
CN202310473606.6A 2023-04-28 2023-04-28 Multithreading acceleration processing control method for user plane data of core network Active CN116193508B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310473606.6A CN116193508B (en) 2023-04-28 2023-04-28 Multithreading acceleration processing control method for user plane data of core network


Publications (2)

Publication Number Publication Date
CN116193508A (en) 2023-05-30
CN116193508B (en) 2023-06-27

Family

ID=86446622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310473606.6A Active CN116193508B (en) 2023-04-28 2023-04-28 Multithreading acceleration processing control method for user plane data of core network

Country Status (1)

Country Link
CN (1) CN116193508B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106937309A (en) * 2017-02-08 2017-07-07 京信通信技术(广州)有限公司 Data transmission method and device
CN107133092A (en) * 2017-05-24 2017-09-05 努比亚技术有限公司 Multi-thread synchronization processing method, terminal and computer-readable recording medium
CN113840342A (en) * 2020-06-24 2021-12-24 大唐移动通信设备有限公司 Data forwarding and retransmitting method and device
CN115334586A (en) * 2022-10-17 2022-11-11 深圳市领创星通科技有限公司 Data forwarding method and device, computer equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2957141B1 (en) * 2013-02-12 2019-01-02 Altiostar Networks, Inc. Long term evolution radio access network
US9900801B2 (en) * 2014-08-08 2018-02-20 Parallel Wireless, Inc. Congestion and overload reduction


Also Published As

Publication number Publication date
CN116193508A (en) 2023-05-30


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant