WO2024098816A1 - Data transmission processing method and apparatus, storage medium and electronic apparatus - Google Patents

Data transmission processing method and apparatus, storage medium and electronic apparatus

Info

Publication number
WO2024098816A1
WO2024098816A1 · PCT/CN2023/105184 · CN2023105184W
Authority
WO
WIPO (PCT)
Prior art keywords
data
edge node
node
nodes
data transmission
Prior art date
Application number
PCT/CN2023/105184
Other languages
English (en)
French (fr)
Inventor
刘志龙
郭成峰
陈俊江
李军
丁元欣
卢建
Original Assignee
中兴通讯股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中兴通讯股份有限公司
Publication of WO2024098816A1

Definitions

  • the embodiments of the present disclosure relate to the field of communications, and in particular, to a data transmission processing method, device, storage medium, and electronic device.
  • in the related art, media transmission networks built for different scenarios cannot be shared across scenarios, which leads to repeated construction of media transmission networks.
  • to address this, each vendor has built its own unified media transmission network, namely a Real Time Network (RTN), according to its own business to carry the transmission of the various media services.
  • RTN: Real Time Network, i.e., the unified media transmission network.
  • the embodiments of the present disclosure provide a data transmission processing method, device, storage medium and electronic device to at least solve the problem in the related art that only media data is accelerated in the current RTN system and network resource utilization is low.
  • a data transmission processing method, which is applied to a routing scheduling center, and the method includes: receiving a path query request initiated by a starting edge node, wherein the path query request carries data to be transmitted and a destination edge node; determining a data level according to the data type of the data to be transmitted; determining a target data transmission path between the starting edge node and the destination edge node corresponding to the data level; and returning the target data transmission path to the starting edge node, so that the starting edge node transmits the data to be transmitted to the destination edge node according to the target data transmission path.
  • a data transmission processing device which is applied to a routing scheduling center, and the device includes:
  • a first receiving module is configured to receive a path query request initiated by a starting edge node, wherein the path query request carries data to be transmitted and a destination edge node;
  • a first determining module configured to determine a data level according to a data type of the data to be transmitted
  • a second determination module is configured to determine a target data transmission path between the starting edge node and the destination edge node corresponding to the data level
  • the returning module is configured to return the target data transmission path to the starting edge node, so that the starting edge node transmits the data to be transmitted to the destination edge node according to the target data transmission path.
  • a computer-readable storage medium in which a computer program is stored, wherein the computer program is configured to execute the steps of any of the above method embodiments when running.
  • an electronic device including a memory and a processor, wherein the memory stores a computer program, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
  • FIG. 1 is a hardware structure block diagram of a device for a data transmission processing method according to an embodiment of the present disclosure;
  • FIG. 2 is a flow chart of a data transmission processing method according to an embodiment of the present disclosure;
  • FIG. 3 is a schematic diagram of an RTN system for general data hierarchical acceleration according to an embodiment of the present disclosure;
  • FIG. 4 is a block diagram of the composition of an edge node according to an embodiment of the present disclosure;
  • FIG. 5 is a block diagram of the composition of a transit node in the RTN system for general data hierarchical acceleration according to an embodiment of the present disclosure;
  • FIG. 6 is a block diagram of the composition of the routing scheduling center in the RTN system for general data hierarchical acceleration according to an embodiment of the present disclosure;
  • FIG. 7 is a schematic diagram of the encapsulated data format according to an embodiment of the present disclosure;
  • FIG. 8 is a schematic diagram of the data format after re-encapsulation by a forwarding node according to an embodiment of the present disclosure;
  • FIG. 9 is a block diagram of a data transmission processing device according to an embodiment of the present disclosure.
  • FIG. 1 is a hardware structure block diagram of a device on which the data transmission processing method of an embodiment of the present disclosure runs.
  • the device may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device) and a memory 104 for storing data, and the device may also include a transmission device 106 and an input/output device 108 for communication functions.
  • FIG. 1 is only illustrative and does not limit the structure of the above-mentioned device.
  • the device may also include more or fewer components than those shown in FIG. 1, or have a configuration different from that shown in FIG. 1.
  • the memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the data transmission processing method in the embodiment of the present disclosure.
  • the processor 102 executes various functional applications and data transmission processing by running the computer program stored in the memory 104, that is, implementing the above method.
  • the memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
  • the memory 104 may further include a memory remotely arranged relative to the processor 102, and these remote memories may be connected to the device via a network. Examples of the above-mentioned network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
  • the transmission device 106 is used to receive or send data via a network.
  • specific examples of the above network may include a wireless network provided by the communications provider of the device.
  • the transmission device 106 includes a network adapter (Network Interface Controller, referred to as NIC), which can be connected to other network devices through a base station so as to communicate with the Internet.
  • the transmission device 106 can be a radio frequency (Radio Frequency, referred to as RF) module, which is used to communicate with the Internet wirelessly.
  • FIG. 2 is a flow chart of the data transmission processing method according to an embodiment of the present disclosure. As shown in FIG. 2 , the method is applied to a routing scheduling center. The flow chart includes the following steps:
  • Step S202 receiving a path query request initiated by a starting edge node, wherein the path query request carries data to be transmitted and a destination edge node;
  • Step S204 determining the data level according to the data type of the data to be transmitted
  • Step S206 determining a target data transmission path between the starting edge node and the destination edge node corresponding to the data level
  • Step S208 returning the target data transmission path to the originating edge node, so that the originating edge node transmits the to-be-transmitted data to the destination edge node according to the target data transmission path.
  • the data level is determined according to the data type, and the data transmission path corresponding to the data level is planned. Not only media data but also signaling and other non-media data are accelerated, thereby improving the overall user experience of the media transmission service.
  • the accelerated data is finely classified, and matching links can be planned for different media transmission scenarios, thereby maximizing the utilization value of RTN transmission resources while improving the user experience.
  • step S206 may specifically include: acquiring a plurality of pre-planned data transmission paths between the starting edge node and the destination edge node; and determining the target data transmission path corresponding to the data level from the plurality of data transmission paths.
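As an illustration of how such a query could be served, the following sketch assumes the routing scheduling center keeps an in-memory table of pre-planned paths keyed by (starting edge node, destination edge node, data level) and a simple mapping from data type to data level; the node names, the mapping, and the table layout are assumptions made for illustration and are not specified by the disclosure.

```python
# Illustrative sketch: serving a path query at the routing scheduling center.
# The pre-planned path table and the data-type-to-level mapping are assumptions.

PRE_PLANNED_PATHS = {
    # (start_edge_node, destination_edge_node, data_level) -> ordered node list
    ("edge-A", "edge-B", 1): ["edge-A", "transit-1", "transit-3", "edge-B"],
    ("edge-A", "edge-B", 2): ["edge-A", "transit-2", "edge-B"],
}

DATA_TYPE_TO_LEVEL = {   # hypothetical classification of data types
    "media": 1,
    "signaling": 2,
    "non_media": 2,
    "bulk": 4,
}

def handle_path_query(start_node: str, dest_node: str, data_type: str):
    """Determine the data level from the data type, then return the
    pre-planned path between the two edge nodes for that level."""
    level = DATA_TYPE_TO_LEVEL.get(data_type, 4)
    path = PRE_PLANNED_PATHS.get((start_node, dest_node, level))
    return level, path

level, path = handle_path_query("edge-A", "edge-B", "signaling")
print(level, path)   # 2 ['edge-A', 'transit-2', 'edge-B']
```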
  • data transmission paths are planned, which may specifically include: path planning based on the topological relationship between the edge nodes and the transit nodes, and detection data of the edge nodes and the transit nodes, wherein multiple data transmission paths corresponding to different data levels are planned between each group of starting edge nodes and the destination edge nodes; and each group of starting edge nodes and the destination edge nodes, the multiple data transmission paths, and the corresponding data levels are associated and stored.
  • the method further includes: sending a detection request to the edge node and the transit node according to the topological relationship; and receiving the link data detected by the edge node to its connected transit nodes and the link data detected by the transit node to its connected transit nodes, wherein the detection data is the link data between the edge node and its connected transit nodes and the link data between the connected transit nodes.
  • the above path planning may specifically include: S2061, converting the detection data into a link quality index; S2062, planning, according to the topological relationship between the edge nodes and the transit nodes, a plurality of data transmission paths in which the sum of the link quality indexes between each group of starting edge node and destination edge node is less than a preset threshold; and S2063, setting corresponding data levels for the plurality of data transmission paths respectively.
  • the above S2061 may specifically include: determining the quality score of the link data, and determining the sum of the products of the quality scores of the link data and the corresponding weight coefficients as the link quality index. Determining the quality score may further include: acquiring the link data, wherein each piece of link data includes packet loss data, delay data, jitter data, and available bandwidth data; smoothing the packet loss data, the delay data, the jitter data, and the available bandwidth data respectively to obtain smoothed packet loss data, smoothed delay data, smoothed jitter data, and smoothed available bandwidth data; and determining the quality score of the link data according to the smoothed packet loss data, the smoothed delay data, the smoothed jitter data, and the smoothed available bandwidth data.
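A minimal sketch of this conversion is given below, assuming exponential smoothing for each metric, a per-metric score on a common scale, and a weighted sum as the link quality index; the smoothing factor, the normalisation bounds, and the weight coefficients are illustrative choices, not values given in the disclosure.

```python
# Illustrative sketch of converting probe samples into a link quality index.
# Smoothing factor, normalisation bounds, and weights are assumptions.

def smooth(samples, alpha=0.3):
    """Exponentially smooth a series of probe samples."""
    value = samples[0]
    for s in samples[1:]:
        value = alpha * s + (1 - alpha) * value
    return value

def quality_score(loss, delay_ms, jitter_ms, bandwidth_mbps):
    """Score each smoothed metric in [0, 1] (higher is worse) on assumed scales."""
    return {
        "loss": min(loss / 0.05, 1.0),                         # 5% loss -> worst
        "delay": min(delay_ms / 200.0, 1.0),                   # 200 ms -> worst
        "jitter": min(jitter_ms / 50.0, 1.0),                  # 50 ms -> worst
        "bandwidth": 1.0 - min(bandwidth_mbps / 100.0, 1.0),   # 100 Mbps -> best
    }

WEIGHTS = {"loss": 0.4, "delay": 0.3, "jitter": 0.2, "bandwidth": 0.1}  # assumed

def link_quality_index(loss_samples, delay_samples, jitter_samples, bw_samples):
    scores = quality_score(
        smooth(loss_samples), smooth(delay_samples),
        smooth(jitter_samples), smooth(bw_samples),
    )
    # Sum of (quality score x weight coefficient), as described above.
    return sum(scores[k] * WEIGHTS[k] for k in WEIGHTS)

index = link_quality_index([0.01, 0.02], [40, 55], [5, 8], [80, 60])
print(round(index, 3))
```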
  • the method also includes: determining the transmission priority of the data to be transmitted; sending the transmission priority to the starting edge node so that the starting edge node transmits the data to be transmitted to the destination edge node according to the transmission priority and the target data transmission path.
  • This embodiment proposes a general data hierarchical acceleration RTN system to address the problem that the current RTN system only accelerates the media, lacks acceleration measures for signaling and non-media data, and the problem that the current RTN system does not perform refined hierarchical acceleration for media acceleration, resulting in low network resource utilization.
  • the RTN system not only accelerates media data, but also accelerates signaling and other non-media data to improve the overall user experience of media transmission services; in addition, the RTN system performs refined grading for accelerated data, plans high-quality transmission paths for high-priority data, plans general transmission paths for low-priority data, and plans matching links for different media transmission scenarios, thereby maximizing the utilization value of RTN transmission resources while improving user experience.
  • It can be applied to industries related to real-time audio and video communications, such as AR/VR, cloud games, interactive live broadcasts, cloud computers, distance education, video conferencing, video surveillance, and various OTT applications.
  • FIG3 is a schematic diagram of an RTN system for general data hierarchical acceleration according to an embodiment of the present disclosure.
  • the system mainly includes a client, an edge node, a transit node, and a routing scheduling center.
  • the client is an audio and video terminal or related SDK, which is mainly responsible for processing audio and video services and accessing the edge node, covering the audio and video media display part and the audio and video media generation part.
  • the video conferencing scenario includes PC clients, mobile clients, and terminals in conference rooms;
  • the cloud desktop scenario includes thin terminals, PCs, and back-end servers;
  • the AR/VR scenario includes VR glasses, terminal helmets, and back-end servers.
  • the edge node is responsible for client access, data priority setting, and data encapsulation processing, and has data sending and receiving capabilities. It is also responsible for receiving data detection requests from the routing scheduling center and detecting the network parameters of the transit nodes connected to it; the detection data includes packet loss, delay, jitter, bandwidth, and the like, and the edge node reports the detection results to the routing scheduling center.
  • the transit node is responsible for processing the data encapsulation and forwarding the data. It is also responsible for receiving data detection requests from the routing scheduling center and detecting the network parameters of the transit nodes and edge nodes connected to it; the detection data includes packet loss, delay, jitter, bandwidth, and the like, and the transit node reports the detection results to the routing scheduling center.
  • the routing scheduling center is responsible for the management of the overall network topology, sending link detection requests to edge nodes and transit nodes, collecting link network parameter data reported by edge nodes and forwarding nodes, unified planning of data transmission links, and unified management of transmission data priority.
  • FIG4 is a block diagram of the edge node composition according to an embodiment of the present disclosure.
  • the edge node in the RTN system for general data hierarchical acceleration of this embodiment is mainly composed of a client access module, a data classification module, a data forwarding module, and a link detection module.
  • the client access module is responsible for accepting access requests from the client and for data transmission with the client.
  • the data classification module is responsible for classifying the data by type. The data priority is divided into four levels from 1 to 4, and the smaller the number, the higher the priority. Based on the type of data to be transmitted, the module queries the routing scheduling center for the transmission priority of that data type, and at the same time obtains the transmission path planned by the routing scheduling center and the receiving port information of the destination node.
  • after receiving the priority information, the transmission path information, and the receiving port information of the destination node, the data classification module combines them with the original data packet into a new transmission data packet, completing the data encapsulation operation.
  • the data transceiver module is responsible for forwarding the encapsulated data packet to the transit node according to the requirements of the transmission path, among which the data with high priority is forwarded first.
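The "high priority first" send order can be sketched with a priority queue, as below; the four-level numbering with 1 as the highest priority follows the description above, while the queue structure itself is only an illustrative assumption.

```python
import heapq
import itertools

# Illustrative priority-first send queue for encapsulated packets.
# Priority 1 is highest and 4 is lowest, matching the scheme described above.

class SendQueue:
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # keeps FIFO order within one priority

    def push(self, priority: int, packet: bytes):
        heapq.heappush(self._heap, (priority, next(self._seq), packet))

    def pop(self):
        priority, _, packet = heapq.heappop(self._heap)
        return priority, packet

q = SendQueue()
q.push(3, b"bulk data")
q.push(1, b"media frame")
q.push(2, b"signaling")
print([q.pop()[0] for _ in range(3)])   # [1, 2, 3]
```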
  • the link detection module is responsible for receiving detection request instructions from the routing scheduling module; the instruction contains the specific forwarding nodes that the edge node needs to detect. After receiving the instruction, the edge node randomly starts link detection; the detection indicators include packet loss, delay, jitter, bandwidth, and the like. After the detection is completed, the specific detection parameters are sent to the routing scheduling module at regular intervals.
  • FIG. 5 is a block diagram of the composition of the transit node in the RTN system for general data hierarchical acceleration according to an embodiment of the present disclosure.
  • the data receiving module is responsible for receiving, with high performance, data from edge nodes or other transit nodes, and handing the data to the data processing module for processing.
  • the data processing module is responsible for decapsulating the encapsulated data, removing the information of the current node from the path and updating the path length field, exposing the information of the next node in the path so that the data can be forwarded quickly along the path, and then re-encapsulating the data.
  • the data forwarding module is responsible for forwarding the encapsulated data packet to the next transit node according to the requirements of the transmission path, wherein data with a higher priority is forwarded first.
  • the link detection module is responsible for receiving detection request instructions from the routing scheduling module; the instruction contains the specific nodes (including forwarding nodes and other edge nodes) that need to be detected. After receiving the instruction, the node randomly starts link detection; the detection indicators include packet loss, delay, jitter, bandwidth, and the like. After the detection is completed, the specific detection parameters are sent to the routing scheduling module at regular intervals.
  • FIG. 6 is a block diagram of the composition of the routing scheduling center in the RTN system for general data hierarchical acceleration according to an embodiment of the present disclosure.
  • the topology management module is responsible for the management of the topological relationship of the entire RTN network, including the addition and deletion of edge nodes and transit nodes, and the change of the connection relationship between nodes.
  • the detection management module is responsible for sending detection requests to edge nodes and transit nodes, and is responsible for receiving detection data of edge nodes and transit nodes.
  • the path planning module is responsible for planning the path in real time according to the topological relationship and detection data of edge nodes and transit nodes.
  • Each group of start node and destination node has up to four paths planned, which are graded into four levels from 1 to 4: level 1 means the path delay is small and the bandwidth is large; level 2 means the delay is small and the bandwidth is small; level 3 means the delay is large and the bandwidth is large; level 4 means the delay is large and the bandwidth is small.
  • the hierarchical transmission path planned in real time is stored in the memory for edge nodes to query.
  • the data hierarchical management module is responsible for receiving the hierarchical query requests and path query requests of the edge nodes, and sends the hierarchical data and path data to the edge node according to the results of the path planning module.
  • the grading strategy is as follows: media data in low-latency, high-bandwidth scenarios such as cloud computers, cloud games, AR/VR, video conferencing, and interactive live broadcasts uses level 1 paths; signaling and non-media data use level 2 paths; scenarios with high bandwidth requirements but relaxed latency requirements, such as video surveillance and OTT applications, use level 3 paths; and other non-important data transmission uses level 4 paths.
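The grading strategy can be expressed as a simple lookup, sketched below; the scenario keys, the default level, and the rule precedence (signaling always on level 2, unlisted media treated as non-important) are assumptions used only to make the mapping concrete.

```python
# Illustrative encoding of the grading strategy described above.
# Level 1: low delay, large bandwidth; level 2: low delay, small bandwidth;
# level 3: high delay, large bandwidth; level 4: high delay, small bandwidth.

LEVEL_BY_SCENARIO = {
    # low-latency, high-bandwidth scenarios -> media data on level 1
    "cloud_computer": 1, "cloud_game": 1, "ar_vr": 1,
    "video_conference": 1, "interactive_live": 1,
    # bandwidth-heavy, latency-tolerant scenarios -> level 3
    "video_surveillance": 3, "ott": 3,
}

def pick_level(scenario: str, is_media: bool) -> int:
    if not is_media:
        return 2                                   # signaling and other non-media data
    return LEVEL_BY_SCENARIO.get(scenario, 4)      # unlisted media -> non-important, level 4

print(pick_level("cloud_game", is_media=True))          # 1
print(pick_level("cloud_game", is_media=False))         # 2
print(pick_level("video_surveillance", is_media=True))  # 3
```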
  • Step 1 The topology management module of the routing scheduling center saves the nodes and link relationships of the RTN system configured by the user in memory, including the network address information of the edge nodes and the transit nodes;
  • Step 2 The detection management module of the routing scheduling center sends detection requests to the edge nodes and transit nodes according to the topological relationship;
  • Step 3 After receiving the detection request from the routing scheduling center, the link detection modules of the edge nodes and transit nodes randomly start detecting the link network parameters and report the detection results to the path planning module of the routing scheduling center at regular intervals;
  • Step 4 The path planning module of the routing scheduling center performs path planning based on the topological relationship between edge nodes and transit nodes and the real-time detection data.
  • Each group of start nodes and destination nodes plans up to 4 paths, which are divided into four levels from 1 to 4.
  • the hierarchical transmission paths planned in real time are saved in the memory for edge nodes to query.
  • Step 5 When a service request starts and the client connects to the client access module of the edge node, the data classification module of the edge node queries the data classification information and the data transmission path from the data classification management module of the routing scheduling center according to the business type, and then encapsulates the original data into a new data packet in combination with the received data classification information, the data transmission path, the destination port, and other information.
  • FIG. 7 is a schematic diagram of the encapsulated data format according to an embodiment of the present disclosure.
  • the destination port is the receiving port of the last node in the path (generally an edge node, node 6 in FIG. 6); the intermediate forwarding nodes uniformly use a configurable fixed port to reduce port negotiation and speed up transmission.
  • Step 6 After the data is encapsulated, it is sent to the transit node through the data transceiver module of the edge node.
  • Step 7 After the data receiving module of the transit node receives the encapsulated transmission data, the data processing module decapsulates the data, removes the information of the current node from the path and updates the path length information, exposes the information of the next node in the path so that the data can be forwarded quickly along the path, and then re-encapsulates the data.
  • For example, if the current forwarding node is node 1 and the received encapsulated data is as shown in FIG. 7, the data format after removing the information of node 1 and re-encapsulating is as shown in FIG. 8, and the next forwarding node is node 3. FIG. 8 is a schematic diagram of the data format after re-encapsulation by a forwarding node according to an embodiment of the present disclosure.
  • the transit node will determine whether the path length in the encapsulated data is 1 to determine whether the current node is the second to last node in the transmission path. If not, the data will be forwarded to the fixed data receiving port of the next node, as shown in step 8; if the current node is the second to last node in the transmission path, the data will be forwarded to the port described by the destination port field in the encapsulated data, as shown in step 9.
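A sketch of the re-encapsulation and forwarding decision at a transit node is given below. The field set (priority, path length, remaining path, destination port, payload) follows the description of FIG. 7 and FIG. 8, but the exact byte layout, the precise meaning of the path length field, and the fixed port number are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List, Tuple

FIXED_FORWARD_PORT = 4500   # assumed value of the configurable fixed port

@dataclass
class Packet:
    priority: int       # 1 (highest) .. 4
    path_len: int       # assumed: number of remaining nodes, current node included
    path: List[str]     # remaining nodes in order, current node first
    dest_port: int      # receiving port of the last node in the path
    payload: bytes

def process_at_transit_node(pkt: Packet) -> Tuple[str, int, Packet]:
    """Strip the current node from the path, update the path length, and
    decide the next hop and destination port as described above."""
    remaining = pkt.path[1:]                       # remove this node's entry
    pkt = Packet(pkt.priority, pkt.path_len - 1, remaining, pkt.dest_port, pkt.payload)
    next_node = remaining[0]
    if pkt.path_len == 1:
        # this node was the second-to-last node: deliver to the destination port
        return next_node, pkt.dest_port, pkt
    return next_node, FIXED_FORWARD_PORT, pkt      # otherwise use the fixed port

pkt = Packet(priority=1, path_len=3, path=["node-1", "node-3", "node-6"],
             dest_port=6001, payload=b"media frame")
print(process_at_transit_node(pkt)[:2])   # ('node-3', 4500)
```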
  • Step 8 The data forwarding module of the transit node directly obtains the information of the next forwarding node from the encapsulated data and forwards the data according to the priority information; high-priority data is forwarded first.
  • Step 9 After receiving the data from the transit node, the data module of the edge node decapsulates the data to obtain the original data, and then sends the original data to the destination client.
  • FIG. 9 is a block diagram of a data transmission processing device according to an embodiment of the present disclosure. As shown in FIG. 9 , the device is applied to a routing scheduling center. The device includes:
  • a first receiving module 92 is configured to receive a path query request initiated by a starting edge node, wherein the path query request carries data to be transmitted and a destination edge node;
  • a first determining module 94 configured to determine a data level according to a data type of the data to be transmitted
  • a second determination module 96 is configured to determine a target data transmission path between the starting edge node and the destination edge node corresponding to the data level;
  • the return module 98 is configured to return the target data transmission path to the originating edge node, so that the originating edge node transmits the to-be-transmitted data to the destination edge node according to the target data transmission path.
  • the second determination module 96 is further configured to obtain a plurality of pre-planned data transmission paths between the starting edge node and the destination edge node; and determine the target data transmission path corresponding to the data level from the plurality of data transmission paths.
  • the device further comprises:
  • a path planning module is configured to perform path planning based on the topological relationship between the edge nodes and the transit nodes and the detection data of the edge nodes and the transit nodes, wherein a plurality of data transmission paths corresponding to different data levels are planned between each group of starting edge nodes and the destination edge nodes;
  • the association storage module is configured to associate and store each group of starting edge nodes with the destination edge nodes, the multiple data transmission paths, and the corresponding data levels.
  • the device further comprises:
  • a sending module configured to send a detection request to the edge node and the transit node according to the topological relationship
  • the second receiving module is configured to receive the link data detected by the edge node to its connected transit nodes and the link data detected by the transit node to its connected transit nodes, wherein the detection data is the link data between the edge node and its connected transit nodes and the link data between the connected transit nodes.
  • the path planning module includes:
  • a conversion submodule configured to convert the detection data into a link quality index
  • a planning submodule configured to plan, according to a topological relationship between the edge node and the transit node, a plurality of data transmission paths in which the sum of the link quality indexes between each group of starting edge nodes and destination edge nodes is less than a preset threshold;
  • a setting submodule is configured to respectively set corresponding data levels for the plurality of data transmission paths.
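The planning submodule's search can be sketched as a bounded path enumeration over the topology, as below; treating the link quality index as an additive edge cost, the depth-first enumeration, and the threshold value are illustrative assumptions rather than details fixed by the disclosure.

```python
from typing import Dict, List, Tuple

# Illustrative path enumeration: keep every path whose summed link quality
# index stays below a threshold, then rank the candidates for grading.

def plan_paths(topology: Dict[str, Dict[str, float]],
               start: str, dest: str, threshold: float) -> List[Tuple[float, List[str]]]:
    results = []

    def dfs(node: str, cost: float, path: List[str]):
        if cost >= threshold:
            return                      # prune: summed index already too high
        if node == dest:
            results.append((cost, path))
            return
        for nxt, link_index in topology.get(node, {}).items():
            if nxt not in path:         # simple loop avoidance
                dfs(nxt, cost + link_index, path + [nxt])

    dfs(start, 0.0, [start])
    return sorted(results)              # best (lowest summed index) paths first

topology = {
    "edge-A": {"t1": 0.2, "t2": 0.5},
    "t1": {"t3": 0.1, "edge-B": 0.6},
    "t2": {"edge-B": 0.3},
    "t3": {"edge-B": 0.2},
}
for cost, path in plan_paths(topology, "edge-A", "edge-B", threshold=1.0)[:4]:
    print(round(cost, 2), path)
```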
  • the conversion submodule includes:
  • a first determining unit configured to determine a quality score of the link data
  • the second determining unit is configured to determine the sum of the products of the quality scores of the link data and the corresponding weight coefficients as the link quality index.
  • the first determination unit is further used to obtain the link data, wherein each link data in the link data includes: packet loss data, delay data, jitter data, and available bandwidth data; the packet loss data, the delay data, the jitter data, and the available bandwidth data are smoothed respectively to obtain smoothed packet loss data, smoothed delay data, smoothed jitter data, and smoothed available bandwidth data; and the quality score of the link data is determined according to the smoothed packet loss data, the smoothed delay data, the smoothed jitter data, and the smoothed available bandwidth data.
  • the device further comprises:
  • a third determining module is configured to determine the transmission priority of the data to be transmitted
  • the sending module is configured to send the transmission priority to the starting edge node, so that the starting edge node transmits the data to be transmitted to the destination edge node according to the transmission priority and the target data transmission path.
  • An embodiment of the present disclosure further provides a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the steps of any of the above method embodiments when running.
  • the above-mentioned computer-readable storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a mobile hard disk, a magnetic disk or an optical disk, and other media that can store computer programs.
  • An embodiment of the present disclosure further provides an electronic device, including a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
  • the electronic device may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
  • the modules or steps of the present disclosure described above can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices; and they can be implemented by program code executable by the computing device, so that they can be stored in a storage device and executed by the computing device.
  • in some cases, the steps shown or described may be performed in an order different from that described here, or they may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module for implementation.
  • the present disclosure is not limited to any particular combination of hardware and software.

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

Embodiments of the present disclosure provide a data transmission processing method and apparatus, a storage medium, and an electronic apparatus. The method includes: receiving a path query request initiated by a starting edge node, wherein the path query request carries data to be transmitted and a destination edge node; determining a data level according to the data type of the data to be transmitted; determining a target data transmission path between the starting edge node and the destination edge node corresponding to the data level; and returning the target data transmission path to the starting edge node, so that the starting edge node transmits the data to be transmitted to the destination edge node according to the target data transmission path. This can solve the problem in the related art that the current RTN system accelerates only media data and has low network resource utilization, improve the overall user experience of media transmission services, and maximize the utilization value of RTN transmission resources while improving the user experience.

Description

Data transmission processing method and apparatus, storage medium and electronic apparatus
Cross-Reference to Related Applications
The present disclosure is based on Chinese patent application CN202211395107.1, filed on November 8, 2022 and entitled "Data transmission processing method and apparatus, storage medium and electronic apparatus", claims priority to that application, and incorporates its entire disclosure herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of communications, and in particular to a data transmission processing method and apparatus, a storage medium, and an electronic apparatus.
Background
With the continuous development of the Internet, audio and video media data have become the main body of Internet traffic. In recent years, the emergence of scenarios such as AR/VR, cloud games, interactive live broadcasts, cloud computers, distance education, video conferencing, video surveillance, and various OTT applications has posed new challenges to the real-time performance, stability, capacity, and cost of audio and video transmission, and has made real-time audio and video transmission (Real Time Communication, RTC) technology a highly active field; the RTC industry at home and abroad has maintained a high growth rate. The traditional practice is to build a separate media transmission network for each of these scenarios, which leads to repeated construction of media transmission networks and to the problem that the media transmission networks of different scenarios cannot be shared. To solve the problems of repeated construction and poor universality of media transmission networks in the various media transmission scenarios, vendors have each built their own unified media transmission network (Real Time Network, RTN) according to their own business to carry the transmission of the various media services. In the current RTN system, only media data is accelerated, and network resource utilization is low.
No solution has yet been proposed for the problem in the related art that the current RTN system accelerates only media data and has low network resource utilization.
Summary
Embodiments of the present disclosure provide a data transmission processing method and apparatus, a storage medium, and an electronic apparatus, to at least solve the problem in the related art that the current RTN system accelerates only media data and has low network resource utilization.
According to an embodiment of the present disclosure, a data transmission processing method is provided, which is applied to a routing scheduling center, the method including:
receiving a path query request initiated by a starting edge node, wherein the path query request carries data to be transmitted and a destination edge node;
determining a data level according to the data type of the data to be transmitted;
determining a target data transmission path between the starting edge node and the destination edge node corresponding to the data level;
returning the target data transmission path to the starting edge node, so that the starting edge node transmits the data to be transmitted to the destination edge node according to the target data transmission path.
According to another embodiment of the present disclosure, a data transmission processing device is also provided, which is applied to a routing scheduling center, the device including:
a first receiving module, configured to receive a path query request initiated by a starting edge node, wherein the path query request carries data to be transmitted and a destination edge node;
a first determining module, configured to determine a data level according to the data type of the data to be transmitted;
a second determining module, configured to determine a target data transmission path between the starting edge node and the destination edge node corresponding to the data level;
a returning module, configured to return the target data transmission path to the starting edge node, so that the starting edge node transmits the data to be transmitted to the destination edge node according to the target data transmission path.
According to yet another embodiment of the present disclosure, a computer-readable storage medium is also provided, in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the above method embodiments when running.
According to yet another embodiment of the present disclosure, an electronic apparatus is also provided, including a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
Brief Description of the Drawings
FIG. 1 is a hardware structure block diagram of a device for the data transmission processing method of an embodiment of the present disclosure;
FIG. 2 is a flow chart of a data transmission processing method according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of an RTN system for general data hierarchical acceleration according to an embodiment of the present disclosure;
FIG. 4 is a block diagram of the composition of an edge node according to an embodiment of the present disclosure;
FIG. 5 is a block diagram of the composition of a transit node in the RTN system for general data hierarchical acceleration according to an embodiment of the present disclosure;
FIG. 6 is a block diagram of the composition of the routing scheduling center in the RTN system for general data hierarchical acceleration according to an embodiment of the present disclosure;
FIG. 7 is a schematic diagram of the encapsulated data format according to an embodiment of the present disclosure;
FIG. 8 is a schematic diagram of the data format after re-encapsulation by a forwarding node according to an embodiment of the present disclosure;
FIG. 9 is a block diagram of a data transmission processing device according to an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings and in combination with the embodiments.
It should be noted that the terms "first", "second", and the like in the specification, claims, and drawings of the present disclosure are used to distinguish similar objects, and are not necessarily used to describe a specific order or sequence.
The method embodiments provided in the embodiments of the present disclosure may be executed in a device or a similar computing apparatus. Taking running on a device as an example, FIG. 1 is a hardware structure block diagram of a device for the data transmission processing method of an embodiment of the present disclosure. As shown in FIG. 1, the device may include one or more processors 102 (only one is shown in FIG. 1; the processor 102 may include, but is not limited to, a processing device such as a microprocessor (MCU) or a programmable logic device) and a memory 104 for storing data, and the device may also include a transmission device 106 and an input/output device 108 for communication functions. A person of ordinary skill in the art can understand that the structure shown in FIG. 1 is only illustrative and does not limit the structure of the above device. For example, the device may include more or fewer components than those shown in FIG. 1, or have a configuration different from that shown in FIG. 1.
The memory 104 may be used to store computer programs, for example, software programs and modules of application software, such as the computer program corresponding to the data transmission processing method in the embodiment of the present disclosure. The processor 102 executes various functional applications and data transmission processing, that is, implements the above method, by running the computer program stored in the memory 104. The memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some instances, the memory 104 may further include memories remotely located relative to the processor 102, and these remote memories may be connected to the device via a network. Examples of the above network include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The transmission device 106 is used to receive or send data via a network. Specific examples of the above network may include a wireless network provided by the communications provider of the device. In one instance, the transmission device 106 includes a network adapter (Network Interface Controller, NIC), which can be connected to other network devices through a base station so as to communicate with the Internet. In one instance, the transmission device 106 may be a radio frequency (RF) module, which is used to communicate with the Internet wirelessly.
This embodiment provides a data transmission processing method running on the above device. FIG. 2 is a flow chart of the data transmission processing method according to an embodiment of the present disclosure. As shown in FIG. 2, the method is applied to a routing scheduling center, and the flow includes the following steps:
Step S202: receiving a path query request initiated by a starting edge node, wherein the path query request carries data to be transmitted and a destination edge node;
Step S204: determining a data level according to the data type of the data to be transmitted;
Step S206: determining a target data transmission path between the starting edge node and the destination edge node corresponding to the data level;
Step S208: returning the target data transmission path to the starting edge node, so that the starting edge node transmits the data to be transmitted to the destination edge node according to the target data transmission path.
Through the above steps S202 to S208, the problem in the related art that the current RTN system accelerates only media data and has low network resource utilization can be solved: the data level is determined according to the data type and a data transmission path corresponding to the data level is planned, so that not only media data but also signaling and other non-media data are accelerated, improving the overall user experience of media transmission services; in addition, the accelerated data is finely graded, so that matching links can be planned for different media transmission scenarios, maximizing the utilization value of RTN transmission resources while improving the user experience.
In this embodiment, step S206 may specifically include: acquiring a plurality of pre-planned data transmission paths between the starting edge node and the destination edge node; and determining, from the plurality of data transmission paths, the target data transmission path corresponding to the data level.
In an embodiment, before the above step S202, the data transmission paths are planned, which may specifically include: performing path planning according to the topological relationship between edge nodes and transit nodes and the detection data of the edge nodes and the transit nodes, wherein a plurality of data transmission paths corresponding to different data levels are planned between each group of starting edge node and destination edge node; and storing each group of starting edge node and destination edge node, the plurality of data transmission paths, and the corresponding data levels in association.
In another embodiment, before the above step S206, the method further includes: sending a detection request to the edge nodes and the transit nodes according to the topological relationship; and receiving the link data detected by the edge node to its connected transit nodes and the link data detected by the transit node to its connected transit nodes, wherein the detection data is the link data between the edge node and its connected transit nodes and the link data between the connected transit nodes.
Correspondingly, the above path planning may specifically include:
S2061: converting the detection data into a link quality index;
S2062: planning, according to the topological relationship between the edge nodes and the transit nodes, a plurality of data transmission paths in which the sum of the link quality indexes between each group of starting edge node and destination edge node is less than a preset threshold;
S2063: setting corresponding data levels for the plurality of data transmission paths respectively.
In an optional embodiment, the above S2061 may specifically include: determining the quality score of the link data; further, acquiring the link data, wherein each piece of link data includes packet loss data, delay data, jitter data, and available bandwidth data; smoothing the packet loss data, the delay data, the jitter data, and the available bandwidth data respectively to obtain smoothed packet loss data, smoothed delay data, smoothed jitter data, and smoothed available bandwidth data; determining the quality score of the link data according to the smoothed packet loss data, smoothed delay data, smoothed jitter data, and smoothed available bandwidth data; and then determining the sum of the products of the quality scores of the link data and the corresponding weight coefficients as the link quality index.
In an embodiment, the method further includes: determining the transmission priority of the data to be transmitted; and sending the transmission priority to the starting edge node, so that the starting edge node transmits the data to be transmitted to the destination edge node according to the transmission priority and the target data transmission path.
This embodiment proposes an RTN system for general data hierarchical acceleration, to address the problem that the current RTN system accelerates only media and lacks acceleration measures for signaling and non-media data, and the problem that the current RTN system does not perform refined hierarchical acceleration of media, resulting in low network resource utilization. The RTN system accelerates not only media data but also signaling and other non-media data, improving the overall user experience of media transmission services; in addition, the RTN system finely grades the accelerated data, plans high-quality transmission paths for high-priority data and ordinary transmission paths for low-priority data, and plans links matching the different media transmission scenarios, maximizing the utilization value of RTN transmission resources while improving the user experience. It can be applied to industries related to real-time audio and video communication, such as AR/VR, cloud games, interactive live broadcasts, cloud computers, distance education, video conferencing, video surveillance, and various OTT applications.
FIG. 3 is a schematic diagram of the RTN system for general data hierarchical acceleration according to an embodiment of the present disclosure. As shown in FIG. 3, the system mainly includes clients, edge nodes, transit nodes, and a routing scheduling center.
The client is an audio/video terminal or a related SDK, and is mainly responsible for processing audio and video services and for accessing the edge node, covering the audio/video media display part and the audio/video media generation part. For example, the video conferencing scenario includes PC clients, mobile clients, and terminals in conference rooms; the cloud desktop scenario includes thin terminals, PCs, and back-end servers; the AR/VR scenario includes VR glasses, terminal helmets, and back-end servers.
The edge node is responsible for client access, data priority setting, and data encapsulation processing, and has data sending and receiving capabilities. It is also responsible for receiving data detection requests from the routing scheduling center and detecting the network parameters of the transit nodes connected to it; the detection data includes packet loss, delay, jitter, bandwidth, and the like, and the edge node reports the detection results to the routing scheduling center.
The transit node is responsible for processing the data encapsulation and forwarding the data. It is also responsible for receiving data detection requests from the routing scheduling center and detecting the network parameters of the transit nodes and edge nodes connected to it; the detection data includes packet loss, delay, jitter, bandwidth, and the like, and the transit node reports the detection results to the routing scheduling center.
The routing scheduling center is responsible for managing the overall network topology, sending link detection requests to the edge nodes and transit nodes, collecting the link network parameter data reported by the edge nodes and forwarding nodes, uniformly planning the data transmission links, and uniformly managing the priority of transmitted data.
FIG. 4 is a block diagram of the composition of the edge node according to an embodiment of the present disclosure. As shown in FIG. 4, in the RTN system for general data hierarchical acceleration of this embodiment, the edge node is mainly composed of a client access module, a data classification module, a data forwarding module, and a link detection module. The client access module is responsible for accepting access requests from clients and for data transmission with the clients. The data classification module is responsible for classifying the data by type; the data priority is divided into four levels from 1 to 4, and the smaller the number, the higher the priority. Based on the type of data to be transmitted, the module queries the routing scheduling center for the transmission priority of that data type, and at the same time obtains the transmission path planned by the routing scheduling center and the receiving port information of the destination node. After receiving the priority information, the transmission path information, and the receiving port information of the destination node, the data classification module combines them with the original data packet into a new transmission data packet, completing the data encapsulation operation. The data transceiver module is responsible for forwarding the encapsulated data packet to the transit node according to the requirements of the transmission path, and data with a higher priority is forwarded first. The link detection module is responsible for receiving detection request instructions from the routing scheduling module; an instruction contains the specific forwarding nodes that the edge node needs to detect. After receiving the instruction, the edge node randomly starts link detection; the detection indicators include packet loss, delay, jitter, bandwidth, and the like. After the detection is completed, the specific detection parameters are periodically sent to the routing scheduling module.
FIG. 5 is a block diagram of the composition of the transit node in the RTN system for general data hierarchical acceleration according to an embodiment of the present disclosure. As shown in FIG. 5, the data receiving module is responsible for receiving, with high performance, data from edge nodes or other transit nodes and handing the data to the data processing module for processing. The data processing module is responsible for decapsulating the encapsulated data, removing the information of the current node from the path and updating the path length field, exposing the information of the next node in the path so that the data can be forwarded quickly along the path, and then re-encapsulating the data. The data forwarding module is responsible for forwarding the encapsulated data packet to the next transit node according to the requirements of the transmission path, and data with a higher priority is forwarded first. The link detection module is responsible for receiving detection request instructions from the routing scheduling module; an instruction contains the specific nodes (including forwarding nodes and other edge nodes) that need to be detected. After receiving the instruction, the node randomly starts link detection; the detection indicators include packet loss, delay, jitter, bandwidth, and the like. After the detection is completed, the specific detection parameters are periodically sent to the routing scheduling module.
FIG. 6 is a block diagram of the composition of the routing scheduling center in the RTN system for general data hierarchical acceleration according to an embodiment of the present disclosure. As shown in FIG. 6, the topology management module is responsible for managing the topological relationships of the entire RTN network, including adding and deleting edge nodes and transit nodes and changing the connection relationships between nodes. The detection management module is responsible for sending detection requests to the edge nodes and transit nodes and for receiving the detection data of the edge nodes and transit nodes. The path planning module is responsible for planning paths in real time according to the topological relationships and detection data of the edge nodes and transit nodes; for each group of start node and destination node, up to four paths are planned and graded into four levels from 1 to 4, where level 1 means the path delay is small and the bandwidth is large, level 2 means the delay is small and the bandwidth is small, level 3 means the delay is large and the bandwidth is large, and level 4 means the delay is large and the bandwidth is small. The hierarchical transmission paths planned in real time are stored in memory for the edge nodes to query. The data hierarchical management module is responsible for receiving the grading query requests and path query requests of the edge nodes and, according to the results of the path planning module, sending the grading data and path data to the edge node. The grading strategy is as follows: media data in low-latency, high-bandwidth scenarios such as cloud computers, cloud games, AR/VR, video conferencing, and interactive live broadcasts uses level 1 paths; signaling and non-media data use level 2 paths; scenarios with high bandwidth requirements but relaxed latency requirements, such as video surveillance and OTT applications, use level 3 paths; and other non-important data transmission uses level 4 paths.
The flow of the RTN system for general data hierarchical acceleration proposed in this embodiment is as follows:
Step 1: The topology management module of the routing scheduling center saves the nodes and link relationships of the RTN system configured by the user in memory, including the network address information of the edge nodes and transit nodes;
Step 2: The detection management module of the routing scheduling center sends detection requests to the edge nodes and transit nodes according to the topological relationship;
Step 3: After receiving the detection request from the routing scheduling center, the link detection modules of the edge nodes and transit nodes randomly start detecting the link network parameters and periodically report the detection results to the path planning module of the routing scheduling center;
Step 4: The path planning module of the routing scheduling center performs path planning according to the topological relationship between the edge nodes and transit nodes and the real-time detection data; for each group of start node and destination node, up to four paths are planned and graded into four levels from 1 to 4. The hierarchical transmission paths planned in real time are stored in memory for the edge nodes to query.
Step 5: When a service request starts and the client connects to the client access module of the edge node, the data classification module of the edge node queries the data classification information and the data transmission path from the data classification management module of the routing scheduling center according to the business type, and then encapsulates the original data into a new data packet in combination with the received data classification information, the data transmission path, the destination port, and other information. FIG. 7 is a schematic diagram of the encapsulated data format according to an embodiment of the present disclosure, as shown in FIG. 7. The destination port is the receiving port of the last node in the path (generally an edge node, node 6 in FIG. 6); the intermediate forwarding nodes uniformly use a configurable fixed port to reduce port negotiation and speed up transmission.
Step 6: After the data is encapsulated, it is sent to the transit node through the data transceiver module of the edge node.
Step 7: After the data receiving module of the transit node receives the encapsulated transmission data, the data processing module decapsulates the data, removes the information of the current node from the path and updates the path length information, exposes the information of the next node in the path so that the data can be forwarded quickly along the path, and then re-encapsulates the data. For example, if the current forwarding node is node 1 and the received encapsulated data is as shown in FIG. 7, the data format after removing the information of node 1 and re-encapsulating is as shown in FIG. 8 (FIG. 8 is a schematic diagram of the data format after re-encapsulation by the forwarding node according to an embodiment of the present disclosure), and the next forwarding node is node 3. Meanwhile, the transit node determines whether the path length in the encapsulated data is 1 to decide whether the current node is the second-to-last node in the transmission path; if not, the data is forwarded to the fixed data receiving port of the next node, as in step 8; if the current node is the second-to-last node in the transmission path, the data is forwarded to the port described by the destination port field in the encapsulated data, as in step 9.
Step 8: The data forwarding module of the transit node directly obtains the information of the next forwarding node from the encapsulated data and forwards the data according to the priority information; high-priority data is forwarded first.
Step 9: After receiving the data from the transit node, the data module of the edge node decapsulates the data to obtain the original data, and then sends the original data to the destination client.
At this point, the process of hierarchical accelerated transmission of general data in the RTN system is completed.
According to another aspect of this embodiment, a data transmission processing device is also provided. FIG. 9 is a block diagram of the data transmission processing device according to an embodiment of the present disclosure. As shown in FIG. 9, the device is applied to a routing scheduling center and includes:
a first receiving module 92, configured to receive a path query request initiated by a starting edge node, wherein the path query request carries data to be transmitted and a destination edge node;
a first determining module 94, configured to determine a data level according to the data type of the data to be transmitted;
a second determining module 96, configured to determine a target data transmission path between the starting edge node and the destination edge node corresponding to the data level;
a returning module 98, configured to return the target data transmission path to the starting edge node, so that the starting edge node transmits the data to be transmitted to the destination edge node according to the target data transmission path.
In an embodiment, the second determining module 96 is further configured to acquire a plurality of pre-planned data transmission paths between the starting edge node and the destination edge node, and to determine, from the plurality of data transmission paths, the target data transmission path corresponding to the data level.
In an embodiment, the device further includes:
a path planning module, configured to perform path planning according to the topological relationship between edge nodes and transit nodes and the detection data of the edge nodes and the transit nodes, wherein a plurality of data transmission paths corresponding to different data levels are planned between each group of starting edge node and destination edge node;
an association storage module, configured to store each group of starting edge node and destination edge node, the plurality of data transmission paths, and the corresponding data levels in association.
In an embodiment, the device further includes:
a sending module, configured to send a detection request to the edge nodes and the transit nodes according to the topological relationship;
a second receiving module, configured to receive the link data detected by the edge node to its connected transit nodes and the link data detected by the transit node to its connected transit nodes, wherein the detection data is the link data between the edge node and its connected transit nodes and the link data between the connected transit nodes.
In an embodiment, the path planning module includes:
a conversion submodule, configured to convert the detection data into a link quality index;
a planning submodule, configured to plan, according to the topological relationship between the edge nodes and the transit nodes, a plurality of data transmission paths in which the sum of the link quality indexes between each group of starting edge node and destination edge node is less than a preset threshold;
a setting submodule, configured to set corresponding data levels for the plurality of data transmission paths respectively.
In an embodiment, the conversion submodule includes:
a first determining unit, configured to determine the quality score of the link data;
a second determining unit, configured to determine the sum of the products of the quality scores of the link data and the corresponding weight coefficients as the link quality index.
In an embodiment, the first determining unit is further configured to acquire the link data, wherein each piece of link data includes packet loss data, delay data, jitter data, and available bandwidth data; to smooth the packet loss data, the delay data, the jitter data, and the available bandwidth data respectively to obtain smoothed packet loss data, smoothed delay data, smoothed jitter data, and smoothed available bandwidth data; and to determine the quality score of the link data according to the smoothed packet loss data, the smoothed delay data, the smoothed jitter data, and the smoothed available bandwidth data.
In an embodiment, the device further includes:
a third determining module, configured to determine the transmission priority of the data to be transmitted;
a sending module, configured to send the transmission priority to the starting edge node, so that the starting edge node transmits the data to be transmitted to the destination edge node according to the transmission priority and the target data transmission path.
An embodiment of the present disclosure further provides a computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the steps in any one of the above method embodiments when running.
In an exemplary embodiment, the above computer-readable storage medium may include, but is not limited to: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium that can store a computer program.
An embodiment of the present disclosure further provides an electronic apparatus, including a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to execute the steps in any one of the above method embodiments.
In an exemplary embodiment, the above electronic apparatus may further include a transmission device and an input/output device, wherein the transmission device is connected to the processor, and the input/output device is connected to the processor.
For specific examples in this embodiment, reference may be made to the examples described in the above embodiments and exemplary implementations, and details are not repeated here.
Obviously, those skilled in the art should understand that the above modules or steps of the present disclosure can be implemented by a general-purpose computing device; they can be concentrated on a single computing device or distributed over a network composed of multiple computing devices; they can be implemented by program code executable by the computing device, so that they can be stored in a storage device and executed by the computing device; and in some cases, the steps shown or described may be performed in an order different from that described here, or they may be made into individual integrated circuit modules, or multiple modules or steps among them may be made into a single integrated circuit module for implementation. In this way, the present disclosure is not limited to any specific combination of hardware and software.
The above are only preferred embodiments of the present disclosure and are not intended to limit the present disclosure; for those skilled in the art, the present disclosure may have various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the principles of the present disclosure shall be included in the protection scope of the present disclosure.

Claims (11)

  1. A data transmission processing method, applied to a routing scheduling center, the method comprising:
    receiving a path query request initiated by a starting edge node, wherein the path query request carries data to be transmitted and a destination edge node;
    determining a data level according to the data type of the data to be transmitted;
    determining a target data transmission path between the starting edge node and the destination edge node corresponding to the data level; and
    returning the target data transmission path to the starting edge node, so that the starting edge node transmits the data to be transmitted to the destination edge node according to the target data transmission path.
  2. The method according to claim 1, wherein determining the target data transmission path between the edge node and the destination edge node corresponding to the data level comprises:
    acquiring a plurality of pre-planned data transmission paths between the starting edge node and the destination edge node; and
    determining, from the plurality of data transmission paths, the target data transmission path corresponding to the data level.
  3. The method according to claim 1, wherein before receiving the path query request initiated by the starting edge node, the method further comprises:
    performing path planning according to a topological relationship between edge nodes and transit nodes and detection data of the edge nodes and the transit nodes, wherein a plurality of data transmission paths corresponding to different data levels are planned between each group of starting edge node and destination edge node; and
    storing each group of starting edge node and destination edge node, the plurality of data transmission paths, and the corresponding data levels in association.
  4. The method according to claim 3, wherein before performing path planning according to the topological relationship between the edge nodes and the transit nodes and the detection data of the edge nodes and the transit nodes, the method further comprises:
    sending a detection request to the edge nodes and the transit nodes according to the topological relationship; and
    receiving link data detected by the edge node to its connected transit nodes and link data detected by the transit node to its connected transit nodes, wherein the detection data is the link data between the edge node and its connected transit nodes and the link data between the connected transit nodes.
  5. The method according to claim 4, wherein performing path planning according to the topological relationship between the edge nodes and the transit nodes and the detection data of the edge nodes and the transit nodes comprises:
    converting the detection data into a link quality index;
    planning, according to the topological relationship between the edge nodes and the transit nodes, a plurality of data transmission paths in which the sum of the link quality indexes between each group of starting edge node and destination edge node is less than a preset threshold; and
    setting corresponding data levels for the plurality of data transmission paths respectively.
  6. The method according to claim 5, wherein converting the detection data into the link quality index comprises:
    determining a quality score of the link data; and
    determining the sum of the products of the quality scores of the link data and the corresponding weight coefficients as the link quality index.
  7. The method according to claim 6, wherein determining the quality score of the link data comprises:
    acquiring the link data, wherein each piece of link data comprises packet loss data, delay data, jitter data, and available bandwidth data;
    smoothing the packet loss data, the delay data, the jitter data, and the available bandwidth data respectively to obtain smoothed packet loss data, smoothed delay data, smoothed jitter data, and smoothed available bandwidth data; and
    determining the quality score of the link data according to the smoothed packet loss data, the smoothed delay data, the smoothed jitter data, and the smoothed available bandwidth data.
  8. The method according to any one of claims 1 to 7, wherein the method further comprises:
    determining a transmission priority of the data to be transmitted; and
    sending the transmission priority to the starting edge node, so that the starting edge node transmits the data to be transmitted to the destination edge node according to the transmission priority and the target data transmission path.
  9. A data transmission processing device, applied to a routing scheduling center, the device comprising:
    a first receiving module, configured to receive a path query request initiated by a starting edge node, wherein the path query request carries data to be transmitted and a destination edge node;
    a first determining module, configured to determine a data level according to the data type of the data to be transmitted;
    a second determining module, configured to determine a target data transmission path between the starting edge node and the destination edge node corresponding to the data level; and
    a returning module, configured to return the target data transmission path to the starting edge node, so that the starting edge node transmits the data to be transmitted to the destination edge node according to the target data transmission path.
  10. A computer-readable storage medium, in which a computer program is stored, wherein the computer program is configured to execute the method according to any one of claims 1 to 8 when running.
  11. An electronic apparatus, comprising a memory and a processor, wherein a computer program is stored in the memory, and the processor is configured to run the computer program to execute the method according to any one of claims 1 to 8.
PCT/CN2023/105184 2022-11-08 2023-06-30 Data transmission processing method and apparatus, storage medium and electronic apparatus WO2024098816A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211395107.1 2022-11-08
CN202211395107.1A CN118018625A (zh) 2022-11-08 2022-11-08 Data transmission processing method and apparatus, storage medium and electronic apparatus

Publications (1)

Publication Number Publication Date
WO2024098816A1 true WO2024098816A1 (zh) 2024-05-16

Family

ID=90945114

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/105184 WO2024098816A1 (zh) 2022-11-08 2023-06-30 Data transmission processing method and apparatus, storage medium and electronic apparatus

Country Status (2)

Country Link
CN (1) CN118018625A (zh)
WO (1) WO2024098816A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113300962A (zh) * 2021-01-26 2021-08-24 阿里巴巴集团控股有限公司 路径获取方法、访问方法、装置及设备
CN113364682A (zh) * 2021-05-31 2021-09-07 浙江大华技术股份有限公司 一种数据传输方法、装置、存储介质及电子装置
US20210409335A1 (en) * 2020-09-11 2021-12-30 Intel Corporation Multi-access management service packet classification and prioritization techniques
CN115277539A (zh) * 2022-07-29 2022-11-01 天翼云科技有限公司 一种数据传输方法、选路集群以及边缘节点

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210409335A1 (en) * 2020-09-11 2021-12-30 Intel Corporation Multi-access management service packet classification and prioritization techniques
CN113300962A (zh) * 2021-01-26 2021-08-24 阿里巴巴集团控股有限公司 路径获取方法、访问方法、装置及设备
CN113364682A (zh) * 2021-05-31 2021-09-07 浙江大华技术股份有限公司 一种数据传输方法、装置、存储介质及电子装置
CN115277539A (zh) * 2022-07-29 2022-11-01 天翼云科技有限公司 一种数据传输方法、选路集群以及边缘节点

Also Published As

Publication number Publication date
CN118018625A (zh) 2024-05-10

Similar Documents

Publication Publication Date Title
CN109600246B (zh) 网络切片管理方法及其装置
CN109391500B (zh) 一种配置管理方法、装置及设备
US20170244792A1 (en) Power-Line Carrier Terminal Control Apparatus, System, and Method
WO2020052605A1 (zh) 一种网络切片的选择方法及装置
US9998298B2 (en) Data transmission method, apparatus, and computer storage medium
CN110502259B (zh) 服务器版本升级方法、视联网系统、电子设备及存储介质
CN110198345B (zh) 一种数据请求方法、系统及装置和存储介质
US11165716B2 (en) Data flow processing method and device
US9125089B2 (en) Method and apparatus for packet aggregation in a network controller
WO2023011450A1 (zh) 网络信息开放方法、装置、电子设备和存储介质
CN109450982B (zh) 一种网络通讯方法和系统
CN109672857B (zh) 监控资源的信息处理方法和装置
CN112383600A (zh) 信息的处理方法、装置、计算机可读介质及电子设备
US20190335253A1 (en) Low-latency data switching device and method
WO2021233313A1 (zh) 配置端口状态的方法、装置、系统及存储介质
CN110708293B (zh) 多媒体业务的分流方法和装置
CN107809387B (zh) 一种报文传输的方法、设备及网络系统
WO2024098816A1 (zh) 一种数据传输处理方法、装置、存储介质及电子装置
CN114095901A (zh) 通信数据处理方法及装置
WO2023125380A1 (zh) 一种数据管理的方法及相应装置
WO2023125056A1 (zh) 网络数据的控制方法、装置和存储介质及电子设备
WO2017193814A1 (zh) 一种业务链生成方法及系统
WO2024098814A1 (zh) 一种数据传输处理方法、装置、存储介质及电子装置
CN110098993B (zh) 一种信令报文的处理方法和装置
CN114422437A (zh) 一种异构报文的转发方法及装置