WO2023193579A1 - Data transmission method and apparatus - Google Patents

Data transmission method and apparatus

Info

Publication number
WO2023193579A1
WO2023193579A1 (PCT/CN2023/081482)
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
data packet
network data
priority
indication information
Prior art date
Application number
PCT/CN2023/081482
Other languages
English (en)
Chinese (zh)
Inventor
曹佑龙
秦熠
陈二凯
徐瑞
陈伟超
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2023193579A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/0268: Traffic management, e.g. flow control or congestion control, using specific QoS parameters for wireless networks, e.g. QoS class identifier [QCI] or guaranteed bit rate [GBR]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/06: Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/06: Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • H04W28/065: Optimizing the usage of the radio link using assembly or disassembly of packets
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/16: Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W28/24: Negotiating SLA [Service Level Agreement]; Negotiating QoS [Quality of Service]

Definitions

  • the embodiments of the present application relate to the field of communications, and more specifically, to a data transmission method and device.
  • XR: extended reality
  • VR: virtual reality
  • AR: augmented reality
  • MR: mixed reality
  • SR: super resolution
  • SR technology based on neural networks (NN) has received widespread attention because of its remarkable image restoration effect.
  • Users convert low-resolution images into high-resolution images based on neural networks, so how to transmit neural network data packets has become a key problem.
  • Embodiments of the present application provide a communication method in order to realize differentiated transmission of neural network data packets of different priorities and improve user experience.
  • The first aspect provides a communication method, which can be executed by an access network device, by a chip or circuit configured in the access network device, or by a logic module or software that can realize all or part of the access network device functions; this application does not limit this. For convenience of description, the following takes execution by the access network device as an example.
  • The method includes: receiving a general packet radio service tunneling protocol-user plane (GTP-U) data packet, where the payload of the GTP-U data packet includes a neural network data packet and the header of the GTP-U data packet includes indication information used to indicate the priority of the neural network data packet; and transmitting the neural network data packet according to the indication information.
  • GTP-U: general packet radio service tunneling protocol-user plane
  • The header of the GTP-U data packet received by the access network device includes indication information indicating the priority of the neural network data packet, so the access network device can read the indication information from the header of the GTP-U data packet and transmit and process the neural network data packet based on the priority it indicates. In other words, the access network device takes the priority of neural network data packets into consideration when transmitting them, which facilitates differentiated transmission of neural network data packets of different priorities.
  • For example, the access network device receives two GTP-U data packets (GTP-U data packet #1 and GTP-U data packet #2).
  • The payload of GTP-U data packet #1 includes neural network data packet #1, and the header of GTP-U data packet #1 includes indication information #1.
  • The payload of GTP-U data packet #2 includes neural network data packet #2, and the header of GTP-U data packet #2 includes indication information #2.
  • The access network device differentially transmits neural network data packet #1 and neural network data packet #2 according to indication information #1 and indication information #2.
  • For example, indication information #1 indicates that neural network data packet #1 is of high priority, and indication information #2 indicates that neural network data packet #2 is of low priority.
  • The access network device can then transmit neural network data packet #1 first, and transmit neural network data packet #2 after the transmission of neural network data packet #1 is completed.
  • Transmitting the neural network data packet according to the indication information includes: when the indication information indicates that the neural network data packet is of high priority, transmitting the neural network data packet preferentially; or, when the indication information indicates that the neural network data packet is of low priority, delaying the transmission of the neural network data packet; or, when the indication information indicates that the neural network data packet is of low priority and the network is congested, forgoing transmission of the neural network data packet.
  • The access network device can determine the transmission mode of different neural network data packets (such as preferential transmission or forgoing transmission) according to the priority indicated by the indication information, thereby improving the flexibility of the solution.
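  • The behaviour described above can be sketched as a small two-queue scheduler; the queue design and the numeric priority values below are illustrative, not part of the claimed method:

```python
from collections import deque

HIGH, LOW = 0, 1  # illustrative priority values carried by the indication information

class NNPacketScheduler:
    """Sketch of the access-network-side behaviour: high-priority NN packets
    are transmitted first; low-priority packets are delayed, and dropped
    outright when the network is congested."""

    def __init__(self):
        self.high_q = deque()
        self.low_q = deque()

    def enqueue(self, packet, priority, congested=False):
        if priority == HIGH:
            self.high_q.append(packet)
        elif congested:
            pass  # give up transmitting the low-priority packet
        else:
            self.low_q.append(packet)  # delayed until the high queue drains

    def next_packet(self):
        # The high-priority queue is always drained before the low-priority one.
        if self.high_q:
            return self.high_q.popleft()
        if self.low_q:
            return self.low_q.popleft()
        return None
```

With indication information marking packet #1 high and packet #2 low, packet #1 is dequeued first, matching the example above.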
  • The second aspect provides a communication method, which can be executed by a core network device, by a chip or circuit configured in the core network device, or by a logic module or software that can realize all or part of the core network device functions; this application does not limit this. For convenience of description, the following takes execution by the core network device as an example.
  • The method includes: obtaining a neural network data packet that includes indication information used to indicate the priority of the neural network data packet; and sending a general packet radio service tunneling protocol-user plane (GTP-U) data packet to the access network device, where the payload of the GTP-U data packet includes the neural network data packet and the header of the GTP-U data packet includes the indication information.
  • After the core network device receives the neural network data packet carrying the indication information, it can read the priority indication from the neural network data packet, encapsulate the indication information into the GTP-U header according to the GTP-U protocol, and use the neural network data packet as the payload of the GTP-U data packet. The access network device that receives the GTP-U data packet can then read the indication information from the header and transmit the neural network data packet based on the priority it indicates. In other words, the access network device takes the priority of the neural network data packet into consideration when transmitting it, so as to achieve differentiated transmission of neural network data packets of different priorities.
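  • The encapsulation step can be illustrated with a short sketch that follows the general GTP-U header layout (version 1, G-PDU message type 0xFF, length, TEID, optional extension headers); the extension-header type value carrying the priority (`EXT_TYPE_NN_PRIORITY`) is a hypothetical placeholder, not a value defined by the application or the GTP-U specification:

```python
import struct

EXT_TYPE_NN_PRIORITY = 0x42  # hypothetical extension-header type for the priority indication

def encapsulate_gtpu(teid: int, nn_packet: bytes, priority: int) -> bytes:
    """Place the NN packet in the GTP-U payload and the priority
    indication in a header extension (simplified sketch)."""
    # Extension header: length in 4-byte units, content, padding,
    # next-extension type (0 = no more extensions).
    ext = bytes([1, priority & 0xFF, 0, 0])
    # With the E flag set, the optional sequence number (2 B) and N-PDU
    # number (1 B) fields are present (unused here), followed by the type
    # of the first extension header.
    opt = struct.pack("!HBB", 0, 0, EXT_TYPE_NN_PRIORITY)
    body = opt + ext + nn_packet
    # Mandatory 8-byte header: flags (version 1, PT=1, E=1), message type
    # 0xFF (G-PDU), length of everything after these 8 bytes, TEID.
    header = struct.pack("!BBHI", 0x34, 0xFF, len(body), teid)
    return header + body

def read_priority(gtpu: bytes) -> int:
    """Access-network side of the sketch: read the indication information
    back out of the GTP-U header."""
    assert gtpu[0] & 0x04, "E flag must be set for extension headers"
    assert gtpu[11] == EXT_TYPE_NN_PRIORITY
    return gtpu[13]  # content byte of the 4-byte extension header
```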
  • The third aspect provides a communication method, which can be executed by a server, by a chip or circuit configured in the server, or by a logic module or software that can realize all or part of the server functions; this application does not limit this. For convenience of description, the following takes execution by the server as an example.
  • the method includes: generating a neural network data packet, the neural network data packet including indication information, the indication information being used to indicate the priority of the neural network data packet; and sending the neural network data packet.
  • When the server generates a neural network data packet, it can carry indication information indicating the priority of the neural network data packet in that packet, so that the core network device that receives the neural network data packet can read the indication information from it and learn the priority of the neural network data packet, in order to achieve differentiated transmission of neural network data packets of different priorities.
  • a fourth aspect provides a communication device, which is used to perform the method provided in the first aspect.
  • The device may be access network equipment, or may be a component of access network equipment (such as a processor, chip, or chip system), or may be a logic module or software that can realize all or part of the functions of the access network equipment.
  • the device includes:
  • the interface unit is configured to receive a General Packet Radio Service Tunneling Protocol user plane GTP-U data packet.
  • the payload of the GTP-U data packet includes a neural network data packet.
  • the header of the GTP-U data packet includes indication information.
  • The indication information is used to indicate the priority of the neural network data packet; the processing unit is configured to control the device to transmit the neural network data packet according to the indication information.
  • The processing unit controlling the device to transmit the neural network data packet according to the indication information includes: when the indication information indicates that the neural network data packet is of high priority, the processing unit controls the device to transmit the neural network data packet preferentially; or, when the indication information indicates that the neural network data packet is of low priority, the processing unit controls the device to delay transmission of the neural network data packet; or, when the indication information indicates that the neural network data packet is of low priority and the network is congested, the processing unit controls the device to forgo transmitting the neural network data packet.
  • the communication device may include units and/or modules for executing the method provided by any implementation of the first aspect, such as a processing unit and an interface unit.
  • the interface unit may be a transceiver, or an input/output interface; the processing unit may be at least one processor.
  • the transceiver may be a transceiver circuit.
  • the input/output interface may be an input/output circuit.
  • the interface unit may be an input/output interface, interface circuit, output circuit, input circuit, pin or related circuit on the chip, chip system or circuit; the processing unit may be at least one processor, processing circuit or logic circuit.
  • For the beneficial effects of the device in the fourth aspect and its possible designs, refer to the beneficial effects of the first aspect and its possible designs.
  • A communication device is provided, which is used to perform the method provided in the second aspect.
  • the device can be a core network device, or a component of the core network device (such as a processor, a chip, or a chip system, etc.), or a logic module or software that can realize all or part of the functions of the core network device.
  • the device includes:
  • the interface unit is used to obtain a neural network data packet, the neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet;
  • the communication device may include units and/or modules for executing the method provided by any implementation of the second aspect, such as a processing unit and an interface unit.
  • the interface unit may be a transceiver, or an input/output interface; the processing unit may be at least one processor.
  • the transceiver may be a transceiver circuit.
  • the input/output interface may be an input/output circuit.
  • the interface unit may be an input/output interface, interface circuit, output circuit, input circuit, pin or related circuit on the chip, chip system or circuit; the processing unit may be at least one processor, processing circuit or logic circuit.
  • A communication device is provided, which is used to perform the method provided in the third aspect.
  • the device can be a server, or a component of the server (such as a processor, a chip, or a chip system, etc.), or a logic module or software that can realize all or part of the server functions.
  • the device includes:
  • the processing unit is used to generate a neural network data packet.
  • the neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet.
  • the interface unit is used to send the neural network data packet.
  • the communication device may include units and/or modules for executing the method provided by any implementation of the third aspect, such as a processing unit and an interface unit.
  • the interface unit may be a transceiver, or an input/output interface; the processing unit may be at least one processor.
  • the transceiver may be a transceiver circuit.
  • the input/output interface may be an input/output circuit.
  • the interface unit may be an input/output interface, interface circuit, output circuit, input circuit, pin or related circuit on the chip, chip system or circuit; the processing unit may be at least one processor, processing circuit or logic circuit.
  • The priority of the neural network data packet is related to the data-recovery effect of the neural network corresponding to the neural network data packet.
  • The data-recovery effect of the neural network can be indicated by any of the following indicators:
  • PSNR: peak signal-to-noise ratio
  • SSIM: structural similarity
  • VMAF: video multimethod assessment fusion
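  • As an illustration of the first indicator, PSNR is computed from the mean squared error between original and recovered pixel values; the function below is the standard formula, shown as a sketch rather than code from the application:

```python
import math

def psnr(original, restored, max_val=255.0):
    """PSNR between an original block and the neural-network-recovered block
    (flat lists of pixel values); higher means better recovery."""
    mse = sum((o - r) ** 2 for o, r in zip(original, restored)) / len(original)
    if mse == 0:
        return float("inf")  # identical images: no distortion
    return 10.0 * math.log10(max_val ** 2 / mse)
```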
  • FIG. 1 shows a schematic architectural diagram of a communication system 100a applicable to the embodiment of the present application.
  • the network architecture may include but is not limited to the following network elements (also known as functional network elements, functional entities, nodes, devices, etc.):
  • Data network (DN): provides operator services, Internet access or third-party services, and includes servers; video source encoding and rendering are implemented on the server side.
  • After the terminal device is connected to the network, it can establish a protocol data unit (PDU) session and access the DN through the PDU session, interacting with application function network elements (for example, an application server) deployed in the DN. As shown in (a) in Figure 1, depending on the DN that the user accesses, the network can select the UPF accessing that DN as the PDU session anchor (PSA) according to network policy, and access the application function network element through the N6 interface of the PSA.
  • PDU: protocol data unit
  • PSA: PDU session anchor
  • Session management network element: mainly used for session management, Internet protocol (IP) address allocation and management for terminal devices, selection of termination points of manageable user plane functions, termination of policy control and charging function interfaces, downlink data notification, etc.
  • IP: Internet protocol
  • the policy control network element may be a policy and charging rules function (PCRF) network element. As shown in (a) in Figure 1, the policy control network element may be a PCF network element. In future communication systems, the policy control network element can still be a PCF network element, or it can also have other names, which is not limited in this application.
  • PCRF: policy and charging rules function
  • the application function network element may be an application function (AF) network element.
  • the application function network element can still be an AF network element, or it can also have other names, which is not limited in this application.
  • Network exposure function network element: used to provide customized network exposure functions.
  • the network exposure function network element can be a network exposure function (NEF) network element.
  • NEF: network exposure function
  • the network exposure function network element can still be an NEF network element.
  • Server: can provide application service data, for example video data, audio data, or other types of data.
  • the data types of application services provided by the server are only used as examples in this application and are not limited.
  • The AF network element may be abbreviated as AF, the NEF network element as NEF, and the AMF network element as AMF. That is, the AF described later in this application can be replaced by the application function network element, the NEF by the network exposure function network element, and the AMF by the access and mobility management network element.
  • N2: the interface between AMF and RAN, which can be used to transmit radio bearer control information from the core network side to the RAN.
  • N5: the interface between AF and PCF, which can be used to issue application service requests and report network events.
  • N6: the interface between UPF and DN, used to transmit uplink and downlink user data flows between UPF and DN.
  • N11: the interface between SMF and AMF, which can be used to transfer PDU session tunnel information between RAN and UPF, control messages sent to the terminal, radio resource control information sent to the RAN, etc.
  • service-oriented interfaces can be used between certain network elements in the system, which will not be described again here.
  • FIG. 1 shows a schematic architectural diagram of a communication system 100b applicable to the embodiment of the present application.
  • the architecture is a terminal-network-terminal architecture scenario.
  • This scenario can be a tactile Internet (TI).
  • TI: tactile Internet
  • One terminal interfaces with the tactile user and the artificial system in the main domain, while the other end is a remote-controlled robot or remote operator in the controlled domain; the core network and access network used for network transmission include LTE, 5G or the next-generation air interface 6G.
  • the main domain receives audio/video feedback signals from the controlled domain.
  • the main domain and the controlled domain are connected through two-way communication links on the network domain with the help of various commands and feedback signals, thus forming a global control loop.
  • the network architecture may include but is not limited to the following network elements (also known as functional network elements, functional entities, nodes, devices, etc.):
  • UE#1: interfaces between the tactile user in the main domain and the artificial system, and receives video, audio and other data from the controlled domain. It can include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices such as head-mounted glasses, computing devices or other processing devices connected to wireless modems, as well as various forms of terminals, mobile stations (MS), user equipment, soft terminals, etc., such as video playback equipment and holographic projectors. The embodiments of the present application are not limited to this.
  • UPF: used for packet routing and forwarding and QoS processing of user plane data. Refer to the description of the user plane network element in (a) of Figure 1; details are not repeated here.
  • AN#2: used to provide network access for authorized terminal devices (such as UE#2) in a specific area, and can use transmission tunnels of different quality according to the level of the terminal device, service requirements, etc.
  • UE#2: a remote-controlled robot or remote operator in the controlled domain, which can send video, audio and other data to the main domain. It can include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices such as head-mounted glasses, computing devices or other processing devices connected to wireless modems, as well as various forms of terminals, mobile stations (MS), user equipment, soft terminals, etc., such as video playback equipment and holographic projectors. The embodiments of the present application are not limited to this.
  • FIG. 1 shows a schematic architectural diagram of a communication system 100c applicable to the embodiment of the present application.
  • the architecture is a WiFi scenario.
  • the cloud server transmits XR media data or ordinary video to the terminal (XR device) through the fixed network, WiFi router/AP/set-top box.
  • the network architecture may include but is not limited to the following network elements (also known as functional network elements, functional entities, nodes, devices, etc.):
  • Server: can provide application service data, for example video data, audio data, or other types of data.
  • the data types of application services provided by the server are only used as examples in this application and are not limited.
  • Fixed network: a network that transmits signals through solid media such as metal wires or optical fiber lines.
  • application service data such as video data and audio data can be transmitted to the WiFi router/WiFi AP through the fixed network.
  • WiFi router/WiFi AP: can convert wired network signals and mobile network signals into wireless signals for reception by UEs with wireless communication capabilities.
  • UE: can include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to wireless modems, such as head-mounted glasses, video playback devices, holographic projectors, etc. The embodiments of the present application are not limited to this.
  • The network architectures to which the embodiments of the present application can be applied are only illustrative; the applicable network architecture is not limited to these. Any network architecture that can realize the functions of each of the above network elements is applicable to the embodiments of the present application.
  • the AMF, SMF, UPF, PCF, NEF, etc. shown in (a) of Figure 1 can be understood as network elements used to implement different functions, and can, for example, be combined into network slices as needed.
  • These network elements can be independent devices, can be integrated into the same device to implement different functions, or can be network elements in hardware devices, software functions running on dedicated hardware, or virtualization functions instantiated on a platform (for example, a cloud platform); this application does not limit the specific form of the above network elements.
  • Extended reality is a general term for various reality-related technologies, including: VR, AR, MR, etc.
  • VR technology mainly refers to rendering visual and audio scenes to simulate as closely as possible the visual and audio stimulation a user receives in the real world. VR technology usually requires the user to wear a head-mounted display (HMD) that completely replaces the user's field of view with simulated visual content, and to wear headphones that provide the accompanying audio.
  • HMD: head-mounted display
  • AR technology mainly refers to providing additional visual or auditory information or artificially generated content in the real environment perceived by the user.
  • The user's perception of the real environment can be direct, that is, without intermediate sensing, processing, and rendering, or indirect, that is, relayed through sensors and other means with further enhancement processing applied.
  • MR technology is an advanced form of AR.
  • One of its implementation methods is to insert some virtual elements into the physical scene, with the purpose of providing users with an immersive experience in which these elements are part of the real scene.
  • Super resolution refers to the technology of improving the resolution of the original image/video through hardware or software methods, obtaining high-resolution images from low-resolution images.
  • SR technology based on NN has received widespread attention because of its remarkable picture restoration effect. The NN can be a deep neural network (DNN).
  • FIG. 2 is a principle block diagram of the DNN-based SR transmission mode and the traditional transmission mode provided by the embodiment of the present application.
  • the DNN-based SR transmission mode specifically includes the following four steps:
  • Step 1: on the server side, the high definition (HD) XR video frame is spatially divided into blocks (tiles) (or slices; for ease of description, collectively referred to as blocks below).
  • the entire video frame has a resolution of 4K (3840*1920) and can be divided into small blocks.
  • the resolution of a small block is (192*192).
  • the entire video can be divided into segments in time. For example, 1-2 seconds can be divided into a segment, or a video frame can be divided into a segment.
  • The purpose of step 1 is to amortize the processing load and speed up processing through parallel operations.
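  • The tiling of step 1 can be sketched with the numbers given above (a 3840*1920 frame cut into 192*192 tiles yields 200 tiles per frame); the function below is an illustration, not part of the claimed method:

```python
def split_into_tiles(frame_w, frame_h, tile_w, tile_h):
    """Step 1 sketch: spatially divide an HD XR frame into equally sized
    tiles for parallel processing. Returns (x, y, w, h) tuples."""
    assert frame_w % tile_w == 0 and frame_h % tile_h == 0, \
        "the dimensions quoted in the text divide evenly"
    return [(x, y, tile_w, tile_h)
            for y in range(0, frame_h, tile_h)
            for x in range(0, frame_w, tile_w)]
```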
  • Step 2: on the server side, the HD block is downsampled (i.e., sampled in the spatial domain or frequency domain) to obtain low definition (LD) blocks.
  • LD: low definition
  • the resolution of a block is (192*192), and the resolution after downsampling is (24*24).
  • Traditional video compression techniques such as high efficiency video coding (HEVC) can be used to further compress them into ultra-low resolution (ULR) blocks.
  • HEVC: high efficiency video coding
  • Step 3: on the server side, the ULR block is used as the input of the DNN and the original HD content as its target output; these two form the training set of the DNN, and PSNR is used as the loss function for DNN training, so that an adaptive neural network matching the application-layer service can be obtained.
  • the ULR block is then sent to the user together with the DNN.
  • The reason for transmitting the neural network information is that the receiving end does not know the original video; the source end needs to generate the neural network based on the original video and transmit it to the receiving end.
  • Step 4: on the user side, the ULR block can be used as the input of the DNN, and the output is high-definition video content.
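  • The user-side flow of step 4 can be illustrated with a stand-in for the DNN; nearest-neighbour repetition below only demonstrates the low-resolution-in, high-resolution-out shape (e.g. a 24*24 ULR block scaled by a factor of 8 back to 192*192), not actual SR quality:

```python
def upscale_nearest(block, factor):
    """Placeholder for the user-side DNN in step 4: takes a low-resolution
    block (list of pixel rows) and returns a block `factor` times larger in
    each dimension. A real system would run the trained SR network here;
    nearest-neighbour repetition only illustrates the data flow."""
    out = []
    for row in block:
        wide = [px for px in row for _ in range(factor)]  # widen each row
        out.extend([wide[:] for _ in range(factor)])      # repeat it vertically
    return out
```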
  • In the current transmission architecture, SR technology divides a frame of XR video at the network transport layer into dozens of Internet protocol (IP) packets, such as 50 IP packets, and the NN data is likewise encoded into multiple IP packets; these IP packets are transmitted over the fixed network and/or core network, and then the IP data packets are transmitted to the UE through the RAN.
  • IP: Internet protocol
  • Figure 3 is a schematic diagram of XR video transmission based on the SR method provided by an embodiment of this application.
  • Protocol data unit (PDU) session: an association between the terminal device and the DN, used to provide a PDU connection service.
  • QoS flow mechanism: the current standard stipulates that the QoS flow is the minimum granularity of QoS control, and each QoS flow has a corresponding QoS configuration.
  • AN mapping: after receiving downlink data, the AN determines the QoS flow and RB corresponding to the QFI, performs the QoS control corresponding to the QoS flow, and sends the data to the UE through the RB. Alternatively, after receiving uplink data, the AN determines the QoS flow corresponding to the QFI, performs the QoS control corresponding to the QoS flow, and sends the data to the UPF through the N3 interface corresponding to the QoS flow.
  • UE mapping: when the UE wants to send uplink data, it maps the data to the corresponding QoS flow according to QoS rules, and then sends the uplink data through the RB corresponding to that QoS flow.
  • QoS configuration (QoS profile): the SMF can provide the QoS configuration to the AN through the N2 interface, or it can be pre-configured in the AN.
  • the UE performs classification and marking of uplink user plane data services, that is, mapping uplink data to corresponding QoS flows according to QoS rules.
  • QoS rules can be explicitly provided to the UE (that is, explicitly configured through signaling during the PDU session establishment/modification process), pre-configured on the UE, or implicitly derived by the UE using the reflective QoS mechanism.
  • QoS rules have the following characteristics:
  • a QoS rule includes: QFI associated with the QoS flow, packet filter set (a filter list), and priority.
  • a PDU session must be configured with a default QoS rule, and the default QoS rule is associated with a QoS flow.
  • Upstream and downstream packet detection rules (packet detection rules, PDR): SMF provides PDR(s) to UPF through the N4 interface.
  • A group of pictures (GoP) consists of multiple types of video frames.
  • The first frame in a GoP is an I frame (intra frame), which can be followed by multiple P frames (predicted frames).
  • the I frame is an intra-frame reference frame.
  • A P frame is a predictively coded frame, usually with a small amount of data; it represents the data that differs from the previous frame.
  • When decoding, the difference defined by this frame must be superimposed on the previously cached picture to generate the image, so errors in a P frame have relatively little impact on video quality. Therefore, data packets can be scheduled according to the type of video frame to which they belong. For example, since I frames are more important than P frames, the data packets belonging to I frames have a higher scheduling priority, and the data packets belonging to P frames have a lower scheduling priority.
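A rough sketch of the frame-type-based scheduling just described (the numeric priority values and packet fields are illustrative assumptions, not part of any standard):

```python
# Illustrative sketch: schedule packets by the video frame type they belong to.
# The priority values are assumptions; the point is only that I > P.
FRAME_PRIORITY = {"I": 2, "P": 1}

def schedule_order(packets):
    """Return packets sorted so that I-frame packets are transmitted first."""
    return sorted(packets, key=lambda p: FRAME_PRIORITY[p["frame_type"]], reverse=True)

packets = [
    {"seq": 1, "frame_type": "P"},
    {"seq": 2, "frame_type": "I"},
    {"seq": 3, "frame_type": "P"},
]
ordered = schedule_order(packets)  # the I-frame packet (seq 2) is scheduled first
```

Since `sorted` is stable, packets of the same frame type keep their arrival order.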
  • Video quality evaluation indicators: current mainstream video quality evaluation indicators fall into two categories. The first category is objective indicators, such as PSNR and SSIM, which are values obtained by calculating the difference or correlation between pixels. The second category is subjective indicators, such as VMAF, which reflects the impact of different image distortions on the user's subjective experience, with scores ranging from 0 to 100: the higher the VMAF score, the less the image distortion and the better the user's subjective experience.
  • Bit structure of floating-point type data The bits of a floating-point number include the sign part, the exponent part and the fraction part.
  • Figure 6 is a schematic diagram of the bit structure of floating-point data, including a 1-bit sign part (0 represents a positive number, 1 represents a negative number), an 8-bit exponent part (the exponent ranges from -127 to +127), and a 23-bit fraction part (the minimum precision is 1/2^23).
  • The absolute value of the floating-point data can be calculated based on the following formula: |value| = 2^(exponent - 127) × (1 + fraction/2^23).
  • the low-order bits of the fractional part have a small impact on the absolute value of the coefficient, while the high-order bits of the sign part, the exponent part, and the fractional part have a greater impact on the absolute value.
  • The low-order bits and high-order bits of the fraction part involved in the embodiments of this application can be understood as follows: the fraction part consists of high-order bits and low-order bits, and all bits of the fraction part other than the high-order bits are low-order bits. For example, if the fraction part has 23 bits in total, the first bit is a high-order bit and the remaining 22 bits are low-order bits; or the first x bits are high-order bits and the remaining 23-x bits are low-order bits.
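The bit layout described above can be checked with a short sketch (Python's `struct` module is used here purely for illustration):

```python
import struct

def float32_fields(x):
    """Decompose a 32-bit float into its 1-bit sign, 8-bit exponent,
    and 23-bit fraction parts, per the layout of Figure 6."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF
    return sign, exponent, fraction

def float32_abs(sign, exponent, fraction):
    """Absolute value of a normalized float32 from its fields:
    2^(exponent - 127) * (1 + fraction / 2^23)."""
    return 2.0 ** (exponent - 127) * (1.0 + fraction / 2.0 ** 23)

s, e, f = float32_fields(-1.5)  # sign=1, biased exponent=127, fraction=2^22
# float32_abs(s, e, f) recovers 1.5
```

Flipping a low-order fraction bit changes the value by at most 2^(exponent-127)/2^23, which is why those bits matter least.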
  • This application provides a communication method: by carrying, in the GTP-U data packet, indication information indicating the priority of the neural network data packet, the access network device that receives the GTP-U data packet can transmit the neural network data packet according to the indication information, so as to achieve differentiated transmission of neural network data packets of different priorities.
  • The embodiments shown below do not specifically limit the structure of the execution body of the method provided by the embodiments of the present application; it suffices that the execution body can communicate according to the method provided by the embodiments of the present application by running a program that records the code of that method.
  • the execution subject of the method provided by the embodiment of the present application may be the core network device, or a functional module in the core network device that can call the program and execute the program.
  • The information indicated by the indication information is called the information to be indicated.
  • The information to be indicated can be indicated directly, for example by the information to be indicated itself or by an index of the information to be indicated.
  • The information to be indicated may also be indicated indirectly by indicating other information that has an association relationship with the information to be indicated. It is also possible to indicate only part of the information to be indicated, while the other parts are known or agreed in advance.
  • the indication of specific information can also be achieved by means of a pre-agreed (for example, protocol stipulated) arrangement order of each piece of information, thereby reducing the indication overhead to a certain extent.
  • The common parts of each piece of information can also be identified and indicated in a unified manner to reduce the indication overhead caused by indicating the same information individually.
  • preconfigured may include predefined, for example, protocol definitions.
  • pre-definition can be realized by pre-saving corresponding codes, tables or other methods that can be used to indicate relevant information in the device (for example, including each network element). This application does not limit its specific implementation method.
  • the “save” involved in the embodiments of this application may refer to saving in one or more memories.
  • the one or more memories may be provided separately, or may be integrated in an encoder or decoder, a processor, or a communication device.
  • the one or more memories may also be partially provided separately and partially integrated in the decoder, processor, or communication device.
  • the type of memory can be any form of storage medium, and this application is not limited thereto.
  • the "protocol” involved in the embodiments of this application may refer to standard protocols in the communication field, which may include, for example, 5G protocols, new radio (NR) protocols, and related protocols applied in future communication systems. There are no restrictions on this application.
  • Figure 7 is a schematic flow chart of a communication method provided by an embodiment of the present application. It can be understood that in Figure 7, the server, the core network device and the access network device are used as the execution subjects of the interactive representation as an example to illustrate the method, but this application does not limit the execution subjects of the interactive representation.
  • the server in Figure 7 can also be a chip, chip system, or processor that supports the server to implement the method, or can be a logic module or software that can realize all or part of the server functions;
  • The core network equipment in Figure 7 can also be a chip, chip system, or processor that supports the core network equipment in implementing this method, or a logic module or software that can realize all or part of the core network equipment functions;
  • The access network equipment in Figure 7 can also be a chip, chip system, or processor that supports the access network equipment in implementing this method, or a logic module or software that can realize all or part of the access network equipment functions.
  • the method includes the following steps:
  • S710: the server generates a neural network data packet.
  • The neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet.
  • the priority of the neural network data packet involved in the embodiment of the present application may refer to the scheduling (or transmission) priority of the neural network data packet.
  • The priority of the neural network data packet is used to determine whether to prioritize the scheduling of the neural network data packet; for example, when network congestion occurs, neural network data packets with higher priority can be transmitted to the user side in time.
  • the priority of the neural network data packet may refer to the processing priority of the neural network data packet, where the processing method includes but is not limited to any one of the following processing methods:
  • The priority of the neural network data packet is used to determine the order in which the physical layer on the user side submits neural network data packets to the application layer; neural network data packets with higher priority can be submitted to the application layer in time to restore the transmitted data.
  • When the server generates a neural network data packet, it can carry indication information indicating the priority of the neural network data packet in the packet, so that the core network device that receives the neural network data packet can read this indication information from the packet and learn the priority of the neural network data packet, in order to achieve differentiated transmission of neural network data packets of different priorities.
  • the indication information included in the neural network data packet may include the indication information in the header of the neural network data packet.
  • the server can add indication information to the packet at the transport layer or higher.
  • For example, the indication information can be added between the user datagram protocol (UDP) field and the real-time transport protocol (RTP) field.
  • FIG. 8 is a schematic diagram of a neural network data packet provided by an embodiment of the present application.
  • Figure 8 only illustrates that the indication information can be carried in the header of the neural network data packet, and does not constitute any limitation on the protection scope of the present application.
  • The indication information can also be added at other locations in the header of the neural network data packet, for example, between the IP field and the UDP field, or after the RTP field; examples will not be given one by one here.
  • the neural network data packet generated by the server corresponds to a certain neural network.
  • the server uses the ULR block as the input of the DNN, uses the original HD block as the target output of the DNN, uses the two as the training set of the DNN, and uses PSNR as the loss function for training.
  • The ULR block and the DNN need to be sent to the user together.
  • For example, the DNN data can be encoded into multiple IP packets, the multiple IP packets are transmitted to the core network, and the IP data packets are then transmitted to the UE through the radio access network.
  • the IP packet obtained by encoding the data of the neural network is called a neural network data packet.
  • the data of a neural network can be encoded into one or more neural network data packets.
  • the neural network data packet generated by the above-mentioned server may be one or more neural network data packets obtained by encoding data of a certain neural network.
  • the neural network data packet generated by the server is used to carry the parameter information of the neural network corresponding to the neural network data packet (for example, the coefficients of the neural network).
  • the neural network corresponding to the neural network data packet is used to process data (eg, restore data), where the data can be video frame data, audio frame data, or image data, or other types of data.
  • the embodiments of this application do not limit the type of data processed.
  • the data is video frame data or image data.
  • the neural network is used to restore low-resolution images to obtain high-resolution images.
  • The following uses the VMAF corresponding to different neural networks to represent the effect of different neural networks on data recovery, and the VMAF corresponding to the preset algorithm to represent the effect of the preset algorithm on data recovery, as an example to illustrate how the server determines the priority of the neural network data packet in this implementation:
  • Assume the server needs to determine the priorities of the neural network data packets obtained by encoding the data of 4 neural networks (e.g., neural network #1, neural network #2, neural network #3, and neural network #4), where the data of neural network #1 is encoded into neural network data packet #1, the data of neural network #2 is encoded into neural network data packet #2, the data of neural network #3 is encoded into neural network data packet #3, and the data of neural network #4 is encoded into neural network data packet #4.
  • The server can determine the priority of each of the four neural network data packets based on the VMAF corresponding to the four neural networks and the VMAF corresponding to the preset algorithm.
  • the server can use the video quality evaluation improvement information of a certain neural network compared with the traditional algorithm as a criterion for measuring the priority of the neural network.
  • Table 4 below gives an exemplary range evaluation mechanism based on the VMAF difference (the difference between the VMAF corresponding to the neural network and the VMAF corresponding to the preset algorithm) to measure the data encoding of different neural networks to obtain the priority of neural network data packets.
  • For example, if the VMAF score of neural network #1 is improved by less than 5 points compared with the VMAF score of the traditional algorithm, the priority of the neural network data packet obtained by encoding the data of neural network #1 is 1; if the VMAF score of neural network #2 is improved by [5 points, 10 points], the priority of the neural network data packet obtained by encoding the data of neural network #2 is 2; if the VMAF score of neural network #3 is improved by (10 points, 20 points), the priority of the neural network data packet obtained by encoding the data of neural network #3 is 3; and if the VMAF score of neural network #4 is improved by ≥20 points, the priority of the neural network data packet obtained by encoding the data of neural network #4 is 4.
  • Table 4 is only an example and does not constitute any limitation on the scope of protection of the present application.
  • An evaluation mechanism different from that shown in Table 4 can also be developed based on VMAF to measure the data encoding of different neural networks to obtain the priority of neural network data packets.
  • The above-mentioned VMAF of the neural network represents the effect of the neural network on recovering the data, and the VMAF corresponding to the preset algorithm represents the effect of the preset algorithm on recovering the data.
  • Other video quality evaluation indicators may be expressed similarly to the above-mentioned VMAF and will not be described again here.
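A minimal sketch of the Table 4 range mechanism (the sub-5-point range for priority 1 is an assumption implied by the other ranges):

```python
def packet_priority(vmaf_nn, vmaf_preset):
    """Map the VMAF improvement of a neural network over the preset
    algorithm to a neural network data packet priority (per Table 4)."""
    diff = vmaf_nn - vmaf_preset
    if diff < 5:
        return 1      # improvement below 5 points (assumed range)
    elif diff <= 10:
        return 2      # improvement in [5, 10]
    elif diff < 20:
        return 3      # improvement in (10, 20)
    else:
        return 4      # improvement >= 20

priorities = [packet_priority(v, 70) for v in (72, 77, 85, 95)]
# improvements of 2, 7, 15, 25 points -> priorities 1, 2, 3, 4
```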
  • The representation of the priority of neural network data packets shown in this implementation indicates that the priority of a neural network's data packets can be determined based on the effect of the neural network on recovering the data and the effect of the preset algorithm on recovering the data.
  • In this way, the neural network that recovers data better than the preset algorithm can be transmitted preferentially, so that the user can first use the more effective neural network to restore data, improving user experience.
  • the priority of the neural network data packet is related to the effect of reconstructing the neural network, and the neural network data packet is used to reconstruct the neural network corresponding to the neural network data packet.
  • the server can determine the priority of the neural network data packet based on the effect of reconstructing the neural network from the neural network data packet.
  • the priority of the neural network data packet is related to the degree of influence of the neural network coefficient data included in the neural network data packet on the neural network coefficient.
  • Example 1: the coefficients of the neural network are represented by 32-bit floating-point data (as shown in Figure 6). Based on the structural characteristics of floating-point data described above, the bits that have a large impact on the absolute value of the neural network's coefficients are regarded as high-priority data, and the bits that have a small impact on the absolute value of the coefficients are regarded as low-priority data.
  • For example, the sign bit, the exponent bits, and the high-order bits of the fraction part of the floating-point number corresponding to a neural network coefficient are placed in one neural network data packet, whose priority is set to "H"; the low-order bits of the fraction part of the floating-point number corresponding to the coefficient are placed in another neural network data packet, whose priority is set to "L", where "H" means high priority and "L" means low priority.
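A sketch of Example 1's packetization, assuming x = 1 high-order fraction bit; the split point and the packet dictionary format are illustrative assumptions:

```python
import struct

HIGH_FRACTION_BITS = 1  # assumed x: number of fraction bits treated as high-order

def split_coefficient(coeff):
    """Split one float32 coefficient into a high-priority part (sign +
    exponent + high-order fraction bits) and a low-priority part
    (the remaining low-order fraction bits)."""
    bits = struct.unpack(">I", struct.pack(">f", coeff))[0]
    low = 23 - HIGH_FRACTION_BITS
    return ({"priority": "H", "payload": bits >> low},
            {"priority": "L", "payload": bits & ((1 << low) - 1)})

def merge_coefficient(high_pkt, low_pkt):
    """Reassemble the coefficient once both packets have arrived."""
    low = 23 - HIGH_FRACTION_BITS
    bits = (high_pkt["payload"] << low) | low_pkt["payload"]
    return struct.unpack(">f", struct.pack(">I", bits))[0]

h, l = split_coefficient(0.15625)
# h carries the bits that dominate the coefficient's absolute value
```

If the "L" packet is lost, the receiver can still reconstruct an approximation by substituting zeros for the low-order bits.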
  • S720 The server sends the neural network data packet to the core network device, or the core network device receives the neural network data packet from the server.
  • the core network device receiving the neural network data packet from the server is only a way for the core network device to obtain the neural network data packet.
  • core network equipment can obtain neural network data packets in the following ways:
  • the server directly sends the neural network data packet to the core network device.
  • the server indirectly sends the neural network data packet to the core network device through other devices.
  • the core network equipment and the server can be connected through a fixed network.
  • Method 2: the core network device obtains the neural network data packet generated by the server from the memory (or an internal interface).
  • In addition to the above methods, the core network device can also obtain the neural network data packet through other methods.
  • the neural network data packet is generated according to the received parameter information of the neural network, which will not be described again here.
  • S730: the core network device generates a GTP-U data packet.
  • the payload of the GTP-U data packet includes a neural network data packet, and the header of the GTP-U data packet includes indication information.
  • the indication information is used to indicate the priority of the neural network data packet.
  • After the core network device receives the neural network data packet carrying the indication information indicating the priority of the neural network data packet, it can read the priority of the neural network data packet from the packet.
  • The indication information is encapsulated into the GTP-U data packet header according to the GTP-U protocol, and the neural network data packet is used as the payload of the GTP-U data packet, so that the access network device that receives the GTP-U data packet can read the indication information from the header of the GTP-U data packet and transmit and process the neural network data packet based on the priority indicated by the indication information. It can be understood that the access network device takes the priority of the neural network data packet into consideration when transmitting it, so as to achieve differentiated transmission of neural network data packets of different priorities.
  • Specifically, after the core network device receives the neural network data packet, it can read the indication information #2 used to indicate the priority of the neural network data packet from the packet, and then encapsulate indication information #2 into the GTP-U data packet header according to the GTP-U protocol as indication information #1, so that the access network device can read it.
  • It should be noted that the indication information #2 encapsulated according to the GTP-U protocol may still be called indication information #2, or may be called indication information #1 for ease of distinction. That is to say, in this embodiment, the specific forms of indication information #1 and indication information #2 are not limited, as long as they can be used to indicate the priority of the neural network data packet; for ease of description, they can be collectively called indication information.
  • a new field can be added to the GTP-U data packet header to indicate the priority of the neural network data packet.
  • FIG. 10 is a schematic diagram of a GTP-U data packet header provided by an embodiment of the present application.
  • a new byte can be added to the GTP-U data packet to add indication information.
  • Figure 10 only illustrates that the indication information can be carried in the header of the GTP-U data packet, and does not constitute any limitation on the protection scope of the present application.
  • The indication information can also be added at other positions in the header of the GTP-U data packet, for example, in the fourth bit of the GTP-U packet; examples will not be given one by one here.
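As an illustration of carrying the priority in an added byte, the following sketch builds a much-simplified GTP-U-style packet; the field layout is a toy assumption for illustration, not the 3GPP-defined header format:

```python
import struct

def build_gtpu_packet(teid, priority, nn_payload):
    """Toy GTP-U-style packet: an 8-byte header (flags, message type,
    length, TEID) followed by one added priority byte and the neural
    network data packet as payload. Flags 0x30 and message type 0xFF
    (G-PDU) are shown only to mimic the general header shape."""
    body = bytes([priority]) + nn_payload
    header = struct.pack(">BBHI", 0x30, 0xFF, len(body), teid)
    return header + body

def read_priority(packet):
    """The receiver reads the priority byte right after the 8-byte header."""
    return packet[8]

pkt = build_gtpu_packet(teid=0x1234, priority=3, nn_payload=b"\x01\x02")
```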
  • Optionally, the method flow shown in Figure 7 also includes:
  • S740: the core network device sends the GTP-U data packet to the access network device, or the access network device receives the GTP-U data packet from the core network device.
  • S750: the access network device transmits the neural network data packet according to the indication information.
  • The header of the GTP-U data packet received by the access network device includes the indication information indicating the priority of the neural network data packet, so that the access network device can read the indication information from the header of the GTP-U data packet and transmit and process the neural network data packet based on the priority indicated by the indication information. It can be understood that the access network device takes the priority of the neural network data packet into consideration when transmitting it, so as to achieve differentiated transmission of neural network data packets of different priorities.
  • the access network device receives two GTP-U data packets (GTP-U data packet #1 and GTP-U data packet #2).
  • the payload of GTP-U data packet #1 includes a neural network data packet.
  • the header of GTP-U packet #1 includes indication information #1
  • the payload of GTP-U packet #2 includes neural network packet #2
  • the header of GTP-U packet #2 includes indication information #2.
  • The access network device differentially transmits neural network data packet #1 and neural network data packet #2 according to indication information #1 and indication information #2.
  • For example, indication information #1 indicates that the priority of neural network data packet #1 is high priority, and indication information #2 indicates that the priority of neural network data packet #2 is low priority.
  • Then the access network device can transmit neural network data packet #1 first, and transmit neural network data packet #2 after the transmission of neural network data packet #1 is completed.
  • the access network device preferentially transmits the neural network data packet; or,
  • the access network device delays transmission of the neural network data packet; or,
  • The access network device can calculate the corresponding transmission priority based on the neural network data packet priority information (such as the above-mentioned indication information) and other related parameters of air interface scheduling (including but not limited to historical rate, instantaneous rate, user level, etc.).
  • factor1 represents the proportional fair scheduling priority, for example factor1 = R_i / R̄_i.
  • R_i represents the user's instantaneous rate (the better the user's channel condition, the higher the instantaneous rate), and R̄_i represents the user's historical rate, that is, the average rate of the channel within a period of time.
  • factor2 represents the scheduling priority of the neural network data packet, for example factor2 = f(N).
  • N represents the priority of the neural network data packet, and f can be an increasing linear or exponential function.
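Combining the two factors, a hedged sketch of the transmission priority calculation (the product form and the linear slope `alpha` are assumptions for illustration):

```python
def transmission_priority(r_inst, r_hist, n, alpha=1.0):
    """Air-interface scheduling priority = factor1 * factor2, where
    factor1 = R_i / R_i_avg is the proportional-fair term and
    factor2 = f(N) with an assumed increasing linear f(N) = alpha * N."""
    factor1 = r_inst / r_hist   # good instantaneous channel -> higher priority
    factor2 = alpha * n         # higher-priority NN packet -> higher priority
    return factor1 * factor2

# A user with a good instantaneous channel sending a high-priority NN packet
# is scheduled ahead of the same user sending a low-priority packet.
p_high = transmission_priority(r_inst=20.0, r_hist=10.0, n=4)
p_low = transmission_priority(r_inst=20.0, r_hist=10.0, n=1)
```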
  • the embodiment shown in Figure 7 records that determining the priority of neural network data packets is implemented by the server.
  • Another possibility is that the above-mentioned action of determining the priority of neural network data packets can be implemented by the core network device.
  • the method is similar to the method determined by the server, except that the execution subject is the core network device.
  • Network elements in the existing network architecture (such as AF, AMF, SMF, etc.) are mainly used as examples for illustration. It should be understood that the specific form of the network element is not limited by the embodiments of this application; for example, network elements that can implement the same functions in the future are also applicable to the embodiments of this application.
  • the methods and operations implemented by devices can also be implemented by components (such as chips or circuits) that can be used in network equipment.
  • Figure 11 is a schematic block diagram of a communication device provided by an embodiment of the present application.
  • the device 1100 may include an interface unit 1110 and a processing unit 1120.
  • the interface unit 1110 can communicate with the outside, and the processing unit 1120 is used for data processing.
  • the interface unit 1110 may also be called a communication interface, communication unit or transceiver unit.
  • The apparatus 1100 may implement steps or processes corresponding to those executed by the access network equipment in the method embodiments of the present application, and the apparatus 1100 may include units for executing the methods executed by the access network equipment in the method embodiments. Moreover, each unit in the apparatus 1100 and the above-mentioned other operations and/or functions are respectively intended to implement the corresponding processes executed by the access network equipment in the method embodiments.
  • The interface unit 1110 can be used to perform the sending and receiving steps in the method, such as step S740; the processing unit 1120 can be used to perform the processing steps in the method, such as step S750.
  • the device 1100 is used to perform the actions performed by the core network equipment in the above method embodiment.
  • the interface unit 1110 is used to obtain a neural network data packet.
  • the neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet;
  • The apparatus 1100 may implement steps or processes corresponding to those executed by the core network equipment in the method embodiments of the present application, and the apparatus 1100 may include units for executing the methods executed by the core network equipment in the method embodiments.
  • Moreover, each unit in the apparatus 1100 and the above-mentioned other operations and/or functions are respectively intended to implement the corresponding processes executed by the core network equipment in the method embodiments.
  • the processing unit 1120 is configured to generate a neural network data packet.
  • the neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet;
  • the interface unit 1110 is used to send the neural network data packet.
  • the device 1100 may implement steps or processes corresponding to those executed by the server in the method embodiments of the embodiments of the present application, and the device 1100 may include a unit for executing the method executed by the server in the method embodiments. Moreover, each unit in the device 1100 and the above-mentioned other operations and/or functions are respectively intended to implement the corresponding processes of the method embodiment in the server in the method embodiment.
  • The interface unit 1110 can be used to perform the sending and receiving steps in the method, such as step S720; the processing unit 1120 can be used to perform the processing steps in the method, such as step S710.
  • the processing unit 1120 in the above embodiments may be implemented by at least one processor or processor-related circuit.
  • the interface unit 1110 may be implemented by a transceiver or transceiver related circuitry.
  • the storage unit may be implemented by at least one memory.
  • the memory 1220 can be integrated with the processor 1210 or provided separately.
  • the device 1200 may also include a transceiver 1230, which is used for receiving and/or transmitting signals.
  • the processor 1210 is used to control the transceiver 1230 to receive and/or transmit signals.
  • the device 1200 is used to implement the operations performed by the transceiver equipment (such as server, core network equipment, and access network equipment) in the above method embodiment.
  • Embodiments of the present application also provide a computer-readable storage medium on which are stored computer instructions for implementing the method executed by the transceiver device (such as a server, a core network device, and an access network device) in the above method embodiment.
  • the computer program when executed by a computer, the computer can implement the method executed by the transceiver device (such as a server, core network device, and access network device) in the above method embodiment.
  • The processors mentioned in the embodiments of this application may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • RAM may include the following forms: static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM) , double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchronous link dynamic random access memory (synchlink DRAM, SLDRAM) and Direct memory bus random access memory (direct rambus RAM, DR RAM).
  • the processor may be a general-purpose processor, DSP, ASIC, FPGA or other programmable logic device, discrete gate or transistor logic device, or discrete hardware component.
  • the memories described herein are intended to include, but are not limited to, these and any other suitable types of memory.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the coupling, direct coupling, or communication connection between components shown or discussed may be implemented through some interfaces; the indirect coupling or communication connection between devices or units may be in electrical, mechanical, or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units can be selected according to actual needs to implement the solution provided by this application.
  • each functional unit in each embodiment of the present application can be integrated into one unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer may be a personal computer, a server, or a network device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
  • the computer-readable storage medium may be any available medium accessible by a computer, or a data storage device, such as a server or data center, that integrates one or more available media.
  • the available media may be magnetic media (such as floppy disks, hard disks, magnetic tapes), optical media (such as DVDs), or semiconductor media (such as solid state disks (SSD)), etc.
  • the aforementioned available media may include, but are not limited to: a USB flash drive, a removable hard disk, read-only memory (ROM), random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Embodiments of the present application relate to a data transmission method and apparatus. The method comprises: receiving a general packet radio service tunnelling protocol-user plane (GTP-U) data packet, a payload of the GTP-U data packet comprising a neural network data packet, and a packet header comprising indication information for indicating the priority of the neural network data packet; and transmitting the neural network data packet according to the indication information. By carrying, in a GTP-U data packet, indication information used to indicate the priorities of neural network data packets, an access network device that receives the GTP-U data packet can transmit the neural network data packets according to the indication information, so as to achieve differentiated transmission of neural network data packets having different priorities.
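The mechanism in the abstract can be illustrated with a short sketch: a GTP-U G-PDU whose header carries a priority for the neural network data packet in its payload. The extension-header type `0x85` and the one-byte priority encoding below are hypothetical choices made only for illustration; the abstract does not fix a concrete encoding. Only the outer framing (flags, message type, length, TEID, optional fields) follows the standard GTP-U layout.

```python
import struct

GTPU_GPDU = 0xFF          # G-PDU message type: the packet carries a user payload
EXT_NN_PRIORITY = 0x85    # hypothetical extension-header type for the priority field

def build_gtpu(teid: int, priority: int, payload: bytes) -> bytes:
    """Build a GTP-U G-PDU whose header carries a priority extension header."""
    # Extension header: length in 4-byte units, priority, spare byte, next-ext = 0 (none)
    ext = struct.pack("!BBBB", 1, priority, 0, 0)
    # With the E flag set, sequence (2B) + N-PDU number (1B) + next-ext type (1B) precede it
    opt = struct.pack("!HBB", 0, 0, EXT_NN_PRIORITY)
    body = opt + ext + payload
    flags = 0x34          # version 1, PT = 1 (GTP), E = 1 (extension header present)
    return struct.pack("!BBHI", flags, GTPU_GPDU, len(body), teid) + body

def parse_priority(packet: bytes):
    """Return (teid, priority, payload); priority is None if no extension is present."""
    flags, _msg_type, _length, teid = struct.unpack_from("!BBHI", packet, 0)
    offset, priority = 8, None
    if flags & 0x07:      # any of E/S/PN set: the 4 optional bytes are present
        next_ext = packet[offset + 3]
        offset += 4
        while next_ext != 0:
            ext_len = packet[offset] * 4          # extension length is in 4-byte units
            if next_ext == EXT_NN_PRIORITY:
                priority = packet[offset + 1]     # recover the indicated priority
            next_ext = packet[offset + ext_len - 1]
            offset += ext_len
    return teid, priority, packet[offset:]
```

An access network device receiving such a packet could then map the recovered priority to a transmission queue, giving differentiated treatment to neural network data packets of different priorities.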
PCT/CN2023/081482 2022-04-08 2023-03-15 Data transmission method and apparatus WO2023193579A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210365108.5 2022-04-08
CN202210365108.5A CN116939702A (zh) 2022-04-08 2022-04-08 Data transmission method and apparatus

Publications (1)

Publication Number Publication Date
WO2023193579A1 true WO2023193579A1 (fr) 2023-10-12

Family

ID=88243974

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/081482 WO2023193579A1 (fr) 2022-04-08 2023-03-15 Data transmission method and apparatus

Country Status (2)

Country Link
CN (1) CN116939702A (fr)
WO (1) WO2023193579A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104125607A (zh) * 2013-04-23 2014-10-29 ZTE Corporation User plane congestion processing method, apparatus, and serving gateway
US20150358483A1 (en) * 2013-01-18 2015-12-10 Samsung Electronics Co., Ltd. Method and apparatus for adjusting service level in congestion
CN110740481A (zh) * 2018-07-18 2020-01-31 China Mobile Communications Research Institute Quality-of-service-based data processing method, device, and computer storage medium
WO2022042528A1 (fr) * 2020-08-24 2022-03-03 Huawei Technologies Co., Ltd. Intelligent radio access network


Also Published As

Publication number Publication date
CN116939702A (zh) 2023-10-24

Similar Documents

Publication Publication Date Title
WO2021259112A1 (fr) Service transmission apparatus and method
CN112423340B (zh) User plane information reporting method and apparatus
WO2022088833A1 (fr) Media stream data packet transmission method and communication apparatus
US20230164631A1 (en) Communication method and apparatus
WO2021227781A1 (fr) Data frame transmission method and communication apparatus
US20230188472A1 (en) Data transmission method and apparatus
US20230354334A1 (en) Communication method and apparatus
WO2023088009A1 (fr) Data transmission method and communication apparatus
WO2023046118A1 (fr) Communication method and apparatus
WO2023193579A1 (fr) Data transmission method and apparatus
WO2022198613A1 (fr) Media data transmission method and communication apparatus
WO2022151492A1 (fr) Scheduled transmission method and apparatus
WO2022017403A1 (fr) Communication method and apparatus
WO2018170863A1 (fr) Beam interference avoidance method and base station
WO2023185608A1 (fr) Data transmission method and communication apparatus
WO2023185769A1 (fr) Communication method, communication apparatus, and communication system
WO2023179322A1 (fr) Communication method and apparatus
WO2023185402A1 (fr) Communication method and apparatus
WO2023185598A1 (fr) Communication method and apparatus
US20240031298A1 (en) Communication method and device
WO2023045714A1 (fr) Scheduling method and communication apparatus
WO2023193571A1 (fr) Communication method and communication apparatus
WO2023093559A1 (fr) Data transmission method and apparatus
WO2023088155A1 (fr) Quality of service (QoS) management method and apparatus
WO2024032211A1 (fr) Congestion control method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23784143

Country of ref document: EP

Kind code of ref document: A1