WO2023193579A1 - Data transmission method and apparatus


Info

Publication number
WO2023193579A1
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
data packet
network data
priority
indication information
Prior art date
Application number
PCT/CN2023/081482
Other languages
French (fr)
Chinese (zh)
Inventor
曹佑龙
秦熠
陈二凯
徐瑞
陈伟超
Original Assignee
Huawei Technologies Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2023193579A1 publication Critical patent/WO2023193579A1/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W28/00: Network traffic management; Network resource management
    • H04W28/02: Traffic management, e.g. flow control or congestion control
    • H04W28/0268: Traffic management using specific QoS parameters for wireless networks, e.g. QoS class identifier [QCI] or guaranteed bit rate [GBR]
    • H04W28/06: Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • H04W28/065: Optimizing the usage of the radio link using assembly or disassembly of packets
    • H04W28/16: Central resource management; Negotiation of resources or communication parameters, e.g. negotiating bandwidth or QoS [Quality of Service]
    • H04W28/24: Negotiating SLA [Service Level Agreement]; Negotiating QoS [Quality of Service]
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks

Definitions

  • the embodiments of the present application relate to the field of communications, and more specifically, to a data transmission method and device.
  • XR extended reality
  • VR virtual reality
  • AR augmented reality
  • MR mixed reality
  • SR super resolution
  • SR technology based on neural network (NN) has received widespread attention because of its remarkable image restoration effect.
  • Users convert low-resolution images into high-resolution images based on neural networks, so how to transmit neural network data packets has become a key problem.
  • Embodiments of the present application provide a communication method in order to realize differentiated transmission of neural network data packets of different priorities and improve user experience.
  • The first aspect provides a communication method, which can be executed by an access network device, or by a chip or circuit configured in the access network device, or by a logic module or software that can realize all or part of the access network device functions; this application does not limit this. For convenience of description, the following description takes execution by the access network device as an example.
  • The method includes: receiving a general packet radio service tunneling protocol-user plane (GTP-U) data packet, where the payload of the GTP-U data packet includes a neural network data packet and the header of the GTP-U data packet includes indication information, the indication information being used to indicate the priority of the neural network data packet; and transmitting the neural network data packet according to the indication information.
  • GTP-U general packet radio service tunneling protocol-user plane
  • The header of the GTP-U data packet received by the access network device includes indication information indicating the priority of the neural network data packet, so that the access network device can read the indication information from the header of the GTP-U data packet and transmit the neural network data packet based on the priority indicated by the indication information. In other words, the access network device takes the priority of neural network data packets into consideration when transmitting them, which facilitates differentiated transmission of neural network data packets of different priorities.
  • the access network device receives two GTP-U data packets (GTP-U data packet #1 and GTP-U data packet #2).
  • The payload of GTP-U data packet #1 includes neural network data packet #1.
  • the header of GTP-U packet #1 includes indication information #1
  • the payload of GTP-U packet #2 includes neural network packet #2
  • the header of GTP-U packet #2 includes indication information #2.
  • The access network device differentially transmits neural network data packet #1 and neural network data packet #2 according to indication information #1 and indication information #2.
  • Indication information #1 indicates that the priority of neural network data packet #1 is high priority.
  • indication information #2 indicates that the priority of neural network data packet #2 is low priority.
  • The access network device can transmit neural network data packet #1 first, and then transmit neural network data packet #2 after the transmission of neural network data packet #1 is completed.
  • Transmitting the neural network data packet according to the indication information includes: when the indication information indicates that the neural network data packet is of high priority, transmitting the neural network data packet preferentially; or, when the indication information indicates that the neural network data packet is of low priority, delaying the transmission of the neural network data packet; or, when the indication information indicates that the neural network data packet is of low priority and the network status is congested, giving up the transmission of the neural network data packet.
  • the access network equipment can determine the transmission mode of different neural network data packets (such as priority transmission or abandonment of transmission, etc.) according to the priority of the neural network data packet indicated by the indication information, thereby improving the flexibility of the solution.
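  • As an illustration only, the following minimal Python sketch shows how an access network device might branch on the priority carried by the indication information; the function name, the 'H'/'L' encoding and the congestion flag are assumptions for illustration, not part of the claimed method.

```python
from collections import deque

# Hypothetical queues: packets to send now vs. packets whose transmission is deferred.
transmit_queue = deque()
deferred_queue = deque()

def handle_nn_packet(nn_packet: bytes, indication: str, network_congested: bool) -> None:
    """Branch on the priority indicated for a neural network data packet.

    indication: 'H' for high priority, 'L' for low priority (assumed encoding).
    """
    if indication == "H":
        # High priority: transmit the neural network data packet first.
        transmit_queue.appendleft(nn_packet)
    elif indication == "L" and network_congested:
        # Low priority while the network is congested: give up transmitting the packet.
        return
    else:
        # Low priority otherwise: delay transmission.
        deferred_queue.append(nn_packet)
```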
  • The second aspect provides a communication method, which can be executed by a core network device, or by a chip or circuit configured in the core network device, or by a logic module or software that can realize all or part of the core network device functions; this application does not limit this. For convenience of description, the following description takes execution by the core network device as an example.
  • The method includes: obtaining a neural network data packet, the neural network data packet including indication information, the indication information being used to indicate the priority of the neural network data packet; and sending a general packet radio service tunneling protocol-user plane (GTP-U) data packet to the access network device, where the payload of the GTP-U data packet includes the neural network data packet and the header of the GTP-U data packet includes the indication information.
  • After the core network device receives the neural network data packet carrying the indication information indicating the priority of the neural network data packet, it can read the indication information from the neural network data packet, encapsulate the indication information into the GTP-U data packet header according to the GTP-U protocol, and use the neural network data packet as the payload of the GTP-U data packet. The access network device that receives the GTP-U data packet can then read the indication information from the header of the GTP-U data packet and transmit the neural network data packet based on the priority indicated by the indication information. In other words, the access network device takes the priority of the neural network data packet into consideration when transmitting it, so as to achieve differentiated transmission of neural network data packets of different priorities.
  • The third aspect provides a communication method, which can be executed by a server, or by a chip or circuit configured in the server, or by a logic module or software that can realize all or part of the server functions; this application does not limit this. For convenience of description, the following description takes execution by the server as an example.
  • the method includes: generating a neural network data packet, the neural network data packet including indication information, the indication information being used to indicate the priority of the neural network data packet; and sending the neural network data packet.
  • When the server generates a neural network data packet, it can carry indication information indicating the priority of the neural network data packet in the neural network data packet, so that the core network device that receives the neural network data packet can read the indication information from the neural network data packet and learn the priority of the neural network data packet, in order to achieve differentiated transmission of neural network data packets of different priorities.
  • a fourth aspect provides a communication device, which is used to perform the method provided in the first aspect.
  • the device may be access network equipment, or may be a component of access network equipment (such as a processor, chip, or chip system, etc.), or may be a logic module or software that can realize all or part of the functions of the access network equipment,
  • the device includes:
  • the interface unit is configured to receive a General Packet Radio Service Tunneling Protocol user plane GTP-U data packet.
  • the payload of the GTP-U data packet includes a neural network data packet.
  • the header of the GTP-U data packet includes indication information.
  • The indication information is used to indicate the priority of the neural network data packet; the processing unit is used to control the device to transmit the neural network data packet according to the indication information.
  • The processing unit controlling the device to transmit the neural network data packet according to the indication information includes: when the indication information indicates that the neural network data packet is of high priority, the processing unit controls the device to transmit the neural network data packet preferentially; or, when the indication information indicates that the neural network data packet is of low priority, the processing unit controls the device to delay transmission of the neural network data packet; or, when the indication information indicates that the neural network data packet is of low priority and the network status is congested, the processing unit controls the device to give up transmitting the neural network data packet.
  • the communication device may include units and/or modules for executing the method provided by any implementation of the first aspect, such as a processing unit and an interface unit.
  • the interface unit may be a transceiver, or an input/output interface; the processing unit may be at least one processor.
  • the transceiver may be a transceiver circuit.
  • the input/output interface may be an input/output circuit.
  • the interface unit may be an input/output interface, interface circuit, output circuit, input circuit, pin or related circuit on the chip, chip system or circuit, etc.;
  • the processing unit may be at least one processor, processing circuit or logic circuit, etc.
  • For the beneficial effects of the device shown in the above fourth aspect and its possible designs, reference may be made to the beneficial effects of the first aspect and its possible designs.
  • A fifth aspect provides a communication device, which is used to perform the method provided in the second aspect.
  • the device can be a core network device, or a component of the core network device (such as a processor, a chip, or a chip system, etc.), or a logic module or software that can realize all or part of the functions of the core network device.
  • the device includes:
  • the interface unit is used to obtain a neural network data packet, the neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet;
  • the communication device may include units and/or modules for executing the method provided by any implementation of the second aspect, such as a processing unit and an interface unit.
  • the interface unit may be a transceiver, or an input/output interface; the processing unit may be at least one processor.
  • the transceiver may be a transceiver circuit.
  • the input/output interface may be an input/output circuit.
  • the interface unit may be an input/output interface, interface circuit, output circuit, input circuit, pin or related circuit on the chip, chip system or circuit, etc.; the processing unit may be at least one processor, processing circuit or logic circuit, etc.
  • A sixth aspect provides a communication device, which is used to perform the method provided in the third aspect.
  • the device can be a server, or a component of the server (such as a processor, a chip, or a chip system, etc.), or a logic module or software that can realize all or part of the server functions.
  • the device includes:
  • the processing unit is used to generate a neural network data packet.
  • the neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet.
  • the interface unit is used to send the neural network data packet.
  • the communication device may include units and/or modules for executing the method provided by any implementation of the third aspect, such as a processing unit and an interface unit.
  • the interface unit may be a transceiver, or an input/output interface; the processing unit may be at least one processor.
  • the transceiver may be a transceiver circuit.
  • the input/output interface may be an input/output circuit.
  • the interface unit may be an input/output interface, interface circuit, output circuit, input circuit, pin or related circuit on the chip, chip system or circuit, etc.;
  • the processing unit may be at least one processor, processing circuit or logic circuit, etc.
  • The priority of the neural network data packet is related to how well the neural network corresponding to the neural network data packet recovers data.
  • the effect of neural network on data recovery can be indicated by any of the following indicators:
  • peak signal-to-noise ratio (PSNR)
  • structural similarity (SSIM)
  • video multimethod assessment fusion (VMAF)
  • FIG. 1 shows a schematic architectural diagram of a communication system 100a applicable to the embodiment of the present application.
  • the network architecture may include but is not limited to the following network elements (also known as functional network elements, functional entities, nodes, devices, etc.):
  • Data network (DN): provides operator services, Internet access or third-party services, and includes servers; video source encoding and rendering are implemented on the server side.
  • Terminal device: after the terminal device is connected to the network, it can establish a protocol data unit (PDU) session and access the DN through the PDU session, and interact with application function network elements (for example, application servers) deployed in the DN. As shown in (a) in Figure 1, depending on the DN that the user accesses, the network can select the UPF accessing that DN as the PDU session anchor (PSA) according to the network policy, and the application function network element is accessed through the N6 interface of the PSA.
  • PDU protocol data unit
  • PSA PDU session anchor
  • Session management network element: mainly used for session management, Internet Protocol (IP) address allocation and management for terminal devices, selection of manageable user plane function endpoints for terminal devices, policy control and charging function interfaces, downlink data notifications, and so on.
  • IP Internet Protocol
  • the policy control network element may be a policy and charging rules function (PCRF) network element. As shown in (a) in Figure 1, the policy control network element may be a PCF network element. In future communication systems, the policy control network element can still be a PCF network element, or it can also have other names, which is not limited in this application.
  • PCF policy control function
  • The application function network element may be an application function (AF) network element.
  • the application function network element can still be an AF network element, or it can also have other names, which is not limited in this application.
  • Network exposure function network element: used to provide customized functions for network exposure.
  • the network exposure function network element can be a network exposure function (NEF) network element.
  • NEF network exposure function
  • the network exposure function network element can still be an NEF network element.
  • Server Can provide application service data. For example, video data, audio data, or other types of data can be provided.
  • the data types of application services provided by the server are only used as examples in this application and are not limited.
  • the AF network element may be abbreviated as AF
  • the NEF network element may be abbreviated as NEF
  • The AMF network element may be abbreviated as AMF. That is, the AF described later in this application can be replaced by the application function network element, the NEF can be replaced by the network exposure function network element, and the AMF can be replaced by the access and mobility management network element.
  • N2 The interface between AMF and RAN, which can be used to transmit wireless bearer control information from the core network side to the RAN.
  • N5 The interface between AF and PCF, which can be used to issue application service requests and report network events.
  • N6 The interface between UPF and DN, used to transmit uplink and downlink user data flows between UPF and DN.
  • N11 The interface between SMF and AMF can be used to transfer PDU session tunnel information between RAN and UPF, transfer control messages sent to the terminal, transfer radio resource control information sent to RAN, etc.
  • service-oriented interfaces can be used between certain network elements in the system, which will not be described again here.
  • FIG. 1 shows a schematic architectural diagram of a communication system 100b applicable to the embodiment of the present application.
  • the architecture is a terminal-network-terminal architecture scenario.
  • This scenario can be a tactile Internet (TI).
  • TI tactile Internet
  • One terminal interfaces with the tactile user and the artificial system in the main domain, and the other end is a remote-controlled robot or remote operator in the controlled domain; the transport network (core network and access network) may be LTE, 5G, or a next-generation air interface such as 6G.
  • the main domain receives audio/video feedback signals from the controlled domain.
  • the main domain and the controlled domain are connected through two-way communication links on the network domain with the help of various commands and feedback signals, thus forming a global control loop.
  • the network architecture may include but is not limited to the following network elements (also known as functional network elements, functional entities, nodes, devices, etc.):
  • UE#1 It can interface between the main domain tactile user and the artificial system, and receive video, audio and other data from the controlled domain. It can include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices such as head-mounted glasses, computing devices or other processing devices connected to wireless modems, as well as various forms of terminals, mobile stations (MS) , user equipment, soft terminals, etc., such as video playback equipment, holographic projectors, etc. The embodiments of the present application are not limited to this.
  • UPF used for packet routing and forwarding and QoS processing of user plane data. Refer to the description of user plane network elements in (a) of Figure 1, which will not be described again here.
  • AN#2 Used to provide network access functions for authorized terminal equipment (such as UE#2) in a specific area, and can use transmission tunnels of different qualities according to the level of the terminal equipment, business requirements, etc.
  • UE#2 It is a remote control robot or remote operator in the controlled domain. Video, audio and other data can be sent to the main domain. It can include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices such as head-mounted glasses, computing devices or other processing devices connected to wireless modems, as well as various forms of terminals, mobile stations (MS) , user equipment, soft terminals, etc., such as video playback equipment, holographic projectors, etc. The embodiments of the present application are not limited to this.
  • FIG. 1 shows a schematic architectural diagram of a communication system 100c applicable to the embodiment of the present application.
  • the architecture is a WiFi scenario.
  • the cloud server transmits XR media data or ordinary video to the terminal (XR device) through the fixed network, WiFi router/AP/set-top box.
  • the network architecture may include but is not limited to the following network elements (also known as functional network elements, functional entities, nodes, devices, etc.):
  • Server Can provide application service data. For example, video data, audio data, or other types of data can be provided.
  • the data types of application services provided by the server are only used as examples in this application and are not limited.
  • Fixed network A network that transmits signals through solid media such as metal wires or optical fiber lines.
  • application service data such as video data and audio data can be transmitted to the WiFi router/WiFi AP through the fixed network.
  • WiFi router/WiFi AP Can convert wired network signals and mobile network signals into wireless signals for reception by UEs with wireless communication capabilities.
  • UE: can include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to wireless modems, such as head-mounted glasses, video playback devices, holographic projectors, and so on.
  • the embodiments of the present application are not limited to this.
  • The network architectures to which the embodiments of the present application can be applied are only illustrative.
  • The network architecture applicable to the embodiments of the present application is not limited to this; any network architecture that can realize the functions of each of the above network elements is applicable to the embodiments of this application.
  • the AMF, SMF, UPF, PCF, NEF, etc. shown in (a) of Figure 1 can be understood as network elements used to implement different functions, and can, for example, be combined into network slices as needed.
  • These network elements can be independent devices, or they can be integrated into the same device to implement different functions, or they can be network elements in hardware devices, software functions running on dedicated hardware, or virtualization functions instantiated on a platform (for example, a cloud platform); this application does not limit the specific form of the above network elements.
  • Extended reality is a general term for various reality-related technologies, including: VR, AR, MR, etc.
  • VR technology mainly refers to the rendering of visual and audio scenes to simulate as much as possible the visual and audio stimulation of the user in the real world.
  • VR technology usually requires the user to wear a head-mounted display (HMD) whose simulated visual component completely replaces the user's field of view, and to wear headphones that provide accompanying audio to the user.
  • HMD head-mounted display
  • AR technology mainly refers to providing additional visual or auditory information or artificially generated content in the real environment perceived by the user.
  • The user's acquisition of the real environment can be direct, that is, without intermediate sensing, processing, and rendering, or it can be indirect, that is, transmitted through sensors and other means, with further enhancement processing performed.
  • MR technology is an advanced form of AR.
  • One of its implementation methods is to insert some virtual elements into the physical scene, with the purpose of providing users with an immersive experience in which these elements are part of the real scene.
  • Super resolution refers to the technology of improving the resolution of the original image/video through hardware or software methods, obtaining high-resolution images from low-resolution images.
  • SR technology based on NN has received widespread attention because of its remarkable picture restoration effect.
  • The NN can be a deep neural network (DNN).
  • FIG. 2 is a principle block diagram of the DNN-based SR transmission mode and the traditional transmission mode provided by the embodiment of the present application.
  • the DNN-based SR transmission mode specifically includes the following four steps:
  • Step 1: On the server side, the high definition (HD) XR video frame is spatially divided into blocks (tiles) (or slices; for ease of description, collectively referred to as blocks below).
  • the entire video frame has a resolution of 4K (3840*1920) and can be divided into small blocks.
  • the resolution of a small block is (192*192).
  • The entire video can also be divided into segments in time; for example, every 1-2 seconds can form a segment, or each video frame can form a segment.
  • The purpose of step 1 is to amortize the processing load and speed up processing through parallel operations.
  • Step 2: On the server side, the HD block is downsampled (i.e., sampled in the spatial domain or frequency domain) to obtain a low definition (LD) block.
  • LD low definition
  • the resolution of a block is (192*192), and the resolution after downsampling is (24*24).
  • traditional video compression techniques such as high efficiency video coding (HEVC) technology can be used to further compress into ultra-low resolution (ULR) blocks.
  • HEVC high efficiency video coding
  • Step 3: On the server side, use the ULR block as the input of the DNN, use the original HD content as the target output of the DNN, use these two as the training set of the DNN, and use PSNR as the loss function for DNN training, so as to obtain a neural network adapted to the application-layer service.
  • the ULR block is then sent to the user together with the DNN.
  • the reason for transmitting the information of the neural network is that the receiving end does not know the original video, and the source end needs to generate the neural network based on the original video and transmit it to the receiving end.
  • Step 4: On the user side, the ULR block can be used as the input of the DNN, and the output is high-definition video content.
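  • Purely as an illustrative sketch of steps 1 and 2 (tiling and downsampling), assuming a frame held as a NumPy array; the tile size (192) and downsampling factor follow the numbers quoted above, and step 3 (DNN training) is only indicated by a comment.

```python
import numpy as np

def tile_frame(frame: np.ndarray, tile: int = 192):
    """Step 1: spatially divide an HD frame (e.g. 3840*1920) into tile x tile blocks."""
    h, w = frame.shape[:2]
    return [frame[r:r + tile, c:c + tile]
            for r in range(0, h, tile)
            for c in range(0, w, tile)]

def downsample_block(block: np.ndarray, factor: int = 8) -> np.ndarray:
    """Step 2: downsample an HD block (192*192) to an LD block (24*24) by decimation."""
    return block[::factor, ::factor]

# Example: a synthetic luma frame with the resolution quoted above, tiled and downsampled.
frame = np.random.randint(0, 256, size=(1920, 3840), dtype=np.uint8)
hd_blocks = tile_frame(frame)                         # 10 x 20 = 200 blocks of 192*192
ld_blocks = [downsample_block(b) for b in hd_blocks]  # 200 blocks of 24*24
# Step 3 (not shown): train a DNN with the compressed ULR blocks as input and the
# original HD blocks as target output, using a PSNR-based loss.
```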
  • In the current transmission architecture, SR technology divides a frame of XR video into dozens of Internet Protocol (IP) packets (for example, 50 IP packets) at the network transport layer, and the NN data is also encoded into multiple IP packets; these IP packets are then transmitted through the fixed network and/or core network, and the IP data packets are then transmitted to the UE through the RAN.
  • Figure 3 is a schematic diagram of XR video transmission based on the SR method provided by an embodiment of this application.
  • Protocol data unit (PDU) session It is an association between the terminal device and the DN, used to provide a PDU connection service.
  • QoS flow mechanism The current standard stipulates that QoS flow is the minimum QoS control granularity, and QoS flow has corresponding QoS configuration.
  • AN mapping After receiving the downlink data, AN determines the RB and QoS flow corresponding to the QFI. Then perform QoS control corresponding to the QoS flow, and send the data to the UE through the RB. Or, after receiving the uplink data, the AN determines the QoS flow corresponding to the QFI. Then perform QoS control corresponding to the QoS flow, and send the data to UPF through the N3 interface corresponding to the QoS flow.
  • UE mapping When the UE wants to send uplink data, it is mapped to the corresponding QoS flow according to QoS rules. The uplink data is then sent through the RB corresponding to the QoS flow.
  • QoS configuration (QoS profile): SMF can provide QoS configuration to AN through the N2 interface, or it can also be pre-configured in the AN.
  • the UE performs classification and marking of uplink user plane data services, that is, mapping uplink data to corresponding QoS flows according to QoS rules.
  • QoS rules can be explicitly provided to the UE (that is, explicitly configured to the UE through signaling during the PDU session establishment/modification process); or they can be pre-configured on the UE; or the UE can use the reflection QoS mechanism. Implicitly derived.
  • QoS rules have the following characteristics:
  • a QoS rule includes: QFI associated with the QoS flow, packet filter set (a filter list), and priority.
  • a PDU session must be configured with a default QoS rule, and the default QoS rule is associated with a QoS flow.
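  • For illustration only, a minimal sketch of the QoS rule contents listed above (QFI, packet filter set, priority); the field names and values are assumptions, not the 3GPP encoding.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class QosRule:
    qfi: int                   # QFI of the associated QoS flow
    packet_filters: List[str]  # packet filter set (a filter list), e.g. 5-tuple patterns
    precedence: int            # priority of the rule
    is_default: bool = False   # a PDU session must have one default QoS rule

# Example: a default rule plus a rule steering one UDP flow to QFI 5.
rules = [
    QosRule(qfi=1, packet_filters=["match-all"], precedence=255, is_default=True),
    QosRule(qfi=5, packet_filters=["udp dst 10.0.0.1:5004"], precedence=10),
]
```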
  • Upstream and downstream packet detection rules (packet detection rules, PDR): SMF provides PDR(s) to UPF through the N4 interface.
  • Group of picture consists of multiple types of video frames.
  • The first frame in the GoP is an I frame (intra frame), which can be followed by multiple P frames (predicted frames).
  • the I frame is an intra-frame reference frame.
  • P frame is a predictive coding frame, usually with a small amount of data. It is used to represent the data that is different from the previous frame.
  • When decoding a P frame, the difference defined by this frame is superimposed on the previously cached picture to generate the image, so errors in P frames have relatively little impact on video quality. Therefore, data packets can be scheduled according to the type of video frame to which they belong; for example, since I frames are more important than P frames, the data packets belonging to I frames have a higher scheduling priority, and the data packets belonging to P frames have a lower scheduling priority. A hypothetical sketch of this frame-type-based scheduling follows.
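  • As a hypothetical illustration only (the mapping values and packet representation are assumptions):

```python
# Assumed mapping: lower number = scheduled earlier; I frames outrank P frames.
FRAME_TYPE_PRIORITY = {"I": 0, "P": 1}

def schedule(packets):
    """Sort data packets so that packets belonging to I frames are sent first."""
    return sorted(packets, key=lambda p: FRAME_TYPE_PRIORITY[p["frame_type"]])

ordered = schedule([{"frame_type": "P", "seq": 2}, {"frame_type": "I", "seq": 1}])
```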
  • Video quality evaluation indicators: the current mainstream video quality evaluation indicators mainly fall into two categories. The first category is objective evaluation indicators, such as PSNR and SSIM, which are values obtained by calculating the difference or correlation between pixels. The second category is subjective evaluation indicators, such as VMAF, which reflects the impact of different image distortions on the user's subjective experience, with scores ranging from 0 to 100; the higher the VMAF score, the less image distortion and the better the user's subjective experience.
  • Bit structure of floating-point type data The bits of a floating-point number include the sign part, the exponent part and the fraction part.
  • Figure 6 is a schematic diagram of the bit structure of floating-point type data, including a 1-bit sign part (0 represents a positive number, 1 represents a negative number), an 8-bit exponent part (the exponent ranges from -127 to +127), and a 23-bit fraction part (the minimum precision is 1/(2^23)).
  • The absolute value of the floating-point type data can be calculated based on the following formula: |x| = (1 + fraction/2^23) × 2^(exponent - 127).
  • It can be seen that the low-order bits of the fraction part have a small impact on the absolute value of the coefficient, while the sign part, the exponent part, and the high-order bits of the fraction part have a greater impact on the absolute value.
  • The low-order bits and high-order bits of the fraction part involved in the embodiments of this application can be understood as follows: the fraction part consists of high-order bits and low-order bits, and all bits of the fraction part other than the high-order bits are low-order bits. For example, the fraction part has 23 bits in total; the first bit is a high-order bit and the remaining 22 bits are low-order bits, or the first x bits are high-order bits and the remaining 23-x bits are low-order bits.
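  • The following sketch, an assumption about how the bit fields could be inspected using Python's struct module, decomposes a 32-bit floating-point coefficient into the sign, exponent and fraction parts described above and recomputes its absolute value from them.

```python
import struct

def float32_fields(x: float):
    """Return (sign, exponent, fraction) bit fields of an IEEE 754 single-precision value."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31                  # 1 bit
    exponent = (bits >> 23) & 0xFF     # 8 bits
    fraction = bits & 0x7FFFFF         # 23 bits
    return sign, exponent, fraction

sign, exponent, fraction = float32_fields(-0.15625)
# |x| = (1 + fraction / 2**23) * 2**(exponent - 127) for normalized values
magnitude = (1 + fraction / 2 ** 23) * 2 ** (exponent - 127)   # 0.15625
```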
  • In view of this, this application provides a communication method that carries indication information indicating the priority of the neural network data packet in the GTP-U data packet, so that the access network device that receives the GTP-U data packet can transmit the neural network data packet according to the indication information, thereby achieving differentiated transmission of neural network data packets of different priorities.
  • The embodiments shown below do not specifically limit the specific structure of the execution body of the method provided by the embodiments of the present application, as long as it can communicate according to the method provided by the embodiments of the present application by running a program that records the code of the method.
  • the execution subject of the method provided by the embodiment of the present application may be the core network device, or a functional module in the core network device that can call the program and execute the program.
  • In the embodiments of this application, the information indicated by indication information is called the information to be indicated.
  • The information to be indicated can be indicated directly, for example by the information to be indicated itself or by an index of the information to be indicated.
  • The information to be indicated may also be indicated indirectly by indicating other information that has an association relationship with the information to be indicated. It is also possible to indicate only part of the information to be indicated, while the other parts of the information to be indicated are known or agreed in advance.
  • the indication of specific information can also be achieved by means of a pre-agreed (for example, protocol stipulated) arrangement order of each piece of information, thereby reducing the indication overhead to a certain extent.
  • The common parts of each piece of information can also be identified and indicated in a unified manner to reduce the indication overhead caused by indicating the same information individually.
  • preconfigured may include predefined, for example, protocol definitions.
  • pre-definition can be realized by pre-saving corresponding codes, tables or other methods that can be used to indicate relevant information in the device (for example, including each network element). This application does not limit its specific implementation method.
  • the “save” involved in the embodiments of this application may refer to saving in one or more memories.
  • the one or more memories may be provided separately, or may be integrated in an encoder or decoder, a processor, or a communication device.
  • the one or more memories may also be partially provided separately and partially integrated in the decoder, processor, or communication device.
  • the type of memory can be any form of storage medium, and this application is not limited thereto.
  • the "protocol” involved in the embodiments of this application may refer to standard protocols in the communication field, which may include, for example, 5G protocols, new radio (NR) protocols, and related protocols applied in future communication systems. There are no restrictions on this application.
  • Figure 7 is a schematic flow chart of a communication method provided by an embodiment of the present application. It can be understood that in Figure 7, the server, the core network device and the access network device are used as the execution subjects of the interaction as an example to illustrate the method, but this application does not limit the execution subjects of the interaction.
  • the server in Figure 7 can also be a chip, chip system, or processor that supports the server to implement the method, or can be a logic module or software that can realize all or part of the server functions;
  • the core network equipment in Figure 7 can also be a chip, chip system, or processor that supports the core network equipment in implementing this method, or a logic module or software that can realize all or part of the core network equipment functions;
  • the access network equipment in Figure 7 can also be a chip, chip system, or processor that supports the access network equipment in implementing this method, or a logic module or software that can realize all or part of the access network equipment functions.
  • the method includes the following steps:
  • S710: The server generates a neural network data packet.
  • the neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet.
  • the priority of the neural network data packet involved in the embodiment of the present application may refer to the scheduling (or transmission) priority of the neural network data packet.
  • The priority of the neural network data packet is used to determine whether to prioritize the scheduling of the neural network data packet; for example, when network congestion occurs, neural network data packets with higher priority can be transmitted to the user side in time.
  • the priority of the neural network data packet may refer to the processing priority of the neural network data packet, where the processing method includes but is not limited to any one of the following processing methods:
  • For example, the priority of the neural network data packet is used to determine the order in which the physical layer on the user side submits neural network data packets to the application layer; neural network data packets with higher priority can be submitted to the application layer in time to restore the transmitted data.
  • When the server generates a neural network data packet, it can carry indication information indicating the priority of the neural network data packet in the neural network data packet, so that the core network device that receives the neural network data packet can read the indication information from the neural network data packet and learn the priority of the neural network data packet, in order to achieve differentiated transmission of neural network data packets of different priorities.
  • The indication information included in the neural network data packet may be carried in the header of the neural network data packet.
  • the server can add indication information to the packet at the transport layer or higher.
  • For example, the indication information can be added between the user datagram protocol (UDP) field and the real-time transport protocol (RTP) field.
  • UDP user datagram protocol
  • RTP real-time transport protocol
  • FIG. 8 is a schematic diagram of a neural network data packet provided by an embodiment of the present application.
  • Figure 8 only illustrates that the indication information can be carried in the header of the neural network data packet, and does not constitute any limitation on the protection scope of the present application.
  • The indication information can also be added at other locations in the header of the neural network data packet, for example, between the IP field and the UDP field, or after the RTP field; examples are not given one by one here.
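  • As a purely illustrative sketch (the one-byte priority encoding and its placement ahead of the RTP packet inside the UDP payload are assumptions, not a format defined by this application), a server could carry the indication information as follows:

```python
import socket

PRIORITY_HIGH = 0x01   # assumed encoding of the indication information
PRIORITY_LOW = 0x00

def build_nn_payload(rtp_packet: bytes, priority: int) -> bytes:
    """Place a 1-byte priority indication between the UDP header and the RTP packet;
    the UDP/IP headers themselves are added by the kernel when the datagram is sent."""
    return bytes([priority]) + rtp_packet

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
nn_coefficients = b"\x00" * 32                               # placeholder NN data
rtp_packet = b"\x80\x60" + b"\x00" * 10 + nn_coefficients    # minimal fake 12-byte RTP header
sock.sendto(build_nn_payload(rtp_packet, PRIORITY_HIGH), ("192.0.2.10", 5004))
```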
  • the neural network data packet generated by the server corresponds to a certain neural network.
  • The server uses the ULR block as the input of the DNN, uses the original HD block as the target output of the DNN, uses the two as the training set of the DNN, and uses PSNR as the loss function to train the DNN.
  • The ULR block and the DNN then need to be sent to the user together.
  • The DNN data can be encoded into multiple IP packets, the multiple IP packets are transmitted to the core network, and the IP data packets are then transmitted to the UE through the radio access network.
  • the IP packet obtained by encoding the data of the neural network is called a neural network data packet.
  • the data of a neural network can be encoded into one or more neural network data packets.
  • the neural network data packet generated by the above-mentioned server may be one or more neural network data packets obtained by encoding data of a certain neural network.
  • the neural network data packet generated by the server is used to carry the parameter information of the neural network corresponding to the neural network data packet (for example, the coefficients of the neural network).
  • the neural network corresponding to the neural network data packet is used to process data (eg, restore data), where the data can be video frame data, audio frame data, or image data, or other types of data.
  • the embodiments of this application do not limit the type of data processed.
  • the data is video frame data or image data.
  • the neural network is used to restore low-resolution images to obtain high-resolution images.
  • The following uses the VMAF corresponding to different neural networks to represent the effect of different neural networks on data recovery, and the VMAF corresponding to the preset algorithm to represent the effect of the preset algorithm on data recovery, as an example to illustrate how the server determines the priority of the neural network data packet in this implementation; this does not constitute a limitation.
  • For example, the server needs to determine the priorities of the neural network data packets obtained by encoding the data of 4 neural networks (e.g., neural network #1, neural network #2, neural network #3, and neural network #4), where the neural network data packet obtained by encoding the data of neural network #1 is neural network data packet #1, the neural network data packet obtained by encoding the data of neural network #2 is neural network data packet #2, the neural network data packet obtained by encoding the data of neural network #3 is neural network data packet #3, and the neural network data packet obtained by encoding the data of neural network #4 is neural network data packet #4.
  • The server can determine the priorities of the neural network data packets obtained by encoding the data of the four neural networks based on the VMAF corresponding to each of the four neural networks and the VMAF corresponding to the preset algorithm.
  • the server can use the video quality evaluation improvement information of a certain neural network compared with the traditional algorithm as a criterion for measuring the priority of the neural network.
  • Table 4 below gives an exemplary range-based evaluation mechanism that uses the VMAF difference (the difference between the VMAF corresponding to the neural network and the VMAF corresponding to the preset algorithm) to measure the priorities of the neural network data packets obtained by encoding the data of different neural networks.
  • For example, if the VMAF score of neural network #1 improves by less than 5 points compared with the VMAF score of the traditional algorithm, the priority of the neural network data packet obtained by encoding the data of neural network #1 is 1; if the VMAF score of neural network #2 improves by [5 points, 10 points] compared with the VMAF score of the traditional algorithm, the priority of the neural network data packet obtained by encoding the data of neural network #2 is 2; if the VMAF score of neural network #3 improves by (10 points, 20 points) compared with the VMAF score of the traditional algorithm, the priority of the neural network data packet obtained by encoding the data of neural network #3 is 3; and if the VMAF score of neural network #4 improves by 20 points or more compared with the VMAF score of the traditional algorithm, the priority of the neural network data packet obtained by encoding the data of neural network #4 is 4.
  • Table 4 is only an example and does not constitute any limitation on the scope of protection of the present application.
  • An evaluation mechanism different from that shown in Table 4 can also be developed based on VMAF to measure the data encoding of different neural networks to obtain the priority of neural network data packets.
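  • For illustration, the following sketch implements the range-based mapping described around Table 4; the boundary handling reproduces the ranges quoted above and is otherwise an assumption.

```python
def nn_packet_priority(vmaf_nn: float, vmaf_preset: float) -> int:
    """Map the VMAF improvement of a neural network over the preset algorithm to a
    neural network data packet priority (here, a larger value means higher priority)."""
    gain = vmaf_nn - vmaf_preset
    if gain < 5:
        return 1          # improvement of less than 5 points
    elif gain <= 10:
        return 2          # improvement in [5, 10]
    elif gain < 20:
        return 3          # improvement in (10, 20)
    else:
        return 4          # improvement of 20 points or more

priority = nn_packet_priority(vmaf_nn=86.0, vmaf_preset=71.5)   # -> 3
```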
  • the above-mentioned VMAF score of the neural network represents the effect of the neural network on recovering the data
  • the VMAF corresponding to the preset algorithm represents the effect of the preset algorithm on recovering the data.
  • the specific expression method is similar to the above-mentioned VMAF, and will not be described again here.
  • The representation of the priority of neural network data packets shown in this implementation means that the priority of the neural network data packets of a neural network can be determined based on the effect of the neural network on recovering the data and the effect of the preset algorithm on recovering the data.
  • The neural network data packets of a neural network that recovers data better than the preset algorithm can be transmitted with priority, so that the user can first use the more effective neural network to restore data, improving user experience.
  • the priority of the neural network data packet is related to the effect of reconstructing the neural network, and the neural network data packet is used to reconstruct the neural network corresponding to the neural network data packet.
  • the server can determine the priority of the neural network data packet based on the effect of reconstructing the neural network from the neural network data packet.
  • the priority of the neural network data packet is related to the degree of influence of the neural network coefficient data included in the neural network data packet on the neural network coefficient.
  • Example 1: The coefficients of the neural network are represented by 32-bit floating-point data (as shown in Figure 6). Based on the structural characteristics of floating-point data described above, the bits that have a large impact on the absolute value of the neural network coefficients are regarded as high-priority data, and the bits that have a small impact on the absolute value of the coefficients are regarded as low-priority data.
  • For example, the sign bit, the exponent bits, and the high-order bits of the fraction part of the floating-point number corresponding to a neural network coefficient are placed in one neural network data packet, and the priority in that neural network data packet is set to "H"; the low-order bits of the fraction part of the floating-point number corresponding to the coefficient are placed in another neural network data packet, and the priority in that neural network data packet is set to "L", where "H" means high priority and "L" means low priority.
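  • The following is a minimal sketch of Example 1, assuming the first fraction bit is the single high-order fraction bit (x = 1); the two-packet layout and byte widths are hypothetical.

```python
import struct

def split_coefficient(coef: float):
    """Split a 32-bit float coefficient into high-priority bits (sign, exponent,
    high-order fraction bit) and low-priority bits (the remaining 22 fraction bits)."""
    bits = struct.unpack(">I", struct.pack(">f", coef))[0]
    high_bits = bits >> 22          # top 10 bits: 1 sign + 8 exponent + 1 fraction MSB
    low_bits = bits & 0x3FFFFF      # bottom 22 fraction bits
    return high_bits, low_bits

def build_packets(coefficients):
    """Place high-impact bits in a packet marked 'H' and the rest in a packet marked 'L'."""
    high_payload, low_payload = bytearray(), bytearray()
    for c in coefficients:
        hi, lo = split_coefficient(c)
        high_payload += hi.to_bytes(2, "big")   # 10 bits fit in 2 bytes
        low_payload += lo.to_bytes(3, "big")    # 22 bits fit in 3 bytes
    return ({"priority": "H", "payload": bytes(high_payload)},
            {"priority": "L", "payload": bytes(low_payload)})

high_pkt, low_pkt = build_packets([0.72, -1.3e-2, 3.5])
```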
  • S720: The server sends the neural network data packet to the core network device, or the core network device receives the neural network data packet from the server.
  • the core network device receiving the neural network data packet from the server is only a way for the core network device to obtain the neural network data packet.
  • core network equipment can obtain neural network data packets in the following ways:
  • the server directly sends the neural network data packet to the core network device.
  • the server indirectly sends the neural network data packet to the core network device through other devices.
  • the core network equipment and the server can be connected through a fixed network.
  • Method 2: The core network device obtains the neural network data packet generated by the server from a memory (or an internal interface).
  • The core network device can also obtain the neural network data packet through other methods.
  • the neural network data packet is generated according to the received parameter information of the neural network, which will not be described again here.
  • S730: The core network device generates a GTP-U data packet.
  • the payload of the GTP-U data packet includes a neural network data packet, and the header of the GTP-U data packet includes indication information.
  • the indication information is used to indicate the priority of the neural network data packet.
  • After the core network device receives the neural network data packet carrying the indication information indicating the priority of the neural network data packet, it can read the indication information from the neural network data packet, encapsulate the indication information into the GTP-U data packet header according to the GTP-U protocol, and use the neural network data packet as the payload of the GTP-U data packet. The access network device that receives the GTP-U data packet can then read the indication information from the header of the GTP-U data packet and transmit the neural network data packet based on the priority of the neural network data packet indicated by the indication information. In other words, the access network device takes the priority of the neural network data packet into consideration when transmitting the neural network data packet, so as to achieve differentiated transmission of neural network data packets of different priorities.
  • Specifically, after the core network device receives the neural network data packet, it can read indication information #2, which indicates the priority of the neural network data packet, from the neural network data packet, and can then encapsulate indication information #2 into the GTP-U data packet header according to the GTP-U protocol as indication information #1, so that the access network device can read it.
  • The indication information #2 encapsulated according to the GTP-U protocol may still be called indication information #2, or may be called indication information #1 for ease of distinction. That is to say, in this embodiment, the specific forms of indication information #1 and indication information #2 are not limited, as long as they can be used to indicate the priority of the neural network data packet; for ease of description, they can be collectively called indication information.
  • a new field can be added to the GTP-U data packet header to indicate the priority of the neural network data packet.
  • FIG. 10 is a schematic diagram of a GTP-U data packet header provided by an embodiment of the present application.
  • a new byte can be added to the GTP-U data packet to add indication information.
  • Figure 10 only illustrates that the indication information can be carried in the header of the GTP-U data packet, and does not constitute any limitation on the protection scope of the present application.
  • The indication information can also be added at other positions in the header of the GTP-U data packet, for example, in the fourth bit of the GTP-U packet; examples are not given one by one here.
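  • As an illustration only (the placement of the extra byte is an assumption, not the encoding defined by this application or by the GTP-U specification), the sketch below appends one byte carrying the priority indication after a minimal GTP-U header and places the neural network data packet in the payload:

```python
import struct

def build_gtpu_packet(teid: int, nn_packet: bytes, priority: int) -> bytes:
    """Build a GTP-U packet whose payload is the neural network data packet and whose
    header is followed by a 1-byte priority indication (hypothetical placement)."""
    flags = 0x30                      # version 1, protocol type GTP
    msg_type = 0xFF                   # G-PDU (carries user data)
    indication = bytes([priority])    # assumed 1-byte indication information
    length = len(indication) + len(nn_packet)   # octets following the mandatory 8-byte header
    header = struct.pack("!BBHI", flags, msg_type, length, teid)
    return header + indication + nn_packet

gtpu = build_gtpu_packet(teid=0x1234, nn_packet=b"\x00" * 40, priority=1)
```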
  • the core network device sends the GTP-U data packet to the access network device.
  • the method flow shown in Figure 7 also includes:
  • S740: The core network device sends the GTP-U data packet to the access network device, or the access network device receives the GTP-U data packet from the core network device.
  • S750: The access network device transmits the neural network data packet according to the indication information.
  • The header of the GTP-U data packet received by the access network device includes indication information indicating the priority of the neural network data packet, so that the access network device can read the indication information from the header of the GTP-U data packet and transmit the neural network data packet based on the priority of the neural network data packet indicated by the indication information. In other words, the access network device takes the priority of the neural network data packet into consideration when transmitting the neural network data packet, so as to achieve differentiated transmission of neural network data packets of different priorities.
  • the access network device receives two GTP-U data packets (GTP-U data packet #1 and GTP-U data packet #2).
  • The payload of GTP-U data packet #1 includes neural network data packet #1.
  • the header of GTP-U packet #1 includes indication information #1
  • the payload of GTP-U packet #2 includes neural network packet #2
  • the header of GTP-U packet #2 includes indication information #2.
  • The access network device differentially transmits neural network data packet #1 and neural network data packet #2 according to indication information #1 and indication information #2.
  • instruction information #1 indicates that the priority of neural network data packet #1 is high priority.
  • Instruction information #2 indicates that the priority of neural network data packet #2 is low priority.
  • For example, the access network device can transmit neural network data packet #1 first, and then transmit neural network data packet #2 after the transmission of neural network data packet #1 is completed.
  • For example, when the indication information indicates that the neural network data packet is of high priority, the access network device preferentially transmits the neural network data packet; or,
  • when the indication information indicates that the neural network data packet is of low priority, the access network device delays transmission of the neural network data packet; or,
  • For example, the access network device can calculate the corresponding transmission priority based on the neural network data packet priority information (such as the above-mentioned indication information) and other related parameters of air-interface scheduling, including but not limited to the historical rate, instantaneous rate, user level, and so on.
  • factor1 represents the proportional fair scheduling priority
  • R i represents the user's instantaneous rate; the better the user's channel condition, the higher the instantaneous rate. R̄ i represents the user's historical rate, that is, the average rate of the channel within a period of time.
  • factor2 represents the scheduling priority of the neural network data packet
  • N represents the priority of the neural network data packet
  • f can be an increasing linear or exponential function.
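For concreteness, the sketch below combines the two factors described above into a single scheduling metric: the proportional fair term is taken as the ratio of the instantaneous rate to the historical (average) rate, f(N) is taken to be exponential, and the two factors are multiplied. How factor1 and factor2 are actually combined is not fixed here, so the product form and the parameter values are assumptions for illustration.

```python
import math

def scheduling_priority(instant_rate: float,
                        historical_rate: float,
                        nn_priority: int,
                        alpha: float = 0.5) -> float:
    """Illustrative air-interface scheduling metric for a neural network data packet.

    factor1: proportional fair term, instantaneous rate over historical (average)
             rate -- a better channel and less past service both raise the priority.
    factor2: f(N), an increasing function of the neural network packet priority N
             (exponential here; an increasing linear f would also fit the description).
    """
    factor1 = instant_rate / max(historical_rate, 1e-9)
    factor2 = math.exp(alpha * nn_priority)
    # Assumption: the overall transmission priority is the product of the two factors.
    return factor1 * factor2

# A user with a good channel carrying a high-priority NN packet scores higher than
# a similar user carrying a low-priority NN packet.
print(scheduling_priority(20e6, 5e6, nn_priority=1))
print(scheduling_priority(20e6, 5e6, nn_priority=0))
```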
  • the embodiment shown in Figure 7 describes the case in which the priority of neural network data packets is determined by the server.
  • Another possibility is that the above-mentioned action of determining the priority of neural network data packets can be implemented by the core network device.
  • the method is similar to the method determined by the server, except that the execution subject is the core network device.
  • network elements in the existing network architecture are mainly used as examples for illustrative explanation (such as AF, AMF, SMF, etc.). It should be understood that the specific form of a network element does not limit the embodiments of this application; for example, network elements that can implement the same functions in the future are equally applicable to the embodiments of this application.
  • the methods and operations implemented by devices can also be implemented by components (such as chips or circuits) that can be used in network equipment.
  • Figure 11 is a schematic block diagram of a communication device provided by an embodiment of the present application.
  • the device 1100 may include an interface unit 1110 and a processing unit 1120.
  • the interface unit 1110 can communicate with the outside, and the processing unit 1120 is used for data processing.
  • the interface unit 1110 may also be called a communication interface, communication unit or transceiver unit.
  • the apparatus 1100 may implement steps or processes corresponding to those executed by the access network equipment in the method embodiments of the present application, and the apparatus 1100 may include units for executing the methods executed by the access network equipment in the method embodiments. Moreover, the units in the apparatus 1100 and the above-mentioned other operations and/or functions are respectively intended to implement the corresponding processes executed by the access network equipment in the method embodiments.
  • the interface unit 1110 can be used to perform the sending and receiving steps in the method, such as step S740; the processing unit 1120 can be used to perform the processing steps in the method, such as step S750.
  • the device 1100 is used to perform the actions performed by the core network equipment in the above method embodiment.
  • the interface unit 1110 is used to obtain a neural network data packet.
  • the neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet;
  • the device 1100 may implement steps or processes corresponding to those executed by the core network equipment in the method embodiments of the present application.
  • the device 1100 may include units for executing the methods executed by the core network equipment in the method embodiments.
  • the units in the device 1100 and the above-mentioned other operations and/or functions are respectively intended to implement the corresponding processes executed by the core network equipment in the method embodiments.
  • the processing unit 1120 is configured to generate a neural network data packet.
  • the neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet;
  • the interface unit 1110 is used to send the neural network data packet.
  • the device 1100 may implement steps or processes corresponding to those executed by the server in the method embodiments of the present application, and the device 1100 may include units for executing the methods executed by the server in the method embodiments. Moreover, the units in the device 1100 and the above-mentioned other operations and/or functions are respectively intended to implement the corresponding processes executed by the server in the method embodiments.
  • the interface unit 1110 can be used to perform the sending and receiving steps in the method, such as step S720; the processing unit 1120 can be used to perform the processing steps in the method, such as step S710.
  • the processing unit 1120 in the above embodiments may be implemented by at least one processor or processor-related circuit.
  • the interface unit 1110 may be implemented by a transceiver or transceiver related circuitry.
  • the storage unit may be implemented by at least one memory.
  • as shown in Figure 12, the apparatus 1200 may include a processor 1210 and a memory 1220; the memory 1220 can be integrated with the processor 1210 or provided separately.
  • the device 1200 may also include a transceiver 1230, which is used for receiving and/or transmitting signals.
  • the processor 1210 is used to control the transceiver 1230 to receive and/or transmit signals.
  • the device 1200 is used to implement the operations performed by the transceiver equipment (such as server, core network equipment, and access network equipment) in the above method embodiment.
  • Embodiments of the present application also provide a computer-readable storage medium on which are stored computer instructions for implementing the method executed by the transceiver device (such as a server, a core network device, and an access network device) in the above method embodiment.
  • Embodiments of the present application also provide a computer program product; when the computer program is executed by a computer, the computer can implement the method executed by the transceiver device (such as a server, a core network device, or an access network device) in the above method embodiment.
  • the processors mentioned in the embodiments of this application may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like.
  • a general-purpose processor may be a microprocessor or the processor may be any conventional processor, etc.
  • RAM may include the following forms: static random access memory (static RAM, SRAM), dynamic random access memory (dynamic RAM, DRAM), synchronous dynamic random access memory (synchronous DRAM, SDRAM), double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), synchlink dynamic random access memory (synchlink DRAM, SLDRAM) and direct rambus random access memory (direct rambus RAM, DR RAM).
  • it should be noted that when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, the memory (storage module) may be integrated in the processor.
  • memories described herein are intended to include, but are not limited to, these and any other suitable types of memories.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or can be integrated into another system, or some features can be ignored, or not implemented.
  • the coupling or direct coupling or communication connection between each other shown or discussed may be through some interfaces, and the indirect coupling or communication connection of the devices or units may be in electrical, mechanical or other forms.
  • the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units, that is, they may be located in one place, or they may be distributed to multiple network units. Some or all of the units can be selected according to actual needs to implement the solution provided by this application.
  • each functional unit in each embodiment of the present application can be integrated into one unit, or each unit can exist physically alone, or two or more units can be integrated into one unit.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
  • the computer may be a personal computer, a server, or a network device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server or data center to another website, computer, server or data center by wired (such as coaxial cable, optical fiber or digital subscriber line (DSL)) or wireless (such as infrared, radio or microwave) means.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains one or more available media integrated.
  • the available media may be magnetic media (such as floppy disks, hard disks, magnetic tapes), optical media (such as DVDs), or semiconductor media (such as solid state disks (SSD)), etc.
  • the aforementioned available media may include, but are not limited to: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.


Abstract

Provided in the embodiments of the present application are a data transmission method and apparatus. The method comprises: receiving a general packet radio service tunneling protocol-user plane (GTP-U) data packet, wherein a load of the GTP-U data packet comprises a neural network data packet, and a packet header comprises indication information for indicating the priority of the neural network data packet; and transmitting the neural network data packet according to the indication information. By means of carrying, in a GTP-U data packet, indication information for indicating the priorities of neural network data packets, an access network device, which receives the GTP-U data packet, can transmit the neural network data packets according to the indication information, so as to realize differentiated transmission of neural network data packets with different priorities.

Description

数据传输的方法和装置 Data transmission method and apparatus
本申请要求于2022年04月08日提交中国专利局、申请号为202210365108.5、申请名称为“数据传输的方法和装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。This application claims priority to the Chinese patent application filed with the China Patent Office on April 8, 2022, with the application number 202210365108.5 and the application title "Data transmission method and device", the entire content of which is incorporated into this application by reference.
技术领域Technical field
本申请实施例涉及通信领域,更具体地,涉及一种数据传输的方法和装置。The embodiments of the present application relate to the field of communications, and more specifically, to a data transmission method and device.
背景技术Background technique
近年来,随着扩展现实(extended reality,XR)技术的不断进步和完善,相关产业得到了蓬勃的发展。如今,XR技术已经进入到教育、娱乐、军事、医疗、环保、交通运输、公共卫生等各种与人们生产、生活息息相关的领域当中。其中,XR是各种现实相关技术的总称,具体包括:虚拟现实(virtual reality,VR),增强现实(augmented reality,AR)和混合现实(mixed reality,MR)。通过对视觉和听觉的渲染,为用户带来虚拟场景与现实场景的“沉浸式体验”。In recent years, with the continuous advancement and improvement of extended reality (XR) technology, related industries have developed vigorously. Today, XR technology has entered various fields closely related to people's production and life, such as education, entertainment, military, medical care, environmental protection, transportation, and public health. Among them, XR is the general term for various reality-related technologies, including: virtual reality (VR), augmented reality (AR) and mixed reality (MR). Through the rendering of vision and hearing, it brings users an "immersive experience" of virtual scenes and real scenes.
在XR技术中,提供高分辨率的图像(或视频)可提高用户体验。示例性地,通过超级分辨率(super resolution,SR)技术可以将低分辨率图像转化成高分辨率图像。基于神经网络(neural network,NN)的SR技术因其画面恢复效果显著受到了广泛的关注,用户根据神经网络将低分辨率图像转化成高分辨率图像,所以如何传输神经网络数据包成为重点关注的问题。In XR technology, providing high-resolution images (or videos) improves user experience. For example, low-resolution images can be converted into high-resolution images through super resolution (SR) technology. SR technology based on neural network (NN) has received widespread attention because of its remarkable image restoration effect. Users convert low-resolution images into high-resolution images based on neural networks, so how to transmit neural network data packets has become a focus The problem.
发明内容Contents of the invention
本申请实施例提供一种通信方法,以期实现对不同优先级的神经网络数据包的差异化传输,提高用户体验。Embodiments of the present application provide a communication method in order to realize differentiated transmission of neural network data packets of different priorities and improve user experience.
第一方面,提供了一种通信方法,该方法可以由接入网设备执行,或者,也可以由配置于接入网设备中的芯片或电路执行,或者,还可以由能实现全部或部分接入网设备功能的逻辑模块或软件执行,本申请对此不作限定。为了便于描述,下文中以由接入网设备执行为例进行说明。The first aspect provides a communication method, which can be executed by an access network device, or can be executed by a chip or circuit configured in the access network device, or can also be executed by a device that can realize all or part of the connection. The logic module or software execution of the network access device function is not limited in this application. For convenience of description, the following description takes execution by the access network device as an example.
该方法包括:接收通用分组无线业务隧道协议(general packet radio service tunneling protocol-user plane,GTP-U)数据包,该GTP-U数据包的载荷中包括神经网络数据包,该GTP-U数据包的包头中包括指示信息,该指示信息用于指示该神经网络数据包的优先级;根据该指示信息传输该神经网络数据包。The method includes: receiving a general packet radio service tunneling protocol-user plane (GTP-U) data packet, the payload of the GTP-U data packet includes a neural network data packet, and the GTP-U data packet The packet header includes indication information, the indication information is used to indicate the priority of the neural network data packet; the neural network data packet is transmitted according to the indication information.
基于上述技术方案,接入网设备接收到的GTP-U数据包的包头中包括指示神经网络数据包的指示信息,从而接入网设备可以从该GTP-U数据包的包头中读取该指示信息,并基于该指示信息指示的神经网络数据包的优先级,对该神经网络数据包进行传输处理。可以理解,接入网设备在传输神经网络数据包的时候考虑到神经网络数据包的优先级,以 便于实现对不同优先级的神经网络数据包的差异化传输。Based on the above technical solution, the header of the GTP-U data packet received by the access network device includes instruction information indicating the neural network data packet, so that the access network device can read the instruction from the header of the GTP-U data packet. information, and transmit and process the neural network data packet based on the priority of the neural network data packet indicated by the indication information. It can be understood that the access network device takes the priority of the neural network data packets into consideration when transmitting the neural network data packets. It facilitates differentiated transmission of neural network data packets of different priorities.
例如,接入网设备接收到两个GTP-U数据包(GTP-U数据包#1和GTP-U数据包#2),其中,GTP-U数据包#1的载荷中包括神经网络数据包#1,GTP-U数据包#1的包头中包括指示信息#1,GTP-U数据包#2的载荷中包括神经网络数据包#2,GTP-U数据包#2的包头中包括指示信息#2。接入网设备根据指示信息#1和指示信息#2差异化传输神经网络数据包#1和神经网络数据包#2,如,指示信息#1指示神经网络数据包#1的优先级为高优先级,指示信息#2指示神经网络数据包#2的优先级为低优先级,接入网设备可以优先传输神经网络数据包#1,在神经网络数据包#1传输完成之后再传输神经网络数据包#2。For example, the access network device receives two GTP-U data packets (GTP-U data packet #1 and GTP-U data packet #2). Among them, the payload of GTP-U data packet #1 includes a neural network data packet. #1, the header of GTP-U packet #1 includes indication information #1, the payload of GTP-U packet #2 includes neural network packet #2, the header of GTP-U packet #2 includes indication information #2. The access network device differentially transmits neural network data packet #1 and neural network data packet #2 according to instruction information #1 and instruction information #2. For example, instruction information #1 indicates that the priority of neural network data packet #1 is high priority. Level, indication information #2 indicates that the priority of neural network data packet #2 is low priority. The access network device can transmit neural network data packet #1 first, and then transmit neural network data after the transmission of neural network data packet #1 is completed. Package #2.
结合第一方面,在第一方面的某些实现方式中,根据该指示信息传输该神经网络数据包,包括:在该指示信息指示该神经网络数据包为高优先级的情况下,优先传输该神经网络数据包;或者,在该指示信息指示该神经网络数据包为低优先级的情况下,延后传输该神经网络数据包;或者,在该指示信息指示该神经网络数据包为低优先级,且网络状态发生拥堵的情况下,放弃传输该神经网络数据包。In connection with the first aspect, in some implementations of the first aspect, transmitting the neural network data packet according to the indication information includes: when the indication information indicates that the neural network data packet is a high priority, transmitting the neural network data packet first Neural network data packet; or, when the indication information indicates that the neural network data packet is of low priority, delay the transmission of the neural network data packet; or, when the indication information indicates that the neural network data packet is of low priority , and when the network status is congested, the transmission of the neural network data packet is given up.
基于上述技术方案,接入网设备可以根据指示信息指示的神经网络数据包的优先级,确定不同的神经网络数据包的传输方式(如,优先传输或者放弃传输等),提高方案的灵活性。Based on the above technical solution, the access network equipment can determine the transmission mode of different neural network data packets (such as priority transmission or abandonment of transmission, etc.) according to the priority of the neural network data packet indicated by the indication information, thereby improving the flexibility of the solution.
第二方面,提供了一种通信方法,该方法可以由核心网设备执行,或者,也可以由配置于核心网设备中的芯片或电路执行,或者,还可以由能实现全部或部分核心网设备功能的逻辑模块或软件执行,本申请对此不作限定。为了便于描述,下文中以由核心网设备执行为例进行说明。The second aspect provides a communication method, which can be executed by a core network device, or can be executed by a chip or circuit configured in the core network device, or can also be executed by a core network device that can implement all or part of the core network device. Functional logic modules or software execution are not limited in this application. For convenience of description, the following description takes execution by the core network device as an example.
该方法包括:获得神经网络数据包,该神经网络数据包中包括指示信息,该指示信息用于指示该神经网络数据包的优先级;向接入网设备发送通用分组无线业务隧道协议用户面GTP-U数据包,该GTP-U数据包的载荷中包括该神经网络数据包,该GTP-U数据包的包头中包括该指示信息。The method includes: obtaining a neural network data packet, the neural network data packet including indication information, the indication information being used to indicate the priority of the neural network data packet; sending a General Packet Wireless Service Tunneling Protocol user plane GTP to the access network device -U data packet, the payload of the GTP-U data packet includes the neural network data packet, and the header of the GTP-U data packet includes the indication information.
基于上述技术方案,核心网设备接收到携带用于指示神经网络数据包的优先级的指示信息的神经网络数据包之后,可以从该神经网络数据包中读取用于指示神经网络数据包的优先级的指示信息,并将该指示信息按照GTP-U协议封装到GTP-U数据包包头中,以及将神经网络数据包作为该GTP-U数据包的载荷。以便于接收该GTP-U数据包的接入网设备可以从该GTP-U数据包的包头中读取该指示信息,并基于该指示信息指示的神经网络数据包的优先级,对该神经网络数据包进行传输处理。可以理解,接入网设备在传输神经网络数据包的时候考虑到神经网络数据包的优先级,以便于实现对不同优先级的神经网络数据包的差异化传输。Based on the above technical solution, after the core network device receives the neural network data packet carrying the indication information indicating the priority of the neural network data packet, it can read the priority of the neural network data packet from the neural network data packet. level indication information, and encapsulates the indication information into the GTP-U data packet header according to the GTP-U protocol, and uses the neural network data packet as the payload of the GTP-U data packet. So that the access network device that receives the GTP-U data packet can read the indication information from the header of the GTP-U data packet, and based on the priority of the neural network data packet indicated by the indication information, the neural network Data packets are processed for transmission. It can be understood that the access network device takes the priority of the neural network data packet into consideration when transmitting the neural network data packet, so as to achieve differentiated transmission of neural network data packets of different priorities.
第三方面,提供了一种通信方法,该方法可以由服务器执行,或者,也可以由配置于服务器中的芯片或电路执行,或者,还可以由能实现全部或部分服务器功能的逻辑模块或软件执行,本申请对此不作限定。为了便于描述,下文中以由服务器执行为例进行说明。The third aspect provides a communication method, which can be executed by a server, or can be executed by a chip or circuit configured in the server, or can also be executed by a logic module or software that can realize all or part of the server functions. execution, this application does not limit this. For the convenience of description, the following description takes execution by the server as an example.
该方法包括:生成神经网络数据包该神经网络数据包中包括指示信息,该指示信息用于指示该神经网络数据包的优先级;发送该神经网络数据包。The method includes: generating a neural network data packet, the neural network data packet including indication information, the indication information being used to indicate the priority of the neural network data packet; and sending the neural network data packet.
基于上述技术方案,服务器在生成神经网络数据包时,可以将指示该神经网络数据包的优先级的指示信息携带在神经网络数据包中,以便于接收该神经网络数据包的核心网设 备能够从该神经网络数据包中读取用于指示神经网络数据包的优先级的指示信息,获知神经网络数据包的优先级。以期实现对不同优先级的神经网络数据包的差异化传输。Based on the above technical solution, when the server generates a neural network data packet, it can carry indication information indicating the priority of the neural network data packet in the neural network data packet, so as to facilitate the core network equipment that receives the neural network data packet. The device can read the indication information used to indicate the priority of the neural network data packet from the neural network data packet, and learn the priority of the neural network data packet. In order to achieve differentiated transmission of neural network data packets of different priorities.
第四方面,提供了一种通信装置,该装置用于执行上述第一方面提供的方法。该装置可以为接入网设备,也可以为接入网设备的部件(例如处理器、芯片、或芯片系统等),还可以为能实现全部或部分接入网设备功能的逻辑模块或软件,该装置包括:A fourth aspect provides a communication device, which is used to perform the method provided in the first aspect. The device may be access network equipment, or may be a component of access network equipment (such as a processor, chip, or chip system, etc.), or may be a logic module or software that can realize all or part of the functions of the access network equipment, The device includes:
接口单元,用于接收通用分组无线业务隧道协议用户面GTP-U数据包,该GTP-U数据包的载荷中包括神经网络数据包,该GTP-U数据包的包头中包括指示信息,该指示信息用于指示该神经网络数据包的优先级;处理单元,用于根据该指示信息控制该装置传输该神经网络数据包。The interface unit is configured to receive a General Packet Radio Service Tunneling Protocol user plane GTP-U data packet. The payload of the GTP-U data packet includes a neural network data packet. The header of the GTP-U data packet includes indication information. The indication The information is used to indicate the priority of the neural network data packet; the processing unit is used to control the device to transmit the neural network data packet according to the indication information.
结合第四方面,在第四方面的某些实现方式中,该处理单元根据该指示信息控制该装置传输该神经网络数据包,包括:在该指示信息指示该神经网络数据包为高优先级的情况下,该处理单元控制该装置优先传输该神经网络数据包;或者,在该指示信息指示该神经网络数据包为低优先级的情况下,该处理单元控制该装置延后传输该神经网络数据包;或者,在该指示信息指示该神经网络数据包为低优先级,且网络状态发生拥堵的情况下,该处理单元控制该装置放弃传输该神经网络数据包。In connection with the fourth aspect, in some implementations of the fourth aspect, the processing unit controls the device to transmit the neural network data packet according to the indication information, including: when the indication information indicates that the neural network data packet is a high priority In this case, the processing unit controls the device to transmit the neural network data packet with priority; or, in the case where the indication information indicates that the neural network data packet is of low priority, the processing unit controls the device to delay transmission of the neural network data. packet; or, when the indication information indicates that the neural network data packet is of low priority and the network status is congested, the processing unit controls the device to give up transmitting the neural network data packet.
具体地,该通信装置可以包括用于执行第一方面任意一种实现方式提供的方法的单元和/或模块,如处理单元和接口单元。Specifically, the communication device may include units and/or modules for executing the method provided by any implementation of the first aspect, such as a processing unit and an interface unit.
在一种实现方式中,接口单元可以是收发器,或,输入/输出接口;处理单元可以是至少一个处理器。可选地,收发器可以为收发电路。可选地,输入/输出接口可以为输入/输出电路。In one implementation, the interface unit may be a transceiver, or an input/output interface; the processing unit may be at least one processor. Alternatively, the transceiver may be a transceiver circuit. Alternatively, the input/output interface may be an input/output circuit.
在另一种实现方式中,接口单元可以是该芯片、芯片系统或电路上的输入/输出接口、接口电路、输出电路、输入电路、管脚或相关电路等;处理单元可以是至少一个处理器、处理电路或逻辑电路等。In another implementation, the interface unit may be an input/output interface, interface circuit, output circuit, input circuit, pin or related circuit on the chip, chip system or circuit, etc.; the processing unit may be at least one processor , processing circuits or logic circuits, etc.
以上第四方面及其可能的设计所示装置的有益效果可参照第一方面及其可能的设计中的有益效果。The beneficial effects of the device shown in the above fourth aspect and its possible designs may be referred to the beneficial effects of the first aspect and its possible designs.
第五方面,提供了一种通信装置,该装置用于执行上述第二方面提供的方法。该装置可以为核心网设备,也可以为核心网设备的部件(例如处理器、芯片、或芯片系统等),还可以为能实现全部或部分核心网设备功能的逻辑模块或软件,该装置包括:In a fifth aspect, a communication device is provided, which is used to perform the method provided in the second aspect. The device can be a core network device, or a component of the core network device (such as a processor, a chip, or a chip system, etc.), or a logic module or software that can realize all or part of the functions of the core network device. The device includes :
接口单元,用于获得神经网络数据包,该神经网络数据包中包括指示信息,该指示信息用于指示该神经网络数据包的优先级;The interface unit is used to obtain a neural network data packet, the neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet;
该接口单元,还用于向接入网设备发送通用分组无线业务隧道协议用户面GTP-U数据包,该GTP-U数据包的载荷中包括该神经网络数据包,该GTP-U数据包的包头中包括该指示信息。The interface unit is also used to send a General Packet Wireless Service Tunneling Protocol user plane GTP-U data packet to the access network device. The payload of the GTP-U data packet includes the neural network data packet. The GTP-U data packet contains This instruction is included in the header.
具体地,该通信装置可以包括用于执行第二方面任意一种实现方式提供的方法的单元和/或模块,如处理单元和接口单元。Specifically, the communication device may include units and/or modules for executing the method provided by any implementation of the second aspect, such as a processing unit and an interface unit.
在一种实现方式中,接口单元可以是收发器,或,输入/输出接口;处理单元可以是至少一个处理器。可选地,收发器可以为收发电路。可选地,输入/输出接口可以为输入/输出电路。In one implementation, the interface unit may be a transceiver, or an input/output interface; the processing unit may be at least one processor. Alternatively, the transceiver may be a transceiver circuit. Alternatively, the input/output interface may be an input/output circuit.
在另一种实现方式中,接口单元可以是该芯片、芯片系统或电路上的输入/输出接口、 接口电路、输出电路、输入电路、管脚或相关电路等;处理单元可以是至少一个处理器、处理电路或逻辑电路等。In another implementation, the interface unit may be an input/output interface on the chip, chip system or circuit, Interface circuit, output circuit, input circuit, pin or related circuit, etc.; the processing unit can be at least one processor, processing circuit or logic circuit, etc.
以上第五方面所示装置的有益效果可参照第二方面的有益效果。The beneficial effects of the device shown in the fifth aspect above can be referred to the beneficial effects of the second aspect.
第六方面,提供了一种通信装置,该装置用于执行上述第三方面提供的方法。该装置可以为服务器,也可以为服务器的部件(例如处理器、芯片、或芯片系统等),还可以为能实现全部或部分服务器功能的逻辑模块或软件,该装置包括:In a sixth aspect, a communication device is provided, which is used to perform the method provided in the third aspect. The device can be a server, or a component of the server (such as a processor, a chip, or a chip system, etc.), or a logic module or software that can realize all or part of the server functions. The device includes:
处理单元,用于生成神经网络数据包该神经网络数据包中包括指示信息,该指示信息用于指示该神经网络数据包的优先级;接口单元,用于发送该神经网络数据包。The processing unit is used to generate a neural network data packet. The neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet. The interface unit is used to send the neural network data packet.
具体地,该通信装置可以包括用于执行第三方面任意一种实现方式提供的方法的单元和/或模块,如处理单元和接口单元。Specifically, the communication device may include units and/or modules for executing the method provided by any implementation of the third aspect, such as a processing unit and an interface unit.
在一种实现方式中,接口单元可以是收发器,或,输入/输出接口;处理单元可以是至少一个处理器。可选地,收发器可以为收发电路。可选地,输入/输出接口可以为输入/输出电路。In one implementation, the interface unit may be a transceiver, or an input/output interface; the processing unit may be at least one processor. Alternatively, the transceiver may be a transceiver circuit. Alternatively, the input/output interface may be an input/output circuit.
在另一种实现方式中,接口单元可以是该芯片、芯片系统或电路上的输入/输出接口、接口电路、输出电路、输入电路、管脚或相关电路等;处理单元可以是至少一个处理器、处理电路或逻辑电路等。In another implementation, the interface unit may be an input/output interface, interface circuit, output circuit, input circuit, pin or related circuit on the chip, chip system or circuit, etc.; the processing unit may be at least one processor , processing circuits or logic circuits, etc.
以上第六方面所示装置的有益效果可参照第三方面的有益效果。The beneficial effects of the device shown in the sixth aspect above can be referred to the beneficial effects of the third aspect.
在第一方面至第六方面的某些实施方式中,该神经网络数据包的优先级与该神经网络数据包对应的神经网络恢复数据的效果相关。其中,神经网络恢复数据的效果可以由以下任意一种指标指示:In some implementations of the first to sixth aspects, the priority of the neural network data packet is related to the effect of the neural network recovery data corresponding to the neural network data packet. Among them, the effect of neural network on data recovery can be indicated by any of the following indicators:
峰值信噪比(peak signal to noise ratio,PSNR)、结构相似性(structural similarity,SSIM)或视频多方法评估融合(video multimethod assessment fusion,VMAF)等。Peak signal to noise ratio (PSNR), structural similarity (structural similarity, SSIM) or video multimethod assessment fusion (VMAF), etc.
例如,神经网络恢复数据的效果满足预期(如,VAMF大于预设阈值)的情况下,该神经网络的数据(如,该神经网络的参数信息、该神经网络的系数等)编码得到的神经网络数据包的优先级为为高优先级。For example, when the effect of a neural network on recovering data meets expectations (for example, VAMF is greater than a preset threshold), the data of the neural network (for example, the parameter information of the neural network, the coefficients of the neural network, etc.) are encoded by the neural network. The priority of the packet is high priority.
还例如,神经网络恢复数据的效果不满足预期(如,VAMF小于或者等于预设阈值)的情况下,该神经网络的数据编码得到的神经网络数据包的优先级为低优先级。For another example, when the effect of the neural network on recovering data does not meet expectations (for example, the VAMF is less than or equal to a preset threshold), the priority of the neural network data packet obtained by encoding the data of the neural network is low priority.
基于上述技术方案,在本申请中可以根据神经网络恢复数据的效果确定该神经网络的神经网络数据包的优先级,以便于优先传输恢复数据的效果好的神经网络的神经网络数据包,从而用户可以优先使用恢复数据的效果好的神经网络恢复数据,提高用户体验。Based on the above technical solution, in this application, the priority of the neural network data packets of the neural network can be determined according to the effect of the neural network on data recovery, so that the neural network data packets of the neural network with good data recovery effect are preferentially transmitted, so that the user You can give priority to using neural networks with good data recovery effects to restore data and improve user experience.
在第一方面至第六方面的某些实施方式中,该神经网络数据包的优先级还与预设算法恢复该数据的效果相关。其中,预设算法恢复该数据的效果可以由以下任意一种指标指示:PSNR、SSIM或VMAF等。In some implementations of the first to sixth aspects, the priority of the neural network data packet is also related to the effect of the preset algorithm on recovering the data. Among them, the effect of the preset algorithm on recovering the data can be indicated by any of the following indicators: PSNR, SSIM or VMAF, etc.
例如,神经网络恢复数据的效果相比于预设算法恢复该数据的效果好的程度高出预期值(如,神经网络对应的VAMF相比于预设算法对应的VAMF大于一个预设值)的情况下,该神经网络的数据编码得到的神经网络数据包的优先级为高优先级。For example, the effect of the neural network on recovering the data is higher than expected compared to the effect of the preset algorithm on recovering the data (for example, the VAMF corresponding to the neural network is greater than a preset value compared to the VAMF corresponding to the preset algorithm). In this case, the priority of the neural network data packet obtained by encoding the neural network data is high priority.
还例如,神经网络恢复数据的效果相比于预设算法恢复该数据的效果好的程度低于预期值(如,神经网络对应的VAMF相比于预设算法对应的VAMF小于或者等于一个预设值)的情况下,该神经网络的数据编码得到的神经网络数据包的优先级为低优先级。 For example, the effect of the neural network on recovering the data is lower than expected compared to the effect of the preset algorithm on recovering the data (for example, the VAMF corresponding to the neural network is less than or equal to a preset value compared to the VAMF corresponding to the preset algorithm. value), the priority of the neural network data packet obtained by encoding the data of the neural network is low priority.
基于上述技术方案,在本申请中可以根据神经网络恢复数据的效果和预设算法恢复该数据的效果,确定该神经网络的神经网络数据包的优先级,在多个神经网络恢复数据的效果都比预设算法恢复该数据的效果好的情况下,以便于优先传输恢复数据的效果比预设算法好的程度高的神经网络,从而用户可以优先使用恢复数据的效果好的神经网络恢复数据,提高用户体验。Based on the above technical solution, in this application, the priority of the neural network data packets of the neural network can be determined based on the effect of restoring data by the neural network and the effect of restoring the data by the preset algorithm, and the effect of restoring data in multiple neural networks can be achieved. If the data recovery effect is better than the preset algorithm, the neural network that recovers the data better than the preset algorithm will be given priority, so that the user can give priority to using the neural network with the best data recovery effect to recover the data. Improve user experience.
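As a concrete reading of the two priority rules discussed above, the following sketch derives a packet priority from restoration quality, using VMAF as the example metric. The threshold values, function names, and the use of two separate helper functions are assumptions made purely for illustration.

```python
HIGH_PRIORITY, LOW_PRIORITY = 1, 0

# Assumed thresholds, for illustration only.
VMAF_THRESHOLD = 80.0        # expected restoration quality of the neural network itself
VMAF_GAIN_THRESHOLD = 5.0    # required gain over the preset (non-NN) algorithm

def priority_from_nn_quality(nn_vmaf: float) -> int:
    """Rule based only on the neural network's own restoration quality:
    the packet is high priority if the NN restores the data well enough."""
    return HIGH_PRIORITY if nn_vmaf > VMAF_THRESHOLD else LOW_PRIORITY

def priority_from_gain_over_preset(nn_vmaf: float, preset_vmaf: float) -> int:
    """Rule based on how much better the NN is than the preset algorithm:
    the packet is high priority only if the NN beats the preset algorithm
    by more than an expected margin."""
    gain = nn_vmaf - preset_vmaf
    return HIGH_PRIORITY if gain > VMAF_GAIN_THRESHOLD else LOW_PRIORITY

# Example: the NN scores VMAF 88 where the preset upscaler scores 81.
print(priority_from_nn_quality(88.0))                # -> 1 (high)
print(priority_from_gain_over_preset(88.0, 81.0))    # -> 1 (high)
```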
在第一方面至第六方面的某些实施方式中,该神经网络数据包用于重构该神经网络数据包对应的神经网络,该神经网络数据包的优先级与重构该神经网络的效果相关。其中,神经网络数据包用于重构该神经网络数据包对应的神经网络可以理解为:该神经网络数据包的数据为用于重构神经网络的数据。如,神经网络数据包的数据为神经网络的系数的数据,而神经网络的系数用于重构神经网络。In some implementations of the first to sixth aspects, the neural network data packet is used to reconstruct the neural network corresponding to the neural network data packet, and the priority of the neural network data packet is related to the effect of reconstructing the neural network. Related. Wherein, the neural network data packet used to reconstruct the neural network corresponding to the neural network data packet can be understood as: the data of the neural network data packet is the data used to reconstruct the neural network. For example, the data of the neural network data packet is the data of the neural network coefficients, and the neural network coefficients are used to reconstruct the neural network.
基于上述技术方案,在本申请中可以根据重构神经网络的效果确定该神经网络的神经网络数据包的优先级,以便于优先传输重构神经网络的效果好的神经网络数据包,提高用户重构神经网络的效率。Based on the above technical solution, in this application, the priority of the neural network data packets of the neural network can be determined according to the effect of the reconstructed neural network, so as to preferentially transmit the neural network data packets with good effect of the reconstructed neural network, and improve user reuse. efficiency of the neural network.
在第一方面至第六方面的某些实施方式中,该神经网络数据包包括该神经网络的系数的数据,该系数的数据用于获得该系数,该系数用于重构该神经网络,该神经网络数据包的优先级与该系数的数据对该系数的影响相关。In some implementations of the first to sixth aspects, the neural network data packet includes data of coefficients of the neural network, the data of the coefficients are used to obtain the coefficients, and the coefficients are used to reconstruct the neural network, the The priority of a neural network packet is related to the impact of that coefficient's data on that coefficient.
例如,在系数数据计算得到的系数#1和系数之间的差值满足预期(如,系数#1和系数之间的差值小于或者等于预设阈值)的情况下,确定神经网络数据包的优先级为高优先级。For example, when the difference between coefficient #1 and the coefficient calculated by the coefficient data meets expectations (for example, the difference between coefficient #1 and the coefficient is less than or equal to the preset threshold), determine the value of the neural network data packet The priority is high priority.
还例如,在系数数据计算得到的系数#1和系数之间的差值不满足预期(如,系数#1和系数之间的差值大于预设阈值)的情况下,确定神经网络数据包的优先级为低优先级。For another example, in the case where the difference between coefficient #1 and the coefficient calculated by the coefficient data does not meet expectations (for example, the difference between coefficient #1 and the coefficient is greater than a preset threshold), determine the value of the neural network data packet The priority is low priority.
基于上述技术方案,在本申请中可以根据神经网络数据包的数据计算得到的值和神经网络的系数之间的差别确定神经网络数据包的优先级,以便于优先传输能够计算得到与神经网络的系数最相近的值的神经网络数据包,以使得用户能够根据接收到的神经网络数据包快速重构神经网络,提供用户重构神经网络的效率。Based on the above technical solution, in this application, the priority of the neural network data packet can be determined based on the difference between the value calculated from the data of the neural network data packet and the coefficient of the neural network, so that the priority transmission can be calculated and compared with the neural network. The neural network data packet with the closest coefficient value enables the user to quickly reconstruct the neural network based on the received neural network data packet and improves the efficiency of the user in reconstructing the neural network.
在第一方面至第六方面的某些实施方式中,该系数由多个比特位表示,该多个比特位的值用于计算该系数的绝对值,该系数的数据由至少一个比特位表示,该至少一个比特位属于该多个比特位。In some implementations of the first to sixth aspects, the coefficient is represented by a plurality of bits, the values of the plurality of bits are used to calculate the absolute value of the coefficient, and the data of the coefficient is represented by at least one bit. , the at least one bit belongs to the plurality of bits.
基于上述技术方案,神经网络的系数可以由多个比特位表示,神经网络数据包的数据可以由至少一个比特位表示,且至少一个比特位属于上述的多个比特位,通过用比特位形式表示系数数据,有利于用户根据接收到的神经网络数据包计算神经网络的系数。Based on the above technical solution, the coefficients of the neural network can be represented by multiple bits, the data of the neural network data packet can be represented by at least one bit, and at least one bit belongs to the above-mentioned multiple bits, by using the bit form to represent Coefficient data is helpful for users to calculate the coefficients of the neural network based on the received neural network data packets.
在第一方面至第六方面的某些实施方式中,该多个比特位包括符号部分、指数部分以及分数部分,在该至少一个比特位为该符号部分、该指数部分和该分数部分的第一部分的情况下,神经网络数据包的优先级为第一优先级;在该至少一个比特位为该分数部分的第二部分的情况下,确定该神经网络对应的调度优先级为第二优先级,其中,该第一优先级高于该第二优先级,该分数部分的第一部分为该分数部分的高位数据部分,该分数部分的第二部分为该分数部分的低位数据部分。In some implementations of the first to sixth aspects, the plurality of bits includes a sign part, an exponent part and a fraction part, and the at least one bit is the sign part, the exponent part and the fraction part. In the case of a part, the priority of the neural network data packet is the first priority; in the case where the at least one bit is the second part of the fractional part, the scheduling priority corresponding to the neural network is determined to be the second priority. , wherein the first priority is higher than the second priority, the first part of the fractional part is the high-order data part of the fractional part, and the second part of the fractional part is the low-order data part of the fractional part.
基于上述技术方案,神经网络的系数可以由符号部分、指数部分以及分数部分表示,有利于确定不同部分对神经网络的系数的绝对值的影响程度,从而可以从神经网络数据包 的数据由系数的那些部分表示确定该神经网络数据包的数据对神经网络的系数的绝对值的影响程度,使得方案更加简洁。Based on the above technical solution, the coefficients of the neural network can be represented by the symbolic part, the exponential part and the fractional part, which is helpful to determine the degree of influence of different parts on the absolute value of the coefficient of the neural network, so that the neural network data packet can be obtained The data represented by those parts of the coefficient determine the degree of influence of the data of the neural network packet on the absolute value of the neural network coefficient, making the scheme more concise.
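To make the bit-level rule above concrete, the following sketch splits an IEEE-754 single-precision coefficient into its sign, exponent, and fraction fields (consistent with the floating-point bit structure illustrated in Figure 6) and assigns the first (higher) priority to the packet carrying the sign, the exponent, and the high-order part of the fraction, and the second (lower) priority to the packet carrying only the low-order fraction bits. The 10-bit split point of the fraction and the helper names are illustrative assumptions.

```python
import struct

FIRST_PRIORITY, SECOND_PRIORITY = 1, 0
HIGH_FRACTION_BITS = 10   # assumed split point inside the 23-bit fraction

def split_float32(coefficient: float):
    """Split an IEEE-754 single-precision value into its sign, exponent and
    fraction fields (1 + 8 + 23 bits)."""
    bits = struct.unpack("!I", struct.pack("!f", coefficient))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    fraction = bits & 0x7FFFFF
    return sign, exponent, fraction

def packetize_coefficient(coefficient: float):
    """Put the sign, exponent and high-order fraction bits into a first-priority
    packet and the remaining low-order fraction bits into a second-priority
    packet (illustrative packetization only)."""
    sign, exponent, fraction = split_float32(coefficient)
    frac_high = fraction >> (23 - HIGH_FRACTION_BITS)
    frac_low = fraction & ((1 << (23 - HIGH_FRACTION_BITS)) - 1)
    return [
        {"priority": FIRST_PRIORITY,  "data": (sign, exponent, frac_high)},
        {"priority": SECOND_PRIORITY, "data": (frac_low,)},
    ]

# Losing the second packet only perturbs the low-order fraction bits, so the
# coefficient rebuilt from the first packet alone stays close to the original.
print(packetize_coefficient(0.15625))
```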
在第一方面至第六方面的某些实施方式中,该指示信息携带在该神经网络数据包的包头中。In some implementations of the first to sixth aspects, the indication information is carried in a header of the neural network data packet.
在第一方面至第六方面的某些实施方式中,该指示信息位于该神经网络数据包包头中的用户数据报协议UDP字段和实时传输协议RTP字段之间。In some implementations of the first to sixth aspects, the indication information is located between the User Datagram Protocol UDP field and the Real-Time Transport Protocol RTP field in the neural network data packet header.
第七方面,本申请提供一种处理器,用于执行上述各方面提供的方法。In a seventh aspect, this application provides a processor for executing the methods provided in the above aspects.
对于处理器所涉及的发送和获取/接收等操作,如果没有特殊说明,或者,如果未与其在相关描述中的实际作用或者内在逻辑相抵触,则可以理解为处理器输出和接收、输入等操作,也可以理解为由射频电路和天线所进行的发送和接收操作,本申请对此不做限定。For operations such as sending and getting/receiving involved in the processor, if there is no special explanation, or if it does not conflict with its actual role or internal logic in the relevant description, it can be understood as processor output, reception, input and other operations. , can also be understood as the transmitting and receiving operations performed by the radio frequency circuit and the antenna, which is not limited in this application.
第八方面,提供一种计算机可读存储介质,该计算机可读存储介质存储用于设备执行的程序代码,该程序代码包括用于执行上述第一方面至第三方面的任意一种实现方式提供的方法。In an eighth aspect, a computer-readable storage medium is provided. The computer-readable storage medium stores a program code for device execution. The program code includes a method for executing any one of the above-mentioned first to third aspects. Methods.
第九方面,提供一种包含指令的计算机程序产品,当该计算机程序产品在计算机上运行时,使得计算机执行上述第一方面至第三方面的任意一种实现方式提供的方法。A ninth aspect provides a computer program product containing instructions, which when the computer program product is run on a computer, causes the computer to execute the method provided by any one of the above implementations of the first to third aspects.
第十方面,提供一种芯片,芯片包括处理器与通信接口,处理器通过通信接口读取存储器上存储的指令,执行上述第一方面至第三方面的任意一种实现方式提供的方法。In a tenth aspect, a chip is provided. The chip includes a processor and a communication interface. The processor reads instructions stored in the memory through the communication interface and executes the method provided by any one of the above-mentioned implementations of the first to third aspects.
可选地,作为一种实现方式,芯片还包括存储器,存储器中存储有计算机程序或指令,处理器用于执行存储器上存储的计算机程序或指令,当计算机程序或指令被执行时,处理器用于执行上述第一方面至第三方面的任意一种实现方式提供的方法。Optionally, as an implementation manner, the chip also includes a memory, in which computer programs or instructions are stored. The processor is used to execute the computer programs or instructions stored in the memory. When the computer program or instructions are executed, the processor is used to execute The method provided by any one of the above implementations of the first aspect to the third aspect.
第十一方面,提供了一种数据传输的装置,该装置包括处理器,该处理器用于执行上述第一方面至第四方面以及第一方面至第三方面中任一种可能实现方式中的方法。In an eleventh aspect, a data transmission device is provided. The device includes a processor configured to perform any of the possible implementations of the first to fourth aspects and the first to third aspects. method.
第十二方面,提供一种通信系统,包括第四方面的数据传输的装置至第六方面的数据传输的装置。A twelfth aspect provides a communication system, including the data transmission device of the fourth aspect to the data transmission device of the sixth aspect.
以上第七方面至第十二方面的有益效果可参照第一方面至第三方面中的有益效果的描述,不重复赘述。For the beneficial effects of the above seventh aspect to the twelfth aspect, reference can be made to the description of the beneficial effects of the first aspect to the third aspect, and will not be repeated.
附图说明Description of the drawings
图1中的(a)至(c)是适用本申请实施例的应用场景示意图。(a) to (c) in Figure 1 are schematic diagrams of application scenarios applicable to embodiments of the present application.
图2是本申请实施例提供的基于DNN的SR传输模式和传统传输模式的原理框图。Figure 2 is a functional block diagram of the DNN-based SR transmission mode and the traditional transmission mode provided by the embodiment of the present application.
图3是本申请实施例提供的基于SR方法的XR视频传输示意图。Figure 3 is a schematic diagram of XR video transmission based on the SR method provided by the embodiment of the present application.
图4是本申请实施例提供的QoS保障机制的架构的示意图。Figure 4 is a schematic diagram of the architecture of the QoS guarantee mechanism provided by the embodiment of the present application.
图5是本申请实施例适用的QoS流的映射的示意图。Figure 5 is a schematic diagram of QoS flow mapping applicable to the embodiment of the present application.
图6是一种浮点类型数据的比特结构示意图。Figure 6 is a schematic diagram of the bit structure of floating point type data.
图7是本申请实施例提供的一种通信方法的示意性流程图。Figure 7 is a schematic flow chart of a communication method provided by an embodiment of the present application.
图8是本申请实施例提供的一种神经网络数据包示意图。Figure 8 is a schematic diagram of a neural network data packet provided by an embodiment of the present application.
图9是本申请实施例提供的一种生成神经网络数据包的示意图。Figure 9 is a schematic diagram of generating neural network data packets provided by an embodiment of the present application.
图10是本申请实施例提供的一种GTP-U数据包的包头的示意图。Figure 10 is a schematic diagram of a header of a GTP-U data packet provided by an embodiment of the present application.
图11是适用于本申请实施例的一种通信装置的示意性框图。 Figure 11 is a schematic block diagram of a communication device suitable for embodiments of the present application.
图12是适用于本申请实施例的一种通信装置的结构框图。Figure 12 is a structural block diagram of a communication device suitable for embodiments of the present application.
具体实施方式 Detailed description of embodiments
下面将结合附图,对本申请实施例中的技术方案进行描述。The technical solutions in the embodiments of the present application will be described below with reference to the accompanying drawings.
本申请实施例的技术方案可以应用于各种通信系统,例如:第五代(5th generation,5G)系统或新无线(new radio,NR)、长期演进(long term evolution,LTE)系统、LTE频分双工(frequency division duplex,FDD)系统、LTE时分双工(time division duplex,TDD)等。本申请提供的技术方案还可以应用于未来的通信系统,如第六代移动通信系统。本申请实施例的技术方案还可以应用于设备到设备(device to device,D2D)通信,车辆外联(vehicle-to-everything,V2X)通信,机器到机器(machine to machine,M2M)通信,机器类型通信(machine type communication,MTC),以及物联网(internet of things,IoT)通信系统或者其他通信系统。The technical solutions of the embodiments of this application can be applied to various communication systems, such as: fifth generation (5G) systems or new radio (NR), long term evolution (LTE) systems, LTE frequency Frequency division duplex (FDD) system, LTE time division duplex (TDD), etc. The technical solution provided by this application can also be applied to future communication systems, such as the sixth generation mobile communication system. The technical solutions of the embodiments of this application can also be applied to device-to-device (D2D) communication, vehicle-to-everything (V2X) communication, machine-to-machine (M2M) communication, and machine-to-machine (M2M) communication. Type communication (machine type communication, MTC), and Internet of things (Internet of things, IoT) communication system or other communication systems.
为便于理解本申请实施例,首先结合图1中的(a)至(c)简单介绍本申请实施例适用的通信系统。In order to facilitate understanding of the embodiments of the present application, the communication system applicable to the embodiments of the present application is briefly introduced with reference to (a) to (c) in Figure 1 .
本申请实施例的技术方案可以应用于图1中的(a)至(c)所示的通信系统中,当然也可以用在未来网络架构,比如第六代(6th generation,6G)网络架构等,本申请实施例对此不作具体限定。The technical solutions of the embodiments of the present application can be applied to the communication systems shown in (a) to (c) in Figure 1. Of course, they can also be used in future network architectures, such as the sixth generation (6th generation, 6G) network architecture, etc. , the embodiments of this application do not specifically limit this.
下面将结合图1中的(a)至(c)举例说明本申请实施例适用的通信系统。应理解,本文中描述的通信系统仅是示例,不应对本申请构成任何限定。The communication system applicable to the embodiment of the present application will be illustrated below with reference to (a) to (c) in Figure 1 . It should be understood that the communication system described herein is only an example and should not constitute any limitation on this application.
作为示例性说明,图1中的(a)示出了本申请实施例适用的通信系统100a的架构示意图。如图1中的(a)所示,该网络架构可以包括但不限于以下网元(或者称为功能网元、功能实体、节点、设备等):As an exemplary illustration, (a) in FIG. 1 shows a schematic architectural diagram of a communication system 100a applicable to the embodiment of the present application. As shown in (a) in Figure 1, the network architecture may include but is not limited to the following network elements (also known as functional network elements, functional entities, nodes, devices, etc.):
用户设备(user equipment,UE)、接入网设备(access network,AN)(或者称为无线接入网设备(radio access network,RAN))、接入和移动性管理功能(access and mobility management function,AMF)网元、会话管理功能(session management function,SMF)网元、用户面功能(user plane function,UPF)网元、策略控制功能(policy control function,PCF)网元、应用功能(application function,AF)网元、能力开放功能(network exposure function,NEF)网元、数据网络(data network,DN)和服务器。User equipment (UE), access network equipment (AN) (or radio access network equipment (RAN)), access and mobility management function , AMF) network element, session management function (SMF) network element, user plane function (UPF) network element, policy control function (PCF) network element, application function (application function) , AF) network element, capability exposure function (network exposure function, NEF) network element, data network (data network, DN) and server.
下面对图1中的(a)中示出的各网元进行简单介绍:The following is a brief introduction to each network element shown in (a) in Figure 1:
1、UE:为与(R)AN通信的终端也可以称为终端设备(terminal equipment)、接入终端、用户单元、用户站、移动站、移动台(mobile station,MS)、移动终端(mobile terminal,MT)、远方站、远程终端、移动设备、用户终端、终端、无线通信设备、用户代理或用户装置。终端设备可以是一种向用户提供语音/数据连通性的设备,例如,具有无线连接功能的手持式设备、车载设备等。目前,一些终端的举例可以为:手机(mobile phone)、平板电脑(pad)、带无线收发功能的电脑(如笔记本电脑、掌上电脑等)、移动互联网设备(mobile internet device,MID)、虚拟现实(virtual reality,VR)设备、增强现实(augmented reality,AR)设备、工业控制(industrial control)中的无线终端、无人驾驶(self driving)中的无线终端、远程医疗(remote medical)中的无线终端、智能电网(smart grid)中的无线终端、运输安全(transportation safety)中的无线终端、智慧城市(smart city)中的无线 终端、智慧家庭(smart home)中的无线终端、蜂窝电话、无绳电话、会话启动协议(session initiation protocol,SIP)电话、无线本地环路(wireless local loop,WLL)站、个人数字助理(personal digital assistant,PDA)、具有无线通信功能的手持设备、计算设备或连接到无线调制解调器的其它处理设备、车载设备、可穿戴设备(如,头显扩展现实(extended reality,XR)眼镜)、视频播放器、全息投影仪等设备,5G网络中的终端设备或者未来演进的公用陆地移动通信网络(public land mobile network,PLMN)中的终端设备等。1. UE: A terminal that communicates with (R)AN. It can also be called terminal equipment, access terminal, user unit, user station, mobile station, mobile station (MS), mobile terminal (mobile). terminal, MT), remote station, remote terminal, mobile device, user terminal, terminal, wireless communications equipment, user agent or user device. The terminal device may be a device that provides voice/data connectivity to the user, such as a handheld device, a vehicle-mounted device, etc. with wireless connectivity capabilities. Currently, some examples of terminals include: mobile phones, tablets, computers with wireless transceiver functions (such as laptops, handheld computers, etc.), mobile internet devices (MID), virtual reality (virtual reality, VR) equipment, augmented reality (AR) equipment, wireless terminals in industrial control, wireless terminals in self-driving, wireless terminals in remote medical Terminals, wireless terminals in smart grids, wireless terminals in transportation safety, wireless terminals in smart cities Terminals, wireless terminals in smart homes, cellular phones, cordless phones, session initiation protocol (SIP) phones, wireless local loop (WLL) stations, personal digital assistants assistant (PDA)), handheld devices with wireless communication capabilities, computing devices or other processing devices connected to wireless modems, vehicle-mounted devices, wearable devices (such as head-mounted extended reality (XR) glasses), video players , holographic projectors and other equipment, terminal equipment in the 5G network or terminal equipment in the future evolved public land mobile communication network (public land mobile network, PLMN), etc.
此外,终端设备还可以是物联网(Internet of things,IoT)系统中的终端设备。例如,无人驾驶中的无线终端、远程医疗(remote medical)中的无线终端、智能电网(smart grid)中的无线终端、运输安全(transportation safety)中的无线终端、智慧城市(smart city)中的无线终端、智慧家庭(smart home)中的无线终端、可穿戴终端设备等等。IoT是未来信息技术发展的重要组成部分,其主要技术特点是将物品通过通信技术与网络连接,从而实现人机互连,物物互连的智能化网络。IoT技术可以通过例如窄带(narrow band,NB)技术,做到海量连接,深度覆盖,终端省电。In addition, the terminal device can also be a terminal device in an Internet of things (IoT) system. For example, wireless terminals in driverless driving, wireless terminals in remote medical, wireless terminals in smart grid, wireless terminals in transportation safety, and wireless terminals in smart city. wireless terminals, wireless terminals in smart homes, wearable terminal devices, etc. IoT is an important part of the future development of information technology. Its main technical feature is to connect objects to the network through communication technology, thereby realizing an intelligent network of human-computer interconnection and object interconnection. IoT technology can achieve massive connections, deep coverage, and terminal power saving through narrowband (NB) technology, for example.
应理解,终端设备可以是任何可以接入网络的设备。终端设备与接入网设备之间可以采用某种空口技术相互通信。It should be understood that the terminal device can be any device that can access the network. Terminal equipment and access network equipment can communicate with each other using some air interface technology.
可选地,用户设备可以用于充当基站。例如,用户设备可以充当调度实体,其在V2X或D2D等中的用户设备之间提供侧行链路信号。比如,蜂窝电话和汽车利用侧行链路信号彼此通信。蜂窝电话和智能家居设备之间通信,而无需通过基站中继通信信号。Optionally, the user equipment can be used to act as a base station. For example, user equipment may act as a scheduling entity that provides sidelink signals between user equipments in V2X or D2D, etc. For example, cell phones and cars use sidelink signals to communicate with each other. Cell phones and smart home devices communicate between each other without having to relay communication signals through base stations.
2. AN: used to provide network access for authorized user equipment in a specific area, and capable of using transmission tunnels of different quality of service according to the level of the user equipment, the requirements of the service, and so on.
The AN can manage radio resources, provide access services for user equipment, and thereby complete the forwarding of control signals and user equipment data between the user equipment and the core network. The AN can also be understood as a base station in a traditional network.
Illustratively, the access network device in the embodiments of this application may be any communication device with a wireless transceiver function that is used to communicate with user equipment. The access network device includes but is not limited to: an evolved NodeB (eNB), a radio network controller (RNC), a NodeB (NB), a base station controller (BSC), a base transceiver station (BTS), a home base station (home evolved node B, HeNB, or home node B, HNB), a baseband unit (BBU), an access point (AP) in a wireless fidelity (WiFi) system, a wireless relay node, a wireless backhaul node, a transmission point (TP), or a transmission and reception point (TRP). It may also be a gNB in a 5G system such as NR, a transmission point (TRP or TP), one antenna panel or a group of antenna panels (including multiple antenna panels) of a base station in a 5G system, or a network node constituting a gNB or a transmission point, such as a baseband unit (BBU) or a distributed unit (DU).
In some deployments, a gNB may include a centralized unit (CU) and a DU. The gNB may also include an active antenna unit (AAU). The CU implements some functions of the gNB, and the DU implements other functions of the gNB. For example, the CU is responsible for processing non-real-time protocols and services and implements the functions of the radio resource control (RRC) layer and the packet data convergence protocol (PDCP) layer. The DU is responsible for processing physical layer protocols and real-time services and implements the functions of the radio link control (RLC) layer, the media access control (MAC) layer, and the physical (PHY) layer. The AAU implements some physical layer processing functions, radio frequency processing, and functions related to the active antenna. Since the information of the RRC layer eventually becomes the information of the PHY layer, or is transformed from the information of the PHY layer, under this architecture, higher-layer signaling such as RRC layer signaling can also be considered to be sent by the DU, or by the DU plus the AAU. It can be understood that the access network device may be a device including one or more of a CU node, a DU node, and an AAU node. In addition, the CU may be classified as an access network device in the radio access network (RAN), or the CU may be classified as an access network device in the core network (CN), which is not limited in this application.
3. User plane network element: used for packet routing and forwarding, quality of service (QoS) handling of user plane data, and so on.
As shown in (a) of Figure 1, the user plane network element may be a UPF network element, and may include an intermediate user plane function (I-UPF) network element and a PDU session anchor user plane function (PSA-UPF) network element. In future communication systems, the user plane network element may still be a UPF network element, or may have another name, which is not limited in this application.
4. Data network: provides, for example, operator services, Internet access, or third-party services, and includes servers; the server side implements video source encoding, rendering, and so on.
In future communication systems, the data network may still be a DN, or may have another name, which is not limited in this application.
In a 5G communication system, after accessing the network, a terminal device can establish a protocol data unit (PDU) session and access the DN through the PDU session, and can interact with an application function network element (for example, an application server) deployed in the DN. As shown in (a) of Figure 1, depending on the DN accessed by the user, the network can select, according to the network policy, the UPF accessing the DN as the PDU session anchor (PSA), and access the application function network element through the N6 interface of the PSA.
5. Access and mobility management network element: mainly used for mobility management, access management, and so on, and can be used to implement the functions of a mobility management entity (MME) other than session management, for example, lawful interception and access authorization/authentication.
As shown in (a) of Figure 1, the access management network element may be an AMF network element. In future communication systems, the access management network element may still be an AMF network element, or may have another name, which is not limited in this application.
6. Session management network element: mainly used for session management, Internet protocol (IP) address allocation and management for terminal devices, selection of a manageable user plane function for the terminal device, serving as the termination point of the policy control and charging function interfaces, downlink data notification, and so on.
As shown in (a) of Figure 1, the session management network element may be an SMF network element, and may include an intermediate session management function (I-SMF) network element and an anchor session management function (A-SMF) network element. In future communication systems, the session management network element may still be an SMF network element, or may have another name, which is not limited in this application.
7. Policy control network element: provides a unified policy framework used to guide network behavior, and provides policy rule information for control plane function network elements (for example, AMF and SMF network elements).
The policy control network element may be a policy and charging rules function (PCRF) network element. As shown in (a) of Figure 1, the policy control network element may be a PCF network element. In future communication systems, the policy control network element may still be a PCF network element, or may have another name, which is not limited in this application.
8. Application function network element: the application function network element can interact with the 5G system, for example to access the network exposure function network element or to interact with the policy framework for policy control.
As shown in (a) of Figure 1, the application function network element may be an AF network element. In future communication systems, the application function network element may still be an AF network element, or may have another name, which is not limited in this application.
9. Network exposure function network element: used to provide customized functions for network exposure.
As shown in (a) of Figure 1, the network exposure function network element may be a network exposure function (NEF) network element. In future communication systems, the network exposure function network element may still be an NEF network element, or may have another name, which is not limited in this application.
10. Server: can provide application service data, for example, video data, audio data, or other types of data. The data types of the application services provided by the server are only examples in this application and are not limited.
It can be understood that the above network elements or functions may be network elements in hardware devices, software functions running on dedicated hardware, or virtualized functions instantiated on a platform (for example, a cloud platform). The above network elements or functions can be divided into one or more services, and furthermore there may be services that exist independently of network functions. In this application, an instance of one of the above functions, an instance of a service included in one of the above functions, or an instance of a service that exists independently of network functions may all be called a service instance.
Further, the AF network element may be abbreviated as AF, the NEF network element as NEF, and the AMF network element as AMF. That is, the AF described later in this application can be replaced by the application function network element, the NEF by the network exposure function network element, and the AMF by the access and mobility management network element.
In the architecture shown in (a) of Figure 1, the interface names and functions between the network elements are as follows:
1) N1: the interface between the AMF and the terminal, which can be used to deliver QoS control rules and the like to the terminal.
2) N2: the interface between the AMF and the RAN, which can be used to deliver radio bearer control information and the like from the core network side to the RAN.
3) N3: the interface between the RAN and the UPF, mainly used to transfer uplink and downlink user plane data between the RAN and the UPF.
4) N4: the interface between the SMF and the UPF, which can be used to transfer information between the control plane and the user plane, including delivery of forwarding rules, QoS control rules, and traffic statistics rules from the control plane to the user plane, as well as information reporting by the user plane.
5) N5: the interface between the AF and the PCF, which can be used for delivering application service requests and reporting network events.
6) N6: the interface between the UPF and the DN, used to transfer uplink and downlink user data flows between the UPF and the DN.
7) N7: the interface between the PCF and the SMF, which can be used to deliver protocol data unit (PDU) session granularity and service data flow granularity control policies.
8) N11: the interface between the SMF and the AMF, which can be used to transfer the PDU session tunnel information between the RAN and the UPF, control messages to be sent to the terminal, radio resource control information to be sent to the RAN, and so on.
The meanings of these interface serial numbers are not limited here.
It should also be understood that service-based interfaces may be used between certain network elements in the system, which will not be described again here.
As an exemplary illustration, (b) of Figure 1 shows a schematic architectural diagram of a communication system 100b to which the embodiments of this application are applicable. As shown in (b) of Figure 1, the architecture is a terminal-network-terminal scenario, which may be a tactile Internet (TI) scenario: one terminal is the interface between the tactile user of the master domain and the human-machine system, and the other end is a remotely controlled robot or remote operator in the controlled domain. The core network and access network used for network transmission include LTE, 5G, or the next-generation air interface 6G. The master domain receives audio/video feedback signals from the controlled domain. With the help of various command and feedback signals, the master domain and the controlled domain are connected through a bidirectional communication link over the network domain, thereby forming a global control loop.
As shown in (b) of Figure 1, the network architecture may include but is not limited to the following network elements (also called functional network elements, functional entities, nodes, devices, etc.):
UE#1, AN#1, UPF, UE#2, AN#2.
The following briefly introduces the network elements shown in (b) of Figure 1:
1. UE#1: the interface between the tactile user of the master domain and the human-machine system, which receives video, audio, and other data from the controlled domain. It may include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices such as head-mounted display glasses, computing devices or other processing devices connected to wireless modems, as well as various forms of terminals, mobile stations (MS), user equipment, soft terminals, and so on, such as video playback devices and holographic projectors. The embodiments of this application are not limited in this respect.
2. AN#1: used to provide network access for authorized terminal devices (such as UE#1) in a specific area, and capable of using transmission tunnels of different quality according to the level of the terminal device, the requirements of the service, and so on.
3. UPF: used for packet routing and forwarding, QoS handling of user plane data, and so on. Refer to the description of the user plane network element in (a) of Figure 1, which will not be repeated here.
4. AN#2: used to provide network access for authorized terminal devices (such as UE#2) in a specific area, and capable of using transmission tunnels of different quality according to the level of the terminal device, the requirements of the service, and so on.
5. UE#2: a remotely controlled robot or remote operator in the controlled domain, which can send video, audio, and other data to the master domain. It may include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices such as head-mounted display glasses, computing devices or other processing devices connected to wireless modems, as well as various forms of terminals, mobile stations (MS), user equipment, soft terminals, and so on, such as video playback devices and holographic projectors. The embodiments of this application are not limited in this respect.
As an exemplary illustration, (c) of Figure 1 shows a schematic architectural diagram of a communication system 100c to which the embodiments of this application are applicable. As shown in (c) of Figure 1, the architecture is a WiFi scenario, in which a cloud server transmits XR media data or ordinary video to a terminal (XR device) through a fixed network and a WiFi router/AP/set-top box.
As shown in (c) of Figure 1, the network architecture may include but is not limited to the following network elements (also called functional network elements, functional entities, nodes, devices, etc.):
a server, a fixed network, a WiFi router/WiFi access point (AP), and a UE.
The following briefly introduces the network elements shown in (c) of Figure 1:
1. Server: can provide application service data, for example, video data, audio data, or other types of data. The data types of the application services provided by the server are only examples in this application and are not limited.
2. Fixed network: a network that transmits signals through solid media such as metal wires or optical fibers.
In this application, application service data such as video data and audio data can be transmitted to the WiFi router/WiFi AP through the fixed network.
3. WiFi router/WiFi AP: can convert wired network signals and mobile network signals into wireless signals for reception by UEs with wireless communication capabilities.
4. UE: may include various handheld devices with wireless communication functions, vehicle-mounted devices, wearable devices, computing devices or other processing devices connected to wireless modems, such as head-mounted display glasses, video playback devices, holographic projectors, and so on. The embodiments of this application are not limited in this respect.
It should be understood that the network architectures to which the embodiments of this application can be applied are only exemplary. The network architectures applicable to the embodiments of this application are not limited thereto; any network architecture that can implement the functions of the above network elements is applicable to the embodiments of this application.
It should also be understood that the AMF, SMF, UPF, PCF, NEF, etc. shown in (a) of Figure 1 can be understood as network elements used to implement different functions, which can, for example, be combined into network slices as required. These network elements may each be an independent device, or may be integrated into the same device to implement different functions; they may be network elements in hardware devices, software functions running on dedicated hardware, or virtualized functions instantiated on a platform (for example, a cloud platform). This application does not limit the specific form of the above network elements.
It should also be understood that the above naming is defined only to facilitate distinguishing different functions and should not constitute any limitation on this application. This application does not exclude the possibility of using other names in 5G networks and other future networks. For example, in a 6G network, some or all of the above network elements may continue to use the 5G terminology, or may adopt other names.
It should also be understood that the interface names between the network elements in (a) of Figure 1 are only examples; in specific implementations, the interfaces may have other names, which is not specifically limited in this application. In addition, the names of the messages (or signaling) transmitted between the above network elements are also only examples, and do not constitute any limitation on the functions of the messages themselves.
It should be understood that the method provided by the embodiments of this application can be applied to 5G communication systems, for example, the communication systems shown in (a) to (c) of Figure 1. However, the embodiments of this application do not limit the scenarios in which the method can be applied; for example, the method is also applicable to other network architectures that include network elements capable of implementing the corresponding functions, such as a sixth generation (6G) system architecture. Moreover, the network elements referred to above may keep the same functions in future communication systems, but their names may change.
To facilitate understanding of the technical solutions of the embodiments of this application, some terms or concepts that may be involved in the embodiments of this application are briefly described.
1. XR technology: in recent years, with the continuous progress and improvement of XR technology, related industries have developed vigorously. Today, extended reality technology has entered various fields closely related to people's production and life, such as education, entertainment, military, medical care, environmental protection, transportation, and public health. Extended reality is a general term for various reality-related technologies, including VR, AR, MR, and so on.
VR technology mainly refers to rendering visual and audio scenes to simulate, as closely as possible, the sensory stimulation that the visual and audio content of the real world gives the user. VR technology usually requires the user to wear a head-mounted display (HMD), which completely replaces the user's field of view with simulated visual components, and requires the user to wear headphones to provide the accompanying audio. In addition, VR usually requires some form of head and motion tracking of the user so that the simulated visual and audio content can be updated in time, keeping the visual and audio content experienced by the user consistent with the user's movements.
AR technology mainly refers to providing additional visual or auditory information or artificially generated content in the real environment perceived by the user. The user's acquisition of the real environment may be direct, that is, without intermediate sensing, processing, and rendering, or indirect, that is, conveyed through sensors and the like, with further enhancement processing.
MR technology is an advanced form of AR. One of its implementations is to insert virtual elements into the physical scene, with the purpose of providing the user with an immersive experience in which these elements are part of the real scene.
2. Super resolution (SR): the technology of improving the resolution of an original image/video through hardware or software, obtaining a high-resolution image from a low-resolution image. At present, NN-based SR technology has attracted widespread attention because of its remarkable picture restoration effect, where the NN may be a deep neural network (DNN).
For ease of understanding, the DNN-based SR technology is described in detail below with reference to Figures 2 and 3.
Figure 2 is a block diagram of the DNN-based SR transmission mode and the traditional transmission mode provided by an embodiment of this application. The DNN-based SR transmission mode specifically includes the following four steps:
Step 1: on the server side, the high definition (HD) XR video frame is spatially divided into tiles (also called slices; for ease of description they are collectively referred to as tiles below).
For example, an entire video frame with a 4K resolution (3840*1920) can be divided into small tiles, each with a resolution of 192*192, and the entire video is then divided into segments in time; for example, every 1-2 seconds may form one segment, or one video frame may form one segment.
The purpose of step 1 is to spread the processing load and to accelerate processing through parallel computation.
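For illustration only, the following is a minimal sketch of step 1 (assuming the decoded frame is held in a NumPy array and using the 192*192 tile size from the example above; the function name is chosen here for readability):

```python
import numpy as np

def split_into_tiles(frame: np.ndarray, tile_h: int = 192, tile_w: int = 192):
    """Spatially split an H x W x C frame into a list of tile arrays.

    Assumes the frame dimensions are multiples of the tile size,
    as in the 3840*1920 / 192*192 example above.
    """
    h, w = frame.shape[:2]
    tiles = []
    for top in range(0, h, tile_h):
        for left in range(0, w, tile_w):
            tiles.append(frame[top:top + tile_h, left:left + tile_w])
    return tiles

# Example: a 4K frame (1920 rows x 3840 columns, 3 color channels).
frame = np.zeros((1920, 3840, 3), dtype=np.uint8)
tiles = split_into_tiles(frame)
print(len(tiles))  # 10 x 20 = 200 tiles of 192*192, processed in parallel
```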
Step 2: on the server side, the HD tiles are downsampled (that is, sampled in the spatial domain or the frequency domain) to obtain low definition (LD) tiles.
For example, a tile with a resolution of 192*192 has a resolution of 24*24 after downsampling. After downsampling, a traditional video compression technique, such as high efficiency video coding (HEVC), can be used to further compress the tiles into ultra-low resolution (ULR) tiles.
Step 3: on the server side, the ULR tiles are used as the input of the DNN, the original HD content is used as the target output of the DNN, the two are used as the training set of the DNN, and PSNR is used as the loss function to train the DNN, so that an adaptive neural network matching the application layer service can be obtained. The ULR tiles are then sent to the user together with the DNN. The reason for transmitting the information of the neural network is that the receiving end does not know the original video; the source end needs to generate the neural network based on the original video and transmit it to the receiving end.
Step 4: on the user side, the ULR tiles can be used as the input of the DNN, and the output is the high-definition video content.
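As a rough illustration of steps 3 and 4 (a minimal sketch, not the actual model of this embodiment: the tiny convolutional network, the bilinear pre-upscaling, and the 8x factor below are assumptions made for brevity), a per-segment SR network could be trained against the original HD tiles with an MSE loss, which is equivalent to maximizing PSNR:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySRNet(nn.Module):
    """A minimal SR network: upscale a ULR tile by 8x (24*24 -> 192*192)."""
    def __init__(self, scale: int = 8):
        super().__init__()
        self.scale = scale
        self.body = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x):
        x = F.interpolate(x, scale_factor=self.scale, mode="bilinear",
                          align_corners=False)
        return self.body(x)

def train_step(model, optimizer, ulr_tiles, hd_tiles):
    """One training step: ULR tiles as input, original HD tiles as target."""
    optimizer.zero_grad()
    restored = model(ulr_tiles)
    loss = F.mse_loss(restored, hd_tiles)   # minimizing MSE maximizes PSNR
    loss.backward()
    optimizer.step()
    # PSNR for monitoring, assuming pixel values normalized to [0, 1]
    psnr = 10 * torch.log10(1.0 / loss.detach())
    return psnr.item()
```

On the user side (step 4), the same model is simply applied to the received ULR tiles in inference mode to reconstruct the high-definition content.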
When SR technology is used in the existing transmission architecture, one picture frame of the XR video is divided at the network transport layer into tens of Internet protocol (IP) packets, for example 50 IP packets, and the data of the NN is also encoded into multiple IP packets. These IP packets are then transmitted to the fixed network and/or core network, after which the IP packets are transmitted to the UE through the RAN. The specific flow is shown in Figure 3, which is a schematic diagram of SR-based XR video transmission provided by an embodiment of this application.
3. Protocol data unit (PDU) session: an association between a terminal device and a DN, used to provide a PDU connection service.
4. Quality of service (QoS) flow mechanism: the current standards specify that the QoS flow is the minimum QoS control granularity, and each QoS flow has a corresponding QoS configuration.
For ease of understanding, the architecture of the QoS guarantee mechanism involved in this application is described below with reference to Figure 4. Figure 4 is a schematic diagram of the architecture of the QoS guarantee mechanism provided by an embodiment of this application.
QoS is a guarantee mechanism for service transmission quality. Its purpose is to provide end-to-end quality of service guarantees for the different needs of various services. Within a protocol data unit (PDU) session, the QoS flow is the smallest granularity at which QoS is differentiated. In the 5G system, a QoS flow identifier (QFI) is used to identify a QoS flow, and the QFI must be unique within a PDU session. In other words, a PDU session can have multiple (at most 64) QoS flows, and different QoS flows have different QFIs. Within a PDU session, user plane service flows with the same QFI receive the same forwarding treatment (such as scheduling).
In terms of configuration granularity, one PDU session can correspond to multiple radio bearers (RBs), and one radio bearer can in turn contain multiple QoS flows.
For a PDU session, there is still a single NG-U tunnel between the 5GC and the AN; radio bearers are used between the AN and the UE, and the AN controls which bearer a QoS flow is mapped onto.
Figure 5 is a schematic diagram of QoS flow mapping applicable to the embodiments of this application. The 5GC and the AN guarantee the quality of service by mapping data packets to appropriate QoS flows and radio bearers.
The UPF implements the mapping of Internet protocol (IP) flows to QoS flows, and the AN implements the mapping of QoS flows to RBs. QoS mapping can include three parts, UPF mapping, AN mapping, and UE mapping, described below (a minimal sketch follows these descriptions):
UPF mapping: after receiving downlink data, the UPF maps it to the corresponding QoS flow using the allocation and retention priority, then performs QoS control of that QoS flow and marks the data with the QFI. The data is sent to the AN through the N3 interface corresponding to the QoS flow.
AN mapping: after receiving downlink data, the AN determines the RB and QoS flow corresponding to the QFI, then performs the QoS control corresponding to the QoS flow and sends the data to the UE through the RB. Alternatively, after receiving uplink data, the AN determines the QoS flow corresponding to the QFI, then performs the QoS control corresponding to the QoS flow and sends the data to the UPF through the N3 interface corresponding to the QoS flow.
UE mapping: when the UE wants to send uplink data, it maps the data to the corresponding QoS flow according to the QoS rules, and then sends the uplink data through the RB corresponding to the QoS flow.
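The following minimal sketch illustrates the UE-side mapping chain described above. The dictionaries standing in for configured QoS rules and QFI-to-bearer bindings, the port-based filters, and the QFI/bearer names are all assumptions chosen for readability, not the 3GPP data model:

```python
# Hypothetical, simplified mapping tables; in a real system these are
# configured by the SMF (QoS rules for the UE, QFI-to-RB mapping at the AN).
UE_QOS_RULES = {          # packet filter (here: destination port) -> QFI
    5004: 1,              # e.g. media stream
    5005: 2,              # e.g. neural network data
}
AN_QFI_TO_RB = {1: "DRB-1", 2: "DRB-2"}   # QFI -> radio bearer

def ue_map_uplink(dst_port: int):
    """UE mapping: pick the QoS flow by QoS rule, then the RB bound to it."""
    qfi = UE_QOS_RULES.get(dst_port, 0)        # 0: fall back to default flow
    rb = AN_QFI_TO_RB.get(qfi, "DRB-default")
    return qfi, rb

print(ue_map_uplink(5005))   # -> (2, 'DRB-2')
```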
It should be understood that the SMF is responsible for controlling QoS flows. When a PDU session is established, the SMF can configure the corresponding QoS parameters for the UPF, the AN, and the UE. QoS flows can be established and modified through the PDU session, or defined through preconfiguration. The configuration parameters of a QoS flow include three parts:
1) QoS profile: the SMF can provide the QoS profile to the AN through the N2 interface, or the QoS profile can be preconfigured in the AN.
It should be understood that the QoS profile of a certain QoS flow can also be called a QoS configuration file. The specific parameters of the QoS profile are shown in Table 1.
Table 1: QoS profile parameters
The 5QI is a scalar used to index a 5G QoS characteristic. A 5QI can be standardized, preconfigured, or dynamically defined. The attributes of the 5QI are shown in Table 2 below.
Table 2: 5QI attributes
2) QoS rule: the SMF can provide QoS rules to the UE through the N1 interface. Alternatively, the UE can derive them through the reflective QoS mechanism.
It should be understood that the UE performs classification and marking of uplink user plane data traffic, that is, it maps uplink data to the corresponding QoS flows according to the QoS rules. These QoS rules may be explicitly provided to the UE (that is, explicitly configured to the UE through signaling in the PDU session establishment/modification procedure), may be preconfigured on the UE, or may be implicitly derived by the UE using the reflective QoS mechanism. QoS rules have the following characteristics (a minimal sketch of this structure is given after the list):
A QoS rule contains: the QFI of the associated QoS flow, a packet filter set (a list of filters), and a precedence.
A QoS flow can have multiple QoS rules.
Each PDU session must be configured with a default QoS rule, and the default QoS rule is associated with one QoS flow.
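For illustration, a minimal sketch of the QoS rule structure listed above (field names, types, and example filter strings are assumptions chosen for readability, not the 3GPP encoding):

```python
from dataclasses import dataclass, field

@dataclass
class QosRule:
    qfi: int                      # QFI of the associated QoS flow
    packet_filters: list[str]     # packet filter set (a list of filters)
    precedence: int               # evaluation precedence of this rule
    is_default: bool = False      # each PDU session has exactly one default rule

@dataclass
class PduSession:
    rules: list[QosRule] = field(default_factory=list)

    def default_rule(self) -> QosRule:
        return next(r for r in self.rules if r.is_default)

# One QoS flow (same QFI) may be referenced by multiple QoS rules.
session = PduSession(rules=[
    QosRule(qfi=1, packet_filters=["dst port 5004"], precedence=10),
    QosRule(qfi=1, packet_filters=["dst port 5006"], precedence=20),
    QosRule(qfi=9, packet_filters=["match-all"], precedence=255, is_default=True),
])
```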
3) Uplink and downlink packet detection rules (PDR): the SMF provides the PDR(s) to the UPF through the N4 interface.
5. Group of pictures (GoP): consists of multiple types of video frames. The first frame in a GoP is an I frame (intra frame), which can be followed by multiple P frames (predicted frames). The I frame is an intra-frame reference frame; it usually carries a large amount of data, the image is restored from the data of this frame alone during decoding, and an error in it has a large impact on the video quality. A P frame is a predictively coded frame; it usually carries a small amount of data, representing the difference from the previous picture, and decoding requires superimposing the difference defined by this frame onto the previously cached picture to generate the image, so an error in it has a relatively small impact on the video quality. Therefore, data packets can be scheduled according to the type of video frame to which they belong: for example, since I frames are more important than P frames, data packets belonging to I frames have a higher scheduling priority, and data packets belonging to P frames have a lower scheduling priority.
6. Video quality evaluation metrics: current mainstream video quality evaluation metrics mainly fall into two categories. The first category is objective metrics, for example PSNR and SSIM, which are values obtained by calculating the difference or correlation between individual pixels. The second category is subjective metrics, for example VMAF, which reflects the degree to which different image distortions affect the user's subjective experience, with a score range of 0-100. Specifically, the higher the VMAF score, the less the image distortion and the better the user's subjective experience.
7. Bit structure of floating-point data: the bits of a floating-point number include a sign part, an exponent part, and a fraction part.
Taking the 32-bit float type as an example, the bit structure of floating-point data is introduced with reference to Figure 6, which is a schematic diagram of the bit structure of floating-point data. It contains a 1-bit sign part, where 0 represents a positive number and 1 represents a negative number; an 8-bit exponent part, where the exponent ranges from -127 to +127; and a 23-bit fraction part, whose minimum precision is 1/(2^23). Specifically, the absolute value of the floating-point data can be calculated based on the following formula:
|x| = (1 + fraction / 2^23) × 2^exponent
It can therefore be seen that the low-order bits of the fraction part have a small influence on the absolute value of a coefficient, while the sign part, the exponent part, and the high-order bits of the fraction part have a large influence on the absolute value. It should be noted that the low-order bits and high-order bits of the fraction part referred to in the embodiments of this application can be understood as follows: the fraction part consists of high-order bits and low-order bits, and everything in the fraction part other than the high-order bits is a low-order bit. For example, for a 23-bit fraction part, the first bit may be the high-order bit and the remaining 22 bits are low-order bits; or the first x bits are high-order bits and the remaining 23-x bits are low-order bits.
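As an illustration of this bit layout, the following minimal sketch (standard Python only; the split of the fraction into x high-order bits follows the description above, with x as a parameter) extracts the sign, exponent, and fraction fields of a 32-bit float:

```python
import struct

def float32_fields(value: float, high_bits: int = 1):
    """Split a 32-bit float into its sign, exponent, and fraction fields.

    Returns the 1-bit sign, the 8-bit stored exponent, and the 23-bit
    fraction split into `high_bits` high-order bits and the remaining
    low-order bits.
    """
    raw = int.from_bytes(struct.pack(">f", value), "big")
    sign = raw >> 31
    exponent = (raw >> 23) & 0xFF
    fraction = raw & 0x7FFFFF
    frac_high = fraction >> (23 - high_bits)
    frac_low = fraction & ((1 << (23 - high_bits)) - 1)
    return sign, exponent, frac_high, frac_low

print(float32_fields(-0.15625))  # sign=1; exponent and fraction per IEEE 754
```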
From the QoS flow mechanism introduced above, it can be seen that different QoS requirements can be set for NN data packets and video frame data packets according to their different service types, but for the NN data packets themselves there is currently no mechanism for distinguishing the importance of different NN data packets.
In addition, from the GoP structure introduced above, it can be seen that, to distinguish the importance of video frame data packets, the data packets of different types of video frames can be distinguished according to the structural characteristics of video coding. For an NN, however, the current coding structure has no partitioning by importance, so the method used to measure the importance of video frames cannot be reused to measure the importance of NN data packets.
In order to overcome the shortcomings of current NN data packet transmission, this application provides a communication method in which indication information indicating the priority of a neural network data packet is carried in a GTP-U data packet, so that the access network device receiving the GTP-U data packet can transmit the neural network data packet according to the indication information, with a view to achieving differentiated transmission of neural network data packets of different priorities.
The scenarios to which the embodiments of this application can be applied were introduced above with reference to (a) to (c) of Figure 1, and the basic concepts involved in this application were briefly introduced. The communication method provided by this application is described in detail below with reference to the accompanying drawings.
The embodiments shown below do not particularly limit the specific structure of the execution body of the method provided by the embodiments of this application, as long as it can communicate according to the method provided by the embodiments of this application by running a program that records the code of the method. For example, the execution body of the method provided by the embodiments of this application may be a core network device, or a functional module in the core network device that can call and execute the program.
To facilitate understanding of the embodiments of this application, the following points are explained.
First, in this application, "used to indicate" may include direct indication and indirect indication. When it is described that certain information is used to indicate A, the information may indicate A directly or indirectly, which does not mean that the information necessarily carries A.
The information indicated by such information is called the information to be indicated. In a specific implementation, there are many ways to indicate the information to be indicated, for example but not limited to the following. The information to be indicated can be indicated directly, such as the information to be indicated itself or an index of the information to be indicated. The information to be indicated can also be indicated indirectly by indicating other information, where there is an association between the other information and the information to be indicated. It is also possible to indicate only a part of the information to be indicated, while the other parts of the information to be indicated are known or agreed in advance. For example, the indication of specific information can also be achieved by means of an arrangement order of the pieces of information agreed in advance (for example, specified in a protocol), thereby reducing the indication overhead to a certain extent. At the same time, the common parts of the pieces of information can also be identified and indicated in a unified manner, to reduce the indication overhead caused by indicating the same information separately.
Second, the terms "first" and "second" and the various numerical labels (for example, "#1", "#2", etc.) shown in this application are only for convenience of description and are used to distinguish objects, for example to distinguish different data packets; they are not used to limit the scope of the embodiments of this application, and they do not describe a specific order or sequence. It should be understood that the objects described in this way are interchangeable where appropriate, so that solutions other than the embodiments of this application can also be described.
Third, in this application, "preconfigured" may include predefined, for example defined in a protocol. "Predefined" may be implemented by pre-storing, in the devices (for example, including the network elements), corresponding code, tables, or other means that can be used to indicate relevant information; this application does not limit the specific implementation.
Fourth, "storing" in the embodiments of this application may refer to storing in one or more memories. The one or more memories may be provided separately, or may be integrated in an encoder or decoder, a processor, or a communication apparatus. The one or more memories may also be partly provided separately and partly integrated in the decoder, processor, or communication apparatus. The memory may be any form of storage medium, and this application is not limited in this respect.
Fifth, the term "and/or" herein merely describes an association between associated objects and indicates that three relationships may exist. For example, A and/or B may indicate the following three cases: A exists alone, both A and B exist, and B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
Sixth, the "protocol" involved in the embodiments of this application may refer to a standard protocol in the communication field, and may include, for example, the 5G protocol, the new radio (NR) protocol, and related protocols applied in future communication systems, which is not limited in this application.
In the following, without loss of generality, the communication method provided by the embodiments of this application is described in detail using interaction between devices as an example.
Figure 7 is a schematic flowchart of a communication method provided by an embodiment of this application. It can be understood that Figure 7 illustrates the method with a server, a core network device, and an access network device as the execution bodies of the interaction, but this application does not limit the execution bodies of the interaction. For example, the server in Figure 7 may also be a chip, a chip system, or a processor that supports the server in implementing the method, or a logic module or software that can implement all or part of the server's functions; the core network device in Figure 7 may also be a chip, a chip system, or a processor that supports the core network device in implementing the method, or a logic module or software that can implement all or part of the core network device's functions; the access network device in Figure 7 may also be a chip, a chip system, or a processor that supports the access network device in implementing the method, or a logic module or software that can implement all or part of the access network device's functions. The method includes the following steps:
S710: the server generates a neural network data packet.
The neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet.
The priority of a neural network data packet involved in the embodiments of this application may refer to the scheduling (or transmission) priority of the neural network data packet. For example, the priority of the neural network data packet is used to determine whether to schedule the neural network data packet preferentially; for instance, when the network becomes congested, neural network data packets with a higher priority can still be delivered to the user side in time.
Alternatively, the priority of the neural network data packet may refer to the processing priority of the neural network data packet, where the processing manner includes but is not limited to the following: the priority of the neural network data packet is used to determine the order in which the physical layer on the user side delivers neural network data packets to the application layer, so that neural network data packets with a higher priority can be delivered to the application layer in time for restoring the transmitted data.
It can be seen from the above that, in this embodiment, when generating a neural network data packet, the server can carry the indication information indicating the priority of the neural network data packet in the neural network data packet, so that a core network device receiving the neural network data packet can read, from the neural network data packet, the indication information used to indicate the priority of the neural network data packet and thereby learn the priority of the neural network data packet, with a view to achieving differentiated transmission of neural network data packets of different priorities.
Illustratively, the neural network data packet including the indication information may mean that the header of the neural network data packet includes the indication information.
For example, the server can add the indication information in a packet at the transport layer or a higher layer; for instance, the indication information can be added between the user datagram protocol (UDP) field and the real-time transport protocol (RTP) field.
For ease of understanding, how the indication information is carried in the header of a neural network data packet is described with reference to Figure 8, which is a schematic diagram of a neural network data packet provided by an embodiment of this application.
It should be understood that Figure 8 only exemplarily shows that the indication information can be carried in the header of the neural network data packet and does not constitute any limitation on the protection scope of this application. The indication information can also be added at other positions in the header of the neural network data packet, for example between the IP field and the UDP field, or after the RTP field, and so on; the examples are not enumerated one by one here.
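For illustration only, the following sketch builds a UDP payload in which a one-byte priority field is placed before the RTP portion, in the spirit of Figure 8. The one-byte encoding, the placeholder RTP header, and the destination address/port are hypothetical assumptions; the embodiment only requires that the indication sit between the UDP and RTP fields:

```python
import socket
import struct

def build_payload(priority: int, rtp_packet: bytes) -> bytes:
    """Prepend a 1-byte priority indication to the RTP part of the payload."""
    return struct.pack("!B", priority) + rtp_packet

# Hypothetical usage: send one neural network data packet with priority 4.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rtp_packet = bytes(12)                                   # placeholder RTP header
sock.sendto(build_payload(4, rtp_packet), ("192.0.2.1", 5005))
```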
Specifically, a neural network data packet generated by the server corresponds to a certain neural network. For example, in the scenario shown in Figure 2 above, the server uses the ULR tiles as the input of the DNN and the original HD tiles as the target output of the DNN, uses the two as the training set of the DNN, and uses PSNR as the loss function to train and obtain the DNN; the ULR tiles and the DNN need to be sent to the user together. The data of the DNN can be encoded into multiple IP packets, and the multiple IP packets are transmitted to the core network, after which the IP packets are transmitted to the UE through the radio access network. In this embodiment, an IP packet obtained by encoding the data of a neural network is called a neural network data packet.
It should be understood that the data of one neural network can be encoded into one or more neural network data packets. The neural network data packet generated by the server described above may be one of the one or more neural network data packets obtained by encoding the data of a certain neural network.
It should also be understood that the neural network data packet generated by the server is used to carry the parameter information of the neural network corresponding to the neural network data packet (for example, the coefficients of the neural network).
Specifically, the neural network corresponding to the neural network data packet is used to process data (for example, to restore data), where the data may be video frame data, audio frame data, or image data, or may be another type of data. The embodiments of this application do not limit the type of data to be processed.
For ease of description, the following takes the case in which the data is video frame data or image data as an example. The neural network is used to restore a low-resolution image to obtain a high-resolution image.
As can be seen from the above, the neural network data packet generated by the server includes indication information used to indicate the priority of the neural network data packet, which can also be understood as follows: the indication information is used to indicate the importance of the neural network data packet.
As a possible implementation, the priority of a neural network data packet is related to how well the neural network corresponding to the neural network data packet restores data.

In this implementation, the server may determine the priority of the neural network data packets obtained by encoding the data of a neural network based on how well that neural network restores data.

In this embodiment, the data restored by the neural network may be image or video data, and the restoration effect of the neural network may be a picture restoration effect. For example, in the super-resolution transmission mode of XR, a neural network is used to restore low-definition video frames to high-definition video frames, and the neural network models corresponding to different users at different times restore video with different quality. In this implementation, the video restoration effect of a neural network is used as the criterion for measuring the priority of the neural network data packets obtained by encoding the data of that neural network.

For example, the restoration effect of a neural network may be indicated by any one of the following metrics: PSNR, SSIM, VMAF, or the like.

For example, when the restoration effect of the neural network meets expectations (for example, the VMAF is greater than a preset threshold), the priority of the neural network data packets obtained by encoding the data of the neural network is high.

For another example, when the restoration effect of the neural network does not meet expectations (for example, the VMAF is less than or equal to the preset threshold), the priority of the neural network data packets obtained by encoding the data of the neural network is low.
It should be understood that the embodiments of this application do not limit how the values of the restoration effect metrics (for example, PSNR, SSIM, or VMAF) corresponding to different neural networks are determined. They may be determined in the manners described in the related art, including but not limited to: obtaining the PSNR and SSIM values by calculating the difference or correlation between the pixels of the image restored by the neural network and the pixels of the original high-definition image; or,

determining the VMAF value corresponding to the neural network based on the experience quality fed back by users.
Optionally, taking the case in which the VMAF corresponding to each neural network represents how well that neural network restores data as an example, the following describes how the server determines the priority of neural network data packets in this implementation.

For example, the server needs to determine the priority of the neural network data packets obtained by encoding the data of four neural networks (for example, neural network #1, neural network #2, neural network #3, and neural network #4), where the neural network data packet obtained by encoding the data of neural network #1 is neural network data packet #1, the neural network data packet obtained by encoding the data of neural network #2 is neural network data packet #2, the neural network data packet obtained by encoding the data of neural network #3 is neural network data packet #3, and the neural network data packet obtained by encoding the data of neural network #4 is neural network data packet #4.

When the VMAF corresponding to each neural network represents how well that neural network restores data, the server may determine, based on the VMAF values corresponding to the four neural networks, the priority of the neural network data packets obtained by encoding the data of the four neural networks.

Table 3 below gives an example criterion, based on a VMAF evaluation mechanism (score range 0 to 100), for measuring the priority of the neural network data packets obtained by encoding the data of different neural networks, with four levels in total. A larger priority value indicates a higher priority.
Table 3

  VMAF score of the neural network    Priority of the neural network data packet
  < 25                                1
  [25, 50]                            2
  (50, 75)                            3
  [75, 100]                           4
For example, if the VMAF score of neural network #1 is < 25, the priority of the neural network data packets obtained by encoding the data of neural network #1 is 1; if the VMAF score of neural network #2 is in [25, 50], the priority of the neural network data packets obtained by encoding the data of neural network #2 is 2; if the VMAF score of neural network #3 is in (50, 75), the priority of the neural network data packets obtained by encoding the data of neural network #3 is 3; and if the VMAF score of neural network #4 is in [75, 100], the priority of the neural network data packets obtained by encoding the data of neural network #4 is 4.
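The mapping in Table 3 can be expressed directly as a threshold lookup. The following is a minimal sketch of such a mapping; the function name and the example scores are illustrative and are not part of the embodiment itself.

```python
def vmaf_to_priority(vmaf: float) -> int:
    """Map a VMAF score (0-100) to the four priority levels of Table 3.
    A larger return value means a higher priority."""
    if vmaf < 25:
        return 1
    if vmaf <= 50:      # [25, 50]
        return 2
    if vmaf < 75:       # (50, 75)
        return 3
    return 4            # [75, 100]

# Example scores for neural networks #1-#4 (illustrative values only).
scores = {"NN#1": 20.0, "NN#2": 40.0, "NN#3": 60.0, "NN#4": 90.0}
print({name: vmaf_to_priority(s) for name, s in scores.items()})
# {'NN#1': 1, 'NN#2': 2, 'NN#3': 3, 'NN#4': 4}
```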
It should be understood that Table 3 is merely an example and does not constitute any limitation on the protection scope of this application. An evaluation mechanism different from that shown in Table 3 may also be formulated based on VMAF to measure the priority of the neural network data packets obtained by encoding the data of different neural networks.

It should also be understood that using the VMAF score of a neural network to represent how well that neural network restores data is merely an example and does not constitute any limitation on the protection scope of this application. Other metrics (for example, PSNR or SSIM) may also be used to represent the restoration effect of a neural network (for example, by dividing the metric value into a plurality of ranges, each range corresponding to one degree of restoration effect). The specific representation is similar to that of the VMAF described above and is not repeated here.

The representation of the priority of neural network data packets shown in this implementation indicates that the priority of the neural network data packets of a neural network can be determined based on how well that neural network restores data, so that the neural network data packets of a neural network with a good restoration effect are transmitted preferentially. The user can therefore preferentially restore data with a neural network that restores data well, which improves user experience.
As another possible implementation, the priority of a neural network data packet is related to both how well the neural network corresponding to the neural network data packet restores the data and how well a preset algorithm (also referred to as a traditional algorithm) restores that data. The preset algorithm may be understood as an algorithm for restoring data that the user can use directly, for example, an algorithm predefined in a protocol.

In this implementation, the server may determine the priority of the neural network data packets obtained by encoding the data of a neural network based on how well that neural network restores the data and how well the preset algorithm restores the data.

In this implementation, it is considered that in the super-resolution transmission mode of XR, in addition to using a neural network to restore high-definition video, the user may also use some traditional algorithms (for example, a bilinear interpolation algorithm) to restore high-definition video. Although a traditional algorithm is simple and can be used directly by the user, its restoration effect is poorer than that of a neural network. In this implementation, the degree to which the video restoration effect of a neural network improves on the restoration effect of the preset algorithm is used as the criterion for measuring the priority of the neural network data packets obtained by encoding the data of that neural network.

For example, the restoration effect of the neural network and the restoration effect of the preset algorithm may each be indicated by any one of the following metrics: PSNR, SSIM, VMAF, or the like.

For example, when the restoration effect of the neural network is better than that of the preset algorithm by more than an expected value (for example, the VMAF corresponding to the neural network exceeds the VMAF corresponding to the preset algorithm by more than a preset value), the priority of the neural network data packets obtained by encoding the data of the neural network is high.

For another example, when the restoration effect of the neural network is better than that of the preset algorithm by less than the expected value (for example, the VMAF corresponding to the neural network exceeds the VMAF corresponding to the preset algorithm by less than or equal to the preset value), the priority of the neural network data packets obtained by encoding the data of the neural network is low.
Optionally, taking the case in which the VMAF corresponding to each neural network represents how well that neural network restores data and the VMAF corresponding to the preset algorithm represents how well the preset algorithm restores that data as an example, the following describes how the server determines the priority of neural network data packets in this implementation.

For example, the server needs to determine the priority of the neural network data packets obtained by encoding the data of four neural networks (for example, neural network #1, neural network #2, neural network #3, and neural network #4), where the neural network data packet obtained by encoding the data of neural network #1 is neural network data packet #1, the neural network data packet obtained by encoding the data of neural network #2 is neural network data packet #2, the neural network data packet obtained by encoding the data of neural network #3 is neural network data packet #3, and the neural network data packet obtained by encoding the data of neural network #4 is neural network data packet #4.

When the VMAF corresponding to each neural network represents how well that neural network restores data and the VMAF corresponding to the preset algorithm represents how well the preset algorithm restores that data, the server may determine, based on the VMAF values corresponding to the four neural networks and the VMAF corresponding to the preset algorithm, the priority of the neural network data packets obtained by encoding the data of the four neural networks.

The server may use the video quality improvement of a neural network over the traditional algorithm as the criterion for measuring the priority of that neural network. The greater the image improvement achieved by a neural network, the higher the priority of the neural network data packets obtained by encoding its data.

Table 4 below gives an example criterion, based on ranges of the VMAF difference (the difference between the VMAF corresponding to the neural network and the VMAF corresponding to the preset algorithm), for measuring the priority of the neural network data packets obtained by encoding the data of different neural networks, with four levels in total. A larger priority value indicates a higher priority.
Table 4

  VMAF improvement over the preset algorithm    Priority of the neural network data packet
  < 5                                           1
  [5, 10]                                       2
  (10, 20)                                      3
  ≥ 20                                          4
For example, if the VMAF score of neural network #1 improves on the VMAF score of the traditional algorithm by < 5 points, the priority of the neural network data packets obtained by encoding the data of neural network #1 is 1; if the VMAF score of neural network #2 improves on the VMAF score of the traditional algorithm by [5, 10] points, the priority of the neural network data packets obtained by encoding the data of neural network #2 is 2; if the VMAF score of neural network #3 improves on the VMAF score of the traditional algorithm by (10, 20) points, the priority of the neural network data packets obtained by encoding the data of neural network #3 is 3; and if the VMAF score of neural network #4 improves on the VMAF score of the traditional algorithm by ≥ 20 points, the priority of the neural network data packets obtained by encoding the data of neural network #4 is 4.
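The mapping in Table 4 differs from Table 3 only in that the quantity being thresholded is the VMAF gain over the preset algorithm. A minimal sketch follows, assuming a bilinear-interpolation baseline whose VMAF is known; the baseline value used below is an illustrative assumption.

```python
def improvement_to_priority(nn_vmaf: float, baseline_vmaf: float) -> int:
    """Map the VMAF gain of a neural network over the preset (traditional)
    algorithm to the four priority levels of Table 4."""
    gain = nn_vmaf - baseline_vmaf
    if gain < 5:
        return 1
    if gain <= 10:      # [5, 10]
        return 2
    if gain < 20:       # (10, 20)
        return 3
    return 4            # >= 20

# A baseline VMAF of 60 for the preset algorithm is an assumed, illustrative value.
print(improvement_to_priority(nn_vmaf=72.0, baseline_vmaf=60.0))  # gain 12 -> priority 3
```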
It should be understood that Table 4 is merely an example and does not constitute any limitation on the protection scope of this application. An evaluation mechanism different from that shown in Table 4 may also be formulated based on VMAF to measure the priority of the neural network data packets obtained by encoding the data of different neural networks.

It should also be understood that using the VMAF score of a neural network to represent how well that neural network restores data, and using the VMAF corresponding to the preset algorithm to represent how well the preset algorithm restores that data, is merely an example and does not constitute any limitation on the protection scope of this application. Other metrics (for example, PSNR or SSIM) may also be used to represent the restoration effect of the neural network and the restoration effect of the preset algorithm. The specific representation is similar to that of the VMAF described above and is not repeated here.

The representation of the priority of neural network data packets shown in this implementation indicates that the priority of the neural network data packets of a neural network can be determined based on how well that neural network restores the data and how well the preset algorithm restores that data. When several neural networks all restore the data better than the preset algorithm does, the neural network whose restoration effect exceeds that of the preset algorithm by the largest margin is transmitted preferentially, so that the user can preferentially restore data with a neural network that restores data well, which improves user experience.
As yet another possible implementation, the priority of a neural network data packet is related to how well the neural network is reconstructed, where the neural network data packet is used to reconstruct the neural network corresponding to that neural network data packet.

In this implementation, the server may determine the priority of a neural network data packet based on how well the neural network is reconstructed from that neural network data packet.

For example, when the difference between neural network #1, reconstructed from the data of the neural network data packet, and the neural network obtained by server training meets expectations (for example, the reconstructed neural network #1 has the same number of layers as the neural network that needs to be transmitted to the user), the priority of the neural network data packet is determined to be high.

For another example, when the difference between neural network #1, reconstructed from the data of the neural network data packet, and the neural network obtained by server training does not meet expectations (for example, the reconstructed neural network #1 and the neural network that needs to be transmitted to the user have different numbers of layers), the priority of the neural network data packet is determined to be low.

The representation of the priority of neural network data packets shown in this implementation indicates that the priority of the neural network data packets of a neural network can be determined based on how well the neural network is reconstructed, so that the neural network data packets from which the neural network is reconstructed well are transmitted preferentially, which improves the efficiency with which the user reconstructs the neural network.
As yet another possible implementation, the priority of a neural network data packet is related to the degree to which the coefficient data of the neural network included in the neural network data packet affects the coefficients of the neural network.

In this implementation, the server may determine the priority of a neural network data packet based on the degree to which the coefficient data of the neural network included in that neural network data packet affects the coefficients of the neural network.

For example, when the difference between coefficient #1, calculated from the coefficient data, and the coefficient meets expectations (for example, the difference between coefficient #1 and the coefficient is less than or equal to a preset threshold), the priority of the neural network data packet is determined to be high.

For another example, when the difference between coefficient #1, calculated from the coefficient data, and the coefficient does not meet expectations (for example, the difference between coefficient #1 and the coefficient is greater than the preset threshold), the priority of the neural network data packet is determined to be low.

The representation of the priority of neural network data packets shown in this implementation indicates that the priority of a neural network data packet can be determined based on the difference between the value calculated from the data of that neural network data packet and the coefficients of the neural network, so that the neural network data packets from which values closest to the coefficients of the neural network can be calculated are transmitted preferentially. The user can then quickly reconstruct the neural network based on the received neural network data packets, which improves the efficiency with which the user reconstructs the neural network.
For example, a coefficient is represented by a plurality of bits, the values of the plurality of bits are used to calculate the absolute value of the coefficient, and the data of the coefficient is represented by at least one bit, where the at least one bit belongs to the plurality of bits.

Specifically, the plurality of bits include a sign part, an exponent part, and a fraction part. When the at least one bit is the sign part, the exponent part, and a first part of the fraction part, the priority of the neural network data packet is a first priority; when the at least one bit is a second part of the fraction part, the scheduling priority corresponding to the neural network is determined to be a second priority, where the first priority is higher than the second priority, the first part of the fraction part is the high-order data part of the fraction part, and the second part of the fraction part is the low-order data part of the fraction part.

It can be learned from the foregoing that the coefficients of the neural network can be represented by a sign part, an exponent part, and a fraction part, which helps determine the degree to which the different parts affect the absolute values of the coefficients. The degree to which the data of a neural network data packet affects the absolute values of the coefficients of the neural network can therefore be determined from which parts of the coefficients that data represents, which makes the solution more concise.
For ease of understanding, how to determine the priority of a neural network data packet based on the degree to which the coefficient data of the neural network included in the packet affects the coefficients of the neural network is described below with reference to a specific example.

Example 1: The coefficients of the neural network are represented by 32-bit floating-point data (as shown in Figure 6). Based on the structure of floating-point data described above, the bits that have a large impact on the absolute value of a coefficient of the neural network are treated as high-priority data, and the bits that have a small impact on the absolute value of the coefficient are treated as low-priority data.

For example, the sign bit, the exponent bits, and the high-order bits of the fraction part of the floating-point number corresponding to a coefficient of the neural network may be treated as high-priority data, and the remaining low-order bits of the fraction part may be treated as low-priority data.

When generating the neural network data packets of the neural network, the server places the bits in different data packets according to the degree to which they affect the absolute values of the coefficients of the neural network, as shown in Figure 9. Figure 9 is a schematic diagram of generating neural network data packets according to an embodiment of this application.

As can be seen from Figure 9, the sign bit, the exponent bits, and the high-order bits of the fraction part of the floating-point number corresponding to a coefficient of the neural network are placed in one neural network data packet, and the priority in that neural network data packet is set to "H", while the low-order bits of the fraction part of the floating-point number corresponding to the coefficient of the neural network are placed in another neural network data packet, and the priority in that neural network data packet is set to "L", where "H" indicates a high priority and "L" indicates a low priority.
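The bit split described for Figure 9 can be sketched directly on the IEEE-754 single-precision layout (1 sign bit, 8 exponent bits, 23 fraction bits). The following minimal sketch assumes, for illustration only, that the 12 high-order fraction bits go into the "H" packet; the actual split point is not fixed by this example.

```python
import struct

HIGH_FRACTION_BITS = 12                      # assumed split point within the 23 fraction bits
LOW_FRACTION_BITS = 23 - HIGH_FRACTION_BITS

def split_coefficient(value: float) -> tuple[int, int]:
    """Split one float32 coefficient into a high-priority part (sign +
    exponent + high-order fraction bits) and a low-priority part
    (low-order fraction bits), in the spirit of Figure 9."""
    bits = struct.unpack(">I", struct.pack(">f", value))[0]   # raw 32-bit pattern
    high_part = bits >> LOW_FRACTION_BITS                     # 1 + 8 + 12 = 21 bits
    low_part = bits & ((1 << LOW_FRACTION_BITS) - 1)          # remaining 11 bits
    return high_part, low_part

def build_packets(coefficients: list[float]) -> list[dict]:
    """Place all high-priority parts in one packet marked "H" and all
    low-priority parts in another packet marked "L"."""
    highs, lows = zip(*(split_coefficient(c) for c in coefficients))
    return [{"priority": "H", "payload": list(highs)},
            {"priority": "L", "payload": list(lows)}]

print(build_packets([0.73, -1.5e-3]))
```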
Further, in order to transmit the neural network data packet to the user, the server may send the generated neural network data packet to the core network device. The method procedure shown in Figure 7 further includes:

S720: The server sends the neural network data packet to the core network device; in other words, the core network device receives the neural network data packet from the server.

It should be understood that receiving the neural network data packet from the server is only one way for the core network device to obtain the neural network data packet.
For example, the core network device may obtain the neural network data packet in the following ways:

Manner 1: The core network device receives the neural network data packet from the server.

For example, the server sends the neural network data packet to the core network device directly. For another example, the server sends the neural network data packet to the core network device indirectly through another device.

In Manner 1, the core network device and the server may be connected through a fixed network.

Manner 2: The core network device obtains the neural network data packet generated by the server from a memory (or an internal interface).

In Manner 2, the core network device and the server may be combined into one device (for example, in a mobile edge computing (MEC) scenario). The neural network data packet generated by the server may be buffered in the memory of the combined device, and the core network device obtains the neural network data packet from the memory; alternatively, the core network device and the server in the combined device transmit data packets through an internal interface, and the core network device may obtain the neural network data packet from the server through the internal interface.

It should be understood that Manner 1 and Manner 2 in which the core network device obtains the neural network data packet are merely examples and do not constitute any limitation on the protection scope of this application. In this embodiment, the core network device may also obtain the neural network data packet in other ways, for example, by generating the neural network data packet based on received parameter information of the neural network. Details are not repeated here.
In this embodiment, after receiving the neural network data packet, the core network device may read, from the neural network data packet, the indication information used to indicate the priority of the neural network data packet, and then encapsulate the indication information into the header of a GTP-U data packet in accordance with the GTP-U protocol so that the access network device can read it.

The method procedure shown in Figure 7 further includes:

S730: The core network device generates a GTP-U data packet.

The payload of the GTP-U data packet includes the neural network data packet, and the header of the GTP-U data packet includes indication information, where the indication information is used to indicate the priority of the neural network data packet.
In this embodiment, after receiving the neural network data packet that carries the indication information used to indicate the priority of the neural network data packet, the core network device may read the indication information from the neural network data packet, encapsulate the indication information into the header of the GTP-U data packet in accordance with the GTP-U protocol, and use the neural network data packet as the payload of the GTP-U data packet. In this way, the access network device that receives the GTP-U data packet can read the indication information from the header of the GTP-U data packet and transmit the neural network data packet based on the priority indicated by the indication information. It can be understood that the access network device takes the priority of the neural network data packet into account when transmitting it, which facilitates differentiated transmission of neural network data packets of different priorities.

In this embodiment, the indication information included in the header of the GTP-U data packet to indicate the priority of the neural network data packet (which may be referred to as indication information #1 for distinction) and the indication information included in the neural network data packet to indicate the priority of the neural network data packet (which may be referred to as indication information #2 for distinction) may take the same form or different forms.

Specifically, in this embodiment, after receiving the neural network data packet, the core network device may read indication information #2, which indicates the priority of the neural network data packet, from the neural network data packet, and then encapsulate indication information #2 into the header of the GTP-U data packet in accordance with the GTP-U protocol as indication information #1, so that the access network device can read it. The indication information #2 encapsulated in accordance with the GTP-U protocol may still be referred to as indication information #2, or may be referred to as indication information #1 for ease of distinction. In other words, this embodiment does not limit the specific forms of indication information #1 and indication information #2, as long as they can be used to indicate the priority of the neural network data packet. For ease of description, they may be collectively referred to as the indication information.
For example, a new field may be added to the GTP-U data packet header to indicate the priority of the neural network data packet.

For ease of understanding, the inclusion of the indication information in the GTP-U data packet header is described with reference to Figure 10. Figure 10 is a schematic diagram of the header of a GTP-U data packet according to an embodiment of this application.

As can be seen from Figure 10, a new byte may be added to the GTP-U data packet for carrying the indication information.

It should be understood that Figure 10 merely shows, by way of example, that the indication information may be carried in the header of the GTP-U data packet and does not constitute any limitation on the protection scope of this application. The indication information may also be added at other positions in the header of the GTP-U data packet, for example, in the fourth bit of the GTP-U data packet. Examples are not enumerated here.
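As an illustration of the idea of carrying the priority in the GTP-U header, the sketch below appends one extra byte for the indication information after the 8-byte fixed part of a simplified GTP-U style header. The field layout is a simplification for illustration only and is not the exact encoding defined by the GTP-U specification or shown in Figure 10.

```python
import struct

def build_gtpu_packet(teid: int, nn_packet: bytes, priority: int) -> bytes:
    """Build a simplified GTP-U style packet whose header carries the
    neural-network packet priority in an added byte (illustrative layout)."""
    flags = 0x30                      # version 1, protocol type GTP
    msg_type = 0xFF                   # G-PDU (user payload)
    length = len(nn_packet) + 1       # payload length plus the added priority byte
    header = struct.pack("!BBHI", flags, msg_type, length, teid)
    return header + bytes([priority]) + nn_packet

def read_priority(gtpu_packet: bytes) -> int:
    """Read the priority byte back out of the simplified header."""
    return gtpu_packet[8]

pkt = build_gtpu_packet(teid=0x1234, nn_packet=b"\x01\x02\x03", priority=4)
print(read_priority(pkt))  # 4
```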
Further, the core network device sends the GTP-U data packet to the access network device. The method procedure shown in Figure 7 further includes:

S740: The core network device sends the GTP-U data packet to the access network device; in other words, the access network device receives the GTP-U data packet from the core network device.

S750: The access network device transmits the neural network data packet according to the indication information.

The header of the GTP-U data packet received by the access network device includes the indication information indicating the priority of the neural network data packet, so the access network device can read the indication information from the header of the GTP-U data packet and transmit the neural network data packet based on the priority indicated by the indication information. It can be understood that the access network device takes the priority of the neural network data packet into account when transmitting it, which facilitates differentiated transmission of neural network data packets of different priorities.

For example, the access network device receives two GTP-U data packets (GTP-U data packet #1 and GTP-U data packet #2), where the payload of GTP-U data packet #1 includes neural network data packet #1 and its header includes indication information #1, and the payload of GTP-U data packet #2 includes neural network data packet #2 and its header includes indication information #2. The access network device transmits neural network data packet #1 and neural network data packet #2 differently according to indication information #1 and indication information #2. For example, if indication information #1 indicates that the priority of neural network data packet #1 is high and indication information #2 indicates that the priority of neural network data packet #2 is low, the access network device may transmit neural network data packet #1 first and transmit neural network data packet #2 after the transmission of neural network data packet #1 is completed.
For example, that the access network device transmits the neural network data packet according to the indication information includes the following cases (a minimal handling sketch is given after this list):

when the indication information indicates that the neural network data packet has a high priority, the access network device transmits the neural network data packet preferentially; or,

when the indication information indicates that the neural network data packet has a low priority, the access network device delays the transmission of the neural network data packet; or,

when the indication information indicates that the neural network data packet has a low priority and the network is congested, the access network device abandons the transmission of the neural network data packet.
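The three cases above amount to a simple queuing decision at the access network device. A minimal sketch, assuming a single transmit queue and a boolean congestion flag (both are illustrative abstractions of the scheduler state):

```python
from collections import deque

def enqueue_nn_packet(packet: bytes, high_priority: bool, congested: bool,
                      tx_queue: deque) -> None:
    """Differentiated handling: high-priority packets go to the head of the
    queue, low-priority packets go to the tail, and low-priority packets are
    dropped when the network is congested."""
    if high_priority:
        tx_queue.appendleft(packet)        # transmit preferentially
    elif not congested:
        tx_queue.append(packet)            # transmit later
    # else: low priority and congested -> abandon transmission (drop)

queue: deque = deque()
enqueue_nn_packet(b"nn-packet-1", high_priority=True, congested=False, tx_queue=queue)
enqueue_nn_packet(b"nn-packet-2", high_priority=False, congested=True, tx_queue=queue)
print(len(queue))  # 1 -- the low-priority packet was dropped under congestion
```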
Specifically, the access network device may calculate a corresponding transmission priority based on the priority information of the neural network data packet (for example, the indication information described above) together with other parameters relevant to air-interface scheduling, including but not limited to the historical rate, the instantaneous rate, and the user class.

For example, a conventional proportional fair scheduling priority satisfies the following condition:

factor1 = Ri ÷ Rh

where factor1 represents the proportional fair scheduling priority, Ri represents the instantaneous rate of the user (the better the channel condition of the user, the higher the instantaneous rate), and Rh represents the historical rate of the user, that is, the average rate of the channel over a period of time.

Specifically, the access network device may determine the scheduling priority of the neural network data packet based on the proportional fair scheduling algorithm in combination with the priority of the neural network data packet. For example, the scheduling priority of the neural network data packet satisfies the following condition:

factor2 = f(N) × Ri ÷ Rh

where factor2 represents the scheduling priority of the neural network data packet, N represents the priority of the neural network data packet, and f may be an increasing linear or exponential function.
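A minimal sketch of the weighted proportional fair metric factor2 = f(N) × Ri ÷ Rh, taking f as an increasing linear function f(N) = 1 + α·N; the choice of f and the value of α are illustrative assumptions and are not fixed by this embodiment.

```python
def scheduling_priority(nn_priority: int, instantaneous_rate: float,
                        historical_rate: float, alpha: float = 0.5) -> float:
    """Compute factor2 = f(N) * Ri / Rh with f(N) = 1 + alpha * N."""
    f_n = 1.0 + alpha * nn_priority
    return f_n * instantaneous_rate / historical_rate

# Two users with identical channel conditions (Ri = 50 Mbit/s, Rh = 20 Mbit/s);
# the packet with the higher neural-network priority gets the larger metric.
print(scheduling_priority(nn_priority=4, instantaneous_rate=50e6, historical_rate=20e6))
print(scheduling_priority(nn_priority=1, instantaneous_rate=50e6, historical_rate=20e6))
```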
It should be understood that the specific example shown in Figure 7 in the embodiments of this application is merely intended to help a person skilled in the art better understand the embodiments of this application, and is not intended to limit the scope of the embodiments of this application. It should also be understood that the sequence numbers of the foregoing processes do not imply an execution order; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of this application.

For example, the embodiment shown in Figure 7 describes that determining the priority of the neural network data packet is performed by the server. Alternatively, the foregoing action of determining the priority of the neural network data packet may be performed by the core network device; the specific determination manner is similar to that used by the server, except that the executing entity is the core network device.

It should also be understood that, in the embodiments of this application, unless otherwise specified or logically conflicting, the terms and/or descriptions in different embodiments are consistent and may be mutually referenced, and the technical features in different embodiments may be combined according to their inherent logical relationships to form new embodiments.

It should also be understood that some of the foregoing embodiments are described mainly by using network elements in an existing network architecture as examples (such as the AF, the AMF, and the SMF). It should be understood that the embodiments of this application do not limit the specific form of the network elements; for example, network elements that can implement the same functions in the future are all applicable to the embodiments of this application.

It can be understood that, in the foregoing method embodiments, the methods and operations implemented by a device (such as the server, the core network device, or the access network device) may also be implemented by a component (such as a chip or a circuit) usable in that device.

The communication method provided in the embodiments of this application has been described in detail above with reference to Figure 7. The foregoing communication method is described mainly from the perspective of interaction between the network elements. It can be understood that, to implement the foregoing functions, each network element includes a corresponding hardware structure and/or software module for performing each function.

A person skilled in the art should be aware that, with reference to the units and algorithm steps of the examples described in the embodiments disclosed herein, this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered beyond the scope of this application.

The communication apparatus provided in the embodiments of this application is described in detail below with reference to Figure 11 and Figure 12. It should be understood that the descriptions of the apparatus embodiments correspond to the descriptions of the method embodiments; therefore, for content that is not described in detail, reference may be made to the foregoing method embodiments, and for brevity, some content is not repeated here.
Figure 11 is a schematic block diagram of a communication apparatus according to an embodiment of this application. As shown in Figure 11, the apparatus 1100 may include an interface unit 1110 and a processing unit 1120. The interface unit 1110 can communicate with the outside, and the processing unit 1120 is configured to perform data processing. The interface unit 1110 may also be referred to as a communication interface, a communication unit, or a transceiver unit.

Optionally, the apparatus 1100 may further include a storage unit, which may be configured to store instructions and/or data, and the processing unit 1120 may read the instructions and/or data in the storage unit, so that the apparatus implements the foregoing method embodiments.

The apparatus 1100 may be configured to perform the actions performed by the transceiver devices (such as the server, the core network device, and the access network device) in the foregoing method embodiments. In this case, the apparatus 1100 may be a transceiver device or a component that can be configured in a transceiver device, the interface unit 1110 is configured to perform the receiving and sending related operations of the transceiver device in the foregoing method embodiments, and the processing unit 1120 is configured to perform the processing related operations of the transceiver device in the foregoing method embodiments.
In one design, the apparatus 1100 is configured to perform the actions performed by the access network device in the foregoing method embodiments.

The interface unit 1110 is configured to receive a general packet radio service tunnelling protocol user plane (GTP-U) data packet, where the payload of the GTP-U data packet includes a neural network data packet, the header of the GTP-U data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet.

The processing unit 1120 is configured to control, according to the indication information, the apparatus to transmit the neural network data packet.

The apparatus 1100 can implement the steps or procedures performed by the access network device in the method embodiments of this application, and the apparatus 1100 may include units configured to perform the methods performed by the access network device in the method embodiments. In addition, the units in the apparatus 1100 and the other operations and/or functions described above are respectively intended to implement the corresponding procedures performed by the access network device in the method embodiments.

When the apparatus 1100 is configured to perform the method in Figure 7, the interface unit 1110 may be configured to perform the receiving and sending steps in the method, such as step S740, and the processing unit 1120 may be configured to perform the processing steps in the method, such as step S750.

It should be understood that the specific processes in which the units perform the foregoing corresponding steps have been described in detail in the foregoing method embodiments and, for brevity, are not repeated here. In addition, the beneficial effects brought by the units performing the foregoing corresponding steps have been described in detail in the foregoing method embodiments and are likewise not repeated here.
In another design, the apparatus 1100 is configured to perform the actions performed by the core network device in the foregoing method embodiments.

The interface unit 1110 is configured to obtain a neural network data packet, where the neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet.

The interface unit 1110 is further configured to send a general packet radio service tunnelling protocol user plane (GTP-U) data packet to the access network device, where the payload of the GTP-U data packet includes the neural network data packet, and the header of the GTP-U data packet includes the indication information.

The apparatus 1100 can implement the steps or procedures performed by the core network device in the method embodiments of this application, and the apparatus 1100 may include units configured to perform the methods performed by the core network device in the method embodiments. In addition, the units in the apparatus 1100 and the other operations and/or functions described above are respectively intended to implement the corresponding procedures performed by the core network device in the method embodiments.

When the apparatus 1100 is configured to perform the method in Figure 7, the interface unit 1110 may be configured to perform the receiving and sending steps in the method, such as step S720, and the processing unit 1120 may be configured to perform the processing steps in the method, such as step S730.

It should be understood that the specific processes in which the units perform the foregoing corresponding steps have been described in detail in the foregoing method embodiments and, for brevity, are not repeated here. In addition, the beneficial effects brought by the units performing the foregoing corresponding steps have been described in detail in the foregoing method embodiments and are likewise not repeated here.
In yet another design, the apparatus 1100 is configured to perform the actions performed by the server in the foregoing method embodiments.

The processing unit 1120 is configured to generate a neural network data packet, where the neural network data packet includes indication information, and the indication information is used to indicate the priority of the neural network data packet.

The interface unit 1110 is configured to send the neural network data packet.

The apparatus 1100 can implement the steps or procedures performed by the server in the method embodiments of this application, and the apparatus 1100 may include units configured to perform the methods performed by the server in the method embodiments. In addition, the units in the apparatus 1100 and the other operations and/or functions described above are respectively intended to implement the corresponding procedures performed by the server in the method embodiments.

When the apparatus 1100 is configured to perform the method in Figure 7, the interface unit 1110 may be configured to perform the receiving and sending steps in the method, such as step S720, and the processing unit 1120 may be configured to perform the processing steps in the method, such as step S710.

It should be understood that the specific processes in which the units perform the foregoing corresponding steps have been described in detail in the foregoing method embodiments and, for brevity, are not repeated here. In addition, the beneficial effects brought by the units performing the foregoing corresponding steps have been described in detail in the foregoing method embodiments and are likewise not repeated here.
The processing unit 1120 in the foregoing embodiments may be implemented by at least one processor or processor-related circuit. The interface unit 1110 may be implemented by a transceiver or transceiver-related circuit. The storage unit may be implemented by at least one memory.
As shown in Figure 12, an embodiment of this application further provides an apparatus 1200. The apparatus 1200 includes a processor 1210 and may further include one or more memories 1220. The processor 1210 is coupled to the memory 1220, the memory 1220 is configured to store computer programs or instructions and/or data, and the processor 1210 is configured to execute the computer programs or instructions and/or data stored in the memory 1220, so that the methods in the foregoing method embodiments are performed. Optionally, the apparatus 1200 includes one or more processors 1210.

Optionally, the memory 1220 may be integrated with the processor 1210 or provided separately.

Optionally, as shown in Figure 12, the apparatus 1200 may further include a transceiver 1230, and the transceiver 1230 is configured to receive and/or send signals. For example, the processor 1210 is configured to control the transceiver 1230 to receive and/or send signals.

In one solution, the apparatus 1200 is configured to implement the operations performed by the transceiver devices (such as the server, the core network device, and the access network device) in the foregoing method embodiments.
An embodiment of this application further provides a computer-readable storage medium, on which computer instructions for implementing the methods performed by the transceiver devices (such as the server, the core network device, and the access network device) in the foregoing method embodiments are stored.

For example, when the computer program is executed by a computer, the computer is enabled to implement the methods performed by the transceiver devices (such as the server, the core network device, and the access network device) in the foregoing method embodiments.

An embodiment of this application further provides a computer program product containing instructions. When the instructions are executed by a computer, the computer is enabled to implement the methods performed by the transceiver devices (such as the server, the core network device, and the access network device) in the foregoing method embodiments.

An embodiment of this application further provides a communication system, and the communication system includes the server, the core network device, and the access network device in the foregoing embodiments.

For explanations of the relevant content and the beneficial effects of any of the apparatuses provided above, reference may be made to the corresponding method embodiments provided above, and details are not repeated here.
It should be understood that the processor mentioned in the embodiments of this application may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.

It should also be understood that the memory mentioned in the embodiments of this application may be a volatile memory and/or a non-volatile memory. The non-volatile memory may be a read-only memory (ROM), a programmable read-only memory (programmable ROM, PROM), an erasable programmable read-only memory (erasable PROM, EPROM), an electrically erasable programmable read-only memory (electrically EPROM, EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM). For example, the RAM can be used as an external cache. By way of example rather than limitation, the RAM may include the following forms: a static random access memory (static RAM, SRAM), a dynamic random access memory (dynamic RAM, DRAM), a synchronous dynamic random access memory (synchronous DRAM, SDRAM), a double data rate synchronous dynamic random access memory (double data rate SDRAM, DDR SDRAM), an enhanced synchronous dynamic random access memory (enhanced SDRAM, ESDRAM), a synchlink dynamic random access memory (synchlink DRAM, SLDRAM), and a direct rambus random access memory (direct rambus RAM, DR RAM).

It should be noted that, when the processor is a general-purpose processor, a DSP, an ASIC, an FPGA or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, the memory (storage module) may be integrated into the processor.

It should also be noted that the memories described herein are intended to include, but are not limited to, these and any other suitable types of memories.
A person of ordinary skill in the art will appreciate that the units and steps of the examples described with reference to the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the specific application and design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each specific application, but such implementation should not be considered to go beyond the protection scope of this application.
In the several embodiments provided in this application, it should be understood that the disclosed apparatuses and methods may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative. For example, the division into units is merely a division by logical function; in actual implementation there may be other ways of division, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to implement the solutions provided in this application.
In addition, the functional units in the embodiments of this application may be integrated into one unit, each unit may exist physically alone, or two or more units may be integrated into one unit.
The foregoing embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When software is used for implementation, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions described in the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus; for example, the computer may be a personal computer, a server, or a network device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired manner (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or a wireless manner (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device such as a server or a data center that integrates one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)). For example, the foregoing usable media include, but are not limited to, various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing descriptions are merely specific implementations of this application, but the protection scope of this application is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in this application shall fall within the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (22)

  1. A communication method, comprising:
    receiving a General Packet Radio Service Tunneling Protocol user plane (GTP-U) data packet, wherein a payload of the GTP-U data packet comprises a neural network data packet, a header of the GTP-U data packet comprises indication information, and the indication information is used to indicate a priority of the neural network data packet; and
    transmitting the neural network data packet according to the indication information.
  2. The method according to claim 1, wherein the transmitting the neural network data packet according to the indication information comprises:
    when the indication information indicates that the neural network data packet has a high priority, preferentially transmitting the neural network data packet; or
    when the indication information indicates that the neural network data packet has a low priority, delaying transmission of the neural network data packet; or
    when the indication information indicates that the neural network data packet has a low priority and the network is congested, abandoning transmission of the neural network data packet.
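By way of illustration only, the following minimal Python sketch shows one way a receiving node could act on the priority indication described in claim 2 above. The priority encoding, the congestion flag, and the queue model are assumptions introduced for the example and are not specified by the claims.

```python
from collections import deque

HIGH, LOW = 0, 1  # assumed encoding of the priority indication


class NnPacketScheduler:
    """Illustrative scheduler that acts on the priority indication."""

    def __init__(self):
        self.high_queue = deque()  # transmitted first
        self.low_queue = deque()   # transmission deferred

    def enqueue(self, nn_packet: bytes, priority: int, congested: bool) -> None:
        if priority == HIGH:
            # High priority: transmit the neural network data packet preferentially.
            self.high_queue.append(nn_packet)
        elif congested:
            # Low priority while the network is congested: abandon transmission.
            return
        else:
            # Low priority without congestion: delay transmission.
            self.low_queue.append(nn_packet)

    def next_packet(self) -> bytes | None:
        # High-priority packets are always served before deferred low-priority ones.
        if self.high_queue:
            return self.high_queue.popleft()
        if self.low_queue:
            return self.low_queue.popleft()
        return None
```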
  3. A communication method, comprising:
    obtaining a neural network data packet, wherein the neural network data packet comprises indication information, and the indication information is used to indicate a priority of the neural network data packet; and
    sending a General Packet Radio Service Tunneling Protocol user plane (GTP-U) data packet to an access network device, wherein a payload of the GTP-U data packet comprises the neural network data packet, and a header of the GTP-U data packet comprises the indication information.
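For illustration, a minimal Python sketch of how a sending node might encapsulate the neural network data packet as described in claim 3. The extension header type value and the one-byte priority field are hypothetical placeholders; the actual placement and encoding of the indication information inside the GTP-U header are not defined by this example.

```python
import struct

# Hypothetical GTP-U extension header type used here to carry the priority indication.
EXT_TYPE_NN_PRIORITY = 0x42


def build_gtpu_packet(teid: int, nn_packet: bytes, priority: int) -> bytes:
    """Build a simplified GTP-U G-PDU whose extension header carries the
    priority indication and whose payload is the neural network data packet."""
    flags = 0x34     # version 1, protocol type GTP, E (extension header) bit set
    msg_type = 0xFF  # G-PDU
    # Optional fields present when the E bit is set:
    # sequence number (2 bytes), N-PDU number (1 byte), next extension header type (1 byte).
    optional = struct.pack("!HBB", 0, 0, EXT_TYPE_NN_PRIORITY)
    # Extension header: length in 4-byte units, two content bytes, next extension type = 0.
    extension = struct.pack("!BBBB", 1, priority, 0, 0)
    body = optional + extension + nn_packet
    # The GTP-U length field counts everything after the mandatory 8-byte header.
    header = struct.pack("!BBHI", flags, msg_type, len(body), teid)
    return header + body


# Example: wrap a neural network data packet marked with the assumed high-priority value 0.
gtpu = build_gtpu_packet(teid=0x1234, nn_packet=b"\x00" * 32, priority=0)
```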
  4. The method according to any one of claims 1 to 3, wherein the priority of the neural network data packet is related to an effect of data recovery by a neural network corresponding to the neural network data packet.
  5. The method according to claim 4, wherein
    the priority of the neural network data packet is further related to an effect of recovering the data by a preset algorithm.
  6. The method according to any one of claims 1 to 3, wherein the neural network data packet is used to reconstruct a neural network corresponding to the neural network data packet, and
    the priority of the neural network data packet is related to an effect of reconstructing the neural network.
  7. The method according to claim 6, wherein the neural network data packet comprises data of coefficients of the neural network, the data of the coefficients is used to obtain the coefficients, the coefficients are used to reconstruct the neural network, and
    the priority of the neural network data packet is related to an impact of the data of the coefficients on the coefficients.
  8. The method according to any one of claims 1 to 7, wherein the indication information is carried in a header of the neural network data packet.
  9. The method according to claim 8, wherein the indication information is located between a User Datagram Protocol (UDP) field and a Real-time Transport Protocol (RTP) field in the header of the neural network data packet.
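A minimal parsing sketch of the layout described in claim 9, assuming the indication information occupies a single byte placed immediately after the UDP header; the actual field size and encoding are not specified here.

```python
import struct

UDP_HEADER_LEN = 8  # source port, destination port, length, checksum
INDICATION_LEN = 1  # assumed size of the priority indication


def parse_priority(udp_datagram: bytes) -> tuple[int, bytes]:
    """Return the assumed one-byte priority indication located between the UDP
    header and the RTP header, together with the remaining RTP packet."""
    src_port, dst_port, length, checksum = struct.unpack_from("!HHHH", udp_datagram, 0)
    priority = udp_datagram[UDP_HEADER_LEN]
    rtp_packet = udp_datagram[UDP_HEADER_LEN + INDICATION_LEN:]
    return priority, rtp_packet
```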
  10. A communication apparatus, comprising:
    an interface unit, configured to receive a General Packet Radio Service Tunneling Protocol user plane (GTP-U) data packet, wherein a payload of the GTP-U data packet comprises a neural network data packet, a header of the GTP-U data packet comprises indication information, and the indication information is used to indicate a priority of the neural network data packet; and
    a processing unit, configured to control the apparatus to transmit the neural network data packet according to the indication information.
  11. The apparatus according to claim 10, wherein the processing unit controlling the apparatus to transmit the neural network data packet according to the indication information comprises:
    when the indication information indicates that the neural network data packet has a high priority, the processing unit controls the apparatus to preferentially transmit the neural network data packet; or
    when the indication information indicates that the neural network data packet has a low priority, the processing unit controls the apparatus to delay transmission of the neural network data packet; or
    when the indication information indicates that the neural network data packet has a low priority and the network is congested, the processing unit controls the apparatus to abandon transmission of the neural network data packet.
  12. A communication apparatus, comprising:
    an interface unit, configured to obtain a neural network data packet, wherein the neural network data packet comprises indication information, and the indication information is used to indicate a priority of the neural network data packet, wherein
    the interface unit is further configured to send a General Packet Radio Service Tunneling Protocol user plane (GTP-U) data packet to an access network device, a payload of the GTP-U data packet comprises the neural network data packet, and a header of the GTP-U data packet comprises the indication information.
  13. The apparatus according to any one of claims 10 to 12, wherein the priority of the neural network data packet is related to an effect of data recovery by a neural network corresponding to the neural network data packet.
  14. The apparatus according to claim 13, wherein
    the priority of the neural network data packet is further related to an effect of recovering the data by a preset algorithm.
  15. The apparatus according to any one of claims 10 to 12, wherein the neural network data packet is used to reconstruct a neural network corresponding to the neural network data packet, and
    the priority of the neural network data packet is related to an effect of reconstructing the neural network.
  16. The apparatus according to claim 15, wherein the neural network data packet comprises data of coefficients of the neural network, the data of the coefficients is used to obtain the coefficients, the coefficients are used to reconstruct the neural network, and
    the priority of the neural network data packet is related to an impact of the data of the coefficients on the coefficients.
  17. The apparatus according to any one of claims 10 to 16, wherein the indication information is carried in a header of the neural network data packet.
  18. The apparatus according to claim 17, wherein the indication information is located between a User Datagram Protocol (UDP) field and a Real-time Transport Protocol (RTP) field in the header of the neural network data packet.
  19. A communication apparatus, comprising a processor, wherein the processor is coupled to a memory, the memory is configured to store a computer program or instructions, and the processor is configured to execute the computer program or instructions in the memory, so that the apparatus performs the method according to any one of claims 1 to 9.
  20. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program or instructions, and when the computer program or instructions are run on a computer, the computer is caused to perform the method according to any one of claims 1 to 9.
  21. A chip system, comprising a processor, configured to call and run a computer program from a memory, so that a communication device in which the chip system is installed performs the method according to any one of claims 1 to 9.
  22. A computer program product, wherein when the computer program product runs on a computer, the computer is caused to perform the method according to any one of claims 1 to 9.
PCT/CN2023/081482 2022-04-08 2023-03-15 Data transmission method and apparatus WO2023193579A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210365108.5A CN116939702A (en) 2022-04-08 2022-04-08 Method and device for data transmission
CN202210365108.5 2022-04-08

Publications (1)

Publication Number Publication Date
WO2023193579A1 true WO2023193579A1 (en) 2023-10-12

Family

ID=88243974

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/081482 WO2023193579A1 (en) 2022-04-08 2023-03-15 Data transmission method and apparatus

Country Status (2)

Country Link
CN (1) CN116939702A (en)
WO (1) WO2023193579A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150358483A1 (en) * 2013-01-18 2015-12-10 Samsung Electronics Co., Ltd. Method and apparatus for adjusting service level in congestion
CN104125607A (en) * 2013-04-23 2014-10-29 中兴通讯股份有限公司 User plane congestion processing method and device, and service gateway
CN110740481A (en) * 2018-07-18 2020-01-31 中国移动通信有限公司研究院 Data processing method, apparatus and computer storage medium based on quality of service
WO2022042528A1 (en) * 2020-08-24 2022-03-03 华为技术有限公司 Intelligent radio access network

Also Published As

Publication number Publication date
CN116939702A (en) 2023-10-24

Similar Documents

Publication Publication Date Title
WO2021259112A1 (en) Service transmission method and apparatus
CN112423340B (en) User plane information reporting method and device
WO2022088833A1 (en) Method for transmitting data packet of media stream, and communication apparatus
US20230164631A1 (en) Communication method and apparatus
US20230188472A1 (en) Data transmission method and apparatus
US20230354334A1 (en) Communication method and apparatus
WO2023088009A1 (en) Data transmission method and communication apparatus
WO2023193579A1 (en) Data transmission method and apparatus
WO2022151492A1 (en) Scheduling transmission method and apparatus
WO2022017403A1 (en) Communication method and apparatus
WO2023185608A1 (en) Data transmission method and communication apparatus
WO2023185769A1 (en) Communication method, communication apparatus, and communication system
WO2023179322A1 (en) Communication method and apparatus
WO2023185402A1 (en) Communication method and apparatus
WO2023185598A1 (en) Communication method and apparatus
WO2022198613A1 (en) Media data transmission method and communication apparatus
US20240031298A1 (en) Communication method and device
WO2023045714A1 (en) Scheduling method and communication apparatus
WO2023193571A1 (en) Communication method and communication apparatus
WO2023093559A1 (en) Data transmission method and apparatus
WO2023046118A1 (en) Communication method and apparatus
WO2023088155A1 (en) Quality-of-service (qos) management method and apparatus
WO2021227781A1 (en) Data frame transmission method and communication apparatus
WO2024032211A1 (en) Congestion control method and apparatus
WO2022178778A1 (en) Data transmission method and communication apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23784143

Country of ref document: EP

Kind code of ref document: A1