WO2022126437A1 - Communication method and apparatus

Communication method and apparatus

Info

Publication number
WO2022126437A1
Authority
WO
WIPO (PCT)
Prior art keywords: sub, group, data packets, packets, characteristic information
Prior art date
Application number
PCT/CN2020/136870
Other languages
English (en)
Chinese (zh)
Inventor
徐小英
李拟珺
许斌
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Priority to PCT/CN2020/136870 priority Critical patent/WO2022126437A1/fr
Priority to CN202080106507.9A priority patent/CN116458239A/zh
Publication of WO2022126437A1 publication Critical patent/WO2022126437A1/fr

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 72/00: Local resource management
    • H04W 72/04: Wireless resource allocation
    • H04W 72/12: Wireless traffic scheduling

Definitions

  • the present application relates to the field of communication technologies, and in particular, to a communication method and apparatus.
  • VR technology mainly refers to simulating a virtual environment to give users an "immersive" experience. Specifically, visual and audio scenes are rendered to reproduce, as closely as possible, the sensory stimulation the user would receive in the real world, so that the user is immersed in the simulated virtual environment.
  • the user may wear a terminal device such as a head mounted display (HMD), which replaces the user's field of view with the visual content simulated in the device, and the user may also wear a headset that provides the accompanying audio.
  • motion tracking of the user can also be performed, for example, tracking the rotation angle of the user's head (HMD), so as to update the simulated visual and audio content in time, so that the visual and audio content experienced by the user is consistent with the user's actions.
  • the base station provides panoramic image data for the HMD.
  • the HMD displays the corresponding content to the user according to the user's angle of view. It can be seen that although the user only needs part of the panoramic image data, the base station still needs to deliver the entire panoramic image data to the HMD, which increases the air interface overhead.
  • the present application provides a communication method and device, which can reduce air interface overhead in the process of performing VR services.
  • a communication method is provided, where the execution body of the method is an access network device or a module in the access network device, and the description is made by taking the access network device as the execution body as an example.
  • the method includes: receiving a first data packet from a first core network device, receiving a first message from a terminal device, and sending a first group of sub-data packets to the terminal device according to the first message and the first data packet.
  • the first data packet includes one or more groups of sub-data packets; the first message is used to request the first group of sub-data packets, the first message includes feature information of the first group of sub-data packets, and the first group of sub-data packets is one group of sub-data packets within the one or more groups of sub-data packets.
  • in this way, the base station no longer sends the entire panoramic image to the terminal device, but sends to the terminal device the sub-data packets corresponding to the part of the panoramic image required by the terminal device. The amount of data sent by the base station is reduced, which in turn reduces the air interface resource consumption of the base station and improves the network capacity.
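  • As a non-authoritative illustration of the request-and-serve behaviour described above, the following Python sketch shows one way a base station could buffer groups of sub-data packets keyed by their feature information and return only the group requested in the first message. All class and field names (FeatureInfo, PacketStore, view_index, and so on) are hypothetical and not defined by this application.

```python
# Illustrative sketch only (not from the patent text): buffer groups of sub-data
# packets by feature information and return only the group a terminal requests.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class FeatureInfo:
    view_index: int          # viewing-angle index of the group
    group_id: int            # identifier of the group of sub-data packets
    frame_id: int            # identifier of the corresponding frame
    frame_type: str = "I"    # frame type of the corresponding frame


@dataclass
class PacketStore:
    groups: dict = field(default_factory=dict)  # FeatureInfo -> list of sub-packets

    def store_first_data_packet(self, groups_with_features):
        # The first data packet carries one or more groups of sub-data packets.
        for feature, sub_packets in groups_with_features:
            self.groups[feature] = sub_packets

    def handle_first_message(self, requested_feature):
        # The first message carries the feature information of the requested group;
        # only that group is scheduled over the air interface.
        return self.groups.get(requested_feature, [])


store = PacketStore()
f1 = FeatureInfo(view_index=1, group_id=1, frame_id=7)
store.store_first_data_packet([(f1, [b"sub-pkt-1", b"sub-pkt-2"])])
print(store.handle_first_message(f1))  # -> only the requested group is returned
```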
  • the feature information of the first group of sub-data packets includes any one or more of the following: viewing angle information of the first group of sub-data packets, an identifier of the first group of sub-data packets, the image type of the frame corresponding to the first group of sub-data packets, the encoding type of the frame corresponding to the first group of sub-data packets, the identifier of the frame corresponding to the first group of sub-data packets, and the frame type of the frame corresponding to the first group of sub-data packets.
  • the first data packet further includes feature information of one or more groups of sub-data packets.
  • the first group of sub-data packets corresponding to the first characteristic information is carried in a first quality of service (QoS) flow, and a second group of sub-data packets corresponding to second characteristic information is carried in a second QoS flow; or, the first group of sub-data packets corresponding to the first characteristic information is carried in a first session, and the second group of sub-data packets corresponding to the second characteristic information is carried in a second session.
  • the first core network device implicitly indicates the feature information of different sub-data packets to the access network device, that is, the first core network device does not need to encapsulate the feature information in the first data packet, thereby saving transmission overhead.
  • the access network device can determine the characteristic information of the sub-data packets received on the current QoS flow according to the configured association relationship, without parsing the first data packet from the first core network device, which reduces the parsing burden of the base station. Accordingly, the implementation complexity of the access network device and the first core network device can be reduced.
  • the method further includes: receiving first indication information from the second core network device, where the first indication information is used to indicate the characteristic information of the first group of sub-data packets and the identifier of the first QoS flow associated with that characteristic information; or, the first indication information includes the characteristic information of the first group of sub-data packets and the identifier of the first session associated with that characteristic information.
  • the second core network device needs to configure, for the terminal device in advance, multiple QoS flow identifiers and the feature information associated with each of the multiple QoS flows, so that the terminal device can subsequently identify the characteristic information of sub-data packets on different QoS flows according to the configuration.
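  • The following minimal Python sketch illustrates the implicit-indication idea above, assuming a configured association between QoS flow identifiers (QFI values) and feature information; the receiver infers the feature information of sub-data packets from the QoS flow they arrive on instead of parsing the packets. The QFI values and feature fields shown are invented for illustration only.

```python
# Hypothetical sketch: a configured QoS-flow-to-feature association lets the receiver
# infer feature information from the flow a packet arrives on.
qos_flow_to_feature = {
    # QFI -> feature information associated with that QoS flow
    1: {"view_index": 1, "image_type": "foreground"},
    2: {"view_index": 2, "image_type": "foreground"},
    3: {"view_index": None, "image_type": "background"},
}


def feature_of_incoming_packet(qfi: int) -> dict:
    """Look up the feature information implied by the QoS flow a packet arrived on."""
    try:
        return qos_flow_to_feature[qfi]
    except KeyError:
        raise ValueError(f"no feature information configured for QFI {qfi}")


print(feature_of_incoming_packet(2))  # -> {'view_index': 2, 'image_type': 'foreground'}
```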
  • the method further includes: sending characteristic information of the first group of sub-data packets to the terminal device.
  • in this way, the access network device can send sub-data packets with different characteristic information through the same logical channel, so the user equipment (UE) and the base station do not need to maintain multiple logical channels, which avoids switching logical channels to receive sub-data packets with different feature information.
  • the method further includes: sending second indication information to the terminal device, where the second indication information is used to indicate the characteristic information of the first group of sub-data packets and the identifier of the first logical channel associated with that characteristic information; or, the second indication information includes the characteristic information of the first group of sub-data packets and the identifier of the first data radio bearer associated with that characteristic information.
  • sending the first group of sub-data packets to the terminal device includes: sending the first group of sub-data packets to the terminal device through the first logical channel; or, sending the first group of sub-data packets to the terminal device through the first data radio bearer.
  • in this case, the access network device does not need to explicitly send the characteristic information to the terminal device, thereby reducing the air interface overhead of the access network device.
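  • The sketch below illustrates, under similarly hedged assumptions, how a terminal could use the second indication information: a configured mapping from logical channel (or data radio bearer) identifiers to feature information lets received sub-data packets be tagged without any explicit per-packet feature field over the air. The channel identifiers and feature values are illustrative only.

```python
# Hypothetical sketch: tag received groups using a configured logical-channel-to-feature
# association instead of explicit per-packet signaling.
second_indication = {
    # logical channel ID -> feature information of the groups carried on it
    4: {"view_index": 1, "frame_type": "I"},
    5: {"view_index": 2, "frame_type": "P"},
}


def tag_received_group(logical_channel_id: int, sub_packets: list) -> dict:
    """Attach the configured feature information to a group received on a channel."""
    feature = second_indication[logical_channel_id]
    return {"feature": feature, "sub_packets": sub_packets}


print(tag_received_group(4, [b"pkt-a", b"pkt-b"]))
```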
  • the present application provides a communication method.
  • the execution body of the method is a first core network device or a module in the first core network device.
  • the first core network device is used as an example for description.
  • the method includes: determining a first data packet, and sending the first data packet to an access network device.
  • the first data packet includes one or more groups of sub-data packets and characteristic information of one or more groups of sub-data packets.
  • the feature information of the first group of sub-data packets includes any one or more of the following: viewing angle information, an identifier of the first group of sub-data packets, the image type of the corresponding frame, the encoding type of the corresponding frame, the identifier of the frame corresponding to the first group of sub-data packets, and the frame type of the frame corresponding to the first group of sub-data packets; the first group of sub-data packets is a group of sub-data packets in the one or more groups of sub-data packets.
  • the present application provides a communication method, where the execution body of the method is a second core network device or a module in the second core network device, and the second core network device is used as an example for description here.
  • the method includes: determining first indication information; and sending the first indication information to an access network device.
  • the first indication information is used to indicate the characteristic information of the first group of sub-data packets and the identifier of the first quality of service (QoS) flow associated with that characteristic information; or, the first indication information includes the characteristic information of the first group of sub-data packets and the identifier of the first session associated with that characteristic information.
  • the feature information of the first group of sub-data packets includes any one or more of the following: viewing angle information of the first group of sub-data packets, an identifier of the first group of sub-data packets, the image type of the frame corresponding to the first group of sub-data packets, the encoding type of the frame corresponding to the first group of sub-data packets, the identifier of the frame corresponding to the first group of sub-data packets, and the frame type of the frame corresponding to the first group of sub-data packets.
  • the present application provides a communication method.
  • the execution body of the method is a terminal device or a module in the terminal device.
  • the terminal device is used as the execution body as an example for description.
  • the method includes: sending a first message to an access network device, and receiving a first group of sub-data packets from the access network device.
  • the first message is used to request the first group of sub-data packets, and the first message includes characteristic information of the first group of sub-data packets.
  • the feature information of the first group of sub-data packets includes any one or more of the following: viewing angle information of the first group of sub-data packets, an identifier of the first group of sub-data packets, the image type of the frame corresponding to the first group of sub-data packets, the encoding type of the frame corresponding to the first group of sub-data packets, the identifier of the frame corresponding to the first group of sub-data packets, and the frame type of the frame corresponding to the first group of sub-data packets.
  • the method further includes: receiving feature information of the first group of sub-data packets from the access network device.
  • the method further includes: receiving second indication information from the access network device, where the second indication information is used to indicate the characteristic information of the first group of sub-data packets and the identifier of the first logical channel associated with that characteristic information; or, the second indication information includes the characteristic information of the first group of sub-data packets and the identifier of the first data radio bearer associated with that characteristic information.
  • the present application provides a communication method, where the execution body of the method is a terminal device or a module in the terminal device, and the description is made by taking the terminal device as the execution body as an example.
  • the method includes: receiving second indication information from an access network device, and receiving a first group of sub-data packets according to the second indication information.
  • the second indication information is used to indicate the characteristic information of the first group of sub-data packets and the identifier of the first logical channel associated with that characteristic information; or, the second indication information is used to indicate the characteristic information of the first group of sub-data packets and the identifier of the first data radio bearer associated with that characteristic information.
  • the feature information of the first group of sub-data packets includes any one or more of the following: viewing angle information of the first group of sub-data packets, an identifier of the first group of sub-data packets, the image type of the frame corresponding to the first group of sub-data packets, the encoding type of the frame corresponding to the first group of sub-data packets, the identifier of the frame corresponding to the first group of sub-data packets, and the frame type of the frame corresponding to the first group of sub-data packets.
  • the present application provides a communication apparatus, the apparatus comprising: a module for performing the foregoing first aspect and any possible implementation manner of the first aspect.
  • the present application provides a communication apparatus, the apparatus comprising: a module for performing the foregoing second aspect and any possible implementation manner of the second aspect.
  • the present application provides a communication apparatus, the apparatus comprising: a module for executing the foregoing third aspect and any possible implementation manner of the third aspect.
  • the present application provides a communication apparatus, the apparatus comprising: a module for performing the foregoing fourth aspect, any possible implementation manner of the fourth aspect, the fifth aspect, and any possible implementation manner of the fifth aspect.
  • a tenth aspect provides a communication device, comprising a processor and an interface circuit, where the interface circuit is configured to receive signals from communication devices other than the communication device and transmit them to the processor, or to send signals from the processor to communication devices other than the communication device, and the processor is configured to implement, through a logic circuit or by executing code instructions, the method in the foregoing first aspect and any possible implementation manner of the first aspect.
  • an eleventh aspect provides a communication device, comprising a processor and an interface circuit, where the interface circuit is configured to receive signals from communication devices other than the communication device and transmit them to the processor, or to send signals from the processor to communication devices other than the communication device, and the processor is configured to implement, through a logic circuit or by executing code instructions, the method in the foregoing second aspect and any possible implementation manner of the second aspect.
  • a twelfth aspect provides a communication device, comprising a processor and an interface circuit, where the interface circuit is configured to receive signals from communication devices other than the communication device and transmit them to the processor, or to send signals from the processor to communication devices other than the communication device, and the processor is configured to implement, through a logic circuit or by executing code instructions, the method in the foregoing third aspect and any possible implementation manner of the third aspect.
  • a thirteenth aspect provides a communication device, comprising a processor and an interface circuit, where the interface circuit is configured to receive signals from communication devices other than the communication device and transmit them to the processor, or to send signals from the processor to communication devices other than the communication device, and the processor is configured to implement, through a logic circuit or by executing code instructions, the method in the foregoing fourth aspect, any possible implementation manner of the fourth aspect, the fifth aspect, and any possible implementation manner of the fifth aspect.
  • a fourteenth aspect provides a computer-readable storage medium storing a computer program or instructions; when the computer program or instructions are executed, the method in the foregoing first aspect and any possible implementation manner of the first aspect is implemented.
  • a fifteenth aspect provides a computer-readable storage medium storing a computer program or instructions; when the computer program or instructions are executed, the method in the foregoing second aspect and any possible implementation manner of the second aspect is implemented.
  • a sixteenth aspect provides a computer-readable storage medium storing a computer program or instructions; when the computer program or instructions are executed, the method in the foregoing third aspect and any possible implementation manner of the third aspect is implemented.
  • a seventeenth aspect provides a computer-readable storage medium storing a computer program or instructions; when the computer program or instructions are executed, the method in the foregoing fourth aspect, any possible implementation manner of the fourth aspect, the fifth aspect, and any possible implementation manner of the fifth aspect is implemented.
  • An eighteenth aspect provides a computer program product comprising instructions that, when the instructions are executed, implement the first aspect and the method in any possible implementation manner of the first aspect.
  • a nineteenth aspect provides a computer program product comprising instructions that, when the instructions are executed, implement the second aspect and the method in any possible implementation manner of the second aspect.
  • a twentieth aspect provides a computer program product comprising instructions that, when the instructions are executed, implement the third aspect and the method in any possible implementation manner of the third aspect.
  • a twenty-first aspect provides a computer program product comprising instructions that, when executed, implement the method in the foregoing fourth aspect, any possible implementation manner of the fourth aspect, the fifth aspect, and any possible implementation manner of the fifth aspect.
  • a twenty-second aspect provides a computer program comprising code or instructions that, when executed, implement the method in the foregoing first aspect and any possible implementation manner of the first aspect.
  • a twenty-third aspect provides a computer program comprising code or instructions that, when executed, implement the method in the foregoing second aspect and any possible implementation manner of the second aspect.
  • a twenty-fourth aspect provides a computer program comprising code or instructions that, when executed, implement the method in the foregoing third aspect and any possible implementation manner of the third aspect.
  • a twenty-fifth aspect provides a computer program comprising code or instructions that, when executed, implement the method in the foregoing fourth aspect, any possible implementation manner of the fourth aspect, the fifth aspect, and any possible implementation manner of the fifth aspect.
  • a twenty-sixth aspect provides a chip system, where the chip system includes a processor and may further include a memory, for implementing the methods in the foregoing first aspect, any possible implementation manner of the first aspect, the second aspect, and any possible implementation manner of the second aspect.
  • the chip system can be composed of chips, and can also include chips and other discrete devices.
  • a communication system is provided, comprising the apparatus of the sixth aspect or the tenth aspect, the apparatus of the seventh aspect or the eleventh aspect, the apparatus of the eighth aspect or the twelfth aspect, and the apparatus of the ninth aspect or the thirteenth aspect.
  • FIG. 1 is a schematic diagram of a multicast method provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a multicast and unicast method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram 1 of a communication system architecture provided by an embodiment of the present application.
  • FIG. 4 is a second schematic diagram of a communication system architecture provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a hardware structure of a communication device provided by an embodiment of the present application.
  • FIG. 6 is a schematic flowchart 1 of a communication method provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of slice division provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of fragmentation division provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an image processing method provided by an embodiment of the present application.
  • FIG. 10A is a schematic diagram 1 of a scenario of a communication method provided by an embodiment of the present application.
  • FIG. 10B is a second scenario schematic diagram of the communication method provided by the embodiment of the present application.
  • FIG. 10C is a third scenario schematic diagram of the communication method provided by the embodiment of the present application.
  • FIG. 11 is a schematic flowchart of a method for two-layer coding provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of explicitly indicating feature information provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of interaction between a base station and a terminal device according to an embodiment of the present application.
  • FIG. 14 is a fourth schematic diagram of a scenario of a communication method provided by an embodiment of the present application.
  • FIG. 15 to FIG. 17 are schematic diagrams of explicitly indicating feature information provided by an embodiment of the present application.
  • FIG. 18 is a second schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 19 is a third schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 20 is a fourth schematic flowchart of a communication method provided by an embodiment of the present application.
  • FIG. 21 is a schematic structural diagram of an apparatus provided by an embodiment of the present application.
  • VR services are characterized by high data rates, low latency, and high bandwidth consumption. For example, the required rate may be 50 Mbps, the delay 10 ms, and the packet loss rate 0.00001.
  • the network side can deliver panoramic image data, that is, 360-degree image data, such as 6-sided spherical image data, to the terminal device when scheduling VR service data.
  • the terminal device selects image data of a desired angle of view from the panoramic image data according to the user's angle of view, and displays it to the user.
  • the prior art provides two ways of scheduling VR service data.
  • in the first manner, the base station multicasts high-definition panoramic images to a group of terminals.
  • in the second manner, the base station multicasts a low-definition panoramic image to a group of terminals, and unicasts a high-definition image of the desired viewing angle to the terminals in the group.
  • a user plane function (UPF) network element carries 360-degree panoramic image data through a multicast session, and sends the panoramic image data to the base station.
  • the base station multicasts the panoramic image data to the terminals of the multicast group according to the channel conditions of the farthest terminal equipment among the multicast users.
  • the terminal device selects part of the data from the panoramic image data according to the user's perspective and other information, and generates a corresponding picture to display to the user.
  • for example, UE1-UE6 within the coverage of the base station all watch the live broadcast of a football match; the user perspective of UE4 and UE5 is perspective 2 (for example, the heads of the users of UE4 and UE5 are turned toward the goal), the user perspectives of UE1 and UE2 are both perspective 1 (for example, the user's head is turned toward the midfield), and the user perspectives of UE3 and UE6 are both perspective 3 (for example, the user's head is turned toward the front field).
  • the base station divides the UEs within the coverage area watching the live football match, namely UE1-UE6, into a multicast group, and multicasts the panoramic image data of the football match to the UEs in this group. It can be seen that although a user only needs some images in the panoramic image (for example, the users of UE4 and UE5 only need the image of perspective 2, or, due to the limitation of the UE's viewing angle, can only see the image within the viewing angle range), the base station still needs to multicast the entire panoramic image data. This increases the amount of data scheduled by the base station, resulting in high resource consumption.
  • in addition, in order to ensure that the farthest UE in the group, that is, UE6, can successfully receive the panoramic image data, the base station multicasts and schedules the panoramic image data according to the channel conditions of UE6. In this way, the base station has to reserve more resources even for near-end UEs with better channel conditions, resulting in large resource consumption.
  • in order to reduce resource overhead, the server sends a low-definition panoramic image to the UPF, and the UPF carries the low-definition panoramic image in a multicast session and sends it to the base station.
  • the base station multicasts and schedules the low-definition panoramic image to the terminals of the multicast group based on the channel conditions of the farthest terminal equipment in the multicast group.
  • the terminal device receives the low-definition panoramic image data, and selects a low-definition image from the user's perspective from the low-definition panoramic images according to the user's perspective.
  • the server not only provides the low-definition panoramic image, but also generates a high-definition image of the user's perspective according to the perspective request of the terminal device, and sends the high-definition image of the corresponding user perspective to the UPF.
  • the UPF carries high-definition images from the user's perspective through a dedicated session, and sends the high-definition images from the user's perspective to the base station.
  • the base station sends the high-definition image of the user's perspective to the terminal device through unicast scheduling.
  • the terminal device can superimpose the high-definition image of the user's perspective onto the low-definition image of the user's perspective to obtain a picture with better definition, and display it to the user.
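  • Purely as an illustration of the superimposition step described above (and not this application's own algorithm), the toy Python sketch below pastes a high-definition field-of-view patch onto a low-definition panoramic picture represented as a small 2-D array; the array sizes and pixel values are invented for illustration.

```python
# Toy sketch: overlay a high-definition field-of-view region onto a low-definition
# panoramic picture, as in the multicast-plus-unicast scheme described above.
def overlay(low_def_panorama, high_def_patch, top, left):
    """Return a copy of the panorama with the high-definition patch pasted in."""
    result = [row[:] for row in low_def_panorama]
    for i, row in enumerate(high_def_patch):
        for j, pixel in enumerate(row):
            result[top + i][left + j] = pixel
    return result


panorama = [[0] * 8 for _ in range(4)]        # low-definition panoramic picture
fov_patch = [[9, 9], [9, 9]]                  # high-definition image of the user's view
for row in overlay(panorama, fov_patch, top=1, left=3):
    print(row)
```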
  • for multicast scheduling, the base station only needs to prepare one copy of the data and multicast that single copy; for example, one copy of the data is forwarded to a relay node, and the relay node delivers it to each UE.
  • for unicast scheduling, the base station needs to prepare multiple copies of the data and then send each copy to the corresponding UE.
  • an embodiment of the present application provides a communication method, which is applied to a fifth generation (5G) mobile communication system, such as a new radio (NR) system, or a subsequently evolved communication system (such as a sixth generation (6G) mobile communication system).
  • the communication system includes an access and mobility management function (AMF) network element, a session management function (SMF) network element, a user plane function (UPF) network element, a unified data management (UDM) network element, a policy control function (PCF) network element, an authentication server function (AUSF) network element, a network exposure function (NEF) network element, and some network elements not shown, such as a network function repository function (NRF) network element, which are not specifically limited in this embodiment of the present application.
  • the terminal accesses the 5G system through the access network device. The terminal communicates with the AMF network element through the next generation (N) interface 1 (N1 for short), the access network device communicates with the AMF network element through the N2 interface (N2 for short), the access network device communicates with the UPF network element through the N3 interface (N3 for short), the AMF network element communicates with the SMF network element through the N11 interface (N11 for short), the AMF network element communicates with the UDM network element through the N8 interface (N8 for short), the AMF network element communicates with the AUSF network element through the N12 interface (N12 for short), the AMF network element communicates with the PCF network element through the N15 interface (N15 for short), the SMF network element communicates with the PCF network element through the N7 interface (N7 for short), the SMF network element communicates with the UPF network element through the N4 interface (N4 for short), the NEF network element communicates with the SMF network element through the N29 interface (N29 for short), and the UPF network element accesses the data network through the N6 interface (N6 for short).
  • the data network includes one or more servers to provide users with data services, such as VR services.
  • the server communicates with the NEF through the N33 interface (not shown in Figure 3).
  • the server communicates with the PCF through an N5 interface (not shown in Figure 3).
  • the server in the data network is used to provide computing or application (application, APP) services, and to perform encoding and decoding of video sources, rendering, and the like.
  • Core network equipment is used to complete the three functions of registration, connection, and session management. Some core network devices and their respective functions are described as follows:
  • NEF: used to expose the services and capabilities of 3rd generation partnership project (3GPP) network functions (NF) to the application function (AF), and also to allow the AF to provide information to 3GPP network functions.
  • PCF: performs policy management of charging policies and quality of service (QoS) policies.
  • SMF: completes session management functions such as UE Internet Protocol (IP) address allocation, UPF selection, and charging and QoS policy control.
  • UPF: forwards specific data on the user plane and generates charging information based on traffic. It also serves as the anchor point of the data plane.
  • the access network device can be any device with a wireless transceiver function that connects terminal equipment to the core network, including but not limited to: a base station (gNodeB or gNB) or transmission reception point (TRP) in NR, a base station in subsequent 3GPP evolutions, an access node in a WiFi system, a wireless relay node, a wireless backhaul node, and so on.
  • the base station can be: a macro base station, a micro base station, a pico base station, a small base station, a relay station, or a balloon station, etc. Multiple base stations may support the above-mentioned networks of the same technology, or may support the above-mentioned networks of different technologies.
  • a base station may contain one or more co-sited or non-co-sited TRPs.
  • the access network device may also be a wireless controller, a centralized unit (Central Unit, CU), and/or a distributed unit (Distributed Unit, DU) in a cloud radio access network (Cloud Radio Access Network, CRAN) scenario.
  • the access network device may also be a server, a wearable device, or a vehicle-mounted device.
  • the following description is given by taking the access network device as the base station as an example.
  • the multiple access network devices may be the same type of base station, or may be different types of base stations.
  • the base station can communicate with the terminal equipment, and can also communicate with the terminal equipment through the relay station.
  • the terminal device can communicate with multiple base stations of different technologies. For example, the terminal device can communicate with a base station supporting the LTE network, communicate with a base station supporting the 5G network, or be dual-connected to a base station of the LTE network and a base station of the 5G network.
  • Terminal equipment is a device with wireless transceiver functions, which can be deployed on land, including indoor or outdoor, handheld, wearable, or vehicle-mounted; it can also be deployed on water (such as on ships); and it can also be deployed in the air (such as on aircraft, balloons, or satellites).
  • Terminals can be mobile phones, tablet computers, computers with wireless transceiver functions, virtual reality (VR) terminal equipment (such as head-mounted glasses or HMDs), augmented reality (AR) terminal equipment, wireless terminals in industrial control, vehicle-mounted terminal equipment, wireless terminals in unmanned driving, wireless terminals in telemedicine, wireless terminals in smart grids, wireless terminals in transportation safety, wireless terminals in smart cities, wireless terminals in smart homes, wearable terminal devices, and so on.
  • Terminals may also sometimes be referred to as terminal equipment, user equipment (UE), access terminal equipment, vehicle-mounted terminals, industrial control terminals, UE units, UE stations, mobile stations, remote stations, remote terminal equipment, mobile equipment, UE terminal equipment, terminal equipment, wireless communication equipment, UE proxies, or UE devices.
  • Terminals can be fixed or mobile.
  • FIG. 3 is only a schematic diagram of a communication system architecture to which the embodiments of the present application are applied.
  • the embodiments of the present application may also be applied to other communication systems, which are not specifically limited in this embodiment.
  • each network element and the interface between network elements in FIG. 3 are just an example, and the name of each network element and of the interface between network elements in a specific implementation may be different, which is not specifically limited in this embodiment of the present application.
  • words such as “first” and “second” are used to distinguish the same or similar items with basically the same function and effect.
  • words “first”, “second” and the like do not limit the quantity and execution order, and the words “first”, “second” and the like are not necessarily different.
  • the network architecture and service scenarios described in the embodiments of the present application are for the purpose of illustrating the technical solutions of the embodiments of the present application more clearly, and do not constitute limitations on the technical solutions provided by the embodiments of the present application.
  • the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems.
  • a communication system 30 provided in an embodiment of the present application includes a first core network device 301 , a second core network device 302 , an access network device 303 , and a terminal device 304 .
  • the access network device 303 is configured to receive the first data packet from the first core network device 301 and the first message from the terminal device 304, and send the first group of sub-data packets to the terminal device 304 according to the first message and the first data packet.
  • the first data packet includes one or more groups of sub-data packets; the first message is used to request the first group of sub-data packets, the first message includes feature information of the first group of sub-data packets, and the first group of sub-data packets is one group of sub-data packets within the one or more groups of sub-data packets.
  • the first data packet includes feature information of one or more groups of sub-data packets.
  • the access network device 303 is further configured to receive first indication information from the second core network device 302, where the first indication information is used to indicate the characteristic information of the first group of sub-data packets and the identifier of the first QoS flow associated with that characteristic information; or, the first indication information includes the characteristic information of the first group of sub-data packets and the identifier of the first session associated with that characteristic information.
  • the access network device 303 is further configured to send the characteristic information of the first group of sub-data packets to the terminal device.
  • the access network device 303 is further configured to send second indication information to the terminal device, where the second indication information is used to indicate the characteristic information of the first group of sub-data packets and the identifier of the first logical channel associated with that characteristic information; or, the second indication information includes the characteristic information of the first group of sub-data packets and the identifier of the first data radio bearer associated with that characteristic information.
  • the first core network device 301 is configured to send the first data packet to the access network device 303 .
  • the second core network device 302 is configured to send the first indication information to the access network device 303 .
  • the terminal device 304 is configured to receive the feature information of the first group of sub-data packets from the access network device 303 .
  • the terminal device 304 is further configured to receive the second indication information from the access network device 303, and to receive the first group of sub-data packets from the first logical channel according to the second indication information, or to receive the first group of sub-data packets from the first data radio bearer.
  • the first core network device 301, the second core network device 302, the access network device 303, and the terminal device 304 in this embodiment of the present application may communicate directly or communicate through forwarding by other devices, which is not specifically limited in the embodiments of the present application.
  • the communication system provided in this embodiment of the present application may be applied to the network architecture shown in FIG. 3 , or may be applied to other similar network architectures, which are not specifically limited in this embodiment of the present application.
  • the network element or entity corresponding to the above-mentioned first core network device may be the above-mentioned UPF network element.
  • the network element or entity corresponding to the second core network device may be the above-mentioned SMF network element, and the network element or entity corresponding to the above-mentioned access network device may be the above-mentioned type of device such as the base station.
  • the first core network device or the second core network device or the access network device or the terminal device in this embodiment of the present application may be implemented by one device, or may be implemented jointly by multiple devices, or may be implemented within one device.
  • FIG. 5 is a schematic diagram of a hardware structure of a communication device provided by an embodiment of the present application.
  • the communication device 400 includes at least one processor 401 , memory 403 and at least one communication interface 404 .
  • the processor 401 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the present application.
  • a path may be included between the various components for transferring information between the components.
  • the communication interface 404 uses any transceiver-like apparatus to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
  • the memory 403 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, without limitation.
  • the memory may exist independently and be connected to the processor through a communication line. The memory can also be integrated with the processor.
  • the memory 403 is used for storing computer-executed instructions for executing the solution of the present application, and the execution is controlled by the processor 401 .
  • the processor 401 is configured to execute the computer-executed instructions stored in the memory 403, thereby implementing the communication methods provided by the following embodiments of the present application.
  • the computer-executed instructions in the embodiment of the present application may also be referred to as application code, which is not specifically limited in the embodiment of the present application.
  • the processor 401 may include one or more CPUs, such as CPU0 and CPU1 in FIG. 5 .
  • the communication device 400 may include multiple processors, such as the processor 401 and the processor 408 in FIG. 5 .
  • processors can be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (eg, computer program instructions).
  • the communication device 400 may further include an output device 405 and an input device 406 .
  • the output device 405 is in communication with the processor 401 and can display information in a variety of ways.
  • the output device 405 may be a liquid crystal display (LCD), a light emitting diode (LED) display device, a cathode ray tube (CRT) display device, a projector, or the like.
  • Input device 406 is in communication with processor 401 and can receive user input in a variety of ways.
  • the input device 406 may be a mouse, a keyboard, a touch screen device, a sensor device, or the like.
  • the above-mentioned communication device 400 may be a general-purpose device or a dedicated device.
  • the communication device 400 may be a device with a structure similar to that in FIG. 5. This embodiment of the present application does not limit the type of the communication device 400.
  • the communication method provided by the embodiment of the present application includes the following steps:
  • the server sends one or more groups of sub-data packets to the UPF.
  • the server may divide (or segment) a frame of panoramic image into one or more tiles.
  • the server may divide the panoramic image into one or more slices.
  • alternatively, the server may divide the panoramic image into at least one slice and at least one tile.
  • the server may divide the panoramic image into one or more slices at the granularity of viewing angle.
  • Each view corresponds to one or more slices.
  • similarly, the server may divide the panoramic image into one or more slices according to the granularity of the viewing angle, and each viewing angle corresponds to one or more slices. That is to say, when the server divides tiles (or slices), it may divide according to the viewing angle, placing pixels of the same viewing angle together; for example, the pixels corresponding to viewing angle 1 are divided into slice 1, and the pixels corresponding to viewing angle 2 are divided into slice 2. Of course, the division may also not be performed according to the viewing angle.
  • the tiles do not refer to each other and the slices do not refer to each other, so they can all be decoded independently. A tile error in one frame does not affect the decoding of another tile, and a slice error in one frame does not affect the decoding of another slice. Likewise, a tile error in a frame does not affect the decoding of a slice in that frame, and a slice error in the frame does not affect the decoding of a tile in that frame.
  • Tiles and slices can be used in combination or independently.
  • for example, the video sequence includes R (R is a positive integer) frames of panoramic images, and the server divides the Lth (L is a positive integer) frame of panoramic image in the video sequence into M (M is a non-negative integer) slices and N (N is a non-negative integer) tiles, where M and N are not both 0.
  • FIG. 7 shows a manner of dividing slices: the pixels marked 1 constitute slice 1, the pixels marked 3 constitute slice 3, and the remaining pixels belong to slice 2.
  • FIG. 8 shows a manner of dividing tiles: the image includes tile 1 and tile 2.
  • the division manners of different frames in the video sequence may be the same or different. For example, the first frame may be divided into 2 slices, the second frame into 3 slices, and the third frame into 2 slices.
  • the sizes of the multiple tiles obtained by dividing the panoramic image may be the same or different, and the sizes of the multiple slices obtained by dividing the panoramic image may also be the same or different. The tile size and the slice size may likewise be the same or different; for example, a tile includes 2 pixels, and a slice includes 2 pixels or 3 pixels.
  • a tile can be encapsulated into a group of sub-data packets, and a group of sub-data packets includes one or more sub-data packets; for example, a tile can be encapsulated into 5 IP packets. In this way, the server sends, to the UPF, the N groups of sub-data packets corresponding to the N tiles.
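  • The following hypothetical sketch shows the tiling and encapsulation step in miniature: a frame is split into N tiles and each tile is encapsulated into a group of fixed-size sub-data packets. The tile size and payload size used here are arbitrary illustration values, not values defined by this application.

```python
# Hypothetical sketch: split a frame into tiles, then packetize each tile into a
# group of sub-data packets (e.g. IP packets).
def split_into_tiles(frame_bytes: bytes, tile_size: int) -> list:
    return [frame_bytes[i:i + tile_size] for i in range(0, len(frame_bytes), tile_size)]


def packetize(tile: bytes, payload_size: int) -> list:
    """Encapsulate one tile into a group of sub-packets."""
    return [tile[i:i + payload_size] for i in range(0, len(tile), payload_size)]


frame = bytes(range(60))                       # toy "panoramic frame"
tiles = split_into_tiles(frame, tile_size=20)  # N = 3 tiles
groups = [packetize(t, payload_size=6) for t in tiles]
print(len(groups), [len(g) for g in groups])   # 3 groups, each with several sub-packets
```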
  • the server also indicates the characteristic information of the N groups of sub-data packets to the UPF.
  • the server may explicitly indicate the characteristic information of the N groups of sub-data packets to the UPF, that is, send the characteristic information of the N groups of sub-data packets to the UPF.
  • the feature information is carried in the header or body of the sub-data packet, and is sent to the UPF together with the sub-data packet.
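  • As one hedged example of the explicit-indication option, the sketch below packs feature information into a small header that travels with each sub-data packet. The field layout (a 1-byte viewing-angle index, a 2-byte group identifier, a 2-byte frame identifier, and a 1-byte frame-type code) is invented purely for illustration; this application does not define a concrete wire format.

```python
# Hypothetical sketch: carry feature information in a small header prepended to each
# sub-data packet (explicit indication).
import struct

HEADER_FMT = "!BHHB"  # view_index, group_id, frame_id, frame_type code


def add_feature_header(payload: bytes, view_index: int, group_id: int,
                       frame_id: int, frame_type_code: int) -> bytes:
    header = struct.pack(HEADER_FMT, view_index, group_id, frame_id, frame_type_code)
    return header + payload


def parse_feature_header(packet: bytes):
    size = struct.calcsize(HEADER_FMT)
    view_index, group_id, frame_id, frame_type_code = struct.unpack(HEADER_FMT, packet[:size])
    feature = {"view_index": view_index, "group_id": group_id,
               "frame_id": frame_id, "frame_type": frame_type_code}
    return feature, packet[size:]


pkt = add_feature_header(b"tile-bytes", view_index=1, group_id=3, frame_id=42, frame_type_code=0)
print(parse_feature_header(pkt))
```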
  • the server implicitly indicates the characteristic information of the N groups of sub-data packets to the UPF.
  • the server sends sub-data packets with different characteristic information to the UPF through different routes. It can be understood that, in the implicit indication mode, the UPF needs to be configured with the association relationship between the route and the feature information, so that the UPF can determine the feature information of the sub-packet received through a certain route according to the association relationship.
  • the feature information of a group of sub-data packets includes any one or more of the following: viewing angle information, the identifier of the group of sub-data packets, the image type corresponding to the group of sub-data packets, the encoding type corresponding to the group of sub-data packets, the identifier of the frame corresponding to the group of sub-data packets, and the frame type of the frame corresponding to the group of sub-data packets.
  • the viewing angle information may be the viewing angle of the tile (or slice) corresponding to the group of sub-data packets, or an index of that viewing angle. For example, the index of viewing angle 0-90 degrees may be 1 (binary 01), the index of viewing angle 90-180 degrees may be 2 (binary 10), the index of viewing angle 180-270 degrees may be 3, and the index of viewing angle 270-360 degrees may be 4.
  • the viewing angle information of the sub-packet group 1 may be 0-90 degrees, or an index value of the viewing angle 0-90 degrees.
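  • A minimal sketch of the viewing-angle indexing example above, assuming four 90-degree sectors indexed 1 to 4 (the exact numbering in the text is only an example):

```python
# Minimal sketch: map a viewing angle to the index of its 90-degree sector.
def view_index(angle_degrees: float) -> int:
    """Map a viewing angle in [0, 360) to the index of its 90-degree sector."""
    sector = int(angle_degrees % 360 // 90)  # 0..3
    return sector + 1                        # 1: 0-90, 2: 90-180, 3: 180-270, 4: 270-360


print(view_index(45), view_index(135), view_index(300))  # -> 1 2 4
```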
  • the identifier of the group of sub-data packets may be, but is not limited to, the sequence number of the tile (or slice) corresponding to the group of sub-data packets. For example, if sub-packet group 1 corresponds to tile 1, the identifier of sub-packet group 1 may be the identifier of tile 1, such as the sequence number of tile 1.
  • the image type corresponding to the group of sub-data packets refers to the image type of the tile (or slice) corresponding to the group of sub-data packets; the image type of a tile (or slice) includes a foreground image and a background image.
  • the encoding type corresponding to the group of sub-data packets refers to the encoding type of the tile (or slice) corresponding to the group of sub-data packets.
  • the encoding type can be single-layer encoding, or two-layer encoding or other encodings.
  • Single-layer coding refers to non-layered coding.
  • Two-layer coding refers to base layer coding and enhancement layer coding.
  • a video encoder supporting two-layer encoding can encode a video sequence into a base layer code stream and one (or more) enhancement layer code streams.
  • the data of the base layer code stream can be decoded independently, and the picture content of the basic video can be decoded, and the picture quality is low.
  • the enhancement layer code stream is used to improve the picture quality.
  • the coding of the enhancement layer can refer to the base layer.
  • the source encoder in the server includes a base layer module and an enhancement layer module; the base layer module includes a down-sampling sub-module, an encoding pipeline sub-module, an encoding decision sub-module, and an up-sampling sub-module.
  • the enhancement layer module includes an encoding pipeline sub-module and an encoding decision sub-module.
  • the process of encoding a frame of image to generate a two-layer code stream is as follows: a frame of image (including one or more slices (or slices)) is respectively input to the base layer module and the enhancement layer module.
  • the data input to the base layer module is output as the code stream data of the base layer through the downsampling sub-module, the encoding pipeline sub-module and the encoding decision sub-module in the base layer module.
  • in the enhancement layer module, the encoding decision sub-module fuses the encoded data of the base layer and the encoded data of the enhancement layer, and outputs the code stream data of the enhancement layer. It can be seen that the output data of the enhancement layer is related to the data of the base layer, that is, there is a reference relationship: the enhancement layer refers to the base layer.
  • two data streams can be output, namely the base layer data stream and the enhancement layer data stream.
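  • The toy sketch below illustrates the base-layer/enhancement-layer relationship described above (it is not a real video codec): the base layer is a down-sampled version of the signal that can be decoded on its own, and the enhancement layer stores the residual against the up-sampled base layer, so decoding the enhancement layer requires the base layer. The sample values are invented for illustration.

```python
# Toy two-layer coding sketch: base layer decodes alone; enhancement layer refers to it.
def encode_two_layers(samples):
    base = samples[::2]                                        # downsample: base layer
    upsampled = [b for b in base for _ in (0, 1)][:len(samples)]
    enhancement = [s - u for s, u in zip(samples, upsampled)]  # residual vs. base
    return base, enhancement


def decode(base, enhancement=None):
    upsampled = [b for b in base for _ in (0, 1)]
    if enhancement is None:
        return upsampled                                       # low-quality picture
    upsampled = upsampled[:len(enhancement)]
    return [u + e for u, e in zip(upsampled, enhancement)]     # full-quality picture


original = [10, 12, 20, 21, 30, 33]
base, enh = encode_two_layers(original)
print(decode(base))         # base layer only: coarse reconstruction
print(decode(base, enh))    # base + enhancement: exact reconstruction
```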
  • the identifier of the frame corresponding to the group of sub-data packets may be the sequence number of the corresponding frame (for example, the sequence number of the current frame is 1 and the sequence number of the next frame is 2), the frame toggle information of the corresponding frame, that is, a bit that flips from one frame to the next (for example, the toggle value of the previous frame is 1, that of the current frame is 0, that of the next frame is 1, and that of the frame after that is 0), or other information. It can be understood that if the sub-data packets corresponding to two frames of panoramic images arrive at the base station interleaved, the frame identification information corresponding to the sub-data packets needs to be carried, so that the base station can tell which frame each sub-data packet belongs to and avoid disorder.
  • for example, if sub-packet groups 1-4 all correspond to the panoramic image of a certain frame, the identifier of the frame corresponding to sub-packet group 1 (and likewise sub-packet group 2) is the identifier of the panoramic image of that frame, for example, the sequence number of the panoramic image of that frame in the entire video sequence.
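  • The following hypothetical sketch illustrates the frame-identification idea above: a 1-bit frame toggle value carried with each sub-data packet lets the receiver separate sub-data packets of two panoramic frames whose packets arrive interleaved. The toggle values and payload labels are illustrative only.

```python
# Hypothetical sketch: separate interleaved sub-packets of two frames by the 1-bit
# frame toggle value carried with each sub-packet.
def split_interleaved(sub_packets):
    """sub_packets: list of (frame_toggle_bit, payload), possibly interleaved."""
    frames = {0: [], 1: []}
    for toggle, payload in sub_packets:
        frames[toggle].append(payload)          # same toggle value -> same frame
    return frames


arrival = [(0, "f1-a"), (1, "f2-a"), (0, "f1-b"), (1, "f2-b")]
print(split_interleaved(arrival))  # -> {0: ['f1-a', 'f1-b'], 1: ['f2-a', 'f2-b']}
```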
  • the characteristic information may also be the frame type corresponding to the group of sub-data packets.
  • frames in a video sequence include I frames and P frames; in some cases, the frames in the video sequence also include B frames. The type of the frame corresponding to a group of sub-data packets can therefore be an I frame, a P frame, or a B frame.
  • the I frame can be decoded independently without referring to other video frames.
  • the P frame is an inter-frame predictive coding frame, which represents the difference between this frame and a previous key frame (or P frame). Therefore, the previously cached frame (i.e., the frame preceding this frame) is needed for decoding: the difference defined by this frame is superimposed on the cached frame to generate the final picture.
  • P frames can be further divided into large P frames and small P frames.
  • the encoding side encodes the small P frame with reference to the large P frame
  • the decoding side decodes the small P frame with reference to the large P frame. Any frame on the encoding side is not encoded with reference to the small P frame, and any frame on the decoding side does not need to be decoded with reference to the small P frame.
  • the B frame is a bidirectional difference frame, that is, the B frame records the differences between the current frame and both the previous frame and the following frame. In other words, to decode the B frame, not only the cached frame before the B frame but also the frame after the B frame needs to be obtained.
  • the final picture is obtained by superimposing the data of the previous and following frames with the data of the current frame.
  • B-frames usually have higher compression rates.
  • the first frame in the video sequence is an I frame.
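  • The decoding dependencies of I frames, P frames and B frames described above can be summarised in a toy decoder. In the sketch below, frame payloads are modelled as single numbers, a P frame as a difference against the previous reference, and a B frame as a difference against the average of its reconstructed neighbours; this simplification and the dictionary-based frame representation are assumptions for illustration only, not the codec used by the embodiments.

```python
def decode_sequence(frames):
    """Toy decoder. `frames` is a list in display order, each item like
    {"type": "I" | "P" | "B", "data": float}. An I frame carries a picture
    value, a P frame a difference w.r.t. the previous I/P frame, and a B frame
    a difference w.r.t. the average of its neighbouring reconstructed frames."""
    pictures = [None] * len(frames)
    last_ref = None
    # First pass: I and P frames (forward references only).
    for i, f in enumerate(frames):
        if f["type"] == "I":
            pictures[i] = f["data"]                 # independently decodable
            last_ref = pictures[i]
        elif f["type"] == "P":
            pictures[i] = last_ref + f["data"]      # previous reference + difference
            last_ref = pictures[i]
    # Second pass: B frames need both the previous and the following reconstructed frame.
    for i, f in enumerate(frames):
        if f["type"] == "B":
            prev_ref = next(pictures[j] for j in range(i - 1, -1, -1) if pictures[j] is not None)
            next_ref = next(pictures[j] for j in range(i + 1, len(frames)) if pictures[j] is not None)
            pictures[i] = (prev_ref + next_ref) / 2 + f["data"]
    return pictures

gop = [{"type": "I", "data": 10.0}, {"type": "B", "data": 0.5}, {"type": "P", "data": 2.0}]
print(decode_sequence(gop))   # [10.0, 11.5, 12.0]
```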
  • the sub-packet groups 1-4 all correspond to the panoramic image of the frame shown in FIG. 9
  • the type of the frame corresponding to sub-packet group 1 is the frame type (such as an I frame) of the panoramic image of that frame shown in FIG. 9.
  • the UPF sends the first data packet to the base station.
  • the base station receives the first data packet from the UPF.
  • the base station stores the first data packet, so as to subsequently schedule sub-data packets in the first data packet to the terminal device according to a request of the terminal device.
  • the first data packet includes one or more groups of sub-data packets.
  • a set of subpackets includes one or more subpackets.
  • the first data packet is a data packet corresponding to a panoramic image.
  • the data packet corresponding to the panoramic image includes one or more groups of sub-data packets. For example, a panoramic image of a video with a resolution of 4K corresponds to about fifty IP packets (ie, sub-data packets).
  • after the UPF receives the N groups of sub-data packets from the server and learns the feature information corresponding to the N groups of sub-data packets, referring to FIG. 10A, the UPF sends a first data packet to the base station, where the first data packet includes the N groups (that is, one or more groups) of sub-packets shown in FIG. 10A, and indicates the characteristic information of the N groups of sub-packets to the base station, so that the base station can perceive the characteristic information of the N groups of sub-packets and, according to that characteristic information, send to the terminal device the sub-packets required by the terminal device.
  • the UPF may explicitly or implicitly indicate the characteristic information of the N groups of sub-data packets to the base station.
  • the two manners of indicating feature information are introduced as follows.
  • the UPF explicitly indicates to the base station the characteristic information of one or more groups of sub-packets.
  • the characteristic information is carried in the first data packet, in other words, the first data packet includes characteristic information of one or more groups of sub-data packets.
  • that is, the first data packet sent by the UPF to the base station explicitly carries the feature information.
  • feature information is encapsulated into the first data packet sent by the UPF to the base station.
  • the UPF implicitly indicates to the base station the characteristic information of one or more groups of sub-data packets.
  • the UPF carries sub-packets with different characteristic information through different QoS flows. For example, referring to FIG. 10B, the UPF transmits sub-packets to the base station through QoS flow 1 and QoS flow 2, where QoS flow 1 transmits a group of sub-packets corresponding to characteristic information 1 (for example, a group of sub-packets corresponding to slice 1 shown in FIG. 9), and QoS flow 2 transmits a group of sub-packets corresponding to characteristic information 2 (for example, a group of sub-packets corresponding to slice 2 shown in FIG. 9).
  • in the manner in which the UPF implicitly indicates the feature information to the base station, the UPF does not need to encapsulate the feature information in the first data packet.
  • the transmission overhead is saved.
  • the base station can determine the characteristic information of the sub-packets received on the current QoS flow according to the configured association relationship without analyzing the first data packet from the UPF, which can reduce the analysis burden of the base station. Accordingly, the implementation complexity of the base station and the UPF can be reduced.
  • the base station needs to be configured with the QoS flow and the characteristic information of the sub-packets associated with the QoS flow.
  • This configuration can be done by SMF.
  • the base station receives first indication information from the SMF, where the first indication information is used to indicate the characteristic information of the first group of sub-packets and the identifier of the first QoS flow associated with the characteristic information of the first group of sub-packets; or, the first indication information includes the characteristic information of the first group of sub-data packets and the identifier of the first session associated with the characteristic information of the first group of sub-data packets.
  • the specific configuration method will be given in the following embodiments.
  • in this way, when the base station receives a sub-packet from a certain QoS flow, the base station can determine the characteristic information of the sub-packet transmitted on that QoS flow according to the configured association between the QoS flow and the characteristic information. Still taking FIG. 10B as an example, the base station receives the corresponding sub-packets from QoS flow 1 and QoS flow 2 respectively. Taking receiving a group of sub-packets through QoS flow 1 as an example, the base station may determine, according to the above configuration, that the sub-packets received through QoS flow 1 are sub-packets with characteristic information 1.
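  • The implicit indication by QoS flow can be pictured as a simple lookup configured on the base station. The sketch below assumes a dictionary keyed by the QoS flow identifier; the concrete identifiers and viewing angles mirror the example above, and the function name is hypothetical.

```python
# Configured association "QoS flow identifier -> characteristic information"
# (values taken from the example above; the structure itself is an assumption).
QFI_TO_FEATURE = {
    1: {"viewing_angle": "0-90 degrees"},     # QoS flow 1 carries the group for slice 1
    2: {"viewing_angle": "90-180 degrees"},   # QoS flow 2 carries the group for slice 2
}

def on_subpackets_received(qfi: int, subpackets: list):
    """Base-station side: the first data packet does not need to be parsed for
    feature information; the feature is implied by the QoS flow it arrived on."""
    return QFI_TO_FEATURE[qfi], subpackets

feature, pkts = on_subpackets_received(1, ["ip_pkt_0", "ip_pkt_1"])
print(feature)   # {'viewing_angle': '0-90 degrees'}
```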
  • a PDU session can contain at least one QoS flow, and multiple QoS flows in one session can be mapped to one radio bearer or to different radio bearers.
  • the UPF carries sub-packets of different feature information through different sessions, so as to implicitly indicate the feature information to the base station. That is, the first group of sub-data packets corresponding to the first feature information is carried in the first session, and the second group of sub-data packets corresponding to the second feature information is carried in the second session.
  • the characteristic information of the sub-packet and the identifier of the associated session also need to be configured to the base station. The specific configuration method will be given below.
  • the characteristic information of the group of sub-data packets transmitted on the session can be determined according to the association relationship between the session and the characteristic information.
  • the UPF carries a group of sub-data packets corresponding to characteristic information 1 through session 1, and carries a group of sub-data packets corresponding to characteristic information 2 through session 2.
  • the base station can determine that the characteristic information of the group of sub-packets received through session 1 is characteristic information 1 according to the configured association between sessions and characteristic information.
  • the terminal device sends a first message to the base station.
  • the base station receives the first message from the terminal device.
  • the first message may be physical layer signaling, radio resource control (radio resource control, RRC) signaling, PDCP signaling or MAC layer signaling.
  • the first message is used to request the first group of sub-data packets
  • the first message includes feature information of the first group of sub-data packets
  • the first group of sub-data packets is a group of sub-data packets among the above-mentioned one or more groups of sub-data packets (that is, in the first data packet).
  • the first message can be, for example, but not limited to, the following message:
  • an acknowledgement message (acknowledgement, ACK), a negative acknowledgement message (negative acknowledgement, NACK), a channel quality indication (channel quality indication, CQI), or a scheduling request (scheduling request, SR).
  • the first message includes an indication field.
  • the indication field of the first message is used to indicate feature information corresponding to the first group of sub-data packets.
  • the indication field of the first message is used to indicate feature information, which may be a direct indication or an indirect indication of the feature information.
  • the direct indication may be that the indication field of the first message includes feature information.
  • the terminal device sends an ACK to the base station, where the ACK carries an indication field, and the indication field includes characteristic information of the first group of sub-data packets.
  • the indirect indication may be that the indication field of the first message does not directly carry the feature information, but the first message is associated with the feature information.
  • the terminal device can infer the associated feature information according to the received first message.
  • the first message may be an SR.
  • the base station configures, for the terminal device, resources and the feature information associated with the resources. For example, resource 1 is associated with feature information 1, and resource 2 is associated with feature information 2. Subsequently, when the terminal device requests the sub-data packets associated with one or more pieces of feature information, it sends an SR to the base station through the corresponding resource.
  • the SR can carry a sequence, which is associated with the resource. The sequence is also associated with feature information.
  • sequence 1 is associated with resource 1 and feature information 1
  • sequence 2 is associated with resource 2 and feature information 2.
  • the SRs sent by the terminal equipment to the base station through different resources may include different sequences.
  • the SR sent to the base station through resource 1 carries sequence 1
  • the SR sent to the base station through resource 2 carries sequence 2. It can be seen that the indication field in the SR (or the field in the SR) does not explicitly carry feature information.
  • the base station can determine the sub-packet corresponding to the feature information requested by the terminal device according to the resource for receiving the SR. For example, if the SR is received from resource 1, the base station determines the sub-packet corresponding to the characteristic information 1 requested by the terminal equipment; if the SR is received from resource 2, the base station determines the sub-packet corresponding to the characteristic information 2 requested by the terminal equipment.
  • the SR is associated with feature information, that is, SRs sent through different resources are used to request sub-packets of different feature information.
  • the terminal device receives the scheduling request resource configuration information from the base station, where the resource configured for one scheduling request is used to request a group of sub-data packets corresponding to one piece of characteristic information.
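  • The association between SR resources, sequences and characteristic information can be sketched as a configured table used on both sides. The resource names, sequence names and lookup structure below are assumptions made for illustration; the only point is that the SR carries no explicit feature field, and the base station infers the requested feature from the resource on which the SR arrives.

```python
# Hypothetical configuration delivered by the base station to the terminal device.
SR_RESOURCE_TO_FEATURE = {
    "resource_1": {"sequence": "seq_1", "feature": {"viewing_angle": "0-90 degrees"}},
    "resource_2": {"sequence": "seq_2", "feature": {"viewing_angle": "90-180 degrees"}},
}

def terminal_pick_resource(feature_needed: dict) -> str:
    """Terminal side: choose the SR resource associated with the needed feature."""
    for resource, cfg in SR_RESOURCE_TO_FEATURE.items():
        if cfg["feature"] == feature_needed:
            return resource
    raise ValueError("no SR resource configured for this feature")

def base_station_on_sr(received_on_resource: str) -> dict:
    """Base-station side: infer the requested feature from the receiving resource."""
    return SR_RESOURCE_TO_FEATURE[received_on_resource]["feature"]

res = terminal_pick_resource({"viewing_angle": "0-90 degrees"})
print(res, base_station_on_sr(res))   # resource_1 {'viewing_angle': '0-90 degrees'}
```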
  • the PDCP layer control PDU signaling includes a PDCP protocol data unit (protocol data unit, PDU) header and a PDCP PDU payload, where a field in the PDCP PDU header indicates that the subsequent payload carries feature information.
  • the MAC signaling includes a MAC subheader and a MAC control element (MAC control element, MAC CE).
  • the logical channel identity (LCID) in the subheader is used to distinguish the type of the payload (a MAC CE is one type of payload).
  • a field in the MAC subheader carries the logical channel identifier, indicating that the MAC CE following the subheader carries characteristic information, and fields in the MAC CE carry the feature information.
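  • A minimal sketch of such MAC signalling is given below, assuming a one-byte subheader carrying an LCID and a one-byte MAC CE carrying a feature codepoint. The LCID value and the feature codepoints are invented for the example; real MAC formats are defined elsewhere and are not reproduced here.

```python
import struct

FEATURE_LCID = 0x30                    # hypothetical LCID announcing a "feature information" MAC CE
VIEW_0_90, VIEW_90_180 = 0x01, 0x02    # hypothetical codepoints for two viewing angles

def build_first_message(feature_code: int) -> bytes:
    """MAC subheader (LCID) followed by a MAC CE payload carrying the
    characteristic information of the requested sub-packets."""
    return struct.pack("B", FEATURE_LCID) + struct.pack("B", feature_code)

def parse_first_message(pdu: bytes) -> int:
    lcid, payload = pdu[0], pdu[1]
    assert lcid == FEATURE_LCID, "subheader does not announce feature information"
    return payload

msg = build_first_message(VIEW_0_90)
print(parse_first_message(msg) == VIEW_0_90)   # True
```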
  • the feature information of the first group of sub-data packets includes any one or more of the following: viewing angle information, the identifier of the first group of sub-data packets, the image type, the encoding type, the identifier of the frame corresponding to the first group of sub-data packets, and the type of the frame corresponding to the first group of sub-data packets.
  • the identifier of the first group of sub-data packets may be fragment or slice identifiers corresponding to the first group of sub-data packets.
  • the first message is used to request sub-packet group 1 (corresponding to fragment 1) in the multiple groups of sub-packets shown in FIG. 9, and the first message includes the characteristic information of sub-packet group 1, for example, one or more of the following feature information: {viewing angle information: 0-90 degrees; the sequence number of slice 1 corresponding to sub-packet group 1; the image type of the frame corresponding to sub-packet group 1: background image; the encoding type of slice 1 corresponding to sub-packet group 1: two-layer encoding; the sequence number, in the video sequence, of the frame corresponding to sub-packet group 1; the type of the frame corresponding to sub-packet group 1: I frame}.
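  • Taken together, the characteristic information carried by the first message can be viewed as a record in which every field is optional and any subset may be sent. The field names in the sketch below are illustrative labels for the items listed above, not a defined message format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeatureInfo:
    """Characteristic information the first message may carry for one group of sub-data packets."""
    viewing_angle: Optional[str] = None     # e.g. "0-90 degrees"
    group_id: Optional[int] = None          # fragment/slice identifier
    image_type: Optional[str] = None        # e.g. "background image"
    encoding_type: Optional[str] = None     # e.g. "two-layer encoding"
    frame_id: Optional[int] = None          # sequence number of the corresponding frame
    frame_type: Optional[str] = None        # "I", "P" (large or small) or "B"

request_for_group_1 = FeatureInfo(
    viewing_angle="0-90 degrees",
    group_id=1,
    image_type="background image",
    encoding_type="two-layer encoding",
    frame_id=1,
    frame_type="I",
)
print(request_for_group_1)
```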
  • the base station sends the first group of sub-data packets to the terminal device according to the first message and the first data packet.
  • the base station may process the first data packet through a protocol entity included in the base station to obtain the first group of sub-data packets, and send the first group of sub-data packets to the terminal device.
  • FIG. 13 shows the protocol entities included in the base station and the terminal device and the interaction between the base station and the terminal device.
  • the information can be exchanged between the radio link layer control (radio link control, RLC) protocol entity and the media access control (media access control, MAC) protocol entity through a logical channel (logical channel, LCH).
  • a packet data convergence protocol (packet data convergence protocol, PDCP) entity and a data radio bearer (data radio bearer, DRB) are in one-to-one correspondence.
  • the MAC protocol entity delivers the first group of sub-data packets to a physical (physical, PHY) layer protocol entity, and the PHY layer entity sends the first group of sub-data packets to the PHY layer entity of the terminal device. After receiving the first group of sub-data packets, the terminal device will then sequentially hand over the first group of sub-data packets to different layer protocol entities for processing.
  • the embodiments of the present application do not limit the number of protocol entities at each layer, the number of LCHs, the number of DRBs, and the like.
  • after the base station receives the first message from terminal device 1 (the first message is used to request the first group of sub-data packets, and the first message includes the characteristic information of the first group of sub-data packets), the base station parses the first message to obtain the characteristic information of the first group of sub-data packets included in the first message (for example, the viewing angle of 0-90 degrees).
  • the base station can then sequentially process, through the service data adaptation protocol (service data adaptation protocol, SDAP) entity, the PDCP entity, the RLC protocol entity, and the MAC protocol entity shown in FIG. 13, the first data packet (including one or more groups of sub-data packets) and the feature information of the one or more groups of sub-data packets, and find, in the first data packet, the first group of sub-data packets corresponding to the viewing angle of 0-90 degrees.
  • the protocol entity of the base station determines the first group of sub-data packets in the first data packet according to the association between the QoS flow identifier and the feature information.
  • the QoS flow identifier 1 is associated with the characteristic information 1 of the first group of sub-data packets
  • the QoS flow identifier 2 is associated with the characteristic information 2 of the second group of sub-data packets.
  • if the base station receives a group of sub-data packets on the QoS flow identified as 1, it determines that the group of sub-data packets is the first group of sub-data packets. Or, the base station determines the first group of sub-data packets in the first data packet according to the association relationship between the session identifier and the feature information. If the UPF explicitly indicates the characteristic information to the base station, the protocol entity of the base station may determine the first group of sub-data packets by parsing the first data packet.
  • after determining the first group of sub-data packets required by the terminal device, the base station sends the first group of sub-data packets to the terminal device.
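  • The matching step on the base station can be pictured as a lookup from characteristic information to stored groups of sub-packets. The in-memory dictionary and the key encoding below are assumptions for illustration; the essential behaviour is that only the requested group is scheduled, and that sub-packets whose feature information is not requested are simply not sent.

```python
# Hypothetical store on the base station: the first data packet has been split into
# groups of sub-packets, each keyed by its characteristic information.
stored_groups = {
    ("viewing_angle", "0-90 degrees"): ["sub_pkt_1", "sub_pkt_2"],     # sub-packet group 1 / slice 1
    ("viewing_angle", "90-180 degrees"): ["sub_pkt_3", "sub_pkt_4"],   # sub-packet group 2 / slice 2
}

def serve_first_message(requested_feature: tuple) -> list:
    """Return only the group matching the requested feature instead of the whole panoramic image."""
    return stored_groups.get(requested_feature, [])

print(serve_first_message(("viewing_angle", "0-90 degrees")))    # ['sub_pkt_1', 'sub_pkt_2']
print(serve_first_message(("viewing_angle", "270-360 degrees"))) # [] - nothing stored, nothing scheduled
```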
  • the base station may regard the multiple terminal devices as a multicast group, and multicast, to the terminal devices in the multicast group, the one or more groups of sub-packets corresponding to the feature information. For example, as shown in FIG. 14, it is assumed that layered encoding is performed on a panoramic image of an I frame to obtain the base layer code stream data and the enhancement layer code stream data. Taking the transmission of the enhancement layer code stream data by the base station as an example, it is assumed that UE1 and UE2 need the slices of view 1, and UE3 and UE4 need the slices of view 2.
  • the base station encapsulates the enhancement layer code stream data corresponding to all segments in the panoramic image into one or more groups of sub-data packets, and multicast-schedules the one or more groups of sub-data packets to UE1-UE4, that is, each UE receives the multiple sub-packets corresponding to slice 1-slice 5.
  • the base station performs multicast scheduling according to the channel condition of the UE farthest from the base station among UE1-UE4.
  • UE1 and UE2 both need the slices of view 1, that is, slice 1 and slice 2 as shown in FIG. 14. Then, the base station regards UE1 and UE2 as one multicast group, and may schedule slice 1 and slice 2 to UE1 and UE2 according to the channel conditions of the farther UE among UE1 and UE2. Similarly, UE3 and UE4 both need the slices of view 2, namely slice 2 and slice 3. Then, the base station regards UE3 and UE4 as another multicast group, and may multicast-schedule slice 2 and slice 3 to UE3 and UE4 according to the channel conditions of the farther UE among UE3 and UE4. Here, it is equivalent to re-dividing UE1-UE4 into groups and multicasting different data in different groups.
  • the base station does not need to schedule all the slices corresponding to the enhancement layer, namely slice 1-slice 5, which reduces the overhead of the air interface.
  • the base station does not need to schedule slice 1, slice 2, slice 3, and slice 4 according to the channel conditions of the farthest UE among UE1, UE2, UE3, and UE4, thereby further saving air interface overhead.
  • no user is interested in view 3, therefore, the base station does not have to schedule the slice of view 3. That is to say, the base station can determine the minimum granularity of the required scheduling data, that is, perceive the image required by the user, and take the image required by the user as the minimum scheduling granularity, thus further reducing the air interface overhead.
  • the amount of data received by the terminal device is reduced, thereby reducing the processing burden of the terminal device.
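  • The regrouping and the per-group worst-channel scheduling described above can be sketched as follows. The request sets, the channel-quality numbers and the "rate_limited_by" field are invented inputs for illustration; the sketch only shows that UEs requesting the same slices form one multicast group and that each group's transmission is dimensioned by the worst channel inside that group rather than in the whole cell.

```python
# Which slices each UE requested, and a per-UE channel quality (higher = better).
requests = {"UE1": {1, 2}, "UE2": {1, 2}, "UE3": {2, 3}, "UE4": {2, 3}}
channel_quality = {"UE1": 18, "UE2": 9, "UE3": 15, "UE4": 6}

def build_multicast_groups(requests: dict) -> dict:
    """UEs requesting the same set of slices form one multicast group."""
    groups = {}
    for ue, slices in requests.items():
        groups.setdefault(frozenset(slices), []).append(ue)
    return groups

def schedule(groups: dict, channel_quality: dict) -> list:
    """Each group is scheduled separately, dimensioned for the worst channel inside the group."""
    plan = []
    for slices, ues in groups.items():
        worst = min(channel_quality[ue] for ue in ues)
        plan.append({"slices": sorted(slices), "ues": ues, "rate_limited_by": worst})
    return plan

for entry in schedule(build_multicast_groups(requests), channel_quality):
    print(entry)
# Slices [1, 2] go to UE1/UE2 at a rate set by UE2; slices [2, 3] go to UE3/UE4 at a rate
# set by UE4; slices of a view nobody requested (e.g. view 3) are not scheduled at all.
```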
  • the group of terminal devices may be within the same beam coverage or within different beam coverages.
  • the base station multicast-schedules the sub-packets to the group of terminal devices according to the channel conditions of the farthest terminal device under the beam. For example, terminal equipment 1 to terminal equipment 6 all request sub-packet group 1 and sub-packet group 2, terminal equipment 1 to terminal equipment 6 are all within the coverage of beam 1, and terminal equipment 6 is the farthest from the base station in this group of terminal equipment; then, the base station sends sub-packet group 1 and sub-packet group 2 to terminal equipment 1 to terminal equipment 6 on beam 1 according to the channel conditions of terminal equipment 6.
  • the base station sends the sub-data packets according to the channel conditions of the farthest terminal devices within the coverage of each beam. For example, terminal equipment 1 - terminal equipment 6 all request sub-packet group 1 and sub-packet group 2, and terminal equipment 1 and terminal equipment 2 are both within the coverage of beam 1, and terminal equipment 5 and terminal equipment 4 are both within beam 2.
  • terminal equipment 3 and terminal equipment 6 are both within the coverage area of beam 3, among which, among terminal equipment 1 and terminal equipment 2, terminal equipment 2 is farther away from the base station, and among terminal equipment 4 and terminal equipment 5 , the terminal device 5 is farther from the base station, and in the terminal device 3 and the terminal device 6, the terminal device 6 is farther from the base station.
  • the base station sends sub-packet group 1 and sub-packet group 2 to terminal equipment 1 and terminal equipment 2 on beam 1 according to the channel conditions of terminal equipment 2, sends sub-packet group 1 and sub-packet group 2 to terminal equipment 4 and terminal equipment 5 on beam 2 according to the channel conditions of terminal equipment 5, and schedules sub-packet group 1 and sub-packet group 2 to terminal equipment 3 and terminal equipment 6 through beam 3 according to the channel conditions of terminal equipment 6.
  • the base station may acquire the sub-data packets of the required feature information from the I frame; otherwise, the sub-data packets of the required feature information are obtained from the P frame. The base station may not schedule sub-data packets whose feature information is not requested.
  • the feature information requested by the terminal device from the base station may be the same as or different from the feature information indicated by the UPF to the base station.
  • the terminal device requests the base station for the slice of view 1, and the UPF indicates to the base station the views of all the slices. Then, the base station can find the fragment corresponding to view 1 in all the fragments, and deliver the sub-data package corresponding to the fragment to the terminal device.
  • the terminal device requests the base station for slices of view 1, and the UPF indicates the identifiers of all slices to the base station. Then, the base station can determine the fragment required by the terminal device according to the corresponding relationship between the view 1 and the fragment identifier, and deliver the sub-data package corresponding to the fragment to the terminal device.
  • the corresponding relationship between the viewing angle and the fragment identifier is configured by the core network device (such as AMF, SMF, UPF, etc.) or the server or network management device to the UE. Of course, the corresponding relationship can also be set locally by the UE. As a possible implementation manner, the corresponding relationship between the viewing angle and the fragment identifier is configured by the core network device (such as AMF, SMF, UPF, etc.) or the server or network management device to the base station. Of course, the corresponding relationship can also be preset by the base station.
  • the terminal device requests the base station for a slice identifier (or slice identifier) corresponding to view 1, and the UPF indicates to the base station the viewpoints of all the slices.
  • the base station determines the data required by the terminal device from the data received by the UPF according to the segment identifier (or slice identifier) request information of the terminal device.
  • fragmentation is mainly used as an example to describe the technical solution, and "fragment" in the text can also be replaced by "slice".
  • the base station receives the first data packet (corresponding to the panoramic image) from the UPF, and schedules data at the minimum required granularity according to the requests of the terminal devices. Specifically, the base station can determine the characteristic information of the sub-data packets required by the terminal device according to the first message from the terminal device, and then determine the data to be sent to the terminal device, that is, send to the terminal device the sub-data packets required by the terminal device.
  • the base station no longer sends the panoramic image to the terminal device, but sends the sub-data packets corresponding to the part of the image required by the terminal device in the panoramic image to the terminal device.
  • the amount of data sent by the base station is reduced, thereby reducing the air interface resource consumption of the base station and increasing the network capacity.
  • compared with the case in which the terminal device requests the sub-data packets of the corresponding feature information from the server, which results in a longer delay, in the embodiments of the present application the terminal device requests the sub-data packets of the corresponding feature information from the base station, which is closer to the terminal device, so the delay is smaller and the communication efficiency can be improved.
  • the base station may further indicate the characteristic information of the sub-packets to the terminal device. Specifically, the feature information may be explicitly or implicitly indicated.
  • the explicit indication manner may be specifically implemented as: the base station sends the characteristic information of the first group of sub-data packets to the terminal device.
  • the base station sends the sub-packet group 1 shown in FIG. 9 to the terminal equipment, and sends the characteristic information of the sub-packet group 1 to the terminal equipment.
  • a field is added to the MAC subheader corresponding to the sub-data packet, or to the PDCP subheader, for carrying the feature information.
  • one MAC PDU includes multiple MAC sub-PDUs.
  • the MAC sub-PDU1 includes a MAC subheader 1 and a MAC service data unit (service data unit, SDU) 1
  • the MAC subheader 1 includes a logical channel identifier 1 and feature information 1, where the feature information 1 in the MAC subheader 1 indicates that the sub-packet in the MAC SDU1 has this feature.
  • for example, if the feature information 1 in the MAC subheader 1 is the viewing angle of 0-90 degrees, the sub-packet 1 in the MAC SDU1 is a sub-packet corresponding to the viewing angle of 0-90 degrees.
  • the feature information is carried in downlink control information (downlink control information, DCI) of the PDCCH, or in a field in the MAC CE, or in a load field after the PDCP subheader.
  • one MAC PDU includes multiple MAC sub-PDUs
  • one MAC sub-PDU in the multiple MAC sub-PDUs includes a MAC CE
  • the MAC CE includes feature information, which is used to indicate that the sub-packets in the MAC SDUs contained in all the following MAC sub-PDUs have this feature.
  • for example, MAC sub-PDU1 includes a MAC CE, and a certain field of the MAC CE carries feature information 1; then, as shown in FIG. 16, the sub-packet 1 carried by the MAC sub-PDU2 and the sub-packet 2 carried by the MAC sub-PDU3 both have the feature indicated by the feature information 1.
  • one MAC PDU contains multiple MAC sub-PDUs
  • the MAC CE of one MAC sub-PDU is used to indicate the feature information corresponding to the sub-data packets in each MAC sub-PDU in the MAC PDU.
  • the MAC CE includes a characteristic indication field, and each bit in the characteristic indication field is associated with a piece of characteristic information i, where a bit value of 1 indicates that the characteristic information i exists, that is, that there is a sub-packet corresponding to the characteristic information i, and a bit value of 0 indicates that the characteristic information i does not exist, that is, that there is no sub-packet corresponding to the characteristic information i.
  • the MAC CE also includes a "feature-associated MAC sub-PDU" field, which indicates the specific payload associated with a piece of feature information, that is, indicates the MAC sub-PDU that carries the sub-packet associated with, for example, the feature information 1.
  • the UE can determine the feature information of sub-data packets in different MAC SDUs according to the information in the MAC CE.
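  • One way to picture the MAC CE described above is as a bitmap plus, for each set bit, the list of MAC sub-PDUs carrying sub-packets with that feature. The layout, the field names and the index convention below are assumptions made for illustration, not a specified MAC CE format.

```python
# Hypothetical content of the MAC CE carried in one MAC sub-PDU of the MAC PDU.
mac_ce = {
    "feature_bitmap": 0b0000_0011,               # bit i set -> characteristic information i is present
    "feature_to_subpdus": {1: [2, 3], 2: [4]},   # "feature-associated MAC sub-PDU" fields
}

def features_of_subpdu(mac_ce: dict, subpdu_index: int) -> list:
    """UE side: recover which characteristic information applies to the sub-packet
    carried in the MAC SDU of a given MAC sub-PDU."""
    present = [i for i in range(1, 9) if (mac_ce["feature_bitmap"] >> (i - 1)) & 1]
    return [i for i in present if subpdu_index in mac_ce["feature_to_subpdus"].get(i, [])]

print(features_of_subpdu(mac_ce, 2))   # [1]  -> sub-packet in MAC sub-PDU 2 has feature information 1
print(features_of_subpdu(mac_ce, 4))   # [2]  -> sub-packet in MAC sub-PDU 4 has feature information 2
```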
  • the PHY protocol entity sends, to the terminal device, the sub-packet requested by the terminal device, and explicitly indicates the characteristic information of the sub-packet.
  • the explicit indication manner may be the indication manner corresponding to FIG. 15, FIG. 16, or FIG. 17.
  • sub-data packets mentioned in the embodiments of the present application carry characteristic information, which means that the characteristic information is sent to the terminal device together with the sub-data packets, not that the characteristic information is encapsulated in the sub-data packets.
  • the base station may send sub-data packets with different features through different logical channels, or may send sub-data packets with different features through the same logical channel.
  • when the base station sends sub-packets with different characteristics through the same logical channel, neither the UE nor the base station needs to manage multiple logical channels, and switching between logical channels to receive sub-packets with different characteristics is avoided.
  • similarly, when the base station sends sub-data packets with different characteristics through the same data radio bearer, neither the UE nor the base station needs to manage multiple data radio bearers, and switching between data radio bearers to receive sub-data packets with different characteristics is avoided.
  • the implicit indication manner can be specifically implemented as: the base station sends sub-data packets with different characteristic information to the terminal device through different LCHs.
  • the base station sends the first group of sub-data packets with the first feature information to the terminal device through the first logical channel.
  • the base station sends the second group of sub-data packets with the second feature information to the terminal device through the second logical channel.
  • the first logical channel is different from the second logical channel.
  • the first characteristic information is different from the second characteristic information.
  • the terminal device needs to be configured with the logical channel identifiers and the feature information corresponding to each logical channel identifier.
  • the base station sends second indication information to the terminal device, where the second indication information is used to indicate the characteristic information of one or more groups of sub-data packets and the logical channel identifiers associated with each of the characteristic information of one or more groups of sub-data packets.
  • the second indication information is used to indicate the characteristic information of the first group of sub-data packets and the identifier of the first logical channel associated with the characteristic information of the first group of sub-data packets, and to indicate the characteristic information of the second group of sub-data packets and the identifier of the second logical channel associated with the characteristic information of the second group of sub-data packets.
  • Table 3 shows another possible form of the second indication information.
  • the base station sends sub-data packets with different feature information through different LCHs.
  • the base station sends a group of sub-data packets corresponding to feature information 1 through LCH1, sends a group of sub-data packets corresponding to feature information 2 through LCH2, and sends a group of sub-data packets corresponding to feature information 3 through LCH3.
  • the manner in which the base station implicitly indicates the characteristic information of the sub-data packets to the terminal device can also be implemented as follows: the base station sends sub-packets of different characteristic information to the terminal device through different DRBs, so that when the terminal device receives a sub-data packet from a certain data radio bearer, it can determine the characteristic information corresponding to the sub-data packet.
  • the base station sends the first group of sub-data packets to the terminal device through the first data radio bearer, and sends the second group of sub-data packets to the terminal device through the second data radio bearer.
  • the base station needs to configure, for the terminal device, the characteristic information of one or more groups of sub-data packets and the identifier of the data radio bearer associated with each piece of the characteristic information.
  • for example, the base station sends second indication information to the terminal device, where the second indication information includes the feature information of one or more groups of sub-data packets and the identifiers of the data radio bearers associated with each piece of the feature information.
  • the second indication information is used to indicate the characteristic information of the first group of sub-data packets and the identifier of the first data radio bearer associated with the characteristic information of the first group of sub-data packets, and is used to indicate the characteristic information of the second group of sub-data packets and the identifier of the second data radio bearer associated with the characteristic information of the second group of sub-data packets.
  • Table 5 shows another possible form of the second indication information.
  • the base station sends sub-data packets with different feature information through different DRBs. For example, a group of sub-data packets corresponding to characteristic information 1 are sent through DRB1, a group of sub-data packets corresponding to characteristic information 2 are sent through DRB2, and a group of sub-data packets corresponding to characteristic information 3 are sent through DRB3.
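  • On the terminal side, the DRB-based (or, equivalently, LCH-based) implicit indication reduces to a configured lookup, sketched below. The bearer identifiers and feature values mirror the example above; the mapping structure and the function name are assumptions for illustration.

```python
# Hypothetical "second indication information" configured to the terminal device:
# each data radio bearer is associated with one piece of feature information.
DRB_TO_FEATURE = {
    "DRB1": {"viewing_angle": "0-90 degrees"},
    "DRB2": {"viewing_angle": "90-180 degrees"},
    "DRB3": {"viewing_angle": "180-270 degrees"},
}

def ue_on_subpacket(drb_id: str, subpacket: bytes):
    """No per-packet feature field is needed: the bearer a sub-packet arrives on
    implicitly tells the UE which feature the sub-packet has."""
    return DRB_TO_FEATURE[drb_id], subpacket

feature, _ = ue_on_subpacket("DRB2", b"...")
print(feature)   # {'viewing_angle': '90-180 degrees'}
```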
  • the UPF does not indicate the characteristic information of one or more groups of sub-data packets to the base station, but indicates to the base station the identification of the terminal equipment requesting one or more groups of sub-data packets.
  • the embodiment of the present application further provides a method for establishing a session.
  • the session establishment method is the basis of the technical solution corresponding to FIG. 6 .
  • the session establishment method includes:
  • the SMF sends a PDU session resource establishment request (PDU SESSION RESOURCE SETUP REQUEST) message to the base station.
  • the base station receives the PDU session resource establishment request message from the SMF.
  • the session resource establishment request message may be used to request the base station to establish a multicast PDU session.
  • the PDU session resource establishment request message includes the QoS parameters of the QoS flow.
  • the QoS parameters include but are not limited to the quality of service flow identity (QoS flow identity, QFI) and the QoS profile (QoS profile).
  • the QoS parameter further includes first indication information, where the first indication information is used to indicate the identifier of the first QoS flow associated with the characteristic information of the first group of sub-packets and the characteristic information of the first group of sub-packets; or, the first indication information It includes the identifier of the first session associated with the characteristic information of the first group of sub-data packets and the characteristic information of the first group of sub-data packets.
  • the SMF sends a session resource establishment request to the base station via the AMF.
  • the session resource establishment request message is used to request the base station to establish a PDU session. That is, a PDU session is established for the current multicast service.
  • the session contains one or more QoS streams.
  • the SMF may indicate to the base station, through the first indication information, the feature information respectively associated with the multiple QoS flows. In this way, when establishing the multiple QoS flows, the base station can learn the feature information associated with each QoS flow.
  • the base station can learn the feature information of the sub-data packets received on different QoS flows.
  • for example, the first indication information may take the following form: QoS flow identifier 1 is associated with the feature information "viewing angle 0-90 degrees", and QoS flow identifier 2 is associated with the feature information "viewing angle 90-180 degrees".
  • Table 7 shows another possible form of the first indication information.
  • the session resource establishment request message is used to request the base station to establish multiple PDU sessions.
  • the SMF may indicate to the base station the feature information respectively associated with the multiple sessions through the first indication information.
  • the base station can learn to receive sub-data packets corresponding to the specific feature information on a specific session. For example, the base station can learn that the sub-packet corresponding to the characteristic information 2 needs to be received on the session 2.
  • Table 9 shows another possible form of the first indication information.
  • the base station sends DRB parameters to the terminal device according to the PDU session resource establishment request message.
  • the base station configures DRB parameters according to the QoS parameters in the PDU session resource establishment request message, and sends an RRC reconfiguration (RRC RECONFIGURATION) message to the terminal device.
  • the RRC reconfiguration message includes (or carries) the DRB parameters (that is, the second indication information).
  • the DRB parameters include PDCP and SDAP configurations.
  • the DRB parameters include the correspondence between the multicast service identifier, the logical channel identifier, the multicast session identifier, and the group radio network temporary identity (Group radio network temporary identity, G-RNTI) used for multicast scheduling.
  • the DRB parameter further includes one or more logical channel identifiers and feature information associated with each of the one or more logical channel identifiers. That is, the second indication information in the above embodiment may be a DRB parameter. In this way, the base station delivers sub-packets of different feature information through different logical channels.
  • the DRB parameter further includes one or more DRB identifiers and feature information associated with each of the one or more DRB identifiers. That is, the second indication information in the above embodiment may be a DRB parameter. In this way, the base station delivers sub-packets of different feature information through different DRBs.
  • the terminal device sends an RRC reconfiguration complete (RRC RECONFIGURATION COMPLETE) message to the base station.
  • the terminal device performs configuration according to the DRB parameters, and sends an RRC reconfiguration complete message to the base station.
  • the base station sends a PDU session resource establishment response (PDU SESSION RESOURCE SETUP RESPONSE) message to the SMF.
  • the base station sends a PDU session resource establishment response to the SMF through the AMF.
  • after the multicast session is established, the UPF can transmit data to the terminal device through the multicast session.
  • the SMF may further configure one or more feature information and a QoS flow identifier corresponding to each feature information to the UPF.
  • the SMF sends a packet detection rule (PDR) to the UPF.
  • the PDR is used to indicate one or more feature information and a QoS stream identifier corresponding to each feature information.
  • UPF filters the data of the corresponding feature into the corresponding QoS flow according to the packet filtering rules of the feature information.
  • the SMF may configure the UPF with one or more feature information and a session identifier corresponding to each feature information.
  • the SMF sends a forwarding action rule (FAR) to the UPF, where the FAR is used to indicate one or more feature information and a session identifier corresponding to each feature information.
  • UPF filters the data of the corresponding feature into the corresponding session according to the forwarding action rules of the feature information.
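  • The UPF-side behaviour configured by the PDR or FAR can be pictured as a classification step, sketched below. The rule tables and the (feature, payload) representation of sub-packets are assumptions made for illustration; they only show that the UPF places sub-packets of a given feature onto the associated QoS flow or session.

```python
# Hypothetical rules pushed by the SMF: a PDR-like mapping feature -> QoS flow identifier,
# and a FAR-like mapping feature -> session identifier.
PDR = {"0-90 degrees": 1, "90-180 degrees": 2}
FAR = {"0-90 degrees": "session_1", "90-180 degrees": "session_2"}

def classify_per_qos_flow(subpackets: list) -> dict:
    """subpackets: list of (feature, payload). Returns payloads bucketed per QoS flow."""
    per_flow = {}
    for feature, payload in subpackets:
        per_flow.setdefault(PDR[feature], []).append(payload)
    return per_flow

def classify_per_session(subpackets: list) -> dict:
    """Same idea for session-based implicit indication."""
    per_session = {}
    for feature, payload in subpackets:
        per_session.setdefault(FAR[feature], []).append(payload)
    return per_session

stream = [("0-90 degrees", "pkt_a"), ("90-180 degrees", "pkt_b"), ("0-90 degrees", "pkt_c")]
print(classify_per_qos_flow(stream))   # {1: ['pkt_a', 'pkt_c'], 2: ['pkt_b']}
print(classify_per_session(stream))    # {'session_1': ['pkt_a', 'pkt_c'], 'session_2': ['pkt_b']}
```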
  • the embodiment of the present application also provides a communication method, which can reduce the processing burden of the terminal.
  • the method needs to first configure multiple logical channel identifiers and feature information associated with each of the multiple logical channel identifiers to the terminal device.
  • the specific configuration has been described in the above implementation.
  • the communication method includes:
  • the UPF sends a first data packet to a base station.
  • correspondingly, the base station receives the first data packet from the first core network device, where the first data packet includes one or more groups of sub-data packets, and the one or more groups of sub-data packets respectively have corresponding feature information.
  • the base station sends the first group of sub-data packets to the terminal device through the first logical channel, and sends the second group of sub-data packets to the terminal device through the second logical channel.
  • the first group of sub-data packets corresponds to the first feature information.
  • the second group of sub-data packets corresponds to the second feature information.
  • the first characteristic information is different from the second characteristic information.
  • the first logical channel is different from the second logical channel.
  • the terminal device receives the required first group of sub-data packets from the base station through the first logical channel according to the multiple logical channel identifiers and the respective associated feature information of the multiple logical channel identifiers.
  • the terminal device receives the sub-data packet of the view on the logical channel associated with the view according to the required view.
  • in this way, the terminal device does not need to send, to the base station, the first message for requesting the first group of sub-data packets, and it can flexibly decide the receiving configuration (such as the LCH used for receiving) according to the current needs, so as to avoid receiving redundant data from the base station, thereby reducing the data receiving and processing complexity and the processing delay of the terminal device and reducing the energy consumption of the terminal device. At the same time, wasting cache resources can also be avoided.
  • the embodiment of the present application also provides a communication method, which can reduce the processing burden of the terminal.
  • a terminal device needs to be configured with multiple DRB identifiers and feature information associated with each of the multiple DRB identifiers.
  • the specific configuration has been described in the above implementation.
  • the communication method includes:
  • the UPF sends a first data packet to a base station.
  • correspondingly, the base station receives the first data packet from the first core network device, where the first data packet includes one or more groups of sub-data packets, and the one or more groups of sub-data packets respectively have corresponding feature information.
  • the base station sends the first group of sub-data packets to the terminal device through the first DRB, and sends the second group of sub-data packets to the terminal device through the second DRB.
  • the first group of sub-data packets corresponds to the first feature information.
  • the second group of sub-data packets corresponds to the second feature information.
  • the first characteristic information is different from the second characteristic information.
  • the first DRB is different from the second DRB.
  • the terminal device receives the required first group of sub-data packets from the base station through the first DRB according to the multiple DRB identifiers and the respective associated feature information of the multiple DRB identifiers.
  • step S101 may be performed first, and then step S103 may be performed.
  • step S103 may be performed first, and then step S101 may be performed.
  • the above-mentioned terminal, session management network element or network device includes corresponding hardware structures and/or software modules for executing each function.
  • the present application can be implemented in hardware or a combination of hardware and computer software with the units and algorithm steps of each example described in conjunction with the embodiments disclosed herein. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of this application.
  • the terminal device, the access network device, the first core network device, or the second core network device may be divided into functional modules according to the foregoing method examples.
  • each functional module may be obtained through division corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, and can also be implemented in the form of software function modules. It should be noted that, the division of modules in the embodiments of the present application is schematic, and is only a logical function division, and there may be other division manners in actual implementation.
  • FIG. 21 shows a schematic structural diagram of an apparatus 90 .
  • the apparatus 90 may be the access network device in the foregoing embodiment, or may be a component supporting the functions of the access network device in the foregoing embodiment, such as a chip or circuit in the access network device.
  • the apparatus 90 may be the terminal device in the foregoing embodiment, or may be a component supporting the function of the terminal device in the foregoing embodiment, such as a chip or circuit in the terminal device.
  • the apparatus 90 may be the UPF network element in the foregoing embodiment, or may be a component supporting the UPF function in the foregoing embodiment, such as a chip or circuit in the UPF.
  • the apparatus 90 may be the SMF in the foregoing embodiment, or may be a component supporting the SMF function in the foregoing embodiment, such as a chip or circuit in the SMF. This embodiment of the present application does not specifically limit this.
  • the apparatus 90 includes: a transceiver module 901 .
  • the transceiver module 901 is configured to: receive a first data packet from a first core network device, where the first data packet includes one or more groups of sub-data packets; receive a first message from a terminal device, where the first message is used to request a first group of sub-data packets, the first message includes feature information of the first group of sub-data packets, and the first group of sub-data packets is a group of sub-data packets in the one or more groups of sub-data packets; and send the first group of sub-data packets to the terminal device according to the first message and the first data packet.
  • the apparatus 90 further includes a processing module 903 for controlling the actions of the apparatus 90 .
  • a processing module 903 for controlling the actions of the apparatus 90 .
  • the apparatus further includes a storage module 902 for storing data or instructions of the apparatus 90 .
  • the first data packet is stored.
  • the apparatus 90 includes: a transceiver module 901 and a processing module 903 .
  • the processing module 903 is used to determine the first data packet, the first data packet includes one or more groups of sub-data packets and the characteristic information of one or more groups of sub-data packets; the transceiver module 901 is used to send to the access network device first packet.
  • the apparatus further includes a storage module 902 for storing data or instructions of the apparatus 90 .
  • the first data packet is stored.
  • the apparatus 90 includes: a transceiver module 901 and a processing module 903 .
  • the processing module 903 is configured to determine first indication information, where the first indication information is used to indicate the characteristic information of the first group of sub-data packets and the identifier of the first QoS flow (or the first session) associated with the characteristic information; the transceiver module 901 is configured to send the first indication information to the access network device.
  • the apparatus further includes a storage module 902 for storing data or instructions of the apparatus 90 .
  • the first indication information is stored.
  • the apparatus 90 includes: a transceiver module 901 .
  • the transceiver module 901 is configured to: send a first message to an access network device, where the first message is used to request a first group of sub-data packets, and the first message includes feature information of the first group of sub-data packets; and receive the first group of sub-data packets from the access network device.
  • the apparatus 90 further includes a processing module 903 for controlling the actions of the apparatus 90 .
  • a processing module 903 for controlling the actions of the apparatus 90 .
  • the apparatus further includes a storage module 902 for storing data or instructions of the apparatus 90 .
  • the apparatus 90 is presented in the form of dividing each functional module in an integrated manner.
  • a module herein may refer to an application-specific integrated circuit (ASIC), a circuit, a processor and a memory executing one or more software or firmware programs, an integrated logic circuit, and/or another device that can provide the above-described functions.
  • the apparatus 90 may take the form shown in FIG. 5 .
  • the processor 401 and/or the processor 408 in FIG. 5 may invoke the computer execution instructions stored in the memory 403 to cause the apparatus 90 to execute the communication method in the above method embodiment.
  • the function/implementation process of the transceiver module 901 may be implemented by the communication interface 404 in FIG. 5
  • the function/implementation process of the storage module 902 may be implemented by the memory 403 in FIG. 5
  • the function/implementation process of the processing module 903 may be implemented by the processor 401 and/or the processor 408 in FIG. 5 .
  • the memory 403 may be a storage unit in the chip or the circuit, such as a register, a cache, and the like.
  • the memory 403 may be a storage unit located outside the chip in the device, which is not specifically limited in this embodiment of the present application.
  • an embodiment of the present application further provides a chip system, where the chip system includes a processor for supporting a communication device to implement the above communication method.
  • the system-on-a-chip also includes memory.
  • the memory is used for storing necessary program instructions and data of the communication device.
  • the memory may not be in the system-on-chip.
  • the chip system may be composed of chips, or may include chips and other discrete devices, which are not specifically limited in this embodiment of the present application.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable device.
  • Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium; for example, the computer instructions may be transmitted from a website, computer, server, or data center to another website, computer, server, or data center in a wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) manner.
  • The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or a data center, integrating one or more available media. The available media may be magnetic media (for example, floppy disks, hard disks, or magnetic tapes), optical media (for example, DVDs), or semiconductor media (for example, solid state disks (SSDs)), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

Embodiments of the present application relate to a communication method and apparatus, which can reduce air interface overhead. The method is applied to an access network device or to a chip supporting the access network device. The method comprises: receiving a first data packet from a first core network device; receiving a first message from a terminal device; and, according to the first message and the first data packet, sending a first group of sub-data packets to the terminal device. The first data packet comprises one or more groups of sub-data packets. The first message is used to request the first group of sub-data packets, the first message comprises characteristic information of the first group of sub-data packets, and the first group of sub-data packets is one group of sub-data packets among the one or more groups of sub-data packets.
PCT/CN2020/136870 2020-12-16 2020-12-16 Procédé et appareil de communication WO2022126437A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2020/136870 WO2022126437A1 (fr) 2020-12-16 2020-12-16 Procédé et appareil de communication
CN202080106507.9A CN116458239A (zh) 2020-12-16 2020-12-16 通信方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/136870 WO2022126437A1 (fr) 2020-12-16 2020-12-16 Procédé et appareil de communication

Publications (1)

Publication Number Publication Date
WO2022126437A1 (fr) 2022-06-23

Family

ID=82059909

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/136870 WO2022126437A1 (fr) 2020-12-16 2020-12-16 Procédé et appareil de communication

Country Status (2)

Country Link
CN (1) CN116458239A (fr)
WO (1) WO2022126437A1 (fr)

Also Published As

Publication number Publication date
CN116458239A (zh) 2023-07-18

Similar Documents

Publication Publication Date Title
US20230275698A1 (en) Method and communications apparatus for transmitting data packets of a media stream
WO2022022471A1 (fr) Procédé et appareil de commutation de flux multimédia
CN111669835A (zh) 通信的方法、装置及系统
US20230050923A1 (en) Media packet transmission method, apparatus, and system
WO2023046118A1 (fr) Procédé et appareil de communication
WO2022126437A1 (fr) Procédé et appareil de communication
WO2022206016A1 (fr) Procédé, appareil et système de transport par stratification de données
US20220330074A1 (en) Communication method and apparatus
WO2022151492A1 (fr) Procédé et appareil de transmission avec planification
WO2022188143A1 (fr) Procédé et appareil de transmission de données
US11516702B2 (en) Methods, systems and devices for determining buffer status report
CN115250537A (zh) 一种通信方法及设备
CN115250506A (zh) 一种通信方法及设备
WO2024060991A1 (fr) Procédé et appareil de guidage de flux de données pour trajets multiples
WO2022122123A1 (fr) Procédé et appareil destinés à être utilisés dans une opération d'extraction de données
WO2022213848A1 (fr) Procédé et dispositif de communication
WO2023088155A1 (fr) Procédé et appareil de gestion de qualité de service (qos)
WO2022213836A1 (fr) Procédé de communication et dispositif
WO2023185608A1 (fr) Procédé de transmission de données et appareil de communication
WO2021212999A1 (fr) Procédé, appareil et système de transmission de paquets multimédias
WO2024092725A1 (fr) Dispositif et procédé de mappage de données
WO2023246752A1 (fr) Procédé de communication et appareil de communication
WO2023155633A1 (fr) Procédé et appareil de transmission de données
WO2022188634A1 (fr) Procédé et appareil de communication
WO2023109431A1 (fr) Procédé et appareil de transmission de données

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 20965455

Country of ref document: EP

Kind code of ref document: A1

WWE WIPO information: entry into national phase

Ref document number: 202080106507.9

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 20965455

Country of ref document: EP

Kind code of ref document: A1