WO2021218593A1 - Communication method and apparatus - Google Patents

Communication method and apparatus

Info

Publication number
WO2021218593A1
Authority
WO
WIPO (PCT)
Prior art keywords
data packet
video frame
layer data
base layer
packet
Prior art date
Application number
PCT/CN2021/086210
Other languages
English (en)
Chinese (zh)
Inventor
黄曲芳
吴可镝
马景旺
魏岳军
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2021218593A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00 Network traffic management; Network resource management
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/06 Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • H04W 28/065 Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information using assembly or disassembly of packets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/30 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability

Definitions

  • This application relates to the field of communication technology, and in particular to a communication method and device.
  • 5G communication systems are gradually being extended to video services with strong real-time requirements and large data volumes, such as large-scale live game streaming and remote surgery.
  • to support such services, video coding introduces a layered coding method.
  • the output video data packets are divided into two paths: base layer data packets and enhancement layer data packets. If a base layer data packet is lost, the picture at the receiver is significantly degraded; if an enhancement layer data packet is lost, the impact on the picture is small.
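  • As an illustrative sketch only (the packet model, field names, and sizes below are assumptions for exposition, not part of the application), a layered video stream can be represented as packets tagged with a frame number and a layer:

```python
from dataclasses import dataclass

# Hypothetical model of a layered-coding output: each packet belongs to a
# video frame and carries either base layer or enhancement layer data.
@dataclass
class VideoPacket:
    frame_id: int
    layer: str  # "base" or "enh"
    size: int   # payload size in bytes

# One encoded frame split into the two paths described above.
frame = [
    VideoPacket(frame_id=1, layer="base", size=1200),
    VideoPacket(frame_id=1, layer="base", size=1200),
    VideoPacket(frame_id=1, layer="enh", size=800),
]

base = [p for p in frame if p.layer == "base"]
enh = [p for p in frame if p.layer == "enh"]
```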
  • the present application provides a communication method and device, which help avoid producing multiple small-sized transmission blocks when grouping packets, thereby improving air interface transmission efficiency.
  • the embodiments of the present application provide a communication method, which may be applied to a network device, a terminal device, or a UPF network element, or to a chip inside such a device. Take the method applied to a network device as an example.
  • the network device receives a first video frame, where the first video frame includes at least one base layer data packet and at least one enhancement layer data packet; further, the network device can determine the last base layer data packet in the first video frame.
  • because the network device determines the last base layer data packet in the first video frame, it can assemble a transmission block after receiving that packet, so that all the data packets in the first video frame are grouped into one transmission block. This helps avoid producing multiple small-sized transmission blocks, thereby improving the transmission efficiency of the air interface.
  • the method further includes: grouping M base layer data packets in the first video frame to obtain a first transmission block, and sending the first transmission block; where the M base layer data packets include the last base layer data packet in the first video frame, and M is a positive integer.
  • the M base layer data packets may be all the base layer data packets in the first video frame.
  • the first video frame further includes N base layer data packets, the N base layer data packets are transmitted before the M base layer data packets, and N is a positive integer; the method further includes: after determining that the data size of the N base layer data packets is greater than or equal to the size of a transmission block, grouping the N base layer data packets to obtain at least one transmission block, and sending the at least one transmission block.
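  • The grouping rule above can be sketched as follows (a simplified illustration; the list-based packet model, sizes, and function name are assumptions, not from the application): base layer packets are buffered until either the buffered size reaches the transmission block size (the N-packet case) or the frame's last base layer packet arrives (the M-packet case).

```python
def group_into_blocks(packet_sizes, tb_size, last_index):
    """Group one frame's base layer packets into transmission blocks.

    packet_sizes: sizes of base layer packets in arrival order.
    tb_size: target transmission block size.
    last_index: index of the frame's last base layer packet.
    Returns a list of blocks, each a list of packet indices.
    """
    blocks, current, current_size = [], [], 0
    for i, size in enumerate(packet_sizes):
        current.append(i)
        current_size += size
        # Flush when the buffer fills a transmission block, or when the
        # frame's last base layer packet has arrived, so the remaining
        # packets go out together rather than as many tiny blocks.
        if current_size >= tb_size or i == last_index:
            blocks.append(current)
            current, current_size = [], 0
    return blocks

# Five base layer packets; the first three fill one block, and the rest
# are flushed as soon as the frame's last packet (index 4) arrives.
blocks = group_into_blocks([400, 400, 400, 300, 300], tb_size=1200, last_index=4)
```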
  • determining the last base layer data packet in the first video frame includes: receiving first indication information, where the first indication information is used to indicate that the transmission mode of the data packets in the first video frame is a first transmission mode, the first transmission mode being: shaping the transmission order of the base layer data packets and the enhancement layer data packets in the first video frame, so that the base layer data packets of the video frame are transmitted first, followed by the enhancement layer data packets; further, the last base layer data packet transmitted before the first enhancement layer data packet can be determined to be the last base layer data packet in the first video frame.
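  • Under the first transmission mode, the receiver can recognize the frame's last base layer packet as soon as the first enhancement layer packet appears. A minimal sketch (the layer tags are a hypothetical representation):

```python
def last_base_before_first_enh(layers):
    """Given per-packet layer tags of a shaped frame (all "base" packets
    first, then "enh" packets), return the index of the last base layer
    packet, i.e. the packet just before the first enhancement packet."""
    for i, layer in enumerate(layers):
        if layer == "enh":
            return i - 1
    # No enhancement packets seen: the whole frame is base layer.
    return len(layers) - 1

idx = last_base_before_first_enh(["base", "base", "base", "enh", "enh"])
```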
  • or, determining the last base layer data packet in the first video frame includes: receiving at least one control packet; further, the last base layer data packet transmitted before the at least one control packet can be determined to be the last base layer data packet of the first video frame.
  • the at least one control packet includes at least one of the following: a first control packet, a second control packet, and a third control packet. The first control packet is transmitted after, and adjacent to, the last base layer data packet of the first video frame; the second control packet is transmitted after, and adjacent to, the last enhancement layer data packet of the first video frame; the third control packet is transmitted before, and adjacent to, the first base layer data packet of the video frame following the first video frame.
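  • The control-packet variant can be sketched as a scan over the received sequence (the tuple representation and field names are assumptions, not from the application): the last base layer data packet seen before a control packet marks the boundary.

```python
def last_base_before_control(packets):
    """packets: (kind, layer) tuples in transmission order, where kind is
    "data" or "control" and layer is "base", "enh", or None.
    Returns the index of the last base layer data packet transmitted
    before the first control packet, or None if there is none."""
    last_base = None
    for i, (kind, layer) in enumerate(packets):
        if kind == "control":
            # Control packet reached: the most recent base layer packet
            # is the frame's last base layer packet.
            return last_base
        if kind == "data" and layer == "base":
            last_base = i
    return None

# The first control packet is sent right after the frame's last base packet.
seq = [("data", "base"), ("data", "base"), ("control", None), ("data", "enh")]
idx = last_base_before_control(seq)
```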
  • or, determining the last base layer data packet in the first video frame includes: determining that a first data packet carrying second indication information is the last base layer data packet of the first video frame, where the second indication information is used to indicate that the first data packet is the last base layer data packet of the first video frame; or determining that the last base layer data packet transmitted before a second data packet carrying third indication information is the last base layer data packet of the first video frame, where the third indication information is used to indicate that the second data packet is the last enhancement layer data packet of the first video frame; or determining that the last base layer data packet transmitted before a third data packet carrying fourth indication information is the last base layer data packet of the first video frame, where the fourth indication information is used to indicate that the third data packet is the first base layer data packet of the video frame following the first video frame.
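  • The in-band marker variant can be sketched as follows (the dictionary layout and flag names are a hypothetical encoding of the second, third, and fourth indication information): a flag either marks the last base layer packet itself, or marks a later packet, in which case the most recently seen base layer packet is the answer.

```python
def find_last_base(packets):
    """packets: dicts in transmission order with keys "layer"
    ("base"/"enh") and "flag" (None, "last_base", "last_enh", or
    "first_base_next").
    Returns the index of the current frame's last base layer packet."""
    last_base = None
    for i, p in enumerate(packets):
        if p["flag"] in ("last_enh", "first_base_next"):
            # Boundary flag carried on a later packet: the answer is the
            # most recently transmitted base layer packet.
            return last_base
        if p["layer"] == "base":
            last_base = i
        if p["flag"] == "last_base":
            # Flag carried on the last base layer packet itself.
            return i
    return last_base

pkts = [
    {"layer": "base", "flag": None},
    {"layer": "base", "flag": "last_base"},
    {"layer": "enh", "flag": None},
]
idx = find_last_base(pkts)
```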
  • or, determining the last base layer data packet in the first video frame includes: determining that the last base layer data packet received within a first time period is the last base layer data packet of the first video frame, where the start time of the first time period is the time when the first base layer data packet in the first video frame is received.
  • the method further includes: obtaining the duration of the first time period from the application server, the session management function (SMF) network element, or the access and mobility management function (AMF) network element.
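  • The timer-based variant can be sketched as follows (timestamps and the configured duration are illustrative numbers, not a real clock): a window starts at the frame's first base layer packet, and the last base layer packet received inside the window is taken as the frame's last base layer packet.

```python
def last_base_in_window(arrivals, window):
    """arrivals: (timestamp, layer) tuples in arrival order.
    window: duration of the first time period (in the application this is
    obtained from the application server, SMF, or AMF; here a plain number).
    Returns the timestamp of the last base layer packet received within
    the window that starts at the first base layer packet's arrival."""
    start = None
    last = None
    for t, layer in arrivals:
        if layer != "base":
            continue
        if start is None:
            start = t  # timer starts at the frame's first base layer packet
        if t - start <= window:
            last = t
    return last

# The packet at t=9.0 falls outside the 5.0 window and is not counted.
ts = last_base_in_window([(0.0, "base"), (2.0, "base"), (9.0, "base")], window=5.0)
```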
  • the method further includes: sending the enhancement layer data packets of the first video frame before a first time, and determining not to send the enhancement layer data packets of the first video frame after the first time.
  • the first time is any one of the following times: the time when the first base layer data packet in a second video frame after the first video frame is received; the time when the last base layer data packet in the second video frame is received; the time when the first enhancement layer data packet in the second video frame is received; the time when the last enhancement layer data packet in the second video frame is received.
  • or, the first time is the end time of a second time period, where the start time of the second time period is any one of the following times: the time when the first base layer data packet in the first video frame is received; the time when the last base layer data packet in the first video frame is received; the time when the first enhancement layer data packet in the first video frame is received; the time when the last enhancement layer data packet in the first video frame is received; the time when the first base layer data packet in the second video frame after the first video frame is received; the time when the last base layer data packet in the second video frame is received.
  • the method further includes: obtaining the duration of the second time period from the application server, the SMF network element, or the AMF network element.
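  • The deadline rule above can be sketched as a simple partition (times and the function name are hypothetical): enhancement layer packets that are ready before the first time are sent, and the rest are dropped, since a stale enhancement layer packet no longer improves the already-displayed frame.

```python
def packets_to_send(enh_ready_times, first_time):
    """Split one frame's enhancement layer packets by the first time:
    packets ready before it are transmitted, later ones are dropped.
    Times are illustrative numbers, not a real clock."""
    sent = [t for t in enh_ready_times if t < first_time]
    dropped = [t for t in enh_ready_times if t >= first_time]
    return sent, dropped

# first_time here stands for, e.g., the arrival of the second video
# frame's first base layer data packet.
sent, dropped = packets_to_send([1.0, 2.0, 6.0], first_time=5.0)
```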
  • the embodiments of the present application provide a communication method, which may be applicable to network equipment or terminal equipment or UPF network element, or may also be applicable to the chip inside the network equipment or terminal equipment or UPF network element.
  • the network device receives indication information, which is used to indicate the transmission mode of the data stream of a first service; further, the network device can determine, according to the indication information, the transmission mode of the data stream corresponding to the first service, where the transmission mode of the data stream of the first service is either a first transmission mode or a second transmission mode;
  • the first transmission mode is: for each video frame of the first service, shaping the transmission order of the base layer data packets and the enhancement layer data packets in the video frame, so that the base layer data packets of the video frame are transmitted first, followed by the enhancement layer data packets;
  • the second transmission mode is: for each video frame of the first service, not shaping the transmission order of the base layer data packets and the enhancement layer data packets in the video frame.
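  • The shaping step of the first transmission mode can be sketched as a stable reorder within each frame (the tuple-based packet model is an assumption): base layer packets are moved ahead of enhancement layer packets while preserving the relative order within each layer.

```python
def shape_frame(packets):
    """Reorder one video frame's packets so that base layer packets are
    transmitted first, then enhancement layer packets, keeping the
    relative order within each layer (first transmission mode)."""
    base = [p for p in packets if p[0] == "base"]
    enh = [p for p in packets if p[0] == "enh"]
    return base + enh

# Interleaved encoder output becomes base-first on the wire.
frame = [("base", 0), ("enh", 1), ("base", 2), ("enh", 3)]
shaped = shape_frame(frame)
```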
  • when the transmission mode of the data stream of the first service is the first transmission mode, the method further includes: receiving at least one base layer data packet of a first video frame of the first service, and determining that the last base layer data packet transmitted before the first enhancement layer data packet is the last base layer data packet in the first video frame.
  • the embodiments of the present application provide a communication method, which may be applied to a network device, a terminal device, or a UPF network element, or to a chip inside such a device. Take the method applied to a network device as an example.
  • the network device receives a control packet and determines that the last data packet transmitted before the control packet is a data packet of a first video frame, and that the first data packet transmitted after the control packet is a data packet of the video frame following the first video frame.
  • the control packet includes a first control packet and/or a second control packet, where the first control packet is transmitted after, and adjacent to, the last data packet of the first video frame; or the second control packet is transmitted before, and adjacent to, the first data packet of the video frame following the first video frame.
  • the embodiments of the present application provide a communication method, which may be applicable to network equipment or terminal equipment or UPF network element, or may also be applied to the chip inside the network equipment or terminal equipment or UPF network element.
  • the network device receives a first data packet carrying first indication information, and determines either that the first data packet is a data packet of a first video frame and that the data packet transmitted after it is a data packet of the video frame following the first video frame; or that the last data packet transmitted before the first data packet is a data packet of the first video frame and that the first data packet is a data packet of the video frame following the first video frame.
  • the method further includes: receiving a second data packet carrying second indication information, and determining that the second data packet is the last base layer data packet of the first video frame.
  • the embodiments of the present application provide a communication method, which may be applicable to network equipment or terminal equipment, or may also be applicable to a chip inside the network equipment or terminal equipment.
  • the network device receives a first video frame, where the first video frame includes at least one enhancement layer data packet; the network device sends the enhancement layer data packets of the first video frame before a first time, and determines not to send the enhancement layer data packets of the first video frame after the first time.
  • the first time is any one of the following times: the time when the first base layer data packet in a second video frame after the first video frame is received; the time when the last base layer data packet in the second video frame is received; the time when the first enhancement layer data packet in the second video frame is received; the time when the last enhancement layer data packet in the second video frame is received.
  • or, the first time is the end time of a second time period, where the start time of the second time period is any one of the following times: the time when the first base layer data packet in the first video frame is received; the time when the last base layer data packet in the first video frame is received.
  • the method further includes: obtaining the duration of the second time period from an application server or an SMF network element or an AMF network element.
  • an embodiment of the present application provides a communication method, which may be applicable to an application server, or may also be applicable to a chip inside the application server.
  • the application server sends indication information to a core network device, where the indication information is used to indicate the transmission mode of the data stream of a first service; and the application server transmits the data packets of the first service according to that transmission mode, where the transmission mode of the data stream of the first service is either a first transmission mode or a second transmission mode;
  • the first transmission mode is: for each video frame of the first service, shaping the transmission order of the base layer data packets and the enhancement layer data packets in the video frame, so that the base layer data packets of the video frame are transmitted first, followed by the enhancement layer data packets;
  • the second transmission mode is: for each video frame of the first service, not shaping the transmission order of the base layer data packets and the enhancement layer data packets in the video frame.
  • the core network device here may be an SMF network element.
  • the application server generates at least one control packet and sends the at least one control packet, where the at least one control packet includes at least one of the following: a first control packet, a second control packet, and a third control packet.
  • the first control packet is transmitted after, and adjacent to, the last base layer data packet of the first video frame; or the second control packet is transmitted after, and adjacent to, the last enhancement layer data packet of the first video frame; or the third control packet is transmitted before, and adjacent to, the first base layer data packet of the video frame following the first video frame.
  • the application server generates a first data packet carrying indication information and sends the first data packet, where the indication information is used to indicate that the first data packet is the last base layer data packet of the first video frame; or that the first data packet is the last (enhancement layer) data packet of the first video frame; or that the first data packet is the first (base layer) data packet of the video frame following the first video frame.
  • the application server determines indication information and sends it to the core network device, where the indication information is used to indicate the duration of the first time period or the duration of the second time period.
  • the embodiments of the present application provide a communication method, which may be applicable to core network equipment (such as SMF network elements or AMF network elements), or may also be applicable to chips inside the core network equipment.
  • the core network device receives indication information from an application server, where the indication information is used to indicate the transmission mode of the data stream of a first service, and sends the indication information to the network device and/or the UPF network element.
  • or, the core network device receives indication information from the application server, where the indication information is used to indicate the duration of the first time period or the duration of the second time period, and sends the indication information to the network device and/or the UPF network element.
  • the embodiments of the present application provide a communication method, which may be applicable to a terminal device, or may also be applicable to a chip inside the terminal device.
  • the terminal device receives indication information from an application server, where the indication information is used to indicate the transmission mode of the data stream of a first service, and sends the indication information to the network device.
  • the indication information may be carried in an RRC message.
  • the present application provides a communication device.
  • the communication device may be a network device or a chip set inside the network device.
  • the communication device is capable of implementing the functions involved in any one of the first to fifth aspects.
  • the communication device includes a module, unit, or means corresponding to the steps involved in any one of the first to fifth aspects, and the function, unit, or means can be realized by software, by hardware, or by hardware executing corresponding software.
  • the communication device includes a processing unit and a communication unit.
  • the communication unit can be used to send and receive signals to achieve communication between the communication device and other devices.
  • for example, the communication unit is used to send system information to the terminal device; the processing unit can be used to perform some internal operations of the communication device.
  • the functions performed by the processing unit and the communication unit may correspond to the steps involved in any one of the first to fifth aspects described above.
  • the communication device includes a processor, and may also include a transceiver.
  • the transceiver is used to send and receive signals, and the processor executes program instructions to implement the method in any possible design or implementation of any one of the first to fifth aspects.
  • the communication device may further include one or more memories, and the memories are used for coupling with the processor.
  • the one or more memories may be integrated with the processor, or may be provided separately from the processor, which is not limited in this application.
  • the memory can store necessary computer programs or instructions to realize the functions involved in any one of the first to fifth aspects described above.
  • the processor can execute the computer program or instructions stored in the memory; when the computer program or instructions are executed, the communication device implements the method in any possible design or implementation of any one of the first to fifth aspects.
  • or, the communication device includes a processor and a memory, and the memory can store the computer programs or instructions necessary for realizing the functions involved in any one of the first to fifth aspects.
  • the processor can execute the computer program or instructions stored in the memory; when the computer program or instructions are executed, the communication device implements the method in any possible design or implementation of any one of the first to fifth aspects.
  • or, the communication device includes at least one processor and an interface circuit, where the at least one processor is used to communicate with other devices through the interface circuit and execute the method in any possible design or implementation of any one of the first to fifth aspects.
  • the present application provides a communication device.
  • the communication device may be a terminal device or a chip set inside the terminal device.
  • the communication device has the function of implementing any one of the first to fifth aspects and the eighth aspect.
  • for example, the communication device includes a module, unit, or means corresponding to the steps involved in any one of the first to fifth aspects and the eighth aspect, and the function, unit, or means can be realized by software, by hardware, or by hardware executing corresponding software.
  • the communication device includes a processing unit and a communication unit.
  • the communication unit can be used to send and receive signals to achieve communication between the communication device and other devices.
  • for example, the communication unit is used to receive configuration information from the network device; the processing unit can be used to perform some internal operations of the communication device.
  • the functions performed by the processing unit and the communication unit may correspond to the steps involved in any one of the first aspect to the fifth aspect and the eighth aspect.
  • the communication device includes a processor, and may also include a transceiver.
  • the transceiver is used to send and receive signals, and the processor executes program instructions to implement the method in any possible design or implementation of any one of the first to fifth aspects and the eighth aspect.
  • the communication device may further include one or more memories, and the memories are used for coupling with the processor.
  • the one or more memories may be integrated with the processor, or may be provided separately from the processor, which is not limited in this application.
  • the memory may store necessary computer programs or instructions to realize the functions involved in any one of the above-mentioned first to fifth and eighth aspects.
  • the processor can execute the computer program or instructions stored in the memory; when the computer program or instructions are executed, the communication device implements the method in any possible design or implementation of any one of the first to fifth aspects and the eighth aspect.
  • or, the communication device includes a processor and a memory, and the memory can store the computer programs or instructions necessary for realizing the functions involved in any one of the first to fifth aspects and the eighth aspect.
  • the processor can execute the computer program or instructions stored in the memory; when the computer program or instructions are executed, the communication device implements the method in any possible design or implementation of any one of the first to fifth aspects and the eighth aspect.
  • or, the communication device includes at least one processor and an interface circuit, where the at least one processor is used to communicate with other devices through the interface circuit and execute the method in any possible design or implementation of any one of the first to fifth aspects and the eighth aspect.
  • this application provides a communication device.
  • the communication device may be a UPF network element or a chip set inside the UPF network element.
  • the communication device has the function of implementing any one of the first to fourth aspects.
  • for example, the communication device includes a module, unit, or means corresponding to the steps involved in any one of the first to fourth aspects, and the function, unit, or means can be realized by software, by hardware, or by hardware executing corresponding software.
  • the communication device includes a processing unit and a communication unit.
  • the communication unit can be used to send and receive signals to achieve communication between the communication device and other devices.
  • for example, the communication unit is used to receive configuration information from the network device; the processing unit can be used to perform some internal operations of the communication device.
  • the functions performed by the processing unit and the communication unit may correspond to the steps involved in any one of the first aspect to the fourth aspect.
  • the communication device includes a processor and a memory, and the memory can store necessary computer programs or instructions for realizing the functions involved in any one of the first to fourth aspects.
  • the processor can execute the computer program or instructions stored in the memory; when the computer program or instructions are executed, the communication device implements the method in any possible design or implementation of any one of the first to fourth aspects.
  • or, the communication device includes at least one processor and an interface circuit, where the at least one processor is configured to communicate with other devices through the interface circuit and execute the method in any possible design or implementation of any one of the first to fourth aspects.
  • the communication device may also include one or more memories for coupling with the processor.
  • the one or more memories may be integrated with the processor, or may be provided separately from the processor, which is not limited in this application.
  • the memory can store necessary computer programs or instructions to realize the functions involved in any one of the first to fourth aspects described above.
  • the processor can execute the computer program or instructions stored in the memory; when the computer program or instructions are executed, the communication device implements the method in any possible design or implementation of any one of the first to fourth aspects.
  • the present application provides a communication device.
  • the communication device may be an application server or a chip set inside the application server.
  • the communication device has the function of realizing the sixth aspect; for example, the communication device includes a module, unit, or means corresponding to the steps involved in the sixth aspect, and the function, unit, or means can be realized by software, by hardware, or by hardware executing corresponding software.
  • the communication device includes a processing unit and a communication unit.
  • the communication unit can be used to send and receive signals to achieve communication between the communication device and other devices.
  • for example, the communication unit is used to receive configuration information from the network device; the processing unit can be used to perform some internal operations of the communication device.
  • the functions performed by the processing unit and the communication unit may correspond to the steps involved in the sixth aspect.
  • the communication device includes a processor, and may also include a transceiver.
  • the transceiver is used to send and receive signals, and the processor executes program instructions to implement the method in any possible design or implementation of the sixth aspect.
  • the communication device may further include one or more memories, and the memories are used for coupling with the processor.
  • the one or more memories may be integrated with the processor, or may be provided separately from the processor, which is not limited in this application.
  • the memory may store necessary computer programs or instructions to realize the functions involved in any one of the above-mentioned sixth aspects.
  • the processor can execute the computer program or instructions stored in the memory; when the computer program or instructions are executed, the communication device implements the method in any possible design or implementation of the sixth aspect.
  • or, the communication device includes a processor and a memory, and the memory can store the computer programs or instructions necessary for realizing the functions involved in the sixth aspect.
  • the processor can execute the computer program or instructions stored in the memory; when the computer program or instructions are executed, the communication device implements the method in any possible design or implementation of the sixth aspect.
  • or, the communication device includes at least one processor and an interface circuit, where the at least one processor is configured to communicate with other devices through the interface circuit and execute the method in any possible design or implementation of the sixth aspect.
  • the present application provides a communication device
  • the communication device may be an SMF network element or a chip set inside the SMF network element, or the communication device may be an AMF network element or set inside the AMF network element Chip.
  • the communication device has the function of implementing any aspect of the seventh aspect.
  • the communication device includes modules, units, or means corresponding to the steps involved in any aspect of the seventh aspect; the functions, units, or means can be realized by software, by hardware, or by hardware executing corresponding software.
  • the communication device includes a processing unit and a communication unit.
  • the communication unit can be used to send and receive signals to achieve communication between the communication device and other devices.
  • for example, the communication unit is used to receive configuration information of the network device; the processing unit can be used to perform some internal operations of the communication device.
  • the functions performed by the processing unit and the communication unit may correspond to the steps involved in any aspect of the seventh aspect.
  • the communication device includes a processor and a memory, and the memory can store necessary computer programs or instructions for realizing the functions involved in any one of the seventh aspects.
  • the processor can execute the computer program or instructions stored in the memory, and when the computer program or instructions are executed, the communication device implements the method in any possible design or implementation manner of the seventh aspect above.
  • the communication device includes at least one processor and an interface circuit, where the at least one processor is configured to communicate with other devices through the interface circuit and to execute the method in any possible design or implementation manner of the seventh aspect above.
  • the communication device may also include one or more memories for coupling with the processor.
  • the one or more memories may be integrated with the processor, or may be provided separately from the processor, which is not limited in this application.
  • the memory can store necessary computer programs or instructions to realize the functions involved in any one of the first to fourth aspects described above.
  • the processor can execute the computer program or instructions stored in the memory, and when the computer program or instructions are executed, the communication device implements the method in any possible design or implementation manner of any one of the first to fourth aspects.
  • this application provides a communication system, which may include an SMF network element or an AMF network element, and also includes a network device and/or a UPF network element. The SMF network element or the AMF network element is used to send indication information to the network device and/or the UPF network element, the indication information being used to indicate that the transmission mode of the data stream of the first service is the first transmission mode. In the first transmission mode, for the first video frame of the first service, the transmission sequence of the base layer data packets and the enhancement layer data packets in the first video frame is shaped so that the base layer data packets of the video frame are transmitted first, followed by the enhancement layer data packets of the video frame.
  • the network device and/or the UPF network element is used to receive the indication information, receive the first video frame, and determine that the last base layer data packet transmitted before the first enhancement layer data packet in the first video frame is the last base layer data packet in the first video frame.
  • alternatively, the communication system may include an SMF network element or an AMF network element, as well as a network device and/or a UPF network element. The SMF network element or the AMF network element is used to send indication information to the network device and/or the UPF network element,
  • the indication information is used to indicate the duration of the first time period, and the network device and/or the UPF network element can be used to determine that the last base layer data packet received in the first time period is the last base layer data packet of the first video frame, where the start time of the first time period is the time when the first base layer data packet in the first video frame is received.
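The two boundary-detection rules described above can be sketched as follows. This is a hypothetical illustration, not part of the claims: the packet representation (a `(layer, arrival_ms)` tuple) and both function names are assumptions introduced here.

```python
# Hypothetical sketch of the two rules for finding the last base layer
# data packet of a video frame. Packets are (layer, arrival_ms) tuples;
# layer is "base" or "enh". Field names are illustrative only.

def last_base_before_first_enh(packets):
    """First transmission mode: packets are shaped so that all base layer
    packets of a frame precede its enhancement layer packets, so the last
    base layer packet seen before the first enhancement layer packet is
    the last base layer packet of the frame."""
    last_base = None
    for layer, t in packets:
        if layer == "base":
            last_base = (layer, t)
        else:
            break  # first enhancement layer packet ends the base layer run
    return last_base

def last_base_in_window(packets, window_ms):
    """Timer-based rule: the last base layer packet received within the
    first time period is taken as the last base layer packet of the frame.
    The window opens when the first base layer packet arrives."""
    start = None
    last_base = None
    for layer, t in packets:
        if layer == "base":
            if start is None:
                start = t  # window starts at the first base layer packet
            if t - start <= window_ms:
                last_base = (layer, t)
    return last_base
```

For a frame whose packets arrive as `[("base", 0), ("base", 2), ("enh", 5), ("base", 9)]`, the first rule stops at the enhancement packet, while the second rule depends on the configured window length.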
  • the communication system includes a UPF network element and a network device. The UPF network element is used to send a first video frame to the network device, where the first video frame includes at least one control packet, and the at least one control packet includes at least one of a first control packet, a second control packet, and a third control packet. The first control packet is transmitted before the first base layer data packet of the first video frame and is adjacent to it; the second control packet is transmitted after the last base layer data packet of the first video frame and is adjacent to it; the third control packet is transmitted after the last enhancement layer data packet of the first video frame and is adjacent to it. The network device is used to receive the first video frame.
  • the communication system includes a UPF network element and a network device. The UPF network element is used to send a first video frame to the network device, where the first video frame includes at least one of a first data packet carrying first indication information, a second data packet carrying second indication information, and a third data packet carrying third indication information. The first indication information is used to indicate that the first data packet is the first base layer data packet of the first video frame, the second indication information is used to indicate that the second data packet is the last base layer data packet of the first video frame, and the third indication information is used to indicate that the third data packet is the last enhancement layer data packet of the first video frame. The network device is used to receive the first video frame.
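The per-packet indication information described above can be illustrated with a small boundary scanner. This is a sketch under stated assumptions: the flag names and the dictionary-based packet representation are invented for illustration and are not from the patent.

```python
# Illustrative scan for the three frame-boundary markers carried in
# per-packet indication information. Flag names are placeholders.

FIRST_BASE, LAST_BASE, LAST_ENH = "first_base", "last_base", "last_enh"

def frame_boundaries(packets):
    """Return the index of the packet marked as the first base layer
    packet, the last base layer packet, and the last enhancement layer
    packet of the video frame (None if that marker is absent)."""
    bounds = {FIRST_BASE: None, LAST_BASE: None, LAST_ENH: None}
    for i, pkt in enumerate(packets):
        flag = pkt.get("indication")
        if flag in bounds:
            bounds[flag] = i
    return bounds
```

A receiver built this way needs no timer and no assumption about packet ordering: the boundaries are read directly from the markers.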
  • this application provides a computer-readable storage medium in which computer-readable instructions are stored.
  • when the computer reads and executes the computer-readable instructions, the computer executes any one of the possible design methods in the first to eighth aspects described above.
  • this application provides a computer program product, which when a computer reads and executes the computer program product, causes the computer to execute any one of the possible design methods in the first to eighth aspects.
  • the present application provides a chip that includes a processor. The processor is coupled with a memory and is configured to read and execute a software program stored in the memory, so as to implement any one of the possible design methods in the first to eighth aspects described above.
  • FIG. 1 is a schematic diagram of a network architecture to which an embodiment of this application is applicable;
  • FIG. 2a is a schematic diagram of a CU-DU separation architecture provided by an embodiment of this application;
  • FIG. 2b is a schematic diagram of another CU-DU separation architecture provided by an embodiment of this application;
  • FIG. 2c is a schematic diagram of the distribution of an air interface protocol stack provided by an embodiment of this application;
  • FIG. 3a is an example diagram of a user plane protocol layer structure of a PDU session provided by an embodiment of the application
  • FIG. 3b is an example diagram of a protocol layer structure between a terminal device and a network device according to an embodiment of the application;
  • FIG. 3c is a schematic diagram of a video service scenario provided by an embodiment of this application;
  • FIG. 3d is a schematic diagram of a data transmission path in a remote surgery scenario provided by an embodiment of this application;
  • FIG. 4 is a schematic diagram of the transmission of multiple video frames provided by an embodiment of the application.
  • FIG. 5a is a schematic diagram of a flow corresponding to the communication method provided in Embodiment 1 of this application;
  • FIG. 5b is a schematic diagram of transmission mode 1 and transmission mode 2 provided by an embodiment of this application;
  • FIG. 6a is a schematic diagram of a flow corresponding to the communication method provided in the second embodiment of this application;
  • FIG. 6b is a schematic diagram of at least one control packet included in a video frame provided by an embodiment of this application;
  • FIG. 7a is a schematic diagram of a process corresponding to the communication method provided in the third embodiment of the application.
  • FIG. 7b is a schematic diagram of a data packet carrying instruction information a, instruction information b, or instruction information c provided by an embodiment of the application;
  • FIG. 8 is a schematic diagram of a process corresponding to the communication method provided in the fourth embodiment of this application.
  • FIG. 9a is an example diagram of data packets transmitted in the first time period provided by an embodiment of the application.
  • FIG. 9b is another example diagram of data packets transmitted in the first time period provided by an embodiment of the application.
  • FIG. 9c is an example diagram of stopping scheduling an enhancement layer data packet provided by an embodiment of the application.
  • FIG. 9d is another example diagram of stopping scheduling an enhancement layer data packet provided by an embodiment of the application.
  • FIG. 10 is a possible exemplary block diagram of a device involved in an embodiment of this application.
  • FIG. 11 is a schematic structural diagram of a network device provided by an embodiment of this application.
  • FIG. 12 is a schematic structural diagram of a core network device provided by an embodiment of this application.
  • FIG. 13 is a schematic structural diagram of a terminal device provided by an embodiment of this application.
  • FIG. 1 is a schematic diagram of a network architecture to which an embodiment of this application is applicable.
  • a terminal device can access a wireless network to obtain services from an external network (such as a data network (DN)) through the wireless network, or communicate with other devices through the wireless network, for example, with other terminal devices.
  • the wireless network includes radio access network (RAN) and core network (CN).
  • the RAN is used to connect terminal devices to the wireless network;
  • the CN is used to manage terminal devices and provides a gateway for communication with the DN.
  • the terminal device includes a device that provides voice and/or data connectivity to the user, for example, it may include a handheld device with a wireless connection function, or a processing device connected to a wireless modem.
  • the terminal device can communicate with the core network via a radio access network (RAN), and exchange voice and/or data with the RAN.
  • the terminal equipment may include user equipment (UE), wireless terminal equipment, mobile terminal equipment, device-to-device (D2D) communication terminal equipment, vehicle-to-everything (V2X) terminal equipment, machine-to-machine/machine-type communications (M2M/MTC) terminal equipment, Internet of things (IoT) terminal equipment, a subscriber unit, a subscriber station, a mobile station, a remote station, an access point (AP), a remote terminal, an access terminal, a user terminal, a user agent, or a user device, etc.
  • for example, it may include mobile phones (or "cellular" phones), computers with mobile terminal equipment, portable, pocket-sized, hand-held, or computer-built-in mobile devices, personal communication service (PCS) phones, cordless phones, session initiation protocol (SIP) phones, wireless local loop (WLL) stations, personal digital assistants (PDAs), and so on.
  • the terminal equipment may also include restricted devices, such as devices with low power consumption, limited storage capabilities, or limited computing capabilities, for example, information sensing equipment such as barcodes, radio frequency identification (RFID), sensors, global positioning system (GPS), and laser scanners.
  • the RAN may include one or more RAN devices, such as the RAN device 1101 and the RAN device 1102.
  • the interface between the RAN device and the terminal device may be a Uu interface (or called an air interface).
  • the names of these interfaces may not change or may be replaced by other names, which is not limited in this application.
  • the RAN device is a node or device that connects terminal devices to the wireless network, and the RAN device can also be called a network device or a base station.
  • examples of RAN equipment include but are not limited to: a next generation NodeB (gNB) in a 5G communication system, an evolved NodeB (eNB), a radio network controller (RNC), a NodeB (NB), a base station controller (BSC), a base transceiver station (BTS), a home base station (for example, a home evolved NodeB or home NodeB, HNB), a baseband unit (BBU), a transmission and reception point (TRP), a transmitting point (TP), a mobile switching center, etc.
  • the RAN equipment may include a protocol layer structure.
  • for example, the control plane protocol layer structure may include the radio resource control (RRC) layer, the packet data convergence protocol (PDCP) layer, the radio link control (RLC) layer, the media access control (MAC) layer, and the physical layer.
  • the user plane protocol layer structure can include the PDCP layer, RLC layer, MAC layer, physical layer, and other protocol layer functions; in a possible implementation, a service data adaptation protocol (SDAP) layer may also be included above the PDCP layer.
  • the RAN device may include one or more centralized units (CU) and one or more distributed units (DU), and multiple DUs may be centrally controlled by one CU.
  • the interface between the CU and the DU may be referred to as an F1 interface, where the control plane (CP) interface may be F1-C, and the user plane (UP) interface may be F1-U.
  • CU and DU can be divided according to the protocol layers of the wireless network: for example, as shown in Figure 2a, the functions of the PDCP layer and above are set in the CU, and the functions of the protocol layers below the PDCP layer (such as the RLC layer and the MAC layer) are set in the DU.
  • the above-mentioned division of the processing functions of CU and DU according to the protocol layer is just an example, and it can also be divided in other ways.
  • for another example, the functions of the protocol layers above the PDCP layer are set in the CU, and the functions of the PDCP layer and the protocol layers below it are set in the DU.
  • for another example, the CU or the DU may be divided so as to have the functions of more protocol layers, or to have part of the processing functions of a protocol layer.
  • part of the functions of the RLC layer and the functions of the protocol layer above the RLC layer are set in the CU, and the remaining functions of the RLC layer and the functions of the protocol layer below the RLC layer are set in the DU.
  • the functions of the CU or the DU can also be divided according to service types or other system requirements, for example, by delay: functions whose processing time needs to meet the delay requirement are set in the DU, and functions that do not need to meet the delay requirement are set in the CU.
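The layer-based CU-DU splits discussed above can be pictured with a small helper that assigns each protocol layer to the CU or the DU given the lowest layer kept in the CU. This is only an illustration of the idea; the function and its representation are assumptions, not anything defined by the application.

```python
# The air interface protocol stack mentioned in this application, top-down.
STACK = ["SDAP", "PDCP", "RLC", "MAC", "PHY"]

def split_cu_du(lowest_cu_layer):
    """Place `lowest_cu_layer` and everything above it in the CU, and the
    layers below it in the DU. For example, the split of Figure 2a keeps
    the PDCP layer and above in the CU, with RLC/MAC/PHY in the DU."""
    idx = STACK.index(lowest_cu_layer)
    return {"CU": STACK[: idx + 1], "DU": STACK[idx + 1 :]}
```

Partial-layer splits (part of the RLC layer in the CU, the rest in the DU) do not fit this whole-layer model and would need a finer-grained representation.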
  • the CU may also have one or more functions of the core network.
  • the CU can be set on the network side to facilitate centralized management; the DU can have multiple radio frequency functions, or the radio frequency functions can be set remotely. This is not limited in the embodiments of the present application.
  • the function of the CU may be implemented by one entity, or may also be implemented by different entities.
  • for example, the functions of the CU can be further divided, that is, the control plane and the user plane are separated and implemented by different entities: a control plane CU entity (CU-CP entity) and a user plane CU entity (CU-UP entity). The CU-CP entity and the CU-UP entity can be coupled with the DU to jointly complete the functions of the RAN device.
  • the interface between the CU-CP entity and the CU-UP entity can be an E1 interface
  • the interface between the CU-CP entity and the DU can be an F1-C interface
  • the interface between the CU-UP entity and the DU can be an F1-U interface.
  • one DU and one CU-UP can be connected to one CU-CP.
  • one DU can be connected to multiple CU-UPs
  • one CU-UP can be connected to multiple DUs.
  • Fig. 2c is a schematic diagram of the distribution of an air interface protocol stack.
  • in the air interface protocol stack, the RLC, MAC, and PHY layers may be set in the DU, and the PDCP layer and the protocol layers above it may be set in the CU.
  • the signaling generated by the CU may be sent to the terminal device through the DU, or the signaling generated by the terminal device may be sent to the CU through the DU.
  • the DU may transparently transmit the signaling to the terminal device or the CU after protocol layer encapsulation, without parsing it.
  • the sending or receiving of the signaling by the DU includes this scenario.
  • for example, RRC or PDCP layer signaling is eventually processed into physical layer signaling and sent to the terminal device, or is converted from received physical layer signaling;
  • under this architecture, the RRC or PDCP layer signaling can also be considered to be sent by the DU, or sent by the DU together with the radio frequency part.
  • CN can include one or more CN devices.
  • the CN may include an access and mobility management function (AMF) network element, a session management function (SMF) network element, a user plane function (UPF) network element, a policy control function (PCF) network element, a unified data management (UDM) network element, an application function (AF) network element, etc.
  • the AMF network element is a control plane network element provided by the operator's network; it is responsible for access control and mobility management of terminal devices accessing the operator's network, including functions such as mobility status management, allocation of temporary user identities, and user authentication and authorization.
  • the SMF network element is a control plane network element provided by the operator's network, and is responsible for managing the protocol data unit (protocol data unit, PDU) session of the terminal device.
  • the PDU session is a channel for transmitting PDUs; the terminal device and the DN transmit PDUs to each other through the PDU session.
  • the PDU session is established, maintained, and deleted by the SMF network element.
  • the functions of the SMF network element include session management (such as session establishment, modification, and release, including tunnel maintenance between the UPF and the RAN), UPF network element selection and control, service and session continuity (SSC) mode selection, roaming, and other session-related functions.
  • the UPF network element is a gateway provided by the operator; it is the gateway for communication between the operator's network and the DN.
  • the functions of the UPF network element include user plane-related functions such as packet routing and forwarding, packet inspection, service usage reporting, quality of service (QoS) processing, lawful interception, uplink packet inspection, and downlink packet storage.
  • the PCF network element is a control plane function provided by the operator, which is used to provide a PDU session strategy to the SMF network element.
  • Policies can include charging-related policies, QoS-related policies, and authorization-related policies.
  • the UDM network element is a control plane network element provided by the operator, and is responsible for storing subscriber permanent identifier (SUPI), security context (security context), subscription data and other information of the subscribers in the operator's network.
  • the AF network element is a functional network element that provides various application services; it can interact with the core network through other network elements and interact with the policy management framework for policy management.
  • the CN may also include other possible network elements, such as a network exposure function (NEF) network element and a unified data repository (UDR) network element; the NEF network element is used to provide frameworks, authentication, and interfaces related to network capability exposure, and to transfer information between 5G system network functions and other network functions; the UDR network element is mainly used to store user-related subscription data, policy data, exposed structured data, and application data.
  • a DN can also be called a packet data network (PDN), which is a network located outside the operator’s network.
  • the operator’s network can be connected to multiple DNs, and application servers corresponding to multiple services can be deployed in the DNs to provide the terminal device with a variety of possible services.
  • Npcf, Nudm, Naf, Namf, Nsmf, N1, N2, N3, N4, and N6 are interface serial numbers.
  • the meaning of these interface serial numbers can be referred to the meaning defined in the relevant standard protocol, and there is no restriction here.
  • FIG. 1 uses a 5G communication system as an example for illustration.
  • the solutions in the embodiments of the present application can also be applied to other possible communication systems, such as a future 6th generation (6G) communication system.
  • the foregoing network elements or functions may be network elements in hardware devices, software functions running on dedicated hardware, or virtualization functions instantiated on a platform (for example, a cloud platform).
  • the foregoing network element or function may be implemented by one device, or jointly implemented by multiple devices, or may be a functional module in one device, which is not specifically limited in the embodiment of the present application.
  • a user plane data transmission channel can be established for the terminal device through a control plane signaling interaction process (such as the PDU session establishment process), and the terminal device can perform data transmission with the application server deployed in the DN through the user plane data transmission channel.
  • the application server can send a downlink data packet to the terminal device.
  • the transmission path of the downlink data packet is: application server → UPF network element → network device → terminal device. Correspondingly, the terminal device can send an uplink data packet to the application server, and
  • the transmission path of the uplink data packet is: terminal device → network device → UPF network element → application server.
  • Figure 3a is an example diagram of the user plane protocol layer structure of a PDU session.
  • the terminal device and the network device can follow the access network protocol layer structure, and the protocol layer structure between the network device and the UPF network element can include the L1 layer (that is, the physical layer), the L2 layer (that is, the data link layer), the user datagram protocol (UDP)/internet protocol (IP) layer, the general packet radio service (GPRS) tunneling protocol for the user plane (GTP-U) layer, etc.;
  • the UPF network element may also include a PDU session layer that is a peer of the PDU session layer of the terminal device;
  • the application server may include an application layer that is a peer of the application layer of the terminal device.
  • the protocol layer structure between the terminal device and the network device may include SDAP layer, PDCP layer, RLC layer, MAC layer, and physical layer.
  • the SDAP layer, PDCP layer, RLC layer, MAC layer, and physical layer may also be collectively referred to as the access layer.
  • according to the transmission direction of the data, which is divided into sending and receiving, each of the above layers is further divided into a sending part and a receiving part.
  • the following takes downlink data transmission as an example.
  • FIG. 3b shows a schematic diagram of downlink data transmission between layers; in FIG. 3b, the downward arrows indicate data transmission, and the upward arrows indicate data reception.
  • after the PDCP layer obtains data from the upper layer, it transmits the data to the RLC layer and the MAC layer; the MAC layer then generates a transport block, which is transmitted wirelessly through the physical layer. Data is correspondingly encapsulated at each layer.
  • the data received by a layer from its upper layer is regarded as the service data unit (SDU) of that layer; after being encapsulated by the layer, it becomes the protocol data unit (PDU) of that layer and is then passed to the lower layer.
  • the data received by the PDCP layer from the upper layer is called PDCP SDU
  • the data sent by the PDCP layer to the lower layer is called PDCP PDU
  • the data received by the RLC layer from the upper layer is called RLC SDU
  • the data sent by the RLC layer to the lower layer is called RLC PDU.
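The SDU/PDU naming convention above can be illustrated with a toy encapsulation chain. This is illustrative only: the bracketed "headers" are placeholders and do not reflect the real PDCP/RLC/MAC header formats.

```python
# Toy illustration of the SDU/PDU naming convention: data received from the
# upper layer is the layer's SDU; after the layer adds its header it becomes
# the layer's PDU and is passed down to the next layer.

def encapsulate(data, layers=("PDCP", "RLC", "MAC")):
    """Wrap `data` with one placeholder header per layer, top-down.
    Returns the final PDU plus a trace of each layer's SDU and PDU."""
    trace = []
    for layer in layers:
        trace.append(f"{layer} SDU: {data}")   # what the layer receives
        data = f"[{layer}-hdr|{data}]"         # SDU + header -> PDU
        trace.append(f"{layer} PDU: {data}")   # what the layer passes down
    return data, trace
```

Running `encapsulate("X")` makes the naming concrete: the PDCP SDU is `X`, the PDCP PDU is the same data with a PDCP header, and that PDU becomes the RLC SDU, and so on down the stack.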
  • the connections between layers are mostly corresponded in the way of channels.
  • the RLC layer and the MAC layer correspond to each other through a logical channel (LCH), and the MAC layer and the physical layer correspond to each other through a transport channel. Below the physical layer is a physical channel, which is used to correspond to the other end.
  • the downlink direction is taken as an example below.
  • after the application layer of the application server obtains data packet X, it can encapsulate data packet X according to the GTP-U protocol to obtain GTP-U PDU 1 (which may also be referred to as GTP-U data packet 1), and then send GTP-U PDU 1 to the UPF network element.
  • after the UPF network element receives GTP-U PDU 1, it can parse GTP-U PDU 1 to obtain data packet X, encapsulate data packet X according to the GTP-U protocol to obtain GTP-U PDU 2 (which may also be referred to as GTP-U data packet 2), and then send GTP-U PDU 2 to the network device.
  • after the network device receives GTP-U PDU 2, it can parse GTP-U PDU 2 to obtain data packet X, then encapsulate data packet X into a MAC PDU and send it to the terminal device through the air interface.
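The downlink hops just described can be sketched as follows. This is a simplified illustration: the dictionary-based "headers" and TEID values are placeholders and do not reflect the real GTP-U or MAC header formats.

```python
# Sketch of the downlink hops: data packet X is carried in GTP-U PDU 1 to
# the UPF network element, re-encapsulated as GTP-U PDU 2 toward the network
# device, then parsed out and placed in a MAC PDU for the air interface.

def gtp_encap(payload, teid):
    """Wrap a payload in a simplified, placeholder GTP-U 'header'."""
    return {"teid": teid, "payload": payload}

def gtp_decap(pdu):
    """Unwrap the payload from a simplified GTP-U PDU."""
    return pdu["payload"]

def downlink_path(packet_x):
    pdu1 = gtp_encap(packet_x, teid=1)      # application server -> UPF
    x = gtp_decap(pdu1)                     # UPF parses GTP-U PDU 1
    pdu2 = gtp_encap(x, teid=2)             # UPF -> network device
    x = gtp_decap(pdu2)                     # network device parses PDU 2
    mac_pdu = {"mac_hdr": "hdr", "sdu": x}  # sent over the air interface
    return mac_pdu
```

The point of the sketch is that packet X itself is unchanged end to end; only the tunnel encapsulation around it differs per hop.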
  • the terminal device can be connected to one or more application layer devices.
  • the application layer device can include an application layer that is a peer of the application layer of the terminal device, and the terminal device can communicate with the application layer device;
  • the application server can be connected to one or more peripheral devices, which can include input devices and output devices of the application server.
  • the transmission path of the downlink data packet is: peripheral device ⁇ application server ⁇ UPF network element ⁇ network device ⁇ terminal device ⁇ application layer device;
  • the transmission path of the uplink data packet is: application layer device ⁇ terminal device ⁇ network device ⁇ UPF network element ⁇ application server ⁇ peripheral equipment.
  • for example, in a remote surgery scenario, the application layer device may be a helmet worn by the doctor; the doctor's operation instructions (which can be understood as uplink data packets) are transmitted to the surgical site and executed by the on-site manipulator, and the execution conditions are then converted into signals (which can be understood as downlink data packets) via cameras and other professional medical equipment and transmitted back to the doctor's helmet.
  • Figure 3d is a schematic diagram of the data transmission path in the remote surgery scene.
  • a video is composed of consecutive images (or pictures, photos, etc.) that are played in succession; when 24 images are played within one second, the human eye perceives them as a continuous picture (that is, video).
  • Frame rate refers to the number of images played per second. For example, 24 frames means 24 images per second, 60 frames means 60 images per second, and so on.
  • a video frame can be understood as an image (that is, a video frame can include a data packet corresponding to an image).
  • for example, when the frame rate is 60 frames, the duration of one video frame is 1000 ms/60, which is approximately 16.7 ms; the following description takes a frame rate of 60 frames as an example.
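The frame-duration arithmetic above can be written as a one-line helper (illustrative only; the function name is introduced here):

```python
def frame_duration_ms(frame_rate):
    """Duration of one video frame in milliseconds, given the frame rate
    (images per second): e.g. 60 frames -> 1000 ms / 60, about 16.7 ms."""
    return 1000.0 / frame_rate
```

At 24 frames per second, the same formula gives roughly 41.7 ms per frame.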
  • Figure 4 is a schematic diagram of the transmission of multiple video frames.
  • video frame 1, video frame 2, and video frame 3 are three consecutive video frames.
  • video frame 1 can include multiple data packets.
  • the multiple data packets can be distributed in the first segment of video frame 1 (for example, in the first 8 ms of the 16 ms), and there can be a transmission time interval (gap) between the data packets of different video frames.
  • the video frames involved in the embodiments of the present application may be any one of I-frames, P-frames, and B-frames, or may also be video frames with other possible names.
  • the I frame, P frame, and B frame may be the three frame types defined in H.264 (the highly compressed digital video codec standard proposed by the Joint Video Team formed by the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group), H.265, or H.266.
  • an I frame, also known as an intra-frame coded frame, is an independent frame that carries all of its own information and can be decoded independently without reference to other images;
  • the first frame in a video sequence is always an I frame (an I frame is a key frame).
  • P frame is also called inter-frame predictive coding frame. It needs to refer to the previous I frame to be encoded. It represents the difference between the current frame picture and the previous frame (the previous frame may be an I frame or a P frame). It is required for decoding Use the previously buffered picture to superimpose the difference defined in this frame to generate the final picture.
  • B frame is also called bidirectional predictive coding frame, that is, B frame records the difference between the current frame and the previous frame; that is to say, to decode B frame, not only the previous cached picture must be obtained, but also the decoded picture after passing through The superposition of the screen and the data of this frame obtains the final screen.
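  • The decoding relationships above can be illustrated with a toy model (a sketch only: a "picture" is a list of pixel values, B frames are omitted for brevity, and all names are illustrative):

```python
# Toy model of the frame types described above: an I frame carries the full
# picture and decodes independently; a P frame carries only the difference
# from the previously decoded picture, which the decoder superimposes on the
# buffered picture to recover the final image.
def decode_sequence(frames):
    """frames: list of ('I', picture) or ('P', difference) tuples."""
    decoded = []
    previous = None
    for kind, data in frames:
        if kind == 'I':              # key frame: independently decodable
            picture = list(data)
        elif kind == 'P':            # difference w.r.t. the previous picture
            picture = [p + d for p, d in zip(previous, data)]
        else:
            raise ValueError(f"unsupported frame type: {kind}")
        decoded.append(picture)
        previous = picture
    return decoded

seq = [('I', [10, 10, 10]), ('P', [0, 1, -2]), ('P', [5, 0, 0])]
print(decode_sequence(seq))  # [[10, 10, 10], [10, 11, 8], [15, 11, 8]]
```

Note how losing the I frame would make every later P frame undecodable, while losing a single P frame only degrades the frames that reference it.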
  • The channel quality of a wireless channel fluctuates greatly. Even if the physical location of the terminal device remains unchanged, the wireless signal can fluctuate sharply over a short time. Because such fluctuation occurs within a very short time and is unpredictable, robustness cannot be enhanced by methods such as adjusting transmission parameters (for example, the modulation and coding scheme (MCS)). As a result, the probability of packet loss during data packet transmission increases, and once a packet is lost, it needs to be recovered through retransmission. However, the importance of different data packets of a video service can differ.
  • The loss of some data packets will have a great impact on the received picture, while the loss of other data packets will not have much influence on it.
  • The transmission network cannot know the importance of each data packet, so it adopts the "retransmit after packet loss" approach for all data packets. On the one hand, this can cause data packets to miss the specified delay bound; on the other hand, if a lost packet happens to be an important one, it will have a great impact on the received picture and reduce the user experience.
  • video coding introduces a layered coding method, which can also be called a scalable coding method.
  • This method organizes a video into a multi-layer system consisting of a base layer (BL) and several enhancement layers (EL).
  • the base layer provides a bit stream of basic image quality,
  • while the enhancement layers provide bit streams that improve the image quality on top of the base layer.
  • the base layer code stream and the enhancement layer code stream are separately decodable sub-code streams, and the enhancement layer code stream may include one or more layers.
  • The base layer stream can include base layer data packets, which are a necessary condition for video playback; with the base layer alone, the video quality is relatively poor. The enhancement layer stream can include enhancement layer data packets.
  • Enhancement layer data packets are a supplementary condition for video playback. For example, if the video quality corresponding to the base layer stream alone is merely smooth, then superimposing the first enhancement layer stream on the base layer stream achieves standard-definition picture quality.
  • Superimposing the second enhancement layer code stream on the basis of standard-definition picture quality can achieve high-definition picture quality,
  • and superimposing the third enhancement layer code stream on the basis of high-definition picture quality can reach Blu-ray picture quality.
  • the more enhancement layer code streams are superimposed on the base layer code stream, the better the video quality after decoding.
  • the embodiments of the present application take two layers, the base layer and the enhancement layer, as an example for description; that is, the enhancement layer involved in the embodiments of the present application may include one layer or multiple layers.
  • the encoded output data packet can be divided into two paths: one is the base layer data packet, and the other is the enhancement layer data packet.
  • After learning of the two layers of data, the transmission network can differentiate between them in the following two aspects: (1) Retransmission: for a base layer data packet whose transmission is unsuccessful, retransmit if time permits and do not retransmit if time does not permit; for enhancement layer data, do not retransmit. (2) New transmission: priority is given to new transmissions of base layer data packets. For example, if two users have new data packets to transmit at the same time, one a base layer data packet and the other an enhancement layer data packet, the base layer data packet is transmitted first, or a more reliable method is used to transmit the base layer data packet.
  • a video frame includes a base layer data packet and an enhancement layer data packet.
  • the base layer data packets and the enhancement layer data packets can have different port numbers, so that network devices can distinguish base layer data packets from enhancement layer data packets by port number.
  • to reduce delay, the network device can group one or more received base layer data packets to generate a transmission block, and then send it to the terminal device through the air interface.
  • a video frame includes 7 basic layer data packets.
  • After the network device receives the first two base layer data packets, it can group these two packets to obtain transmission block 1 and send it to the terminal device; after the network device receives the middle three base layer data packets, it can group these three packets to obtain transmission block 2 and send it to the terminal device; after the network device receives the last two base layer data packets, it can group these two packets to obtain transmission block 3 and send it to the terminal device.
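  • The arrival-driven grouping in this example can be sketched as follows (packet names, burst sizes, and the function are illustrative assumptions): the device groups whatever base layer packets have arrived at each opportunity, so 7 packets arriving in bursts of 2, 3, and 2 yield three separate transmission blocks.

```python
# Sketch of arrival-driven grouping: each burst of received base layer
# packets is grouped into one transmission block, which is why a single
# video frame can produce several small transmission blocks.
def group_by_arrival(bursts):
    """Each non-empty burst of received packets becomes one transmission block."""
    return [list(burst) for burst in bursts if burst]

packets = [f"BL{i}" for i in range(1, 8)]            # 7 base layer packets
bursts = [packets[0:2], packets[2:5], packets[5:7]]  # arrival pattern 2 + 3 + 2
blocks = group_by_arrival(bursts)
print(len(blocks))   # 3 transmission blocks
print(blocks[0])     # ['BL1', 'BL2']
```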
  • the network device may group packets into multiple small-sized transmission blocks, resulting in a large number of transmission blocks containing base layer data packets and affecting air interface transmission efficiency.
  • the embodiments of the present application can provide a communication method for avoiding multiple small-sized transmission blocks obtained by grouping packets, and improving air interface transmission efficiency.
  • the communication method may include: the network device receives a first video frame, where the first video frame includes at least one base layer data packet and at least one enhancement layer data packet, and the network device determines the last base layer data packet in the first video frame. In this way, after receiving the last base layer data packet in the first video frame, the network device can group packets to obtain a transmission block, so that all the base layer data packets in the first video frame are grouped into one transmission block, which helps avoid grouping packets into multiple small-sized transmission blocks, thereby improving the transmission efficiency of the air interface.
  • Fig. 5a is a schematic diagram of the process corresponding to the communication method provided in the first embodiment of the application. As shown in Fig. 5a, the method includes the following steps:
  • Step 501 The application server sends instruction information 1 to the core network device, and the instruction information 1 is used to indicate the transmission mode of the data stream of the service 1.
  • the core network device may be an AMF network element or an SMF network element.
  • service 1 may be a video service, and the data stream of service 1 may include data packets, and may also include other possible information, such as control information or control parameters, which are not specifically limited.
  • the transmission mode of the data stream of service 1 may be transmission mode 1 or transmission mode 2.
  • transmission mode 1 means that for each video frame of service 1, the transmission sequence of the basic layer data packet and the enhancement layer data packet in the video frame is not reshaped.
  • In transmission mode 1, the base layer data packets and enhancement layer data packets in the video frame may or may not be cross-transmitted (for example, the base layer data packets of the video frame may be transmitted first, followed by the enhancement layer data packets).
  • The cross transmission of base layer data packets and enhancement layer data packets in a video frame can be understood as mixed or alternating transmission of the two types of packets.
  • For example, the transmission sequence of some data packets in the video frame is: base layer data packet a, enhancement layer data packet a, base layer data packet b; for another example, the transmission sequence of some data packets in the video frame is: enhancement layer data packet a, base layer data packet a, enhancement layer data packet b.
  • Transmission mode 2 means that for each video frame of service 1, the transmission sequence of the base layer data packets and enhancement layer data packets in the video frame is shaped so that the base layer data packets of the video frame are transmitted first, followed by the enhancement layer data packets.
  • the application server may include a video encoder, and the video encoder may include two modules, namely an encoding module and a buffering module.
  • the encoding module is responsible for generating basic layer data packets and enhancement layer data packets and transmitting them to the buffer module.
  • the buffer module is responsible for outputting the data packets generated by the encoding module.
  • For transmission mode 1, after the buffer module receives data packets from the encoding module, it does not need to reshape their transmission sequence and outputs them directly. Specifically, whenever the encoding module encodes a data packet, it transmits the packet to the buffer module, and the buffer module outputs it without shaping; that is, the order in which the buffer module outputs data packets is the same as the order in which the encoding module outputs them.
  • Typically, the encoding module outputs base layer data packets first; after outputting Q (Q is a positive integer) base layer data packets, it starts to output enhancement layer data packets, and for a period of time the base layer data packets and enhancement layer data packets are output alternately. When the base layer data packets of the video frame are finished, the encoding module continues to output enhancement layer data packets; finally, the enhancement layer data packets of the video frame also end.
  • For transmission mode 2, after the buffer module receives data packets from the encoding module, it can shape their transmission sequence and output the shaped sequence. Specifically, for a video frame, after the buffer module receives an enhancement layer data packet from the encoding module, it can store it for a period of time; that is, it first outputs the base layer data packets, and only after all base layer data packets have been output does it output the enhancement layer data packets. In this way, through the shaping of the buffer module, the externally output data packets appear as the base layer data packets first, followed by the enhancement layer data packets.
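  • The shaping performed by the buffer module in transmission mode 2 can be sketched as follows (the 'B'/'E' tagging of packets is an assumption for illustration):

```python
# Sketch of the buffer module's shaping in transmission mode 2: enhancement
# layer packets received from the encoding module are held back until all
# base layer packets of the frame have been output.
def reshape_frame(packets):
    """Output all base layer ('B') packets first, then enhancement ('E') packets."""
    base = [p for p in packets if p[0] == 'B']
    enhancement = [p for p in packets if p[0] == 'E']
    return base + enhancement  # relative order within each layer is preserved

mixed = ['B1', 'E1', 'B2', 'E2', 'B3', 'E3', 'E4']  # encoder output order
print(reshape_frame(mixed))  # ['B1', 'B2', 'B3', 'E1', 'E2', 'E3', 'E4']
```

After shaping, the first enhancement layer packet of a frame unambiguously marks the end of that frame's base layer packets, which is what the downstream nodes rely on.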
  • the core network device sends the indication information 1 to the UPF network element and/or the network device; accordingly, the UPF network element and/or the network device can learn the transmission mode of the data stream of the service 1.
  • the core network device sends instruction information 1 to the network device, and the transmission mode indicated by the instruction information 1 is transmission mode 2 as an example for description.
  • Step 503 The application server generates multiple video frames of service 1, and each video frame may include at least one basic layer data packet and at least one enhancement layer data packet.
  • the encoding module of the application server may generate a base layer data packet and an enhancement layer data packet of a plurality of video frames, and the buffer module outputs the data packets according to transmission mode 2.
  • the application server may obtain multiple images taken by the camera from a peripheral device (such as a camera), and then the encoding module generates base layer data packets and enhancement layer data packets of multiple video frames according to the multiple images.
  • Step 504 The application server sends a first video frame to the UPF network element.
  • the first video frame may be any video frame among multiple video frames of the service 1; accordingly, the UPF network element receives the first video frame.
  • Step 505 The UPF network element sends the first video frame to the network device; accordingly, the network device receives the first video frame.
  • the UPF network element may send the first video frame through the N3 tunnel between the UPF network element and the network device, and accordingly, the network device may receive the first video frame through the N3 tunnel.
  • the N3 tunnel between the network device and the UPF network element may be established in the process of PDU session establishment or PDU session modification.
  • For example, the SMF network element can allocate a UPF-side N3 tunnel address for the UPF network element (referred to as the first N3 tunnel address for convenience of description) and then send the first N3 tunnel address to the network device through the AMF network element.
  • Accordingly, when the network device receives the first N3 tunnel address, it can allocate an N3 tunnel address on the network device side (referred to as the second N3 tunnel address) and send the second N3 tunnel address to the SMF network element through the AMF network element, and the SMF network element then sends it to the UPF network element. In addition, the SMF network element can also send the first N3 tunnel address allocated for the UPF network element to the UPF network element. In this way, an N3 tunnel is established between the network device and the UPF network element. Understandably, the N3 tunnel address can be an N3 tunnel number.
  • the N3 tunnel between the UPF network element and the network device may be established for a certain service, for example, the N3 tunnel corresponds to service 1.
  • After receiving a data packet from the N3 tunnel, the network device can learn that the data packet belongs to service 1.
  • Step 506 The network device sends the first video frame to the terminal device.
  • the network device may determine the time to schedule the base layer data packets in the first video frame according to the transmission mode of service 1, that is, start air interface transmission as soon as possible after receiving the base layer data packets in the first video frame. For example, the network device determines, according to the indication information 1, that the transmission mode of the data stream of service 1 is transmission mode 2.
  • In this case, upon receiving the first enhancement layer data packet, the network device can determine that the last base layer data packet transmitted before the first enhancement layer data packet is the last base layer data packet in the first video frame, and can then group all the base layer data packets in the first video frame into a transmission block and send the transmission block to the terminal device.
  • Optionally, if the first video frame includes n base layer data packets, then after the network device receives the first m base layer data packets in the first video frame, if it determines that the total data amount of the m base layer data packets is greater than or equal to the transport block size, it can first group the m base layer data packets into a transmission block and send it to the terminal device; subsequently, after the last base layer data packet in the first video frame is determined, it groups the remaining n-m base layer data packets into a transmission block and sends it to the terminal device.
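  • The scheduling described in this step can be sketched as follows (a sketch only: packet sizes, the transport block size, and the tuple representation are illustrative assumptions):

```python
# Sketch of transmission mode 2 scheduling: base layer packets accumulate
# until the accumulated size reaches the transport block size, at which
# point a block is emitted; the arrival of the first enhancement layer
# packet marks the end of the base layer, so the remainder is grouped into
# one final block instead of several small ones.
def schedule_base_layer(packets, tb_size):
    """packets: (layer, size) tuples in arrival order; returns transmission blocks."""
    blocks, current, current_size = [], [], 0
    for layer, size in packets:
        if layer == 'E':             # first enhancement packet: base layer ended
            break
        current.append((layer, size))
        current_size += size
        if current_size >= tb_size:  # enough data for one transport block
            blocks.append(current)
            current, current_size = [], 0
    if current:                      # remaining base packets -> final block
        blocks.append(current)
    return blocks

frame = [('B', 40), ('B', 40), ('B', 30), ('B', 20), ('E', 50), ('E', 50)]
print(schedule_base_layer(frame, 70))
# [[('B', 40), ('B', 40)], [('B', 30), ('B', 20)]]
```

The first block is emitted as soon as the size threshold is met; the second collects the remaining base layer packets once the enhancement layer signals the boundary.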
  • The specific operation to be performed may depend on the internal implementation of the network device; for example, the network device may schedule the base layer data packets in the first video frame based on the determined last base layer data packet in the first video frame, and/or the network device may perform other possible operations based on the determined last base layer data packet. It is understandable that the scheduling manner described above is only a possible example, and the network device may also adopt other possible scheduling manners, which are not specifically limited.
  • In this way, the last base layer data packet in the first video frame is determined, and the network device can schedule the first video frame according to or with reference to that last base layer data packet.
  • Otherwise, the network device cannot determine the last base layer data packet in the first video frame, and may then group packets into multiple small-sized transmission blocks, thereby affecting transmission efficiency.
  • the network device may determine the transmission mode of the data stream of service 1.
  • the above description is based on the example of the network device determining the transmission mode of the data stream of service 1 according to the instruction information 1 sent by the core network device.
  • Optionally, the application server can send the indication information 1 to the terminal device through an application layer message, and the terminal device can send the indication information 1 to the network device through an access layer message; that is, the network device can also determine the transmission mode of the data stream of service 1 according to the indication information 1 sent by the terminal device.
  • the access layer message may be an RRC message.
  • the manner in which the network device determines the transmission mode of the data stream of the service 1 is not limited.
  • downlink data transmission is taken as an example for description, and the method in Embodiment 1 can also be applied to uplink data transmission.
  • the terminal device can determine the transmission mode of the data stream of service 1, and then after acquiring the first video frame, it can determine the last basic layer data packet in the first video frame, and send the first video frame to the network device .
  • The terminal device can determine the transmission mode of the data stream of service 1 in many ways. For example, after the network device obtains the indication information 1, it can send the indication information 1 to the terminal device through an access layer message, and the terminal device can then determine the transmission mode of the data stream of service 1 according to the indication information 1; for another example, the application server can send the indication information 1 to the terminal device through an application layer message, and the terminal device can determine the transmission mode of the data stream of service 1 according to the indication information 1.
  • The terminal device obtaining the first video frame can mean that the application layer of the terminal device receives the first video frame from an application layer device connected to the terminal device; or it can mean that the terminal device includes a video encoder and the application layer obtains the first video frame from the video encoder.
  • the implementation of the terminal equipment scheduling the basic layer data packet in the first video frame can refer to the description of the network equipment scheduling the basic layer data packet in the first video frame.
  • The difference between the two is that when the terminal device sends a transmission block, the uplink resources need to be allocated by the network device.
  • The terminal device can determine, according to the uplink resource allocated by the network device, the block error rate of a transmission block sent on that resource: if the block error rate corresponding to the uplink resource is high, the terminal device does not transmit the transmission block obtained by grouping the base layer data packets on that resource; if the block error rate corresponding to the uplink resource is low, it can transmit the transmission block obtained by grouping the base layer data packets on that resource.
  • the network device or terminal device can determine the last base layer data packet in a video frame according to the transmission mode and, in an optional manner, can determine, based on or with reference to the last base layer data packet in the video frame, the timing of scheduling the base layer data packets in the video frame, so as to avoid grouping packets into multiple small-sized transmission blocks and improve the transmission efficiency of the air interface.
  • Fig. 6a is a schematic flow diagram corresponding to the communication method provided in the second embodiment of the application. As shown in Fig. 6a, the method includes the following steps:
  • Step 601 The application server generates multiple video frames of service 1, and each video frame may include at least one basic layer data packet, at least one enhancement layer data packet, and at least one control packet.
  • the multiple video frames include a first video frame, and at least one control packet in the first video frame may include at least one of a control packet 1a, a control packet 2a, and a control packet 3a.
  • the control packet 1a is transmitted before the first base layer data packet in the first video frame, and the control packet 1a is adjacent to the first base layer data packet in the first video frame.
  • the control packet 2a is transmitted after the last base layer data packet in the first video frame, and the control packet 2a is adjacent to the last base layer data packet in the first video frame; that is, the control packet 2a immediately follows the first video frame. It is transmitted after the last base layer data packet in a video frame.
  • the control packet 3a is transmitted after the last enhancement layer data packet in the first video frame, and the control packet 3a is adjacent to the last enhancement layer data packet in the first video frame.
  • the multiple video frames may also include the next video frame in the first video frame, and at least one control packet in the next video frame may include at least one of a control packet 1b, a control packet 2b, and a control packet 3b.
  • the implementation of the control package 1b, the control package 2b, and the control package 3b can refer to the description of the control package 1a, the control package 2a, and the control package 3a, and the details are not repeated here.
  • the control packets involved above may all be GTP control packets, or may be control packets of a protocol layer above the GTP layer (such as the real-time transport protocol (RTP) layer); this is not specifically limited.
  • Step 602 The application server sends the first video frame to the UPF network element; accordingly, the UPF network element receives the first video frame.
  • Step 603 The UPF network element sends the first video frame to the network device; accordingly, the network device receives the first video frame.
  • Step 604 The network device sends the first video frame to the terminal device.
  • For example, the network device may determine the time to schedule the base layer data packets in the first video frame according to the control packet 2a, the control packet 3a, or the control packet 1b. After the network device receives the control packet 2a, control packet 3a, or control packet 1b, it can determine that the last base layer data packet transmitted before that control packet is the last base layer data packet in the first video frame, and can further group all the base layer data packets in the first video frame into a transmission block and send the transmission block to the terminal device.
  • The network device may also use the control packet 1a or the control packet 3a to distinguish different video frames. For example, when the control packet 1a is included in the first video frame, the network device can determine that the data packets transmitted before the control packet 1a are data packets of the previous video frame of the first video frame (for example, the network device can determine that the last data packet transmitted before the control packet 1a is the last data packet of the previous video frame of the first video frame), and that the data packets transmitted after the control packet 1a are data packets of the first video frame.
  • Similarly, when the control packet 3a is included in the first video frame, the network device can determine that the data packets transmitted before the control packet 3a are data packets of the first video frame (for example, the network device can determine that the last data packet transmitted before the control packet 3a is the last data packet of the first video frame), and that the data packets transmitted after the control packet 3a are data packets of the next video frame after the first video frame.
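  • A receiver's use of these control packets as frame delimiters can be sketched as follows (the in-order stream representation and the 'C1'/'C2'/'C3' labels, standing for control packets 1a, 2a, and 3a, are assumptions for illustration):

```python
# Sketch of using the control packets as delimiters: 'C1' precedes the
# first base layer packet of a frame, 'C2' follows the last base layer
# packet, and 'C3' follows the last enhancement layer packet, so the two
# layers of one video frame can be cut out of the packet stream.
def split_by_control(stream):
    """Return (base_layer, enhancement_layer) packets of one video frame."""
    i1, i2, i3 = stream.index('C1'), stream.index('C2'), stream.index('C3')
    base = stream[i1 + 1:i2]         # packets between C1 and C2
    enhancement = stream[i2 + 1:i3]  # packets between C2 and C3
    return base, enhancement

stream = ['C1', 'B1', 'B2', 'B3', 'C2', 'E1', 'E2', 'C3']
base, enh = split_by_control(stream)
print(base)  # ['B1', 'B2', 'B3'] -> last base layer packet is 'B3'
print(enh)   # ['E1', 'E2']
```

Seeing 'C2' is enough to know the last base layer data packet has arrived, which is when the receiver can group all base layer packets into one transmission block.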
  • the above description is based on an example in which the video frame generated by the application server includes at least one control packet.
  • In another possible implementation, the application server generates multiple video frames (here, each video frame may include at least one base layer data packet and at least one enhancement layer data packet, but no control packets) and sends the multiple video frames to the UPF network element; accordingly, after the UPF network element receives the multiple video frames, it can generate at least one control packet for each video frame and send the multiple video frames to the network device (here, each video frame may include at least one base layer data packet, at least one enhancement layer data packet, and at least one control packet).
  • the UPF network element may generate at least one of a control packet 1a, a control packet 2a, and a control packet 3a for the first video frame.
  • For example, the UPF network element can determine the first base layer data packet in the first video frame and then add the control packet 1a before the first base layer data packet; for another example, the UPF network element can determine the last base layer data packet in the first video frame and then add the control packet 2a after it.
  • the control packet added by the UPF network element may be a GTP control packet.
  • The UPF network element can determine the last base layer data packet in the first video frame in multiple ways. For example, the core network device sends indication information 1 to the UPF network element, and the indication information 1 indicates that the transmission mode of the data stream of service 1 is transmission mode 2; when the UPF network element receives the first enhancement layer data packet in the first video frame, it can determine that the last base layer data packet transmitted before the first enhancement layer data packet is the last base layer data packet in the first video frame.
  • The UPF network element can determine the first base layer data packet and the last enhancement layer data packet in the first video frame in multiple ways; for example, the UPF network element can determine the first base layer data packet and the last enhancement layer data packet in the first video frame according to the transmission time interval between the data packets of different video frames.
  • the network device can determine the last base layer data packet in a video frame according to the control packets in the video frame and, in an optional manner, can determine, based on or with reference to the last base layer data packet in the video frame, the timing of scheduling the base layer data packets in the video frame, so as to avoid grouping packets into multiple small-sized transmission blocks and improve the transmission efficiency of the air interface.
  • Fig. 7a is a schematic diagram of the process corresponding to the communication method provided in the third embodiment of the application. As shown in Fig. 7a, the method includes the following steps:
  • Step 701 The application server generates multiple video frames of service 1.
  • Each video frame may include at least one of a data packet carrying indication information a, a data packet carrying indication information b, and a data packet carrying indication information c.
  • the indication information a is used to indicate that the data packet carrying it is the first base layer data packet (or the first data packet) in the video frame to which the data packet belongs;
  • the indication information b is used to indicate that the data packet carrying it is the last base layer data packet in the video frame to which the data packet belongs;
  • the indication information c is used to indicate that the data packet carrying it is the last enhancement layer data packet (or the last data packet) in the video frame to which the data packet belongs.
  • the multiple video frames include a first video frame
  • the first video frame may include at least one of a data packet 1a, a data packet 2a, and a data packet 3a; wherein, the data packet 1a carries indication information a, the indication information a is used to indicate that the data packet 1a is the first base layer data packet in the first video frame; the data packet 2a carries the indication information b, and the indication information b is used to indicate that the data packet 2a is the first video frame The last basic layer data packet; the data packet 3a carries indication information c, and the indication information c is used to indicate that the data packet 3a is the last enhancement layer data packet in the first video frame.
  • the multiple video frames may also include the next video frame in the first video frame, and the next video frame may include data packet 1b (carrying instruction information a), data packet 2b (carrying instruction information b), and data packet At least one of 3b (carrying instruction information c).
  • the implementation of the data packet 1b, the data packet 2b, and the data packet 3b can refer to the description of the data packet 1a, the data packet 2a, and the data packet 3a, and the details are not repeated here.
  • Taking the case in which the multiple video frames include a data packet carrying indication information a as an example (the cases of carrying indication information b or indication information c can be handled by analogy): in one example, a field can be added to each data packet in the video frame, and this field can be used to carry the indication information a.
  • For example, the field can include 1 bit: a value of 0 for this bit means that the data packet is not the first base layer data packet in the video frame to which it belongs, and a value of 1 means that the data packet is the first base layer data packet in the video frame to which it belongs.
  • a special code point of a certain parameter of the data packet can also be used to indicate whether the data packet is the first basic layer data packet in the video frame to which the data packet belongs.
  • a 4-bit parameter is used: when the value of the 4-bit parameter is 1110, it means that the data packet is the first base layer data packet in the video frame to which the data packet belongs, and when the value of the 4-bit parameter is any other value, it means that the data packet is not the first base layer data packet in the video frame to which it belongs.
  • a field can be added to the corresponding data packet in the video frame, and this field is used to carry indication information a. When a data packet does not carry this field, it means that the data packet is not the first base layer data packet in the video frame to which it belongs.
  • indication information a, indication information b, or indication information c may be located in the GTP-U header of the data packet, or may be carried in the packet header of a protocol layer above the GTP layer of the data packet (such as the RTP layer); this is not specifically restricted.
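The two marking schemes above can be illustrated with a minimal sketch. The single header byte, the flag position, and the code point layout below are assumptions for illustration only, not the patent's actual wire format:

```python
# A minimal sketch (illustrative, not the patent's wire format) of the two
# marking schemes described above, assuming a single header byte:
#  - scheme A: a dedicated 1-bit flag (here bit 7) set to 1 for the first
#    base layer packet of a frame;
#  - scheme B: a 4-bit parameter (here bits 3..0) whose special code point
#    0b1110 marks the first base layer packet; any other value does not.

FIRST_BL_FLAG = 0x80          # scheme A: assumed 1-bit flag position
FIRST_BL_CODE_POINT = 0b1110  # scheme B: special value of a 4-bit parameter

def mark_first_bl_flag(header: int) -> int:
    """Scheme A: set the 'first base layer packet' flag bit."""
    return header | FIRST_BL_FLAG

def is_first_bl_flag(header: int) -> bool:
    return bool(header & FIRST_BL_FLAG)

def mark_first_bl_code_point(header: int) -> int:
    """Scheme B: write the special code point into the low 4 bits."""
    return (header & ~0x0F) | FIRST_BL_CODE_POINT

def is_first_bl_code_point(header: int) -> bool:
    return (header & 0x0F) == FIRST_BL_CODE_POINT
```

Scheme A spends one dedicated bit; scheme B reuses a special code point of an existing 4-bit parameter, so no extra bit is needed.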
  • Step 702 The application server sends the first video frame to the UPF network element; accordingly, the UPF network element receives the first video frame.
  • Step 703 The UPF network element sends the first video frame to the network device; accordingly, the network device receives the first video frame.
  • Step 704 The network device sends the first video frame to the terminal device.
  • the network device may determine the timing of scheduling the base layer data packets in the first video frame according to data packet 2a, data packet 3a, or data packet 1b. For example, after the network device receives data packet 2a, it can determine that data packet 2a is the last base layer data packet in the first video frame, and then can assemble all the base layer data packets in the first video frame into one transport block and send the transport block to the terminal device. For another example, after the network device receives data packet 3a or data packet 1b, it can determine that the last base layer data packet transmitted before data packet 3a or data packet 1b is the last base layer data packet in the first video frame, and can then likewise assemble all the base layer data packets in the first video frame into one transport block and send the transport block to the terminal device.
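The decision logic above can be sketched as follows. The tuple-based packet model and the single flush policy are simplifications assumed for illustration; a real scheduler would operate on actual protocol headers:

```python
# Illustrative sketch: buffer base layer (BL) packets and flush them as one
# transport block when any marker shows the frame's base layer is complete:
#  - indication b on a BL packet: it is the frame's last BL packet;
#  - indication c on an EL packet: the frame's last enhancement layer packet,
#    which implies all of its BL packets have already arrived;
#  - indication a on a BL packet: the *next* frame's first BL packet, which
#    also implies the previous frame's BL packets are complete.

def schedule_base_layer(packets):
    """packets: iterable of (layer, marker) tuples in arrival order, where
    layer is 'BL' or 'EL' and marker is 'a', 'b', 'c', or None.
    Returns a list of transport blocks (each a list of packet indices)."""
    blocks, bl_buffer = [], []
    for i, (layer, marker) in enumerate(packets):
        if marker == 'a' and bl_buffer:
            # first BL packet of the next frame: flush the previous frame
            blocks.append(bl_buffer)
            bl_buffer = []
        if layer == 'BL':
            bl_buffer.append(i)
        if marker in ('b', 'c') and bl_buffer:
            blocks.append(bl_buffer)
            bl_buffer = []
    if bl_buffer:
        blocks.append(bl_buffer)
    return blocks
```

Flushing the buffered base layer packets in one block at these marker points is what avoids emitting many small transport blocks.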
  • the video frame generated by the application server includes at least one of a data packet carrying indication information a, a data packet carrying indication information b, and a data packet carrying indication information c.
  • the application server generates multiple video frames (here, each video frame may include data packets that do not carry indication information a, indication information b, or indication information c), and sends multiple video frames to the UPF network element.
  • the UPF network element can add indication information a, indication information b, or indication information c to the header of the corresponding data packet, and send the multiple video frames to the network device (at this point, each video frame may include at least one of a data packet carrying indication information a, a data packet carrying indication information b, and a data packet carrying indication information c).
  • the UPF network element can determine the last base layer data packet in the first video frame, and then add indication information b to that data packet; for example, the UPF network element can add indication information b to the GTP packet header of the last base layer data packet.
  • the UPF network element may determine the first base layer data packet, the last base layer data packet, and the last enhancement layer data packet in the first video frame in various ways. For details, refer to the descriptions in the other embodiments; they are not repeated here.
  • the network device can determine the last base layer data packet in the video frame according to the data packet carrying the indication information. In an optional manner, the network device can determine, according to or with reference to the last base layer data packet in the video frame, the timing of scheduling the base layer data packets in the video frame, so as to avoid producing multiple small transport blocks when assembling packets and to improve the transmission efficiency of the air interface.
  • FIG. 8 is a schematic flow diagram corresponding to the communication method provided in the fourth embodiment of the application. As shown in FIG. 8, the method includes the following steps:
  • Step 801 The application server sends instruction information 2 to the core network device, where the instruction information 2 is used to indicate the start time of the first time period and the duration of the first time period.
  • the start time of the first time period may be the time when the first base layer data packet in the video frame is received.
  • the time of receiving the first base layer data packet in the video frame may refer to the receiving time of the first data packet in the video frame; the receiving time can be a moment (or an instant), or it can be a short period of time. For example, the receiving time can be the start boundary or end boundary of a certain frame, subframe, slot, mini-slot, or symbol; or it can be a certain frame, subframe, slot, mini-slot, or symbol.
  • Step 802 The core network device sends instruction information 2 to the UPF network element and/or network device; accordingly, the UPF network element and/or network device can determine the start time of the first time period and the first time period according to the instruction information 2. The length of time.
  • the core network device may be an AMF network element or an SMF network element.
  • description will be made by taking the core network device sending instruction information 2 to the UPF network element and the network device as an example.
  • Step 803 The application server generates multiple video frames of service 1, and each video frame may include at least one basic layer data packet and at least one enhancement layer data packet.
  • the application server sends the first video frame to the UPF network element.
  • the first video frame may be any video frame among the multiple video frames of the service 1; accordingly, the UPF network element receives the first video frame.
  • Step 805 The UPF network element sends the first video frame to the network device; accordingly, the network device receives the first video frame.
  • the UPF network element may determine the last base layer data packet in the first video frame according to the start time of the first time period and the duration of the first time period. For example, the UPF network element may determine that the last base layer data packet received in the first time period is the last base layer data packet of the first video frame. That is, when the UPF network element receives the first base layer data packet of the first video frame, it can start a timer (the length of the timer is the duration of the first time period), and then, after the timer expires, it can determine that the last base layer data packet received before the timer expired is the last base layer data packet of the first video frame.
  • the UPF network element can also send the data packets in the first video frame to the network device according to the start time of the first time period and the duration of the first time period. For example, the UPF network element can shape the transmission of the data packets in the first video frame to ensure that the interval between the time when the first base layer data packet in the first video frame arrives at the network device and the time when the last base layer data packet arrives at the network device is less than or equal to the duration of the first time period, so that the network device can accurately determine, according to the start time of the first time period and the duration of the first time period, the timing of scheduling the base layer data packets in the video frame.
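As a minimal sketch of this shaping constraint (the list-of-times model and the clamping policy are assumptions for illustration, not the patent's mechanism), the UPF could adjust the planned send times of a frame's base layer packets so that the span between the first and the last of them never exceeds the duration of the first time period:

```python
# Illustrative sketch: clamp the planned send times of a frame's base layer
# packets so that (last time - first time) <= window, where window is the
# duration of the first time period. Real shaping would also respect
# link-rate and ordering constraints; this shows only the window guarantee.

def shape_bl_send_times(send_times, window):
    """send_times: sorted planned send times of one frame's BL packets.
    Returns adjusted times whose span is at most `window`."""
    if not send_times:
        return []
    start = send_times[0]
    # any packet planned later than start + window is pulled back to the edge
    return [min(t, start + window) for t in send_times]
```

With `shape_bl_send_times([0, 3, 9], 5)` the third packet is pulled back to time 5, so a first-time-period timer started at time 0 on the receiving side still covers every base layer packet of the frame.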
  • Step 806 The network device sends the first video frame to the terminal device.
  • the network device may determine that the last base layer data packet received in the first time period is the last base layer data packet of the first video frame, and may then assemble all the base layer data packets in the first video frame into one transport block and send the transport block to the terminal device. That is to say, when the network device receives the first base layer data packet in the first video frame, it can start a timer (the length of the timer is the duration of the first time period), and then, after the timer expires, it can assemble all the base layer data packets in the first video frame into one transport block and send the transport block to the terminal device.
  • that the network device determines that the last base layer data packet received in the first time period is the last base layer data packet of the first video frame means that, at the end of the first time period, the last base layer data packet received in the first time period is regarded as the last base layer data packet of the first video frame; it does not mean that the last base layer data packet received in the first time period is necessarily the last base layer data packet of the first video frame.
  • for example, the base layer data packets in the first video frame include data packet 1 (BL1), data packet 2 (BL2), and data packet 3 (BL3).
  • in one example, if the network device receives data packet 1, data packet 2, and data packet 3 within the first time period, then at the end of the first time period it determines that data packet 3 is the last base layer data packet of the first video frame. In another example, as shown in Figure 9b, the network device receives data packet 1 and data packet 2 in the first time period (data packet 3 has not yet been received), so at the end of the first time period it determines that data packet 2 is the last base layer data packet of the first video frame, although data packet 3 is actually the last base layer data packet of the first video frame.
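The two examples above can be reproduced with a small sketch. The arrival-time lists below are illustrative assumptions; the point is that the rule picks whatever base layer packet arrived last inside the window, whether or not it is truly the frame's last one:

```python
# Illustrative sketch of the first-time-period rule: the timer starts at the
# arrival of the first base layer packet, and the last base layer packet
# that arrived before the timer expired is *treated as* the frame's last
# base layer data packet.

def last_bl_within_period(arrivals, period):
    """arrivals: list of (name, arrival_time) for BL packets, time-sorted.
    Returns the name of the packet treated as the last BL packet."""
    start = arrivals[0][1]                    # first BL packet starts the timer
    in_window = [name for name, t in arrivals if t <= start + period]
    return in_window[-1]

# All three packets arrive in time: BL3 is chosen (first example above).
case_a = [("BL1", 0), ("BL2", 2), ("BL3", 4)]
# BL3 arrives after the window: BL2 is chosen even though BL3 exists
# (the Figure 9b example above).
case_b = [("BL1", 0), ("BL2", 2), ("BL3", 7)]
```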
  • the UPF network element and/or the network device can determine the start time of the first time period and the duration of the first time period in various ways. The above description takes, as an example, the case where the UPF network element and/or the network device determines the start time of the first time period and the duration of the first time period according to indication information 2 sent by the core network device.
  • the application server can send indication information 2 to the terminal device through an application layer message, and the terminal device can send indication information 2 to the network device through an access stratum message, so that the network device determines the start time of the first time period and the duration of the first time period according to the indication information 2 sent by the terminal device. For another example, the start time of the first time period and the duration of the first time period may be predefined by the protocol. For another example, the start time of the first time period may be predefined by the protocol, and the duration of the first time period may be indicated by the core network device or the terminal device to the UPF network element and/or the network device. For another example, the duration of the first time period may be predefined by the protocol, and the start time of the first time period may be indicated by the core network device or the terminal device to the UPF network element and/or the network device.
  • the manner in which the UPF network element and/or the network device determines the start time of the first time period and the duration of the first time period is not limited.
  • downlink data transmission is taken as an example for description, and the method in the fourth embodiment can also be applied to uplink data transmission.
  • the terminal device can determine the start time of the first time period and the duration of the first time period, and then, after acquiring the first video frame, it can determine the last base layer data packet in the first video frame and send the first video frame to the network device.
  • the terminal device can determine the start time of the first time period and the duration of the first time period in many ways.
  • for example, the core network device may send indication information 2 to the terminal device; for another example, the application server may send indication information 2 to the terminal device through an application layer message.
  • the network device or the terminal device can determine the last base layer data packet in the video frame according to the start time of the first time period and the duration of the first time period. In an optional manner, the network device can determine, according to or with reference to the last base layer data packet in the video frame, the timing of scheduling the base layer data packets in the video frame, so as to avoid producing multiple small transport blocks when assembling packets and to improve the transmission efficiency of the air interface.
  • the network device can preferentially transmit the base layer data packets of the user's data; in other words, when the transmission capacity on the air interface is limited, the network device can preferentially transmit base layer data packets rather than enhancement layer data packets. However, if the enhancement layer data packets are never transmitted, the data packets already received by the network device cannot be fully utilized, which results in a waste of transmission resources.
  • if a data packet (which can be a base layer data packet or an enhancement layer data packet) cannot be transmitted in time, the data packet (for example, a data packet corresponding to the Pth frame image, where P is a positive integer) is no longer useful for decoding the Pth frame image, but the data packet may still have reference value for decoding images after the Pth frame image.
  • since the base layer data packets are usually transmitted preferentially, the embodiments of this application take the case where the data packet that cannot be transmitted on time is an enhancement layer data packet as an example for description. Further, if the transmission delay of a data packet is large, then even if the data packet is transmitted to the receiving end device, it may have no reference value for decoding images and only waste air interface transmission resources.
  • the embodiments of this application provide a communication method that makes clear when to stop scheduling data packets (such as enhancement layer data packets) in a video frame, so that data packets that exceed the transmission delay can be handled reasonably. This makes it possible both to make full use of the data packets already received by the network device (or terminal device) and to effectively save air interface transmission resources.
  • the communication method provided by the embodiments of this application may include: a network device receives a first video frame that includes multiple enhancement layer data packets; the network device may send the enhancement layer data packets in the first video frame to the terminal device before a first time, and after determining that the first time has arrived, no longer sends the enhancement layer data packets in the first video frame to the terminal device.
  • similarly, the terminal device obtains a first video frame that includes multiple enhancement layer data packets; the terminal device may send the enhancement layer data packets in the first video frame to the network device before the first time, and after determining that the first time has arrived, no longer sends the enhancement layer data packets in the first video frame to the network device.
  • the time for the network device or terminal device to stop scheduling the enhancement layer data packet in the first video frame is clarified, so that the data packet that exceeds the transmission delay can be processed reasonably.
  • the first time is any one of the following times: the time when the first base layer data packet in the second video frame after the first video frame is received; the time when the last base layer data packet in the second video frame is received; the time when the first enhancement layer data packet in the second video frame is received; the time when the last enhancement layer data packet in the second video frame is received.
  • the manner in which the network device or the terminal device obtains the first time can refer to the manner in which the network device or the terminal device obtains the transmission mode in the first embodiment.
  • the second video frame may be the Kth video frame after the first video frame, and K is a positive integer, for example, the value of K may be 1, 2, 3,....
  • for example, the first video frame is video frame 1 and the second video frame is video frame 2: before the first base layer data packet of video frame 2 is received, the enhancement layer data packets of video frame 1 can be sent; after the first base layer data packet of video frame 2 is received, the enhancement layer data packets of video frame 1 are no longer sent.
  • the first time is the end time of a second time period; the start time of the second time period is any one of the following times: the time when the first base layer data packet in the first video frame is received; the time when the last base layer data packet in the first video frame is received; the time when the first enhancement layer data packet in the first video frame is received; the time when the last enhancement layer data packet in the first video frame is received; the time when the first base layer data packet in the second video frame after the first video frame is received; the time when the last base layer data packet in the second video frame is received.
  • for the manner in which the network device or the terminal device obtains the start time of the second time period and the duration of the second time period, refer to the description in the fourth embodiment above of how the network device or the terminal device obtains the start time and the duration of the first time period.
  • the second video frame may be the Kth video frame after the first video frame.
  • the network device can start a timer when the first base layer data packet in video frame 2 is received. Before the timer expires, the enhancement layer data packets of video frame 1 can be sent; after the timer expires, the enhancement layer data packets of video frame 1 are no longer sent.
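A sketch of this stop rule (the numeric timeline below is an assumption for illustration): enhancement layer packets of video frame 1 that would be sent at or after the first time are discarded rather than scheduled:

```python
# Illustrative sketch: frame 1's enhancement layer (EL) packets are
# transmitted only while the current time is before the "first time" (for
# example, the expiry of a timer started when the first base layer packet of
# video frame 2 arrives); the remainder are dropped, not sent late.

def schedule_el_until(first_time, el_send_times):
    """Split frame-1 EL send opportunities into (sent, discarded)."""
    sent = [t for t in el_send_times if t < first_time]
    discarded = [t for t in el_send_times if t >= first_time]
    return sent, discarded
```

Discarding instead of late-sending is what saves the air interface resources mentioned above: a packet delivered after its decode deadline would be transmitted for nothing.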
  • Embodiment 1 to Embodiment 5 can be implemented separately, or the technical features involved in different embodiments in Embodiment 1 to Embodiment 5 can also be implemented in combination.
  • Embodiment 1, Embodiment 2, Embodiment 3, or Embodiment 4 can be implemented in combination with Embodiment 5; for another example, Embodiment 1 can be implemented in combination with Embodiment 2, Embodiment 3, or Embodiment 4.
  • the step numbers of the flowcharts described in Embodiment 1 to Embodiment 4 are only an example of the execution process and do not constitute a restriction on the execution order of the steps. In the embodiments of this application, there is no strict execution order between steps that have no temporal dependency on each other. In addition, not all the steps shown in each flowchart are necessary steps; steps can be added to or deleted from each flowchart according to actual needs.
  • the network device can determine the last data packet in the video frame and send a frame boundary identifier corresponding to the video frame to the terminal device. The frame boundary identifier can be used to indicate that the data packets of the video frame have been completely transmitted; the frame boundary identifier can also be understood as an end-of-data identifier. Accordingly, the terminal device can determine, based on the frame boundary identifier, that the data packets of the video frame have been transmitted. Because there is a transmission time interval between the data packets of different video frames, after the terminal device determines that the data packets of the video frame have been transmitted, it can, within the transmission time interval, enter a sleep state (thus saving power consumption without affecting normal data transmission), perform a handover operation, or perform other possible operations.
  • the network device can determine the last data packet in the video frame in many ways. Take the first video frame as an example: in the second embodiment, the network device can determine the last data packet in the first video frame according to a control packet (such as control packet 3a); in the third embodiment, the network device can determine the last data packet in the first video frame according to the data packet carrying indication information c (for example, data packet 3a).
  • the network device can assemble the data packets in the video frame into one or more MAC PDUs, where the last of the one or more MAC PDUs (the one that includes the last data packet of the video frame) may include the frame boundary identifier. Accordingly, after receiving the MAC PDU containing the frame boundary identifier, the terminal device can learn that this MAC PDU is the last MAC PDU corresponding to the video frame, and can then determine that the data packets of the video frame have been transmitted.
  • the frame boundary identifier may be included in the header of the MAC PDU.
  • the network device may send control signaling to the terminal device after one or more MAC PDUs corresponding to the video frame, and the control signaling may include the frame boundary identifier; accordingly, the terminal device receives After the control signaling is reached, it can be determined that the data packets of the video frame have been transmitted.
  • the control signaling here may be a MAC control element (CE).
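The in-PDU delivery option for the frame boundary identifier can be sketched with a simplified PDU model (the dict-based "header" below is purely illustrative, not the actual MAC PDU format):

```python
# Illustrative sketch: packets of one video frame are grouped into MAC PDUs;
# only the PDU that carries the frame's last data packet has the frame
# boundary flag set in its (simplified) header. A receiver that sees a
# flagged PDU knows the frame's data packets have all been transmitted.

def build_mac_pdus(packets, pdu_size):
    """Group `packets` into PDUs of up to `pdu_size` packets each and set
    the frame_boundary flag on the last PDU only."""
    chunks = [packets[i:i + pdu_size] for i in range(0, len(packets), pdu_size)]
    return [{"frame_boundary": i == len(chunks) - 1, "payload": chunk}
            for i, chunk in enumerate(chunks)]
```

The alternative described above, control signaling such as a MAC CE sent after the last PDU, would instead carry the same boundary indication in a separate element.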
  • the network device can determine the last base layer data packet in the video frame, and can use a method similar to the above to send a base layer data packet end identifier of the video frame to the terminal device, where the base layer data packet end identifier is used to indicate that the base layer data packets of the video frame have been transmitted.
  • for example, the network device can assemble the base layer data packets of a video frame into one or more MAC PDUs, where the last of the one or more MAC PDUs (the one that includes the last base layer data packet of the video frame) may include the base layer data packet end identifier; for another example, the network device may send control signaling to the terminal device after the last MAC PDU, and the control signaling may include the base layer data packet end identifier.
  • the first base layer data packet in the first video frame and the first data packet in the first video frame involved in the embodiments of this application are equivalent concepts, and the two can be replaced with each other; likewise, the last enhancement layer data packet in the first video frame and the last data packet in the first video frame are equivalent concepts, and the two can be replaced with each other.
  • the network device or the terminal device may include a hardware structure and/or software module corresponding to each function.
  • the embodiments of this application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or by computer-software-driven hardware depends on the specific application and design constraints of the technical solution. Those skilled in the art may use different methods to implement the described functions for each specific application, but such implementations should not be considered to be beyond the scope of this application.
  • the embodiments of this application can divide the network equipment, terminal equipment or UPF network element into functional units according to the above method examples.
  • each functional unit can be divided corresponding to each function, or two or more functions can be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • FIG. 10 shows a possible exemplary block diagram of a device involved in an embodiment of the present application.
  • the apparatus 1000 may include: a processing unit 1002 and a communication unit 1003.
  • the processing unit 1002 is used to control and manage the actions of the device 1000.
  • the communication unit 1003 is used to support communication between the apparatus 1000 and other devices.
  • the communication unit 1003 is also called a transceiving unit, and may include a receiving unit and/or a sending unit, which are used to perform receiving and sending operations, respectively.
  • the device 1000 may further include a storage unit 1001 for storing program codes and/or data of the device 1000.
  • the apparatus 1000 may be the network device in the foregoing embodiment, or may also be a chip provided in the network device.
  • the processing unit 1002 can support the apparatus 1000 to execute the actions of the network device in the above method examples.
  • the processing unit 1002 mainly executes the internal actions of the network device in the method example, and the communication unit 1003 can support communication between the apparatus 1000 and other devices (such as terminal devices or UPF network elements).
  • the apparatus 1000 may be the terminal device in the foregoing embodiment, or may also be a chip provided in the terminal device.
  • the processing unit 1002 may support the apparatus 1000 to perform the actions of the terminal device in the above method examples.
  • the processing unit 1002 mainly executes the internal actions of the terminal device in the method example, and the communication unit 1003 can support communication between the apparatus 1000 and other devices (such as network devices).
  • the device 1000 may be the UPF network element in the foregoing embodiment, or may also be a chip set in the UPF network element.
  • the processing unit 1002 may support the apparatus 1000 to execute the actions of the UPF network element in the above method examples.
  • the processing unit 1002 mainly executes the internal actions of the UPF network element in the method example, and the communication unit 1003 can support communication between the apparatus 1000 and other devices (such as network devices).
  • the communication unit 1003 is configured to receive a first video frame, where the first video frame includes at least one base layer data packet and at least one enhancement layer data packet; the processing unit 1002 is configured to determine the last base layer data packet in the first video frame.
  • the communication unit 1003 is further configured to receive first indication information, where the first indication information is used to indicate that the transmission mode of the data packets in the first video frame is a first transmission mode, and the first transmission mode is: shaping the transmission order of the base layer data packets and the enhancement layer data packets in the first video frame, so that the base layer data packets of the video frame are transmitted first and the enhancement layer data packets are transmitted afterwards; and the processing unit 1002 is specifically configured to determine that the last base layer data packet transmitted before the first enhancement layer data packet is the last base layer data packet in the first video frame.
  • the communication unit 1003 is further configured to receive at least one control packet; and the processing unit 1002 is specifically configured to determine that the last base layer data packet transmitted before the at least one control packet is the last base layer data packet of the first video frame.
  • the at least one control packet includes at least one of the following: a first control packet, a second control packet, and a third control packet. The first control packet is transmitted after the last base layer data packet of the first video frame and is adjacent to the last base layer data packet of the first video frame; the second control packet is transmitted after the last enhancement layer data packet of the first video frame and is adjacent to the last enhancement layer data packet of the first video frame; the third control packet is transmitted before the first base layer data packet of the next video frame after the first video frame and is adjacent to the first base layer data packet of the next video frame.
  • the processing unit 1002 is specifically configured to: determine that a first data packet carrying second indication information is the last base layer data packet of the first video frame, where the second indication information is used to indicate that the first data packet is the last base layer data packet of the first video frame; or determine that the last base layer data packet transmitted before a second data packet carrying third indication information is the last base layer data packet of the first video frame, where the third indication information is used to indicate that the second data packet is the last enhancement layer data packet of the first video frame; or determine that the last base layer data packet transmitted before a third data packet carrying fourth indication information is the last base layer data packet of the first video frame, where the fourth indication information is used to indicate that the third data packet is the first base layer data packet of the next video frame after the first video frame.
  • the processing unit 1002 is specifically configured to determine that the last base layer data packet received in the first time period is the last base layer data packet of the first video frame, where the start time of the first time period is the time when the first base layer data packet in the first video frame is received.
  • the communication unit 1003 is further configured to obtain the duration of the first time period from the application server or the SMF network element or the AMF network element.
  • the communication unit 1003 is further configured to send the enhancement layer data packets of the first video frame before the first time; and the processing unit 1002 is further configured to determine that the enhancement layer data packets of the first video frame are no longer sent after the first time.
  • the first time is any one of the following times: the time when the first base layer data packet in the second video frame after the first video frame is received; the time when the last base layer data packet in the second video frame is received; the time when the first enhancement layer data packet in the second video frame is received; the time when the last enhancement layer data packet in the second video frame is received.
  • the first time is the end time of the second time period, where the start time of the second time period is any one of the following times: the time at which the first base layer data packet of the first video frame is received; the time at which the last base layer data packet of the first video frame is received; the time at which the first enhancement layer data packet of the first video frame is received; the time at which the last enhancement layer data packet of the first video frame is received; the time at which the first base layer data packet of the second video frame following the first video frame is received; or the time at which the last base layer data packet of the second video frame is received.
  • the communication unit 1003 is further configured to obtain the duration of the second time period from the application server, the SMF network element, or the AMF network element.
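One way to picture the discard behaviour in the bullets above: enhancement layer packets of a frame are forwarded only until the chosen first time, here taken (as one of the listed options) to be the arrival of the next frame's first base layer packet. The sketch below is illustrative, not the patent's actual procedure:

```python
def filter_stale_enhancement(stream):
    """stream: time-ordered (frame_id, layer) pairs with layer in {"base", "enh"}.
    A frame's enhancement packets are forwarded only until the first base layer
    packet of the following frame arrives; later ones are dropped as stale."""
    forwarded = []
    deadline_passed = {}   # frame_id -> True once the next frame's base layer begins
    for frame, layer in stream:
        if layer == "base":
            deadline_passed[frame - 1] = True
        if layer == "enh" and deadline_passed.get(frame):
            continue                   # past the first time: no longer sent
        forwarded.append((frame, layer))
    return forwarded
```

Dropping late enhancement packets frees air-interface resources for the frames that can still benefit from them.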
  • each unit in the device may be implemented entirely in the form of software called by a processing element, entirely in the form of hardware, or partly in the form of software called by a processing element and partly in the form of hardware.
  • each unit may be a separately arranged processing element, or may be integrated into a chip of the device; alternatively, a unit may be stored in the memory in the form of a program that is called and executed by a certain processing element of the device to perform the function of that unit.
  • each step of the above method, or each of the above units, may be implemented by an integrated logic circuit of hardware in a processor element, or in the form of software called by a processing element.
  • the unit in any of the above devices may be one or more integrated circuits configured to implement the above method, for example: one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), one or more field programmable gate arrays (FPGAs), or a combination of at least two of these integrated circuit forms.
  • the unit in the device can be implemented in the form of a processing element scheduling a program
  • the processing element can be a processor, such as a general-purpose central processing unit (central processing unit, CPU), or other processors that can call programs.
  • these units can be integrated together and implemented in the form of a system-on-a-chip (SOC).
  • the above receiving unit is an interface circuit of the device for receiving signals from other devices.
  • the receiving unit is an interface circuit used by the chip to receive signals from other chips or devices.
  • the above sending unit is an interface circuit of the device for sending signals to other devices.
  • the sending unit is an interface circuit used by the chip to send signals to other chips or devices.
  • the network device 110 may include one or more DU 1101 and one or more CU 1102.
  • the DU 1101 may include at least one antenna 11011, at least one radio frequency unit 11012, at least one processor 11013, and at least one memory 11014.
  • the DU 1101 part is mainly used for transmitting and receiving radio frequency signals, for conversion between radio frequency signals and baseband signals, and for part of the baseband processing.
  • the CU 1102 may include at least one processor 11022 and at least one memory 11021.
  • the CU 1102 part is mainly used to perform baseband processing, control network equipment, and so on.
  • the DU 1101 and the CU 1102 may be physically set together, or may be physically separated, that is, a distributed base station.
  • the CU 1102 is the control center of the network device, which may also be called a processing unit, and is mainly used to complete the baseband processing function.
  • the CU 1102 may be used to control the network device to execute the operation flow of the network device in the foregoing method embodiment.
  • the network device 110 may include one or more radio frequency units, one or more DUs, and one or more CUs.
  • the DU may include at least one processor 11013 and at least one memory 11014
  • the radio frequency unit may include at least one antenna 11011 and at least one radio frequency unit 11012
  • the CU may include at least one processor 11022 and at least one memory 11021.
  • the CU 1102 may be composed of one or more boards, and multiple boards may jointly support a radio access network of a single access standard (such as a 5G network), or may respectively support radio access networks of different access standards (such as an LTE network, a 5G network, or another network).
  • the memory 11021 and the processor 11022 may serve one or more boards; that is, a memory and a processor may be arranged separately on each board, or multiple boards may share the same memory and processor. In addition, necessary circuits may be provided on each board.
  • the DU 1101 may be composed of one or more boards, and multiple boards may jointly support a radio access network of a single access standard (such as a 5G network), or may respectively support radio access networks of different access standards.
  • the memory 11014 and the processor 11013 may serve one or more boards; that is, a memory and a processor may be arranged separately on each board, or multiple boards may share the same memory and processor. In addition, necessary circuits may be provided on each board.
  • the network device shown in FIG. 11 can implement various processes related to the network device in the above illustrated method embodiment.
  • the operations and/or functions of each module in the network device shown in FIG. 11 are used to implement the corresponding processes in the foregoing method embodiments.
  • FIG. 12 is a schematic structural diagram of a core network device provided by an embodiment of this application. It may be the UPF network element (or AMF network element or SMF network element) in the above embodiment, and is used to implement the operation of the UPF network element (or AMF network element or SMF network element) in the above embodiment.
  • the core network device 1200 may include a processor 1201, a memory 1202, and an interface circuit 1203.
  • the processor 1201 may be used to process the communication protocol and communication data, and to control the communication device.
  • the memory 1202 may be used to store programs and data, and the processor 1201 may execute the method executed by the AMF network element or the SMF network element in the embodiment of the present application based on the program.
  • the interface circuit 1203 may be used for the core network device 1200 to communicate with other devices.
  • the communication may be wired communication or wireless communication.
  • the interface circuit may be, for example, a service-oriented communication interface.
  • the above memory 1202 may also be externally connected to the core network device 1200.
  • the core network device 1200 may include an interface circuit 1203 and a processor 1201.
  • the above interface circuit 1203 may also be externally connected to the core network device 1200.
  • the core network device 1200 may include a memory 1202 and a processor 1201.
  • the core network device 1200 may include the processor 1201.
  • the core network device shown in FIG. 12 can implement each process involving the core network device in the above illustrated method embodiment.
  • the operations and/or functions of each module in the core network device shown in FIG. 12 are respectively for implementing the corresponding processes in the foregoing method embodiments.
  • FIG. 13 is a schematic structural diagram of a terminal device according to an embodiment of the application. It may be the terminal device in the above embodiment, and is used to implement the operation of the terminal device in the above embodiment.
  • the terminal device includes: an antenna 1310, a radio frequency part 1320, and a signal processing part 1330.
  • the antenna 1310 is connected to the radio frequency part 1320.
  • the radio frequency part 1320 receives the information sent by the network device through the antenna 1310, and sends the information sent by the network device to the signal processing part 1330 for processing.
  • the signal processing part 1330 processes the information of the terminal equipment and sends it to the radio frequency part 1320.
  • the radio frequency part 1320 processes the information of the terminal equipment and sends it to the network equipment via the antenna 1310.
  • the signal processing part 1330 may include a modem subsystem, which is used to process the data at the various communication protocol layers; it may also include a central processing subsystem, which is used to process the terminal device operating system and the application layer; in addition, it may also include other subsystems, such as a multimedia subsystem and a peripheral subsystem, where the multimedia subsystem is used to control the camera and screen display of the terminal device, and the peripheral subsystem is used to implement connections with other devices.
  • the modem subsystem can be a separate chip.
  • the modem subsystem may include one or more processing elements 1331, for example, including a main control CPU and other integrated circuits.
  • the modem subsystem may also include a storage element 1332 and an interface circuit 1333.
  • the storage element 1332 is used to store data and programs; however, the program used to execute the method performed by the terminal device in the above method may instead be stored in a memory outside the modem subsystem and loaded by the modem subsystem when used.
  • the interface circuit 1333 is used to communicate with other subsystems.
  • the modem subsystem can be implemented by a chip; the chip includes at least one processing element and an interface circuit, where the processing element is used to execute each step of any method executed by the above terminal device, and the interface circuit is used to communicate with other devices.
  • the units by which the terminal device implements the steps in the above method can be implemented in the form of a processing element scheduling a program.
  • for example, the apparatus for the terminal device includes a processing element and a storage element, and the processing element calls the program stored in the storage element to perform the method performed by the terminal device in the above method embodiment.
  • the storage element may be on the same chip as the processing element, that is, an on-chip storage element.
  • the program used to execute the method performed by the terminal device in the above method may be in a storage element on a different chip from the processing element, that is, an off-chip storage element.
  • the processing element calls or loads a program from the off-chip storage element onto the on-chip storage element, so as to call and execute the method performed by the terminal device in the above method embodiment.
  • the units of the terminal device that implement the steps in the above method may be configured as one or more processing elements arranged on the modem subsystem, where the processing elements may be integrated circuits, for example: one or more ASICs, one or more DSPs, one or more FPGAs, or a combination of these types of integrated circuits. These integrated circuits can be integrated together to form a chip.
  • the units of the terminal device that implement each step in the above method can be integrated together and implemented in the form of an SOC, and the SOC chip is used to implement the above method.
  • the chip can integrate at least one processing element and a storage element, and the processing element can call the program stored in the storage element to implement the method executed by the above terminal device; or the chip can integrate at least one integrated circuit to implement the method executed by the above terminal device; or the above implementations can be combined.
  • the functions of some units are implemented in the form of calling programs by processing elements, and the functions of some units are implemented in the form of integrated circuits.
  • the above apparatus for terminal equipment may include at least one processing element and an interface circuit, wherein at least one processing element is used to execute any of the methods performed by the terminal equipment provided in the above method embodiments.
  • the processing element can execute part or all of the steps executed by the terminal device in a first way, namely by calling the program stored in the storage element, or in a second way, namely through an integrated logic circuit of hardware in the processor element combined with instructions; of course, part or all of the steps executed by the terminal device can also be executed by combining the first way and the second way.
  • the processing element here is the same as that described above, and can be implemented by a processor, and the function of the processing element can be the same as the function of the processing unit described in FIG. 10.
  • the processing element may be a general-purpose processor, such as a CPU, or one or more integrated circuits configured to implement the above methods, for example: one or more ASICs, one or more microprocessors (DSPs), one or more FPGAs, etc., or a combination of at least two of these integrated circuit forms.
  • the storage element may be realized by a memory, and the function of the storage element may be the same as the function of the storage unit described in FIG. 10.
  • the storage element can be a single memory or a collective term for multiple memories.
  • the terminal device shown in FIG. 13 can implement various processes related to the terminal device in the above illustrated method embodiment.
  • the operations and/or functions of the various modules in the terminal device shown in FIG. 13 are used to implement the corresponding processes in the foregoing method embodiments.
  • “system” and “network” in the embodiments of this application can be used interchangeably.
  • “at least one” means one or more, and “plurality” means two or more.
  • “and/or” describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, or B alone, where A and B may be singular or plural.
  • the character “/” generally indicates that the associated objects before and after are in an “or” relationship.
  • “at least one of the following items” or similar expressions refers to any combination of these items, including a single item or any combination of multiple items.
  • At least one of A, B, and C includes A, B, C, AB, AC, BC, or ABC.
  • the ordinal numbers such as “first” and “second” mentioned in the embodiments of this application are used to distinguish multiple objects, and are not used to limit the order, timing, priority, or importance of the multiple objects.
  • this application can be provided as methods, systems, or computer program products. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • these computer program instructions can also be stored in a computer-readable memory that can direct a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device, and the instruction device implements the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.
  • these computer program instructions can also be loaded onto a computer or other programmable data processing equipment, so that a series of operation steps are executed on the computer or other programmable equipment to produce computer-implemented processing, and the instructions executed on the computer or other programmable equipment thus provide steps for implementing the functions specified in one or more processes in the flowchart and/or one or more blocks in the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Mobile Radio Communication Systems (AREA)

Abstract

The present invention, which belongs to the technical field of communications, relates to a communication method and apparatus. The method comprises: receiving, by a network device, a first video frame, the first video frame comprising at least one base layer data packet and at least one enhancement layer data packet; and determining, by the network device, the last base layer data packet in the first video frame. According to this solution, by determining the last base layer data packet in the first video frame, the network device can perform packet grouping after receiving the last base layer data packet in the first video frame so as to obtain a transport block; for example, all the data packets in the first video frame can be grouped into one transport block, which avoids obtaining multiple small transport blocks during grouping and further improves the transmission efficiency of the air interface.
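The packet-grouping effect described in the abstract can be sketched as follows (a simplified model; the `is_last_base` flag is an illustrative stand-in for however the last base layer packet is actually identified):

```python
def build_transport_blocks(packets):
    """Buffer the data packets of a video frame and emit them as one transport
    block once the frame's last base layer packet has been seen, instead of
    emitting many small transport blocks as packets trickle in."""
    buffer, blocks = [], []
    for pkt in packets:
        buffer.append(pkt)
        if pkt.get("is_last_base"):    # last base layer packet of the frame
            blocks.append(buffer)      # one transport block for the buffered packets
            buffer = []
    if buffer:
        blocks.append(buffer)          # flush any trailing packets
    return blocks
```

Grouping at this boundary is what lets the air interface carry one large transport block per frame rather than many small ones.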
PCT/CN2021/086210 2020-04-30 2021-04-09 Communication method and apparatus WO2021218593A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010368237.0A 2020-04-30 2020-04-30 Communication method and apparatus
CN202010368237.0 2020-04-30

Publications (1)

Publication Number Publication Date
WO2021218593A1 true WO2021218593A1 (fr) 2021-11-04

Family

ID=78237689

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/086210 WO2021218593A1 (fr) 2020-04-30 2021-04-09 Procédé et appareil de communication

Country Status (2)

Country Link
CN (1) CN113596915A (fr)
WO (1) WO2021218593A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117014955A (zh) * 2022-04-27 2023-11-07 Vivo Mobile Communication Co., Ltd. Data processing method and apparatus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105556981A (zh) * 2013-07-23 2016-05-04 Canon Inc. Method, apparatus, and computer program for encapsulating partitioned timed media data using generic signaling for coding dependencies
US20160261877A1 (en) * 2015-03-04 2016-09-08 Qualcomm Incorporated Signaling output indications in codec-hybrid multi-layer video coding
CN106464876A (zh) * 2014-06-20 2017-02-22 Qualcomm Incorporated Improved video coding using end of sequence network abstraction layer units
WO2018064175A1 (fr) * 2016-09-29 2018-04-05 Cisco Technology, Inc. Placement de paquets pour des schémas de codage vidéo échelonnable


Also Published As

Publication number Publication date
CN113596915A (zh) 2021-11-02

Similar Documents

Publication Publication Date Title
WO2021259112A1 Service transmission apparatus and method
WO2021244218A1 Communication method and apparatus
WO2021219098A1 Communication method and apparatus
US20220303825A1 (en) Data transmission method and apparatus
US20230354334A1 (en) Communication method and apparatus
US20240236765A1 (en) Communication method and apparatus
WO2021249039A1 Communication method, apparatus and system
US20230050923A1 (en) Media packet transmission method, apparatus, and system
US20240031870A1 (en) Media data transmission method and communication apparatus
US20240314637A1 (en) Data transmission method and communication apparatus
WO2021218593A1 Communication method and apparatus
WO2021254238A1 Communication method and apparatus
CN112135329B Parameter transmission method, apparatus and system
US11064503B2 (en) Method and apparatus for transmitting control information
WO2024055871A1 Data transmission method in communication system, and communication apparatus
WO2024067374A1 Communication method and apparatus
WO2023109743A1 Data transmission method and communication apparatus
WO2023070392A1 Data transmission method, device, and storage medium
WO2023185608A1 Data transmission method and communication apparatus
EP4391638A1 Data transmission method and apparatus
WO2024207772A1 Communication method and terminal device
WO2024031407A1 (fr) Procédé, dispositif, et système de transmission et de réception de données à base de sensibilité dans des réseaux sans fil
CN117082566A Data transmission method and apparatus, and communication device
CN116321475A Method for transmitting data and communication apparatus
TW202431883A Communication method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21796614

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21796614

Country of ref document: EP

Kind code of ref document: A1