WO2021244218A1 - Communication method and apparatus - Google Patents

Communication method and apparatus

Info

Publication number
WO2021244218A1
Authority
WO
WIPO (PCT)
Prior art keywords
data packet
data
data packets
video frame
communication device
Prior art date
Application number
PCT/CN2021/092358
Other languages
English (en)
French (fr)
Inventor
Huang Qufang (黄曲芳)
Zeng Qinghai (曾清海)
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2021244218A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00: Network traffic management; Network resource management
    • H04W 28/02: Traffic management, e.g. flow control or congestion control
    • H04W 28/06: Optimizing the usage of the radio link, e.g. header compression, information sizing, discarding information
    • H04W 28/065: Optimizing the usage of the radio link using assembly or disassembly of packets

Definitions

  • the embodiments of the present application relate to the field of communication technologies, and in particular, to a communication method and device.
  • The reference frame, also called the I frame, has the largest compressed size; the compressed sizes of the P frames and B frames are smaller.
  • TCP/IP: transmission control protocol / internet protocol.
  • For example, a typical I frame is divided into 64 IP packets.
  • At the receiver's application layer, the time taken to decode each video frame is subject to a limit; for example, the time required for the receiver's application layer to decode each video frame cannot exceed the extended delay budget. How the receiver should process multiple data packets of the same video frame is therefore a technical problem to be solved.
  • The embodiments of the present application provide a communication method and device to solve the technical problem of how the receiver processes multiple data packets of the same video frame.
  • a communication method includes: a communication device obtains an extended delay budget; the communication device receives a data packet of a first service; and the communication device processes the data packet according to the extended delay budget.
  • the extended delay budget can also be called spread delay budget.
  • The extended delay budget can be pre-configured or stipulated by a protocol; this is not limited. If it is pre-configured, the communication device may receive first information that directly indicates the size of the extended delay budget, or indirectly indicates it, for example by indicating a service type or a decoding type, where the service type and the decoding type may have a correspondence with the extended delay budget. The communication device may determine the extended delay budget according to the first information.
  • the communication device may be an access network device.
  • the solution in the first aspect can be applied to the downlink video transmission process.
  • The access network device processes the data packets according to the extended delay budget. This includes: the access network device determines, according to the extended delay budget and the data volume of the N data packets, the timing for transmitting the N data packets to the terminal device, and sends the N data packets to the terminal device at that timing, thereby preventing the terminal device's delay in receiving or processing the N data packets from exceeding the extended delay budget and improving the decoding success rate of the terminal device.
  • The solution in the first aspect can also be applied to the uplink video transmission process: the access network device can determine the transmission timing of the N data packets according to the extended delay budget and the data volume of the N data packets, and send scheduling information to the terminal device to schedule it to transmit the N data packets to the network device at that timing, thereby reducing the probability that the delay of receiving or processing the N data packets exceeds the extended delay budget and improving the decoding success rate of the video server.
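As an illustration of the timing calculation described above, the following sketch picks the latest time a scheduler could start transmitting so that all N packets of one frame still fit inside the extended delay budget. The function and parameter names (`link_rate_bps`, `first_packet_arrival`, and so on) are illustrative assumptions, not an API from the patent:

```python
def latest_start_time(first_packet_arrival: float,
                      total_bytes: int,
                      link_rate_bps: float,
                      extended_delay_budget: float) -> float:
    """Latest time (seconds) transmission of the frame may begin so that
    the last packet is delivered no later than the extended delay budget
    after the first packet arrived."""
    # Time needed to send the combined data volume of all N packets.
    airtime = total_bytes * 8 / link_rate_bps
    return first_packet_arrival + extended_delay_budget - airtime
```

For example, a 125 kB frame on a 10 Mbit/s link needs 0.1 s of airtime, so with a 0.2 s budget the transmission must start within 0.1 s of the first packet's arrival.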
  • the communication device may be a terminal device.
  • The access layer of the terminal device can deliver the N data packets to the upper layer according to the extended delay budget, thereby reducing the probability that the upper-layer decoder's delay in processing the N data packets exceeds the extended delay budget and increasing the probability of successful decoding.
  • the communication device may receive N data packets belonging to the same video frame in the first service.
  • N may be less than or equal to the number of data packets included in a video frame.
  • the value of N may be a positive integer less than or equal to 64.
  • The first of the N data packets may carry a frame start identifier; alternatively, the communication device may receive an independent frame start identifier together with the first data packet, in which case the order of the frame start identifier and the first data packet is not limited.
  • Alternatively, the data packet preceding the first data packet may carry an end-of-frame identifier, or the communication device may receive an independent end-of-frame identifier together with the last data packet of the previous frame.
  • If the communication device does not receive any data packet within a period of time T, the data packet received after that gap may be regarded as the first data packet, and so on.
  • The value of T may be pre-configured, stipulated by a protocol, or determined by the communication device itself.
  • the communication device can continue to receive other data packets of N data packets.
  • the way of receiving other data packets can be realized through a preset duration, or through a preset number or a preset data amount, etc., and is not limited.
  • the end packet of the N data packets may carry the frame end identifier.
  • The communication device may determine that the data packets between one frame end identifier and the packet corresponding to the next frame end identifier form the N data packets.
  • N data packets may carry the same indication information.
  • the communication device can determine that the data packets carrying the same indication information are N data packets, etc., which is not limited.
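Of the grouping options above, the "same indication information" variant can be sketched as follows; the `(frame_id, payload)` packet shape is an illustrative assumption, not a format defined by the patent:

```python
from collections import defaultdict

def group_by_frame(packets):
    """Group received packets into video frames by the indication
    information (here a frame identifier) carried in each packet.

    packets: iterable of (frame_id, payload) tuples.
    Returns a dict mapping frame_id -> list of payloads in arrival order.
    """
    frames = defaultdict(list)
    for frame_id, payload in packets:
        frames[frame_id].append(payload)
    return dict(frames)
```

Packets carrying the same indication information end up in the same group, which the receiver can then treat as the N data packets of one frame.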
  • When the communication device is an access network device, the access network device can arrange data scheduling according to the extended delay budget. In that case, after the access network device receives the tail packet of a video frame, it needs to calculate the total data volume of all data packets in the video frame.
  • The first packet of each video frame, that is, the first of the N data packets, may carry indication information of the size of the N data packets. In this way, when the access network device receives the first packet, it can determine the size of the N data packets and start looking for an opportunity to transmit them, thereby improving the transmission efficiency of the video frame.
  • the above-mentioned "indication information of the size of N data packets" can also be replaced with "indication information of the average data packet size in the N data packets".
  • In another aspect, a communication method includes: an access layer receives a first data packet, where the first data packet is the first data packet of a video frame; the access layer receives other data packets belonging to the same video frame as the first data packet; and the access layer delivers N data packets to the upper layer when a preset time expires or when all N data packets belonging to the same video frame have been received, where the N data packets include the first data packet and the other data packets.
  • The aforementioned preset time may be pre-configured or stipulated by a protocol.
  • the above solution can be implemented through a timer.
  • the access layer may receive the instruction information sent by the access network device, the core network element or the video data source, and determine the preset time according to the instruction information.
  • the indication information may indicate the specific preset time size.
  • the indication information may indicate an extended delay budget. According to the extended delay budget, a preset time can be determined, and the preset time can be less than or equal to the extended delay budget, etc.
  • If the access layer delivers every data packet to the upper layer as soon as it is received, the time interval between the upper layer receiving the first packet and the last packet of a video frame may exceed the extended delay budget, and the upper-layer decoder may fail to decode.
  • Instead of delivering each data packet to the upper layer as soon as it is received, the access layer delivers the data packets continuously received over a period of time (that is, the preset time) to the upper layer together, which can reduce the probability that the extended delay of each video frame at the upper layer exceeds the extended delay budget.
  • Further, the access layer can deliver all the data packets of a video frame to the upper layer together, so that the extended delay of each video frame at the upper layer does not exceed the extended delay budget and the upper-layer decoder decodes successfully.
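A minimal sketch of this access-layer behaviour, assuming a callback-style interface (the class and names are illustrative, not from the patent): packets are buffered and handed to the upper layer as one batch, either when all N have arrived or when the preset time has elapsed since the first packet. For simplicity the elapsed time is checked on each packet arrival rather than with a real timer:

```python
import time

class FrameBuffer:
    def __init__(self, n_packets: int, preset_time: float, deliver):
        self.n = n_packets          # expected packets in the frame (N)
        self.preset = preset_time   # preset time, <= extended delay budget
        self.deliver = deliver      # callback handing a batch to the upper layer
        self.buf = []
        self.t0 = None

    def on_packet(self, pkt, now=None):
        now = time.monotonic() if now is None else now
        if self.t0 is None:
            self.t0 = now           # timing starts at the first packet
        self.buf.append(pkt)
        # Deliver the whole batch when all N packets arrived,
        # or when the preset time has expired.
        if len(self.buf) == self.n or now - self.t0 >= self.preset:
            self.deliver(self.buf)
            self.buf, self.t0 = [], None
```

Because the upper layer receives the frame's packets as one batch, the spread between its first and last packet at the upper layer is bounded by the preset time.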
  • the access layer can uniformly deliver N data packets to the upper layer when all N data packets are received.
  • the N may be less than or equal to the number of data packets included in one video frame. For example, a video frame includes 64 data packets, the value of N may be less than or equal to 64, and so on.
  • the access layer uniformly delivers N data packets to the upper layer.
  • Compared with the way of delivering data packets to the upper layer one by one, this can also reduce the probability that the extended delay of an upper-layer video frame exceeds the extended delay budget limit, and reduce the probability of decoding failure.
  • In this case, the access layer can deliver all data packets of a video frame to the upper layer together, which can ensure that the extended delay of each video frame at the upper layer does not exceed the extended delay budget, so that the upper-layer decoder can successfully decode.
  • When the access layer receives the tail packet of a video frame, it can consider that all data packets of the video frame have been received and deliver them all to the upper layer; otherwise, it does not deliver the data packets to the upper layer.
  • There are several ways for the access layer to determine that a data packet is a tail packet: the tail packet may carry an end-of-frame flag, or it may carry indication information identifying it as the tail data packet, or the end-of-frame flag or tail-packet indication information may be sent separately; this is not limited.
  • The UE may rely on the PDCP sequence number (SN) allocated to each data packet by the PDCP layer of the base station to determine whether all data packets in the video frame have been received.
  • If not all data packets are received, the UE may discard the data packets of the i-th frame and no longer deliver them to the upper layer.
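A sketch of the PDCP-SN completeness check, under the simplifying assumptions that one frame occupies a known contiguous SN range and that SN wrap-around can be ignored (both assumptions are mine, not the patent's):

```python
def frame_complete(received_sns, first_sn: int, last_sn: int) -> bool:
    """True if every PDCP SN in [first_sn, last_sn] was received,
    i.e. all data packets of the frame arrived."""
    return set(range(first_sn, last_sn + 1)) <= set(received_sns)
```

If the check fails, the UE could discard the buffered packets of that frame instead of delivering a partial frame to the upper layer.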
  • a device which includes units or means for performing each step included in the first aspect or the second aspect.
  • a device including a processor and an interface circuit.
  • the processor is configured to communicate with other devices through the interface circuit and execute the method provided in the first or second aspect above.
  • The processor includes one or more processors.
  • a device including a processor, which is connected to a memory and is used to call a program stored in the memory to execute the method provided in the first or second aspect.
  • The memory may be located inside the device or outside the device, and the processor includes one or more processors.
  • a device including at least one processor and at least one memory, and the at least one processor is configured to execute the method provided in the first aspect or the second aspect.
  • A program is provided which, when executed by a processor, performs the method provided in the first aspect or the second aspect.
  • An eighth aspect provides a program product, such as a computer-readable storage medium, including the program of the first aspect or the second aspect described above.
  • a computer-readable storage medium which includes a program, and when the program is executed by a processor, the method provided in the first aspect or the second aspect is executed.
  • the above device may be a chip, and the processor may be realized by hardware or software.
  • When realized by hardware, the processor may be a logic circuit, an integrated circuit, or the like; when realized by software, the processor may be a general-purpose processor realized by reading software code stored in a memory, where the memory may be integrated in the processor or located outside the processor and exist independently.
  • the memory can be integrated with the processor, or the memory and the processor can be provided separately. In the specific implementation process, the memory and the processor can be integrated on the same chip, or can be separately arranged on different chips.
  • the embodiment of the present application does not limit the type of the memory and the arrangement of the memory and the processor.
  • FIG. 1 is a schematic diagram of a video frame provided by an embodiment of the application
  • FIG. 2 is a schematic diagram of different transmission schemes provided by an embodiment of the application.
  • FIG. 3 is a schematic diagram of a network architecture provided by an embodiment of the application.
  • FIG. 8 is a schematic diagram of a receiver protocol stack provided by an embodiment of the application.
  • FIG. 11 and FIG. 12 are schematic diagrams of the structure of a communication device provided by an embodiment of this application.
  • "At least one item (piece)" of the following or a similar expression refers to any combination of these items, including any combination of a single item (piece) or plural items (pieces).
  • For example, at least one of a, b, or c can mean: a, b, c, a and b, a and c, b and c, or a, b, and c, where a, b, and c can each be single or multiple.
  • Words such as "first" and "second" are used to distinguish the same items or similar items with substantially the same function and effect. Those skilled in the art can understand that words such as "first" and "second" do not limit the quantity or order of execution, nor do they require the items to be different.
  • the basic processing is to divide the video into N pictures per second, and each picture is coded as a video frame.
  • Each video frame can be encoded in two aspects, color and brightness, to form digital information.
  • The size of the digital information formed by encoding is quite large, and direct transmission would occupy a lot of bandwidth. Therefore, the video service can be compressed before transmission.
  • video frames can be grouped, the first frame of each group is the reference frame, and the subsequent frames are dependent frames.
  • A decompressor that receives a reference frame can decompress it independently, without other frames; the dependent frames following the reference frame are inter-frame compressed with reference to it, that is, during compression the code stream of such a frame also refers to other frames, such as the reference frame. In this way, the compression rate can be greatly improved and the compressed data size reduced.
  • The size of each compressed video frame will vary greatly.
  • The reference frame, also called the I frame, is the largest in the figure; the dependent frames, namely P frames (relying only on the previous frame when decoding) and B frames (relying not only on the previous frame but also on the following frame when decoding), have smaller sizes.
  • each frame may include one or more IP packets.
  • the video frame transmission scheme of the base station may include the following three transmission schemes.
  • the transmission scheme can be used to transmit I frames, B frames, or P frames.
  • In the first transmission scheme, the air interface load of the gNB is very light, and the gNB quickly transmits the IP packets of the I frame to the terminal device. After receiving them, the terminal device immediately decodes the IP packets of the I frame; both the delay and the extended delay are very small, and neither exceeds its limit.
  • The second transmission scheme corresponds to a situation where the air interface of the gNB is heavily loaded and the gNB has other, more urgent data to transmit, so the IP packets of the I frame are scheduled very late. As a result, the gNB may not have scheduled all the IP packets of the I frame even when the delay budget of the I frame has been exceeded. From the perspective of the gNB, only a part of the IP packets failed to arrive at the receiver on time, and the quality of service (QoS) is still satisfactory. But from the perspective of the receiver's video decoder, the I frame exceeds the delay budget, which not only affects the decoding of the I frame itself but also the decoding of the subsequent P and B frames.
  • In the third transmission scheme, the gNB air interface load is medium.
  • Since the gNB does not know that the dozens of IP packets mentioned above belong to the same I frame, the extended delay of the I frame may become too large. Although all the IP packets of the I frame arrive at the receiver within the delay budget, the extended delay of the I frame exceeds the extended delay budget and does not meet the requirements of the video decoder.
  • The extended delay budget can also be called the spread delay budget; the delay budget and the extended delay budget are different.
  • The delay budget defines the upper limit of the transmission delay of a data packet between the core network element and the terminal device; in other words, it is the time limit from when the core network element receives a data packet until the packet is delivered to the terminal device.
  • the core network element is a core network element used for user plane control, such as a user plane function (UPF) network element.
  • The extended delay budget may refer to the upper limit of the time interval from when the video decoder receives the first packet of a video frame to when it receives the last packet.
  • If all the IP packets of the I frame have not been collected when the extended delay budget is reached, the video decoder will discard the previously received IP packets, causing the decoding of the I frame to fail.
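The distinction between the two limits can be restated as two separate checks: the delay budget bounds a packet's transit time between the core network element and the terminal device, while the extended delay budget bounds the spread between a frame's first and last packet at the decoder. The function and variable names below are illustrative assumptions, not terms from the patent:

```python
def within_delay_budget(received_at_core: float,
                        delivered_at_terminal: float,
                        delay_budget: float) -> bool:
    """Per-packet check: transit time from core network element
    to terminal device must not exceed the delay budget."""
    return delivered_at_terminal - received_at_core <= delay_budget

def within_extended_budget(first_pkt_at: float,
                           last_pkt_at: float,
                           extended_delay_budget: float) -> bool:
    """Per-frame check: spread between the decoder receiving the
    frame's first and last packet must not exceed the extended
    delay budget."""
    return last_pkt_at - first_pkt_at <= extended_delay_budget
```

A frame can pass the first check for every packet and still fail the second, which is exactly the third transmission scheme described above.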
  • The method may be: a communication device obtains an extended delay budget; the communication device receives a data packet of a first service; and the communication device processes the received data packet according to the extended delay budget.
  • The above-mentioned communication device may be an access network device. The access network device can allocate a reasonable transmission opportunity according to the extended delay budget of the I frame and send the I-frame data packets received from the core network to the terminal device, thereby reducing the probability that the I-frame data packets exceed the extended delay budget when they reach the access layer of the terminal device, which solves the technical problem of I-frame decoding failure.
  • a network architecture including: an access network and a core network.
  • the access network is used to implement functions related to wireless access
  • the access network device is a device that provides access for terminal devices.
  • Access network equipment includes radio access network (RAN) equipment and/or access network (AN) equipment.
  • the RAN device may be an access network device defined in 3GPP.
  • the AN device may be an access network device defined by non-3GPP (non-3GPP).
  • the RAN equipment is mainly responsible for radio resource management, quality of service (QoS) management, data compression, and security processing on the air interface side.
  • the RAN equipment may include various forms of base stations. For example, a macro base station, a micro base station (small station), a relay station, or an access point, etc.
  • RAN equipment includes, but is not limited to: next-generation base stations (gNB) in 5G, evolved node B (eNB), radio network controller (RNC), node B (NB), base station controller (BSC), base transceiver station (BTS), home base station (for example, home evolved node B or home node B, HNB), baseband unit (BBU), transmitting and receiving point (TRP), transmitting point (TP), or mobile switching center, etc.
  • The RAN device may also be a wireless controller, a centralized unit (CU), and/or a distributed unit (DU) in a cloud radio access network (CRAN) scenario; or the RAN device may be a relay station, an access point, an in-vehicle device, a terminal device, a wearable device, an access network device in a future 6G network, or an access network device in a future evolved public land mobile network (PLMN).
  • AN equipment is used to enable non-3GPP technology to be used for interconnection and intercommunication between terminal equipment and the 3GPP core network.
  • The non-3GPP technologies include but are not limited to: wireless fidelity (WiFi), worldwide interoperability for microwave access (WiMAX), code division multiple access (CDMA) network technologies, etc.
  • the core network equipment is mainly used to manage the terminal equipment and provide a gateway for communication with the external network.
  • The core network equipment may be, for example, core network elements in different network standards, and may include one or more of the following network elements: access and mobility management function (AMF), session management function (SMF), user plane function (UPF), policy control function (PCF), application function (AF), unified data management (UDM), authentication server function (AUSF), and network slice selection function (NSSF) network elements.
  • The AMF network element is mainly responsible for mobility management in the mobile network, such as user location update, user registration with the network, and user handover.
  • The SMF network element is mainly responsible for session management in the mobile network, such as session establishment, modification, and release. Specific functions include assigning IP addresses to users and selecting UPF network elements that provide the message forwarding function.
  • The UPF network element is mainly responsible for forwarding and receiving user data.
  • In downlink transmission, the UPF network element can receive user data from the data network (DN) and transmit it to the terminal device through the access network device; in uplink transmission, the UPF network element can receive user data from the terminal device through the access network device and forward it to the DN.
  • the transmission resources and scheduling functions in the UPF network element that provide services for the terminal device can be managed and controlled by the SMF network element.
  • The PCF network element mainly supports providing a unified policy framework to control network behavior, provides policy rules to control-plane network functions, and is responsible for obtaining user subscription information related to policy decisions.
  • The AF network element mainly supports interaction with the 3GPP core network to provide services, such as influencing data routing decisions, policy control functions, or providing third-party services to the network side.
  • UDM network elements are mainly used to generate authentication credentials, process user identification (such as storage and management of user permanent identities), and perform access authorization control and subscription data management.
  • The AUSF network element is mainly used to perform authentication when the terminal device accesses the network, including receiving authentication requests sent by the security anchor function (SEAF), selecting the authentication method, and requesting authentication vectors from the authentication repository and processing function (ARPF).
  • NSSF network elements are mainly used to select network slice instances for terminal devices, determine allowed network slice selection assistance information (NSSAI), configure NSSAI, and determine the AMF set serving the terminal device.
  • the network architecture shown in FIG. 3 may further include: terminal equipment.
  • a terminal device can be referred to as a terminal for short. It is a device with a wireless transceiver function.
  • The terminal device can be deployed on land, including indoor, outdoor, handheld, or vehicle-mounted; on water (for example, on ships); or in the air (for example, on airplanes, balloons, or satellites).
  • The terminal device may be a mobile phone, a tablet computer (pad), a computer with a wireless transceiver function, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal device in industrial control, a wireless terminal device in self-driving, a wireless terminal device in remote medical, a wireless terminal device in smart grid, a wireless terminal device in transportation safety, a wireless terminal device in a smart city, or a wireless terminal device in a smart home, etc.
  • The terminal device can also be a cellular phone, a cordless phone, a session initiation protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA), a handheld device with a wireless communication function, a computing device or other processing device connected to a wireless modem, an in-vehicle device, a wearable device, a terminal device in a future 5th generation (5G) network, or terminal equipment in a future evolved public land mobile network (PLMN), etc.
  • Terminal equipment can sometimes be called user equipment (UE), access terminal equipment, vehicle-mounted terminal equipment, industrial control terminal equipment, UE unit, UE station, mobile station, remote station, remote terminal equipment, mobile equipment, wireless communication equipment, UE agent, or UE device, etc.
  • the terminal device can also be fixed or mobile.
  • the embodiments of the present application are not limited thereto.
  • the terminal device may be a wearable device.
  • Wearable devices can also be called wearable smart devices; the term covers devices developed by applying wearable technology to the intelligent design of daily wear, such as glasses, gloves, watches, clothing, and shoes.
  • a wearable device is a portable device that is directly worn on the body or integrated into the user's clothes or accessories.
  • a wearable device is not only a hardware device, but also a device that achieves powerful functions through software support, data interaction, and cloud interaction.
  • Wearable smart devices include devices that are full-featured and large-sized and can realize complete or partial functions without relying on a smart phone, such as smart watches or smart glasses, as well as devices that focus on only a certain type of application function and need to work with other devices such as smart phones.
  • the terminal device can be a terminal in the Internet of Things (IoT) system. IoT is an important part of the development of information technology in the future.
  • the terminal device in this application may be a terminal device in machine type communication (MTC).
  • The terminal device of the present application may be an in-vehicle module, in-vehicle component, in-vehicle chip, or in-vehicle unit that is built into a vehicle as one or more components or units; through such a built-in module, component, chip, or unit, the vehicle can implement the method of the present application. Therefore, the embodiments of the present application can be applied to the Internet of Vehicles, such as vehicle to everything (V2X), long term evolution vehicle (LTE-V), and vehicle to vehicle (V2V).
  • the DN can be a service network that provides users with data service services.
  • the DN may be an IP multi-media service (IP multi-media service) network or the Internet (Internet), etc.
  • the terminal device can establish a protocol data unit (protocol data unit, PDU) session from the terminal device to the DN to access the DN.
  • the core network equipment may include one or more of the following network elements: a mobility management entity (mobility management entity, MME), a serving gateway (serving gateway, S-GW), and so on.
  • the core network element in FIG. 3 is only for schematic illustration, and is not intended to limit the embodiment of the present application.
  • The core network elements may also include one or more of the following network elements: a network exposure function (NEF), a network repository function (NRF), or a service control point (SCP), etc.
  • an embodiment of the present application provides a flow chart of a communication method.
  • the flow includes but is not limited to:
  • Step 401 The communication device obtains an extended delay budget.
  • the extended delay budget may adopt any of the following descriptions:
  • the extended delay budget is used by the communication device to limit the processing time of all data packets of a video frame.
  • the extended delay budget refers to a time limit for the communication device to process all data packets of a video frame, for example, the maximum processing delay, tolerance time, or maximum decoding duration of all data packets of a video frame.
  • a video frame can usually be split into multiple data packets.
  • The video decoder of the communication device can use the extended delay budget to limit the processing time of all data packets of a video frame. For example, the decoder starts timing when it receives the first data packet of the video frame, or when it starts processing the first data packet.
  • If the extended delay budget expires before the decoder has received all the data packets, decoding of the video frame fails and the decoder can discard the data packets already received; if all data packets are received or decoded successfully within the extended delay budget, the video frame is decoded successfully.
  • Optionally, "received" or "decoded successfully" above can be replaced with "delivered to the upper layer".
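  • As an illustration of the decoding rule above, the following sketch (names such as `FrameDecoder` and the millisecond-based interface are assumptions for illustration, not part of this application) starts timing at the first data packet of a frame and discards the received data packets once the extended delay budget expires:

```python
# Illustrative sketch of the extended-delay-budget rule: a decoder starts
# timing at the first packet of a frame and discards the frame if not all
# packets arrive within the budget. Names and interface are hypothetical.

class FrameDecoder:
    def __init__(self, extended_delay_budget_ms, packets_per_frame):
        self.budget = extended_delay_budget_ms
        self.n = packets_per_frame
        self.start_time = None
        self.received = []

    def on_packet(self, packet, now_ms):
        """Returns 'decoded', 'dropped', or 'waiting'."""
        if self.start_time is None:
            self.start_time = now_ms          # timing starts at the first packet
        if now_ms - self.start_time > self.budget:
            self.received.clear()             # budget expired: discard the frame
            self.start_time = None
            return "dropped"
        self.received.append(packet)
        if len(self.received) == self.n:      # all packets arrived in time
            self.received.clear()
            self.start_time = None
            return "decoded"
        return "waiting"
```
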
  • The aforementioned extended delay budget may be pre-configured, or stipulated by a protocol, etc., which is not limited. If it is pre-configured, the communication device may receive first information and determine the extended delay budget according to the first information.
  • the first information may directly indicate the size of the extended delay budget.
  • the first information may indirectly indicate the size of the extended delay budget.
  • the above-mentioned first information may indicate the service type, and the communication device determines the extended delay budget according to the corresponding relationship between the service type and the extended delay budget.
  • the extended delay budget corresponding to animation video can be 20ms
  • the extended delay budget corresponding to landscape video can be 15ms
  • the extended delay budget corresponding to action movie video can be 10ms.
  • the foregoing first information may indicate the decoding type of the video frame, and different decoding types may correspond to different extended delay budgets.
  • the communication device may determine the aforementioned extended delay budget and the like according to the corresponding relationship between the decoding type of the video frame and the extended delay budget, which is not limited.
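  • The indirect indication via service type can be illustrated by a simple lookup. The table values below are the example budgets given above (20 ms for animation video, 15 ms for landscape video, 10 ms for action movie video); the field name `service_type` is a hypothetical stand-in for the content of the first information:

```python
# Hypothetical lookup illustrating indirect indication: the first information
# carries a service type, and the communication device maps it to an extended
# delay budget via a pre-configured correspondence (values from the text).

BUDGET_BY_SERVICE_TYPE_MS = {
    "animation": 20,   # animation video
    "landscape": 15,   # landscape video
    "action": 10,      # action movie video
}

def extended_delay_budget(first_information):
    # first_information indirectly indicates the budget via the service type
    return BUDGET_BY_SERVICE_TYPE_MS[first_information["service_type"]]
```
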
  • the above-mentioned communication device may be an access network device, and the access network device may receive the above-mentioned first information from a core network element.
  • the SMF network element may send the first information to the access network device through the AMF network element.
  • the foregoing communication device may be a terminal device, and the terminal device may receive the foregoing first information from an access network device.
  • the access network device may first obtain the first information from the core network element, and then transparently transmit it to the terminal device.
  • The first information obtained by the above-mentioned access network device may be directly indicated or indirectly indicated; the first information transmitted to the terminal device may also be directly indicated or indirectly indicated, which is not limited.
  • the first information may be carried in an application layer data packet, and the application layer data packet is sent by the access network device to the terminal device.
  • The terminal device can obtain the first information by parsing the application layer data packet. The first information may be provided by the application layer to the access layer, with the access layer transparently transmitting it to the terminal device without parsing it.
  • Alternatively, the first information may be carried as an access layer control information element and sent by the access network device to the terminal device.
  • the pre-configuration in this embodiment and the following embodiments refers to pre-configuration to the communication device.
  • The embodiment of the present application does not impose any limitation on the message carrying the configured content.
  • Step 402 The communication device receives the data packet of the first service.
  • the communication device may receive N data packets of the same video frame, and the N data packets may be all or part of the data packets of the same video frame.
  • the value of N can be a positive integer less than or equal to 64.
  • the N data packets of the same video frame may also be referred to as a cluster of data.
  • the data packet can also be referred to as an IP packet, and the two are not distinguished.
  • the communication device may determine that the received first data packet is the first data packet of the video frame in the following manner.
  • the first data packet carries the frame start identifier, or the communication device may receive an independent frame start identifier and the first data packet, and the sequence of the frame start identifier and the first data packet is not limited.
  • the previous data packet of the first data packet carries an end-of-frame identifier, or the communication device may receive an independent end-of-frame identifier and the last data packet of the previous frame.
  • the communication device does not receive any data packet within a period of time T, and the data packet received after that may be regarded as the first data packet and so on.
  • the value of T may be pre-configured, or stipulated by a protocol, or confirmed by the communication device itself.
  • the frame start identifier may also be referred to as the indication information of the first data packet, or the first indication information, etc.
  • the frame end identifier can also be referred to as the indication information of the tail data packet, or the second indication information, etc., and is not limited.
  • the communication device can also receive other data packets in the video frame.
  • the communication device may receive a first data packet, which is called a data packet P1.
  • the data packet P1 is the first data packet among N data packets, and the data packet P1 may carry useful data.
  • the other data packets received by the communication device within the preset time and the data packet P1 belong to the same video frame.
  • the above functions can be implemented through a timer.
  • the communication device can start the first timer.
  • the other data packets received during the operation of the first timer may be the remaining data packets among the N data packets.
  • the timer may be specified by the protocol, or pre-configured, and is not limited.
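  • The first-timer grouping described above can be sketched as follows; the function name and the representation of packets as (arrival time, payload) pairs are assumptions for illustration:

```python
# Sketch of the timer-based grouping rule: packets that arrive while the
# first timer (started at the frame's first packet) is running are treated
# as belonging to the same video frame. Names and structure are illustrative.

def group_by_timer(packets, timer_ms):
    """packets: list of (arrival_time_ms, payload), sorted by arrival time.
    A new frame starts at the first packet and whenever the timer started
    at the current frame's first packet has expired."""
    frames, current, frame_start = [], [], None
    for t, payload in packets:
        if frame_start is None or t - frame_start > timer_ms:
            if current:
                frames.append(current)
            current, frame_start = [], t   # this packet starts a new frame
        current.append(payload)
    if current:
        frames.append(current)
    return frames
```
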
  • both the aforementioned data packet P1 and the first indication information can be sent independently, and the order of the two is not limited.
  • the communication device receives the first data packet, which is called data packet P1.
  • The data packet P1 is the first data packet among the N data packets, and the data packet P1 may carry first indication information, where the first indication information is used to indicate that the data packet P1 is the first data packet among the N data packets.
  • The preset number of other data packets received by the communication device belong to the same video frame as the data packet P1. The preset number may be specified in the protocol; or pre-configured, for example, configured using application layer configuration information or access layer information; or carried in the first data packet. For example, the first data packet includes information indicating the preset number and the first indication information as independent information elements, or the first indication information can indicate both the preset number and the first packet of the video frame.
  • the communication device may receive a preset number of other data packets, and consider that the other data packets and the data packet P1 belong to the same video frame.
  • The preset number that is pre-configured or specified in the protocol may be N.
  • According to the above preset number, the communication device may continue to receive the remaining N-1 data packets, and so on.
  • The first indication information and the data packet P1 can also be sent independently of each other, and the order of the two is not limited.
  • The information indicating the preset number can be carried in the same message as the first indication information, or the first indication information may indicate both the preset number and the first packet of the video frame.
  • The above preset number may be the total number of data packets of one video frame including the first packet, or may be the number of data packets of one video frame excluding the first packet.
  • the communication device receives the first data packet, which is called data packet P1.
  • The data packet P1 is the first data packet among the N data packets, and the data packet P1 may carry first indication information.
  • The first indication information is used to indicate that the data packet P1 is the first data packet among the N data packets.
  • The other data packets of the preset data volume received by the communication device belong to the same video frame as the data packet P1. The preset data volume may be specified by the protocol; or pre-configured, for example, configured using application layer configuration information or access layer information; or carried in the first data packet.
  • For example, the first data packet includes information indicating the preset data volume and the first indication information as independent information elements, or the first indication information can indicate both the preset data volume and the first packet of the video frame. The unit of the preset data volume may be bytes or the like.
  • The communication device may receive other data packets amounting to the preset data volume and consider that these data packets belong to the same video frame as the data packet P1.
  • For example, the data volume of a video frame is 1500 bytes, and the above 1500 bytes can be pre-configured to the communication device.
  • When the communication device receives the first data packet of a video frame, it can determine the data volume of that data packet; after the accumulated data volume of received data packets reaches 1500 bytes, the communication device can continue to receive the data packets of the next video frame.
  • The first indication information and the data packet P1 can also be sent independently of each other, and the order of the two is not limited.
  • The information indicating the preset data volume can be carried in the same message as the first indication information, or the first indication information can indicate both the preset data volume and the first packet of the video frame.
  • the above preset data amount may be the total data amount of one video frame including the first packet, or may be the remaining data amount of one video frame excluding the data amount of the first packet.
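  • The preset-data-volume grouping described above can be sketched as follows, assuming the preset data volume is the total of one video frame including the first packet (names are illustrative, not part of this application):

```python
# Illustrative grouping by a preset data volume (in bytes, including the
# first packet): a frame is complete once the accumulated size of received
# packets reaches the pre-configured total, e.g. 1500 bytes in the text.

def group_by_data_volume(packet_sizes, frame_bytes):
    """packet_sizes: sizes of packets in arrival order.
    Returns a list of frames, each a list of packet sizes."""
    frames, current, acc = [], [], 0
    for size in packet_sizes:
        current.append(size)
        acc += size
        if acc >= frame_bytes:        # preset data volume reached
            frames.append(current)
            current, acc = [], 0      # next packet starts the next frame
    if current:
        frames.append(current)
    return frames
```
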
  • the communication device receives the first data packet, which is called data packet P1.
  • the data packet P1 is the first data packet among N data packets, and the data packet P1 may carry the first indication information.
  • the first indication information is used to indicate that the data packet P1 is the first data packet among the N data packets.
  • The other data packets received by the communication device between the data packet P1 and the data packet P2 (also called the second data packet) belong to the same video frame as the data packet P1, and the data packet P2 is the first data packet of the next video frame.
  • the communication device can distinguish different video frames by indicating the first data packet of each video frame.
  • the first indication information and the data packet P1 can also be sent independently of each other, and the order of the two is not limited.
  • The communication device receives a data packet Pn (which may be referred to as a third data packet). The data packet Pn is the last data packet among the N data packets and carries second indication information.
  • The second indication information is used to indicate that the data packet Pn is the last data packet among the N data packets, that is, the tail data packet.
  • The other data packets received by the communication device between the tail data packet of the previous video frame and the tail data packet of the current video frame (i.e., the data packet Pn) belong to the same video frame as the data packet Pn.
  • the communication device can distinguish different video frames by indicating the tail data packet of each video frame.
  • the communication device may receive a data packet Pm, and the data packet Pm may carry indication information of a tail data packet. After that, the communication device receives other data packets that do not carry tail data packet indication information. After that, the communication device can receive the data packet Pn, and the data packet Pn carries the indication information of the tail data packet.
  • the terminal device can determine that other data packets received during the period of the aforementioned data packet Pm and the aforementioned data packet Pn belong to the same video frame as the aforementioned data packet Pn.
  • the second indication information and the data packet Pn can also be sent separately, and the order of the two is not limited.
  • The communication device receives N data packets of the same frame, and each of the N data packets carries third indication information of the corresponding video frame; data packets carrying the same third indication information belong to the same video frame.
  • For example, if the communication device receives N1 data packets, and each of the N1 data packets carries indication information 1, the communication device determines from indication information 1 that the N1 data packets belong to the same video frame. After that, the communication device receives N2 data packets, each of which carries indication information 0, and determines from indication information 0 that the N2 data packets belong to the same video frame. By analogy, the communication device then receives N3 data packets, each of which carries indication information 1, and so on.
  • The above first indication information is used to indicate the first data packet of a video frame; the second indication information is used to indicate the last data packet of a video frame, that is, the tail data packet; the third indication information can be understood as an indication of the same video frame, that is, data packets carrying the same third indication information belong to the same video frame.
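  • The third-indication grouping (for example, alternating indication values 1 and 0 between adjacent frames, as in the example above) can be sketched as follows; the function name and packet representation are illustrative:

```python
# Sketch of grouping by third indication information: consecutive packets
# carrying the same frame indication (e.g. alternating 1/0 between frames)
# belong to the same video frame; a change in the value starts a new frame.

def group_by_indication(tagged_packets):
    """tagged_packets: list of (indication, payload) pairs in arrival order."""
    frames, current, last = [], [], None
    for indication, payload in tagged_packets:
        if last is not None and indication != last:
            frames.append(current)     # indication changed: new frame begins
            current = []
        current.append(payload)
        last = indication
    if current:
        frames.append(current)
    return frames
```
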
  • The above first indication information and second indication information can be used in combination, giving another implementation: the data packets between the data packet carrying the first indication information and the data packet carrying the second indication information (including the first data packet and the tail data packet themselves) belong to the same video frame.
  • The above second indication information can also replace the first indication information and be used in combination with a preset number or a preset data volume. For example, when a tail data packet is received, the preset number of data packets, or the data packets amounting to the preset data volume, before the tail data packet are understood as belonging to the same video frame as the tail data packet.
  • Step 403 The communication device processes the data packet according to the extended delay budget. For example, the communication device may determine the time to transmit or deliver the N data packets according to the data volume of the N data packets and the extended delay budget.
  • the first data packet of the above N data packets may carry the data quantity indication information of the N data packets. In this way, when the communication device receives the first data packet of each video frame, it can determine the data volume of the entire video frame, and can start looking for an opportunity to transmit the above-mentioned video frame.
  • Alternatively, the first data packet of each video frame may carry the average data volume of the N data packets, and the communication device can also determine the data volume of the entire video frame based on the average data volume of the N data packets.
  • the indication information indicating the data volume of the N data packets may not be sent, but is specified by the protocol or indicated by information independent of the first data packet.
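  • The relationship between the frame's data volume, however it is learned from the first packet, and the extended delay budget can be illustrated with a simple calculation: to transmit or deliver the whole frame within the budget, the average rate must be at least the data volume divided by the budget. The function names are illustrative, not part of this application:

```python
# Illustrative calculation: the frame's total data volume can be indicated
# directly or derived as average packet size times N; combined with the
# extended delay budget it gives the minimum average rate needed to send
# or deliver the whole frame in time.

def frame_volume_bytes(avg_packet_bytes, n_packets):
    # total volume estimated as average packet size times packet count
    return avg_packet_bytes * n_packets

def min_rate_bps(volume_bytes, extended_delay_budget_ms):
    # rate (bits per second) needed so the whole frame fits in the budget
    return volume_bytes * 8 * 1000 / extended_delay_budget_ms
```
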
  • the communication device may be an access network device.
  • the scheme shown in Figure 4 above can be applied to downlink video transmission.
  • The access network device can determine, according to the extended delay budget and the data volume of the N data packets, the timing at which it transmits the N data packets to the terminal device.
  • The N data packets are then sent to the terminal device at that timing, thereby preventing the delay of the terminal device receiving or processing the N data packets from exceeding the extended delay budget and improving the decoding success rate of the terminal device.
  • the solution shown in FIG. 4 may be applied to uplink video transmission.
  • The access network device may determine the transmission timing of the N data packets according to the extended delay budget and the data volume of the N data packets, and send scheduling information to the terminal device to schedule it to transmit the N data packets to the network device at that timing, thereby preventing the delay of receiving or processing the N data packets from exceeding the extended delay budget and improving the decoding success rate of the video server.
  • The terminal device can transmit the uplink video service to the access network device, the access network device transmits it to the video server through the UPF network element, and the video server decodes it.
  • the foregoing communication device may be a terminal device.
  • the access layer of the terminal device can deliver N data packets to the upper layer according to the extended delay budget, thereby reducing the delay of the upper decoder to process N data packets exceeding the extended delay budget and increasing the probability of successful decoding.
  • the solution of the flow in FIG. 4 may be applied to processing video frames including multiple data packets.
  • There are no restrictions on whether the video frame is an I-frame, P-frame, or B-frame.
  • Figure 5 shows a flow chart of the communication method, which can be applied to downlink video transmission, including but not limited to:
  • Step 501 The SMF network element sends first information to the AMF network element. The first information may directly indicate the size of the extended delay budget of the first service; alternatively, the first information may indicate the type of the first service or the decoding type of the first service, etc., thereby indirectly indicating the size of the extended delay budget of the first service.
  • the foregoing first information may be generated by the SMF network element, or the foregoing first information may be obtained by the SMF network element from other network elements. For example, UDM network element, PCF network element or video application server, etc.
  • Step 502 The AMF network element sends the first information to the gNB.
  • the gNB may determine the extended delay budget of the first service according to the first information.
  • the first information may directly indicate the size of the extended delay budget, or the first information may indirectly indicate the size of the extended delay budget.
  • the first information may indicate the type of the first service, or may indicate the decoding type of the first service, etc., which is not limited.
  • Step 503 The application server sends N data packets of the same video frame to the UPF network element.
  • the application server may send N data packets of the same video frame to the UPF network element through the DN.
  • Step 504 The UPF network element sends N data packets of the same video frame to the gNB.
  • the first data packet of each video frame may carry first indication information, and the first indication information is used to indicate the first data packet of a video frame.
  • the gNB can distinguish the data packets included in different video frames according to the above-mentioned first indication information.
  • the first indication information and the first data packet can also be sent separately, and the order of the two is not limited.
  • When the gNB receives the first data packet of each video frame, it can start a timer.
  • the timer may be stipulated by the protocol or pre-configured.
  • While the timer is running, other data packets received from the UPF network element belong to the same video frame as the above first data packet; that is, the other data packets and the first data packet jointly constitute the N data packets of the same video frame.
  • In addition to receiving the first data packet of each video frame, the gNB also receives a preset number, or a preset data volume, of other data packets from the UPF network element.
  • The preset number or preset data volume can be specified in the protocol, pre-configured, or carried in the first data packet, etc., without limitation.
  • For the manner in which the preset number or preset data volume is carried in the first data packet, refer to the description of FIG. 4 above, which is not repeated here.
  • the gNB can determine that other data packets between the two first data packets form the same video frame as the previous first data packet. For example, the gNB receives the first data packet of the i-th video frame from the UPF network element. After that, other data packets from the UPF network element are received. After that, the first data packet of the i+1th video frame from the UPF network element is received. The gNB may determine that the other data packets and the first data packet of the i-th video frame together form the i-th video frame, where i is a positive integer greater than or equal to 1.
  • the last data packet of each video frame may carry second indication information, and the second indication information is used to indicate a tail data packet of a video frame.
  • the gNB can distinguish the data packets included in different video frames according to the foregoing second indication information.
  • gNB can determine other data packets between two tail data packets, and the latter tail data packet, constitute the same video frame. For example, the gNB receives the tail data packet of the i-th video frame from the UPF network element. After that, other data packets from the UPF network element are received. After that, the tail data packet of the i+1th video frame from the UPF network element is received. The gNB can determine that the other data packets and the tail data packet of the i+1th video frame together form the i+1th video frame.
  • data packets of different video frames may carry different indication information to indicate the respective corresponding video frames.
  • the gNB can determine the data packets included in each video frame through the above indication information.
  • the gNB determines the transmission timing to start transmitting the N data packets according to the data amount of the N data packets of the same video frame and the extended delay budget.
  • the number of the foregoing transmission opportunities may be one or more.
  • the gNB transmits N data packets at that transmission opportunity.
  • Optionally, the gNB can split the N data packets into multiple portions and transmit the corresponding portion at each transmission opportunity.
  • The gNB can arrange data scheduling according to the extended delay budget. In this way, after receiving the tail packet of a video frame, the gNB calculates the total data volume of all data packets of the video frame, and then arranges continuous or non-continuous transmission resources to transmit all the data packets of the video frame.
  • Optionally, "indication information of the size of the video frame" may be added to a data packet of the video frame to notify the gNB. Theoretically, this indication information can be added to any data packet of a video frame.
  • Preferably, the above "indication information of the size of the video frame" can be added to the first packet of the video frame, so that the gNB does not have to wait until the tail packet of a video frame is received before calculating the total data volume of all its data packets and arranging continuous or non-continuous transmission resources. Instead, when the gNB receives the first packet of each video frame, it can determine the size of that video frame and start looking for an opportunity to transmit it, which improves the transmission efficiency of the video frame.
  • the above-mentioned “indication information of the size of the video frame” may be replaced with “indication information of the average size of all data packets in the video frame”.
  • the gNB can also estimate the total size of the video frame based on the average size of the data packet and the number of data packets included in the video frame.
  • Fig. 6 shows another flow of the communication method, which can also be applied to downlink video transmission.
  • The difference from the flow in FIG. 5 above is that the video data source (for example, the application server) notifies the gNB of the extended delay budget.
  • Step 601 The video receiver reports any one or more of the following parameters to the video data source: video receiver buffer size, video receiver buffer time, compression and decompression algorithm, compression and decompression parameters, and video type (for example, animation, landscape, characters, etc.).
  • the above-mentioned video receiver may specifically be a UE.
  • Step 602 The video data source determines the extended delay budget of the video service according to the above-mentioned parameters.
  • the video receiver can report the buffer size of the video receiver in the following manner.
  • The video receiver, such as the UE, can obtain the receiver buffer size configured for the current service.
  • The size is reported in the above step 601.
  • The video data source determines the buffer size of each video service according to the number of video services the video receiver receives simultaneously, ensuring that the total does not exceed the overall buffer size limit.
  • Optionally, the video data source may need to negotiate with the video receiver before determining the extended delay budget for each service. Through negotiation, the video data source obtains the receiver buffer size shared among services and the current number of video services received simultaneously, determines an appropriate buffer size based on the two, and then determines the extended delay budget of the current service according to that buffer size.
  • Step 603 The video data source sends first information to the gNB.
  • the first information may directly indicate the size of the extended delay budget.
  • the first information may indirectly indicate the size of the extended delay budget.
  • the first information may indicate the type of video service or the type of decoding.
  • the video service type or decoding type may have a corresponding relationship with the extended delay budget.
  • The gNB can determine the extended delay budget according to the video service type or decoding type.
  • Optionally, terminal device capabilities, such as the terminal device buffer size and buffer time, may also be considered; the extended delay budget is then determined jointly from the video service type and the terminal device capabilities.
  • Step 604 The UPF network element sends N data packets of the same video frame to the gNB.
  • For the manner in which the gNB receives the N data packets of the same video frame from the UPF network element, refer to the description of FIG. 5 or FIG. 4, which is not repeated here.
  • Step 605 The gNB determines the timing for transmitting the N data packets according to the extended delay budget and the data volume of the N data packets, so that the delay between the first data packet and the last data packet of the video frame does not exceed the extended delay budget.
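  • The constraint in step 605 can be sketched as follows: among candidate transmission occasions (a hypothetical simplification in which one data packet is sent per occasion), the gNB picks a window whose first and last occasions are no more than the extended delay budget apart. Names and the occasion model are assumptions for illustration:

```python
# Sketch of the scheduling constraint: choose transmission occasions so
# that the time between the first and last packet of the frame does not
# exceed the extended delay budget. One packet per occasion is assumed.

def pick_occasions(occasion_times_ms, packets_needed, budget_ms):
    """occasion_times_ms: sorted candidate transmission occasions.
    Returns the chosen occasions, or None if no window of
    `packets_needed` consecutive occasions fits within the budget."""
    for i in range(len(occasion_times_ms) - packets_needed + 1):
        window = occasion_times_ms[i:i + packets_needed]
        if window[-1] - window[0] <= budget_ms:
            return window
    return None
```
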
  • Step 701 The SMF network element sends the first information to the AMF network element.
  • the first information may directly indicate the size of the extended delay budget, or indirectly indicate the size of the extended delay budget, which is not limited.
  • The extended delay budget may be an uplink extended delay budget, for example, an uplink extended delay budget corresponding to the data of a certain session.
  • Step 702 The AMF network element sends the first information to the gNB.
  • Here, the description takes as an example the core network element notifying the gNB of the extended delay budget.
  • Optionally, the manner in FIG. 6, in which the video data source notifies the gNB of the extended delay budget, can also be used, which is not limited.
  • Step 703 The UE sends a notification message to the gNB, where the notification message is used to notify the gNB that there is an uplink video frame to be transmitted.
  • the notification message may also include indication information of the amount of data of an uplink video frame.
  • the upper layer of the UE may notify the access layer that there are video frames to be transmitted, and then the access layer sends a notification message to the gNB.
  • the upper layer of the UE may directly transmit the first packet of the video frame to the access layer, and the access layer sends the above notification message to the gNB when receiving the first packet of the video frame.
  • the UE may notify the gNB through a buffer status report (BSR). For example, a notification message may be carried in the BSR.
  • the UE may notify the gNB through a media access control (MAC) control element (MAC CE) other than the BSR, and the MAC CE may carry the notification message.
  • the UE may generate control signaling through the SDAP layer, the PDCP layer, or the RLC layer to notify the gNB.
  • the UE may notify the gNB through the reserved field of the protocol data unit (protocol data unit, PDU) header of the SDAP layer, PDCP layer, or RLC layer when transmitting other data. That is, the above reserved field may carry a notification message.
  • the extended delay budgets of different video frames may be the same or different.
  • the UE may report notification messages of different video frames through different BSRs.
  • an example of reporting a notification message through the BSR is: "There is a 5000-byte video frame to be transmitted, and it is hoped that the above-mentioned video frame is to be transmitted within a delay of 30 ms".
  • the gNB can allocate transmission resources for the video frame according to the requirements of the UE and the extended delay budget corresponding to the video frame.
  • The gNB receives the above notification message, determines its scheduling behavior, and ensures that sufficient uplink resources are allocated to the UE within the extended delay budget to complete the uplink transmission of the video frame. For example, on receiving the notification message, the gNB can allocate suitable transmission opportunities for the UE according to the data volume of the uplink video frame and the extended delay budget. The gNB then sends downlink control information (DCI) to the UE, where the DCI allocates resources for uplink data transmission to the UE. The UE transmits the uplink video frame to the gNB on the uplink resources allocated by the DCI.
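The scheduling behavior above can be sketched as follows. This is an illustrative model only: the slot granularity, per-slot capacity, and function name are assumptions, not anything defined by the patent or by 3GPP:

```python
def allocate_ul_grants(frame_bytes: int, budget_ms: float,
                       slot_ms: float, bytes_per_slot: int):
    """Split an uplink video frame into per-slot grants so the whole frame
    fits inside the extended delay budget. Hypothetical model of the gNB's
    decision after receiving a BSR-style notification."""
    slots_available = int(budget_ms // slot_ms)
    slots_needed = -(-frame_bytes // bytes_per_slot)  # ceiling division
    if slots_needed > slots_available:
        raise RuntimeError("budget too small: cannot serve the frame in time")
    grants, remaining = [], frame_bytes
    for slot in range(slots_needed):
        size = min(bytes_per_slot, remaining)
        grants.append((slot, size))  # (slot index, bytes granted in that slot)
        remaining -= size
    return grants
```

With the example from the notification message above (a 5000-byte frame, 30 ms budget) and an assumed 1 ms slot carrying 1000 bytes, the frame fits in the first 5 slots, well inside the budget.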
  • the above DCI may include uplink grant (UL grant).
  • the gNB can allocate uplink resources for the UE through one or more UL grants.
  • the first UL grant may carry indication information, and the indication information may indicate that the UE will continue to allocate uplink resources in the future.
  • The indication information may specifically indicate that, within a future time T, the gNB will allocate a further X bits of resources to the UE.
  • During logical channel prioritization (LCP), the UE can temporarily increase the guaranteed bit rate (GBR) value of the video service, or temporarily raise the priority of the video service, to ensure that as much data of the current video frame as possible is transmitted, further ensuring that the delay of the current video frame does not exceed the uplink extended delay budget.
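A much-simplified sketch of how a temporary priority boost affects LCP (hypothetical code; real LCP also involves prioritized bit rates and token buckets, which are omitted here):

```python
def lcp_allocate(grant_bytes: int, channels: dict):
    """Serve logical channels in priority order (lower number = higher
    priority). `channels` maps a channel name to (priority, pending_bytes).
    Temporarily lowering the video channel's priority number, as described
    above, makes the current frame's data win the grant."""
    served = {}
    for name, (prio, pending) in sorted(channels.items(),
                                        key=lambda kv: kv[1][0]):
        take = min(pending, grant_bytes)
        served[name] = take
        grant_bytes -= take
        if grant_bytes == 0:
            break
    return served
```

With a 4000-byte grant, a boosted video channel (priority 1, 3000 bytes pending) is fully served before a best-effort channel (priority 5) gets the remainder.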
  • the UE decides to optimize the transmission of the current video service. For example, temporarily increase the GBR value of the current video service, or temporarily increase the priority of the current video service.
  • The UE decides. For example, the UE may end the above-mentioned preferential treatment when the video service data of the current frame has been transmitted.
  • The gNB decides. For example, when the gNB allocates uplink resources for the UE, it can add an indication that the currently allocated uplink resources should preferentially carry the video service. In this way, the UE can give preferential treatment to the transmission of the video service when performing LCP.
  • the gNB may notify the UE to end the above-mentioned preferential treatment through indication information.
  • the indication information may be carried in DCI, MAC CE, or radio resource control (radio resource control, RRC).
  • If the gNB uses dynamic scheduling to allocate uplink resources for the UE, the gNB can notify the UE through DCI to end the preferential treatment.
  • If the gNB uses semi-persistent scheduling to allocate uplink resources for the UE, the gNB can notify the UE through RRC or a MAC CE to end the preferential treatment.
  • The UE notifies the gNB of the amount of uplink video data to be transmitted, where the video data belongs to the same frame of the video service. The gNB can then refer to this information when scheduling, ensuring that the transmission of the uplink video frame does not exceed the extended delay budget.
  • Fig. 8 shows a schematic diagram of a protocol stack of a video receiver (for example, a terminal device).
  • The protocol stack of the video receiver can include, from top to bottom: an application (APP) layer, a service data adaptation protocol (SDAP) layer, a packet data convergence protocol (PDCP) layer, a radio link control (RLC) layer, a media access control (MAC) layer, and a physical (PHY) layer.
  • the access layer and the upper layer can be adjacent protocol layers or non-adjacent protocol layers, which is not limited.
  • The protocol stack may also include an IP layer, a TCP layer, or a user datagram protocol (UDP) layer, which is not limited.
  • The method of the process can be executed on the video receiver (for example, the terminal device), including but not limited to:
  • Step 901 The access layer receives a first data packet, where the first data packet is the first data packet of a video frame.
  • the access layer may determine that the received first data packet is the first data packet of the video frame in the following manner.
  • the first data packet carries the frame start identifier, or the access layer receives the independent frame start identifier and the first data packet, and the sequence of the frame start identifier and the first data packet is not limited.
  • the previous data packet of the first data packet carries the end of frame identifier, or the access layer receives an independent end of frame identifier and the last data packet of the previous frame, and the order of the two is not limited.
  • The access layer has not received any data packet within a period of time T; the next data packet received after that can be considered the first data packet.
  • The value of T may be pre-configured, stipulated by the protocol, or determined by the UE itself.
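The three first-packet cues above can be combined into one check, sketched below with illustrative field names (`frame_start` and `frame_end` are not real protocol fields, and the value of T is an arbitrary example):

```python
IDLE_GAP_T_MS = 20.0  # hypothetical value of T; pre-configured or protocol-defined

def is_first_packet(pkt: dict, prev_pkt, now_ms: float, last_rx_ms):
    """Decide whether `pkt` starts a new video frame, using the three cues
    listed above: a frame-start flag on the packet itself, a frame-end flag
    on the previous packet, or an idle gap longer than T since the last
    reception."""
    if pkt.get("frame_start"):
        return True
    if prev_pkt is not None and prev_pkt.get("frame_end"):
        return True
    if last_rx_ms is not None and now_ms - last_rx_ms > IDLE_GAP_T_MS:
        return True
    return False
```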
  • Step 902 The access layer receives other data packets that belong to the same video frame as the first data packet.
  • Step 903 When the preset time expires or N data packets belonging to the same video frame are received, the access layer delivers N data packets to the upper layer.
  • The N data packets include the first data packet and the other data packets.
  • The aforementioned preset time may be pre-configured, or stipulated by the protocol.
  • the above solution can be implemented through a timer.
  • The access layer may receive indication information sent by the access network device, a core network element, or the video data source, and determine the preset time according to the indication information.
  • The indication information may indicate the preset time directly.
  • Alternatively, the indication information may indicate an extended delay budget, from which a preset time can be determined; for example, the preset time can be less than or equal to the extended delay budget.
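A minimal sketch of the preset-time delivery rule (class and parameter names are hypothetical, and time is passed in explicitly rather than driven by a real timer):

```python
class FrameBuffer:
    """Buffer the packets of one video frame and deliver them to the upper
    layer together, either when N packets have arrived or when the preset
    time since the first packet expires."""

    def __init__(self, n_expected: int, preset_ms: float):
        self.n_expected = n_expected
        self.preset_ms = preset_ms
        self.buf = []
        self.first_rx_ms = None

    def on_packet(self, pkt, now_ms: float):
        if self.first_rx_ms is None:
            self.first_rx_ms = now_ms
        self.buf.append(pkt)
        if (len(self.buf) >= self.n_expected
                or now_ms - self.first_rx_ms >= self.preset_ms):
            delivered, self.buf, self.first_rx_ms = self.buf, [], None
            return delivered  # deliver all buffered packets at once
        return None           # keep buffering
```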
  • If every data packet received by the access layer were delivered to the upper layer immediately, the time interval between the upper layer receiving the first packet and the last packet of a video frame might exceed the extended delay budget, and the upper-layer decoder might fail to decode.
  • In this solution, the access layer no longer delivers each data packet to the upper layer as it arrives; instead, it delivers the data packets continuously received over a period of time (that is, the preset time) to the upper layer together, which reduces the probability that the delay of each video frame at the upper layer exceeds the extended delay budget.
  • Ideally, the access layer can deliver all the data packets of a video frame to the upper layer together, so that the delay of each video frame at the upper layer does not exceed the extended delay budget and the upper-layer decoder decodes successfully.
  • the method for determining whether a data packet belongs to the same video frame can adopt any of the methods in the above embodiments, and details are not described again.
  • the access layer can uniformly deliver N data packets to the upper layer when all N data packets are received.
  • the N may be less than or equal to the number of data packets included in one video frame. For example, a video frame includes 64 data packets, the value of N may be less than or equal to 64, and so on.
  • the access layer uniformly delivers N data packets to the upper layer.
  • Compared with delivering data packets to the upper layer one by one, this approach reduces the probability that the delay of an upper-layer video frame exceeds the extended delay budget, and reduces the probability of decoding failure.
  • Ideally, the access layer can deliver all data packets of a video frame to the upper layer together, which ensures that the delay of each video frame at the upper layer does not exceed the extended delay budget, so that the upper-layer decoder can decode successfully.
  • When the access layer receives the tail packet of a video frame, it can consider that all data packets of the video frame have been received and deliver all of them to the upper layer; otherwise, it does not deliver the data packets to the upper layer.
  • The access layer can determine that a data packet is the tail packet in several ways: the tail packet can carry an end-of-frame identifier, or carry indication information marking it as the tail data packet, or the end-of-frame identifier or tail-packet indication information can be sent separately; this is not limited.
  • The UE may rely on the PDCP sequence number (SN) allocated to each data packet by the PDCP layer of the base station to determine whether all data packets of the video frame have been received.
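Assuming consecutive PDCP SNs within a frame and a 12-bit SN space (an illustrative choice; PDCP supports several SN lengths), the completeness check might look like:

```python
def frame_complete(received_sns, first_sn: int, last_sn: int,
                   modulus: int = 4096):
    """Check, from PDCP sequence numbers, whether every packet of a video
    frame was received. `first_sn` and `last_sn` bound the frame; SNs wrap
    around at `modulus`. Assumes last_sn is reachable from first_sn."""
    expected = set()
    sn = first_sn
    while True:
        expected.add(sn)
        if sn == last_sn:
            break
        sn = (sn + 1) % modulus  # follow the SN sequence, handling wrap-around
    return expected.issubset(set(received_sns))
```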
  • If the data packets of the i-th frame cannot all be received, the UE may discard the data packets of the i-th frame and no longer deliver them to the upper layer.
  • the method shown in FIG. 9 and the methods shown in FIG. 4 to FIG. 7 may be used in combination.
  • the gNB can allocate transmission opportunities for N data packets according to the extended delay budget, so that the transmission of N data packets meets the requirements of the extended delay budget.
  • the access layer of the UE may deliver the N data packets to the upper layer uniformly.
  • a flow chart of a communication method including but not limited to:
  • Step 1000 The SMF network element sends the first information to the UE.
  • the SMF may send the first information to the UE through the AMF network element and the gNB.
  • the first information may directly indicate the size of the extended delay budget, or indirectly indicate the size of the extended delay budget.
  • Step 1001 The UPF network element receives N data packets from the same video frame.
  • the video data source can send N data packets of the same video frame to the UPF network element through the DN.
  • Step 1002 The UPF network element sends N data packets of the same video frame to the gNB.
  • For how the gNB determines the N data packets of the same video frame, refer to the foregoing description.
  • Step 1003 The gNB sends N data packets of the same video frame to the UE.
  • Step 1004: The access layer of the UE waits until all N data packets are received, and then delivers them to the upper layer together.
  • the N data packets of the same video frame sent by the gNB to the UE carry the indication information of the first packet, or the indication information of the tail packet, or the indication information of different video frames. Therefore, when the access layer of the UE receives a data packet, it can distinguish which data packets are included in each video frame.
  • the difference from the existing solution is that the access layer of the UE uniformly delivers all data packets of the same video frame to the upper layer.
  • When the access layer of the UE receives a data packet and finds that it belongs to the current video frame, it does not deliver it to the upper layer immediately, but caches it instead.
  • the UE may determine that the data packets carrying the same indication information belong to the same video frame.
  • When the UE determines that all the data packets of the current video frame have been received, it can deliver them to the upper layer together.
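The receive-side buffering described above can be sketched as grouping packets by a per-frame indication and flushing when the tail packet arrives (the field names `frame_id` and `is_tail` are illustrative, not actual header fields):

```python
def group_by_frame(packets):
    """Group received packets by the frame indication they carry: packets
    with the same frame id belong to the same video frame and are delivered
    to the upper layer together once the tail packet arrives. Returns the
    delivered frames and the packets still pending."""
    pending, delivered = {}, []
    for pkt in packets:
        pending.setdefault(pkt["frame_id"], []).append(pkt)
        if pkt.get("is_tail"):
            # Tail packet seen: flush the whole frame to the upper layer.
            delivered.append((pkt["frame_id"], pending.pop(pkt["frame_id"])))
    return delivered, pending
```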
  • the video receiver (for example, UE) collects all the data packets of the same video frame, and then collectively delivers them to the upper layer.
  • the solutions in the above process can be used individually or in combination without limitation.
  • In downlink video transmission, the gNB can allocate different transmission opportunities to different video frames according to the extended delay budget, and use those transmission opportunities to send the video frames to the UE.
  • After receiving a video frame, the UE can determine the data packets included in each video frame and deliver all the data packets of each video frame to the upper layer together.
  • FIG. 11 is a schematic block diagram of an apparatus 1100 provided by an embodiment of the present application, which is used to implement the functions of an access network device or a terminal device in the foregoing method.
  • the device may be a software unit or a chip system.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • the device includes a communication unit 1101 and may also include a processing unit 1102.
  • The communication unit 1101 can communicate with other devices.
  • the processing unit 1102 is used for processing.
  • the communication unit 1101 may also be called a communication interface, a transceiver unit, an input/output interface, and so on.
  • the apparatus 1100 may implement the steps performed by the access network device in the foregoing method embodiment, and the apparatus 1100 may be the access network device, or a chip or circuit configured in the access network device.
  • the communication unit 1101 performs the transceiving operations of the access network device in the above method embodiment, and the processing unit 1102 is configured to perform the processing related operations on the access network device side in the above method embodiment.
  • the apparatus 1100 may implement the steps performed by the terminal device in the above method embodiments, and the apparatus 1100 may be a terminal device, or a chip or circuit configured in the terminal device.
  • the communication unit 1101 performs the receiving and sending operations of the terminal device in the above method embodiment, and the processing unit 1102 is configured to perform the same operations as the processing of the terminal device in the above method embodiment.
  • The processing unit 1102 is configured to obtain an extended delay budget, where the extended delay budget is used by the communication device to limit the time for processing all data packets of a video frame; the communication unit 1101 is configured to receive a data packet of the first service; the processing unit 1102 is further configured to process the data packet according to the extended delay budget.
  • The extended delay budget is pre-configured, or stipulated by a protocol.
  • the communication unit 1101 may receive first information used to indicate the extended delay budget from a core network element; or, receive first information used to indicate the extended delay budget from an access network device.
  • Obtaining the extended delay budget by the processing unit 1102 includes: determining the service type of the first service according to the first information; and determining the extended delay budget according to a correspondence between the service type and the extended delay budget.
  • Receiving the data packet of the first service by the communication unit 1101 includes: receiving N data packets belonging to the same video frame in the first service, where N is an integer greater than 1. Processing the data packet according to the extended delay budget by the processing unit 1102 includes: determining the time to transmit or deliver the N data packets according to the data volume of the N data packets and the extended delay budget of the first service.
  • the communication unit 1101 receiving N data packets belonging to the same video frame in the first service includes:
  • the first data packet carries first indication information, and the first indication information is used to indicate that the first data packet is the first data packet among the N data packets;
  • the other data packets received within the preset time and the first data packet belong to the same video frame.
  • the communication unit 1101 receiving N data packets belonging to the same video frame in the first service includes:
  • a first data packet is received, where the first data packet carries first indication information, and the first indication information is used to indicate that the first data packet is the first data packet among the N data packets;
  • the received other data packets of the preset number or the preset data amount belong to the same video frame as the first data packet.
  • the communication unit 1101 receiving N data packets belonging to the same video frame in the first service includes:
  • the first data packet carries first indication information, and the first indication information is used to indicate that the first data packet is the first data packet among the N data packets;
  • the other data packets received between the first data packet and the second data packet belong to the same video frame as the first data packet, and the second data packet is the first data packet of the next video frame.
  • Receiving, by the communication unit 1101, the N data packets belonging to the same video frame in the first service includes: receiving a third data packet, where the third data packet carries second indication information, and the second indication information is used to indicate that the third data packet is the last data packet among the N data packets; the other data packets received between the last data packet of the previous video frame and the third data packet belong to the same video frame as the third data packet.
  • the first data packet of the N data packets carries third indication information, and the third indication information is used to indicate the data size of the video frame in the first service; or, the The first data packet of the N data packets carries fourth indication information, and the fourth indication information is used to indicate the average data volume size of the data packets included in the video frame in the first service.
  • the extended delay budget of the first service is determined according to one or more of the following parameters: the buffer space of the video receiver, the buffer duration of the video receiver, the decompression algorithm of the video receiver, The decompression parameter of the video receiver, the video type of the first service.
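One hypothetical way to combine these parameters into a budget; the combining rule itself is an assumption for illustration, since the patent only lists the input parameters:

```python
def derive_extended_delay_budget(buffer_ms: float,
                                 decode_ms_per_frame: float,
                                 video_type_budget_ms: float) -> float:
    """Pick an extended delay budget from receiver-side parameters: it must
    fit inside the receiver's buffering time, leave room for decoding one
    frame, and respect a per-video-type cap. All names are illustrative."""
    budget = min(buffer_ms - decode_ms_per_frame, video_type_budget_ms)
    return max(budget, 0.0)  # never negative
```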
  • The division of units in the embodiments of this application is illustrative and is only a logical function division; in actual implementation, there may be other division methods.
  • The functional units in the embodiments of this application can be integrated into one processing unit, or each unit can exist alone physically, or two or more units can be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the functions of the communication unit in the foregoing embodiments may be implemented by a transceiver, and the functions of the processing unit may be implemented by a processor.
  • the transceiver may include a transmitter and/or a receiver, etc., which are used to implement the functions of the transmitting unit and/or the receiving unit, respectively. The following is an example for description with reference to FIG. 12.
  • FIG. 12 is a schematic block diagram of an apparatus 1200 provided by an embodiment of the present application.
  • the apparatus 1200 shown in FIG. 12 may be a hardware circuit implementation of the apparatus shown in FIG. 11.
  • the device can perform the functions of the access network device or the terminal device in the foregoing method embodiment.
  • FIG. 12 only shows the main components of the communication device.
  • the communication device 1200 shown in FIG. 12 includes at least one processor 1201.
  • the communication device 1200 may also include at least one memory 1202 for storing program instructions and/or data.
  • the memory 1202 is coupled with the processor 1201.
  • the coupling in the embodiments of the present application is an indirect coupling or communication connection between devices, units, or modules, which can be electrical, mechanical, or other forms, and is used for information exchange between devices, units, or modules.
  • the processor 1201 may operate in cooperation with the memory 1202, the processor 1201 may execute program instructions stored in the memory 1202, and at least one of the at least one memory 1202 may be included in the processor 1201.
  • the apparatus 1200 may further include a communication interface 1203 for communicating with other devices through a transmission medium, so that the communication apparatus 1200 can communicate with other devices.
  • the communication interface may be a transceiver, a circuit, a bus, a module, or other types of communication interfaces.
  • When the communication interface is a transceiver, the transceiver may include an independent receiver and an independent transmitter; it may also be a transceiver with integrated transceiver functions, or an interface circuit.
  • The connection medium between the processor 1201, the memory 1202, and the communication interface 1203 is not limited in the embodiments of the present application.
  • the memory 1202, the processor 1201, and the communication interface 1203 are connected through a communication bus 1204.
  • The bus may include an address bus, a data bus, a control bus, and so on. For ease of representation, the bus is represented by only one thick line in FIG. 12, but this does not mean that there is only one bus or one type of bus.
  • the apparatus 1200 is used to implement the steps performed by the access network device in the above method embodiment.
  • the communication interface 1203 is used to perform the transceiving related operations of the access network device in the above method embodiment, and the processor 1201 is used to perform the processing related operations on the access network device side in the above method embodiment.
  • the apparatus 1200 is configured to implement the steps performed by the terminal device in the above method embodiment.
  • the communication interface 1203 is used to perform the transceiving related operations of the terminal device in the above method embodiment, and the processor 1201 is used to perform the processing related operations on the terminal device side in the above method embodiment.
  • The processor 1201 is configured to obtain an extended delay budget, where the extended delay budget is used by the communication device to limit the time for processing all data packets of a video frame; the communication interface 1203 is configured to receive a data packet of the first service; the processor 1201 is further configured to process the data packet according to the extended delay budget.
  • The extended delay budget is pre-configured; for example, the communication interface 1203 may receive the first information used to indicate the extended delay budget from a core network element, or receive the first information used to indicate the extended delay budget from an access network device.
  • Obtaining the extended delay budget by the processor 1201 includes: determining the service type of the first service according to the first information; and determining the extended delay budget according to a correspondence between the service type and the extended delay budget.
  • Receiving the data packet of the first service by the communication interface 1203 includes: receiving N data packets belonging to the same video frame in the first service, where N is an integer greater than 1. Processing the data packet according to the extended delay budget by the processor 1201 includes: determining the time to transmit or deliver the N data packets according to the data volume of the N data packets and the extended delay budget of the first service.
  • the communication interface 1203 receiving N data packets belonging to the same video frame in the first service includes: receiving a first data packet, where the first data packet carries first indication information, and the first The indication information is used to indicate that the first data packet is the first data packet among the N data packets; other data packets received within a preset time and the first data packet belong to the same video frame.
  • the communication interface 1203 receiving N data packets belonging to the same video frame in the first service includes: receiving a first data packet, where the first data packet carries first indication information, and the first The indication information is used to indicate that the first data packet is the first data packet among the N data packets; other received data packets of a preset number or a preset amount of data belong to the same video as the first data packet frame.
  • Receiving, by the communication interface 1203, the N data packets belonging to the same video frame in the first service includes: receiving a first data packet, where the first data packet carries first indication information, and the first indication information is used to indicate that the first data packet is the first data packet among the N data packets; the other data packets received between the first data packet and the second data packet belong to the same video frame as the first data packet, and the second data packet is the first data packet of the next video frame.
  • Receiving, by the communication interface 1203, the N data packets belonging to the same video frame in the first service includes: receiving a third data packet, where the third data packet carries second indication information, and the second indication information is used to indicate that the third data packet is the last data packet among the N data packets; the other data packets received between the last data packet of the previous video frame and the third data packet belong to the same video frame as the third data packet.
  • the first data packet of the N data packets carries third indication information, and the third indication information is used to indicate the data size of the video frame in the first service; or, the The first data packet of the N data packets carries fourth indication information, and the fourth indication information is used to indicate the average data volume size of the data packets included in the video frame in the first service.
  • the extended delay budget of the first service is determined according to one or more of the following parameters: the buffer space of the video receiver, the buffer duration of the video receiver, the decompression algorithm of the video receiver, The decompression parameter of the video receiver, the video type of the first service.
  • an embodiment of the present application also provides a device, which is configured to execute the method in the above method embodiment.
  • a computer-readable storage medium includes a program, and when the program is executed by a processor, the method in the above method embodiment is executed.
  • a computer program product, the computer program product includes computer program code, when the computer program code runs on a computer, the computer realizes the method in the above method embodiment.
  • A chip includes: a processor, where the processor is coupled with a memory, and the memory is used to store a program or an instruction; when the program or instruction is executed by the processor, the device executes the method in the above method embodiment.
  • A system includes at least one of an access network device, a terminal device, a core network element, or an application server that executes the above method embodiment.
  • The processor may be a general-purpose processor, a digital signal processor, an application-specific integrated circuit, a field programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, which can implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present application.
  • the general-purpose processor may be a microprocessor or any conventional processor or the like.
  • the steps of the method disclosed in the embodiments of the present application may be directly embodied as being executed and completed by a hardware processor, or executed and completed by a combination of hardware and software modules in the processor.
  • the memory may be a non-volatile memory, such as a hard disk drive (HDD) or a solid-state drive (SSD), etc., or a volatile memory (volatile memory), for example Random-access memory (random-access memory, RAM).
  • The memory may also be any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto.
  • the memory in the embodiments of the present application may also be a circuit or any other device capable of realizing a storage function for storing program instructions and/or data.
  • the methods provided in the embodiments of the present application may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • When implemented by software, the methods can be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general-purpose computer, a special-purpose computer, a computer network, network equipment, user equipment, or other programmable devices.
  • The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another computer-readable storage medium. For example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired or wireless manner.
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server or data center integrated with one or more available media.
  • the usable medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital video disc (DVD)), or a semiconductor medium (for example, an SSD).


Abstract

一种通信方法及装置,该方法可降低视频帧解码失败的概率,该方法包括:通信装置获取扩展时延预算;通信装置接收第一业务的数据包;通信装置根据所述扩展时延预算,处理上述接收的数据包。例如,上述通信装置可为接入网设备,接入网设备可根据I帧的扩展时延预算,分配合理的传输时机,且在上述传输时机向终端设备发送从核心网接收到的I帧的数据包,从而降低I帧数据包到达终端设备的接入层时超过扩展时延预算限制的概率,降低I帧解码失败的概率。

Description

一种通信方法及装置
相关申请的交叉引用
本申请要求在2020年06月05日提交国家知识产权局、申请号为202010504771.X、申请名称为“一种通信方法及装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请实施例涉及通信技术领域,尤其涉及一种通信方法及装置。
背景技术
在视频处理领域,由于压缩方式的不同,会导致不同视频帧压缩后的尺寸相差很多。其中,基准帧,又称为I帧,其压缩后的视频帧的尺寸最大。P帧和B帧压缩后的尺寸次之。由于I帧尺寸太大,所以在传输时被传输控制协议/网际互连协议(transmission control protocol/internet protocol,TCP/IP)层或以太层分割成多个分片,再交给无线通信网络传输。例如,典型的I帧被划分成64个IP包。由于对于接收方的应用层而言,其解码每个视频帧的时间有一定的时间限制。例如,接收方的应用层解码每个视频帧的时间不能超过扩展时延。此时,接收方如何对同一个视频帧的多个数据包进行处理,是当前待解决的技术问题。
发明内容
本申请实施例提供一种通信方法及装置,以解决接收方对同一个视频帧的多个数据包进行处理的技术问题。
第一方面,提供一种通信方法,该方法包括:通信装置获取扩展时延预算;通信装置接收第一业务的数据包;通信装置根据所述扩展时延预算,处理所述数据包。
可选的,扩展时延预算还可称为spread delay budget。扩展时延预算可以是预配置的,或者,协议规定的等,不作限定。若为预配置的,通信装置可接收第一信息,第一信息可直接指示扩展时延预算的大小,或者间接指示扩展时延预算的大小,例如,第一信息指示业务类型或解码类型等。其中,业务类型与解码类型可与扩展时延预算存在对应关系。通信装置可根据第一信息,确定扩展时延预算。
在一种可能的设计中,通信装置可以为接入网设备,第一方面的方案可应用于下行视频传输过程中,接入网设备根据所述扩展时延预算,处理数据包的过程,可包括:接入网设备根据扩展时延预算和N个数据包的数据量,确定向终端设备传输N个数据包的时机。且上述时机上,向终端设备发送N个数据包,从而可避免终端设备接收或处理N个数据的时延超过扩展时延预算,提高终端设备的解码成功率。或者,第一方面的方案可应用于上行视频传输过程中,接入网设备可根据扩展时延预算和N个数据包的数据量,确定N个数据包的传输时机。且向终端设备发送调度信息,用于调度终端设备在上述传输时机上,向网络设备传输N个数据包,从而可减少接收或处理N个数据包的时延超过 扩展时延预算的概率,提高视频服务器的解码成功率。
在另一种可能的设计中,通信装置可以为终端设备。终端设备的接入层可根据扩展时延预算,向上层递交N个数据包,从而减少上层的解码器处理N个数据包的时延超过扩展时延预算的概率,提高解码成功的概率。
在一种可能的设计中,通信装置可以接收第一业务中属于同一视频帧的N个数据包。例如,N可以小于或等于一个视频帧所包括数据包的数量。例如,当一个视频帧包括64个数据包时,N的取值可为小于或等于64的正整数。
在一种可能的设计中,N个数据中的首包即第一数据包中可携带有帧起始标识,或者,通信装置可接收独立的帧起始标识和第一数据包,不限定帧起始标识和第一数据包的先后顺序。或者,第一数据包的前一个数据包中携带有帧结束标识,或者,通信装置可接收到独立的帧结束标识和前一帧的最后一个数据包。或者,通信装置在一段时间T内没有收到任何数据包,之后收到的数据包,可认为是第一数据包等。所述T的值可以是预配置的,或者协议规定的,或者通信装置自行确认的等。接收到首包之外,通信装置可继续接收N个数据包的其它数据包。接收其它数据包的方式可通过预设时长实现,或者通过预设数量或预设数据量等实现,不作限定。
在一种可能的设计中,N个数据包中的尾包即第二数据包中可携带有帧结束标识。通信装置可确定帧结束标识之间的其它数据包与后一个帧结束标识所对应的尾包组成N个数据包。
在一种可能的设计中,N个数据包中可携带有相同的指示信息。通信装置可确定携带有相同指示信息的数据包,为N个数据包等,不作限定。
在一种可能的设计中,当通信装置为接入网设备时,接入网设备可根据扩展时延预算,安排数据调度。如此,接入网设备在接收到一个视频帧的尾包之后,需要计算视频帧中所有数据包的数据量之和。可选的,可在每个视频帧即N数据包的首包中携带有N个数据包大小的指示信息。这样接入网设备在接收到首包时,即可确定N个数据包的大小,可开始寻找时机传输上述N个数据包,提高视频帧的传输效率。可替代的,上述“N个数据包大小的指示信息”还可替代为“N个数据包中平均数据包大小的指示信息”。
第二方面,提供一种通信方法,包括:接入层接收第一数据包,所述第一数据包为视频帧的首个数据包;所述接入层接收与所述第一数据包属于同一个视频帧的其它数据包;所述接入层在预设时间到期或属于所述同一视频帧的N个数据包都被接收到时,向上层递交所述N个数据包,所述N个数据包中包括所述第一数据包和所述其它数据包。
在一种可能的实现方式中,可预配置上述预设时间,或者,协议规定上述预设时间。可选的,可通过定时器实现上述方案。比如,接入层在接收到一视频帧的首包时,即开启定时器;在定时器结束时,即向上层递交所接收到的数据包。或者,接入层可接收接入网设备,核心网网元或视频数据源发送的指示信息,根据该指示信息,确定预设时间。该指示信息可指示具体的预设时间大小。或者,该指示信息可指示扩展时延预算。根据该扩展时延预算,可确定预设时间,预设时间可小于或等于扩展时延预算等。在一种方案中,接入层每接收到一个数据包,即递交到上层。这样可能会导致上层接收视频帧的首包至尾包的时间间隔超过扩展时延预算的限制,使得上层解码器解码失败。而在该实现方式中,接入层不再每接收到一个数据包即向上层递交,而是将一段时间(即预设时间)连续所接收到的数据包,统一递交到上层,这样可减少上层中每个视频帧的时延超过扩展时延预算的概率。当然,如果上述预设时间设置得合适,上层可将一个视频帧的全部数据包统一递交到上层,这样就可以保证上层每个视频帧的扩展时延均不超过扩展时延预算,保证上层解码器成功解码。
在另一种可能的实现方式中,接入层可在N个数据包均被接收到时,统一向上层递交N个数据包。所述N可小于或等于一个视频帧所包括数据包的数量。例如,一个视频帧包括64个数据包,则N的取值可小于或等于64等。在该实现方式中,接入层统一向上层递交N个数据包。相对于接入层一个个向上层递交数据包的方式,同样可减少上层视频帧的扩展时延超出扩展时延预算限制的概率,降低解码失败的概率。当然,如果N的取值等于一个视频帧全部数据包的数量,那么接入层可将一视频帧的全部数据包统一递交到上层,可以保证上层每个视频帧的扩展时延不超过扩展时延预算,使得上层解码器成功解码。
在另一种可能的实现方式中,当接入层接收到一视频帧的尾包时,可认为本视频帧的全部数据包均收到,且向上层递交全部数据包,否则不向上层递交数据包。接入层判断一数据包为尾包的方式很多,比如,可在尾包中携带视频帧结束标识,或者,可在尾包中携带尾数据包的指示信息等,或者,可单独发送视频帧结束标识或尾数据包的指示信息等,不作限定。可选的,在另一种方式中,UE可依靠基站PDCP层为每个数据包分配的PDCP序列号(serial number,SN),判断本视频帧所有数据包是否都收齐。可选的,如果UE始终未能收齐第i视频帧的所有数据包,且收到第i+1帧的首数据包,则UE可丢弃第i帧的数据包,不再向上层递交。
第三方面,提供一种装置,包括用于执行上述第一方面或第二方面所包括的各个步骤的单元或手段(means)。
第四方面,提供一种装置,包括处理器和接口电路,所述处理器用于通过接口电路与其它装置通信,并执行上述第一方面或第二方面所提供的方法,该处理器包括一个或多个。
第五方面,提供一种装置,包括处理器,用于与存储器相连,用于调用所述存储器中存储的程序,以执行上述第一方面或第二方面所提供的方法,该存储器可以位于该装置之内,也可以位于该装置之外,且该处理器包括一个或多个。
第六方面,提供一装置,包括至少一个处理器和至少一个存储器,所述至少一个处理器用于执行上述第一方面或第二方面所提供的方法。
第七方面,提供一种程序,该程序在被处理器执行时用于执行以上第一方面或第二方面所提供的方法。
第八方面,提供一种程序产品,例如计算机可读存储介质,包括上述第一方面或第二方面的程序。
第九方面,提供一种计算机可读存储介质,包括程序,当程序被处理器运行时,上述第一方面或第二方面所提供的方法被执行。
以上装置可以是一个芯片,处理器可以通过硬件来实现也可以通过软件来实现,当通过硬件实现时,该处理器可以是逻辑电路、集成电路等;当通过软件来实现时,该处理器可以是一个通用处理器,通过读取存储器中存储的软件代码来实现,该存储器可以集成在处理器中,可以位于该处理器之外,独立存在。以上处理器为一个或多个,存储器为一个或多个。存储器可以与处理器集成在一起,或者存储器与处理器分离设置。在 具体实现过程中,存储器可以与处理器集成在同一块芯片上,也可以分别设置在不同的芯片上,本申请实施例对存储器的类型以及存储器与处理器的设置方式不做限定。
附图说明
图1为本申请实施例提供的视频帧的示意图;
图2为本申请实施例提供的不同传输方案的示意图;
图3为本申请实施例提供的网络架构的示意图;
图4至图7为本申请实施例提供的通信方法的流程图;
图8为本申请实施例提供的接收方协议栈的示意图;
图9和图10为本申请实施例提供的通信方法的流程图;
图11和图12为本申请实施例提供的通信装置的结构示意图。
具体实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行描述。其中,在本申请的描述中,除非另有说明,“/”表示前后关联的对象是一种“或”的关系,例如,A/B可以表示A或B;本申请中的“和/或”仅仅是一种描述关联对象的关联关系,表示可以存在三种关系,例如,A和/或B,可以表示:单独存在A,同时存在A和B,单独存在B这三种情况,其中A,B可以是单数或者复数。并且,在本申请的描述中,除非另有说明,“多个”是指两个或多于两个。“以下至少一项(个)”或其类似表达,是指的这些项中的任意组合,包括单项(个)或复数项(个)的任意组合。例如,a,b,或c中的至少一项(个),可以表示:a,b,c,a-b,a-c,b-c,或a-b-c,其中a,b,c可以是单个,也可以是多个。另外,为了便于清楚描述本申请实施例的技术方案,在本申请的实施例中,采用了“第一”、“第二”等字样对功能和作用基本相同的相同项或相似项进行区分。本领域技术人员可以理解“第一”、“第二”等字样并不对数量和执行次序进行限定,并且“第一”、“第二”等字样也并不限定一定不同。
此外,本申请实施例描述的网络架构以及业务场景是为了更加清楚的说明本申请实施例的技术方案,并不构成对于本申请实施例提供的技术方案的限定,本领域普通技术人员可知,随着网络架构的演变和新业务场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
对于视频业务,基本的处理是将视频划分为每秒N个画面,每个画面作为一个视频帧进行编码。对每个视频帧,可以从色彩、亮度两个方面进行编码,形成数字信息。但是,由于每个视频帧包含的像素较多,所以编码形成的数字信息,其尺寸相当大,直接传输会占用很大的带宽。所以,可以对视频业务进行压缩再传输。
由于视频业务的天然性,只要镜头不切换,相邻的几个帧之间,通常大部分画面的内容是相同的,只有少量内容不同。所以,可以将视频帧分组,每组的第一帧是基准帧,后续的帧是依赖帧。压缩时,对基准帧进行帧内压缩,即压缩时只参考帧内自身的码流,不参考其它帧。这样,解压缩方接收到一个基准帧,不需要其它的帧,就可以独立解压缩;对基准帧后面的依赖帧,则参照基准帧进行帧间压缩,即压缩时既参考帧内自身的码流,也参考其它帧,比如参照基准帧等。这样,压缩时可以大大提升压缩率,减少压 缩后的数据尺寸。
由于上述的压缩方式,会导致压缩后的各个视频帧的尺寸相差很大。如图1所示,基准帧(又称为I帧),其尺寸最大。依赖帧即图中的P帧(解码时只依赖前面的帧)和B帧(解码时不仅依赖前面的帧,也依赖后面的帧),其尺寸较小。
因为I帧的尺寸太大,所以在传输时,通常被传输控制协议/网际互连协议(transmission control protocol/internet protocol,TCP/IP)层或以太层分割成多个IP包,再递交给通信网络传输。典型的I帧会被划分成64个IP包。从基站(例如,gNB)的角度,会集中收到几十个IP包,然后再继续收到后面的P帧和B帧。可选的,P帧和B帧因为尺寸较小,每个帧可包括一个或多个IP包。
参见图2,基站(例如gNB)传输视频帧的方案可包括以下三种传输方案。可选的,该传输方案可用于传输I帧、B帧或P帧等。以下以传输I帧的IP包为例进行描述:
传输方案一,对应的是理想情况。gNB的空口负载很轻,gNB很快就把I帧的IP包传输给终端设备。终端设备收到后,马上对I帧的IP包进行解码,时延很小,扩展时延也很小,两者均未超出限制。
传输方案二,对应的是gNB的空口负载很重的情况,gNB有其它更紧急的数据要传输,导致很晚才调度I帧的IP包。这样可能会导致,直至I帧的时延预算超出了,gNB也没有调度完I帧的所有IP包。从gNB侧的角度,只有一部分IP包没有按时到达接收方,服务质量(quality of service,QoS)还是满足的。但在接收方视频解码器的角度,I帧超出了时延预算,不仅影响I帧本身的解码,还会影响后面的P帧和B帧的解码。
传输方案三,对应的gNB的空口负载中等的情况。但是由于gNB不知道上述几十个IP包属于同一个I帧,可能会导致I帧的扩展时延太大。虽然I帧的所有IP包在时延预算之前到达了接收方,但I帧的扩展时延太大超出了扩展时延预算的限制,不符合视频解码器的要求。
其中,时延预算,还可称为delay budget;扩展时延预算,还可称为spread delay budget,两者是不同的。时延预算定义了核心网网元和终端设备之间数据包传输时延的时间上限,或者说,是指核心网网元接收到一个数据包至递交到终端设备的时间限制,递交至终端设备例如是指递交至终端设备的非接入层,核心网网元为用于用户面控制的核心网网元,例如用户面功能(user plane function,UPF)网元。所述扩展时延预算可指视频解码器对接收的视频帧的首包至尾包的时间间隔的时间上限。可选的,对于视频解码器,若在达到上述扩展时延预算时,还没有收齐I帧的所有IP包,则视频解码器会丢弃前面收到的IP包,导致I帧的解码失败。
基于上述,本申请实施例提供一种通信方法及装置,该方法可为:通信装置获取扩展时延预算;通信装置接收第一业务的数据包;通信装置根据所述扩展时延预算,处理上述接收的数据包。例如,上述通信装置可为接入网设备,接入网设备可根据I帧的扩展时延预算,分配合理的传输时机,向终端设备发送从核心网接收到的I帧的数据包,从而减少I帧数据包到达终端设备的接入层时超过扩展时延预算限制的概率,解决I帧解码失败的技术问题。
如图3所示,提供一种网络架构,包括:接入网和核心网。
其中,接入网用于实现无线接入有关的功能,接入网设备是为终端设备提供接入的设备。接入网设备包括无线接入网(radio access network,RAN)设备和/或接入网 (access network,AN)设备。RAN设备可以是3GPP中定义的接入网设备。AN设备可以是非3GPP(non-3GPP)定义的接入网设备。
RAN设备,主要负责空口侧的无线资源管理、服务质量(quality of service,QoS)管理、数据压缩和安全处理等。所述RAN设备可以包括各种形式的基站。例如,宏基站、微基站(小站)、中继站或接入点等。RAN设备包括但不限于:5G中的下一代基站(next generation nodeB,gNB)、演进型节点B(evolved node B,eNB)、无线网络控制器(radio network controller,RNC)、节点B(node B,NB)、基站控制器(base station controller,BSC)、基站收发台(base transceiver station,BTS)、家庭基站(例如,home evolved nodeB,或home node B,HNB)、基带单元(baseband unit,BBU)、收发点(transmitting and receiving point,TRP)、发射点(transmitting point,TP)、或移动交换中心等。RAN设备还可以是云无线接入网络(cloud radio access network,CRAN)场景下的无线控制器、集中单元(centralized unit,CU),和/或分布单元(distributed unit,DU),或者RAN设备可以为中继站、接入点、车载设备、终端设备、可穿戴设备以及未来6G网络中的接入网设备或者未来演进的公用陆地移动通信网络(public land mobile network,PLMN)网络中的接入网设备等。
AN设备,用于使得终端设备与3GPP核心网之间可采用非3GPP技术互联互通。所述非3GPP技术包括但不限于:无线保真(wireless fidelity,WIFI)、全球微波互联接入(worldwide interoperability for microwave access,WiMAX)、码分多址(code division multiple access,CDMA)网络技术等。
其中,核心网设备主要用于对终端设备进行管理并提供与外网通信的网关。核心网设备例如可以为不同网络制式中的核心网网元,例如可包括以下中的一个或多个网元:接入和移动管理功能(access and mobility management function,AMF)网元、会话管理功能(session management function,SMF)网元、用户面功能(user plane function,UPF)网元、策略控制功能(policy control function,PCF)网元、应用功能(application function,AF)网元、统一数据管理(unified data management,UDM)网元、认证服务器功能(authentication server function,AUSF)网元、网络切片选择功能(network slice selection function,NSSF)网元。
AMF网元:主要负责移动网络中的移动性管理,如用户位置更新、用户注册网络、用户切换等。SMF网元:主要负责移动网络中的会话管理,如会话建立、修改、释放。具体功能如为用户分配IP地址、选择提供报文转发功能的UPF网元等。UPF网元:主要负责用户数据的转发和接收。在下行传输中,UPF网元可以从数据网络(data network,DN)接收用户数据,通过接入网设备传输给终端设备;在上行传输中,UPF网元可以通过接入网设备从终端设备接收用户数据,向DN转发该用户数据。可选的,UPF网元中为终端设备提供服务的传输资源和调度功能可以由SMF网元管理控制。PCF网元:主要支持提供统一的策略框架来控制网络行为,提供策略规则给控制层网络功能,同时负责获取与策略决策相关的用户签约信息。AF网元:主要支持与3GPP核心网交互来提供服务,例如影响数据路由决策,策略控制功能或者向网络侧提供第三方的一些服务。UDM网元,主要用于生成认证信任状,用户标识处理(如存储和管理用户永久身份等),接入授权控制和签约数据管理等。AUSF网元,主要用于在终端设备接入网络时执行认证,包括接收安全锚点功能(security anchor function,SEAF)发送的鉴权请求,选择鉴权方法,以及向 鉴权存储和处理功能(authentication repository and processing function,ARPF)请求鉴权向量等。NSSF网元,主要用于为终端设备选择网络切片实例,确定允许的网络切片选择辅助信息(network slice selection assistance information,NSSAI)、配置NSSAI和确定服务终端设备的AMF集。
可选的,图3所示的网络架构中,还可包括:终端设备。终端设备可以简称为终端,是一种具有无线收发功能的设备,终端设备可以部署在陆地上,包括室内或室外、手持或车载;也可以部署在水面上(如轮船等);还可以部署在空中(例如飞机、气球和卫星上等)。所述终端设备可以是手机(mobile phone)、平板电脑(pad)、带无线收发功能的电脑、虚拟现实(virtual reality,VR)终端设备、增强现实(augmented reality,AR)终端设备、工业控制(industrial control)中的无线终端设备、无人驾驶(self driving)中的无线终端设备、远程医疗(remote medical)中的无线终端设备、智能电网(smart grid)中的无线终端设备、运输安全(transportation safety)中的无线终端设备、智慧城市(smart city)中的无线终端设备、或智慧家庭(smart home)中的无线终端设备等。终端设备还可以是蜂窝电话、无绳电话、会话启动协议(session initiation protocol,SIP)电话、无线本地环路(wireless local loop,WLL)站、个人数字助理(personal digital assistant,PDA)、具有无线通信功能的手持设备、计算设备或连接到无线调制解调器的其它处理设备、车载设备、可穿戴设备,未来第五代(the 5th generation,5G)网络中的终端设备或者未来演进的公用陆地移动通信网络(public land mobile network,PLMN)中的终端设备等。终端设备有时也可以称为用户设备(user equipment,UE)、接入终端设备、车载终端设备、工业控制终端设备、UE单元、UE站、移动站、移动台、远方站、远程终端设备、移动设备、无线通信设备、UE代理或UE装置等。终端设备也可以是固定的或者移动的。本申请实施例对此并不限定。作为示例而非限定,在本申请实施例中,终端设备可以是可穿戴设备。可穿戴设备也可以称为穿戴式智能设备,是应用穿戴式技术对日常穿戴进行智能化设计、开发出可以穿戴的设备的总称,如眼镜、手套、手表、服饰及鞋等。可穿戴设备即直接穿在身上,或是整合到用户的衣服或配件的一种便携式设备。可穿戴设备不仅仅是一种硬件设备,更是通过软件支持以及数据交互、云端交互来实现强大功能的设备。广义穿戴式智能设备包括功能全、尺寸大、可不依赖智能手机实现完整或者部分的功能,例如:智能手表或智能眼镜等,以及只专注于某一类应用功能,需要和其它设备如智能手机配合使用,如各类进行体征监测的智能手环、智能首饰等。在本申请中,终端设备可以是物联网(internet of things,IoT)系统中的终端,IoT是未来信息技术发展的重要组成部分,其主要技术特点是将物品通过通信技术与网络连接,从而实现人机互连,物物互连的智能化网络。本申请中的终端设备可以是机器类型通信(machine type communication,MTC)中的终端设备。本申请的终端设备可以是作为一个或多个部件或者单元而内置于车辆的车载模块、车载模组、车载部件、车载芯片或者车载单元,车辆通过内置的所述车载模块、车载模组、车载部件、车载芯片或者车载单元可以实施本申请的方法。因此,本申请实施例可以应用于车联网,例如车辆外联(vehicle to everything,V2X)、车间通信长期演进技术(long term evolution vehicle,LTE-V)、车到车(vehicle to vehicle,V2V)等。
可选的,在图3所示的网络架构中,还可包括:数据网络(data network,DN)。DN可以是为用户提供数据业务服务的服务网络。例如,DN可以是IP多媒体业务(IP multi- media service)网络或互联网(internet)等。其中,终端设备可以建立从终端设备到DN的协议数据单元(protocol data unit,PDU)会话,来访问DN。
需要说明的是,在不同的通信系统中,上述核心网中的网元可以有不同的名称。在上述图3所示的示意图中,是以第五代移动通信系统为例进行说明的,并不作为对本申请的限定。例如,在LTE通信系统中,核心网设备可以包括以下中的一个或多个网元:移动性管理实体(mobility management entity,MME)和服务网关(serving gateway,S-GW)等。进一步,上述图3中的核心网网元仅为示意性说明,并不作为对本申请实施例的限定。比如,在图1所示的网络架构中,核心网网元还可包括:网络开放功能(network exposure function,NEF)、网络存储器功能(network repository function,NRF)、或业务控制点(service control point,SCP)等中的一个或多个网元等。
如图4所示,本申请实施例提供一种通信方法的流程图,该流程包括但不限于:
步骤401,通信装置获取扩展时延预算。
可选的,所述扩展时延预算可采用以下描述中的任一种:
1、扩展时延预算用于所述通信装置对一个视频帧的所有数据包的处理进行时间限制。
2、扩展时延预算是指通信装置对一个视频帧的所有数据包进行处理的一个时间限制,例如,一个视频帧的所有数据包的最大处理时延、容忍时间或最大解码持续时间等。举例而言,一个视频帧通常可以被拆分为多个数据包。通信装置的视频解码器可利用该扩展时延预算对一个视频帧的所有数据包的处理时间进行限制。例如,通信装置的视频解码器从接收视频帧的第一数据包开始计时,或从对视频帧的第一数据包进行处理开始计时;如果超出上述扩展时延预算的限制后,解码器仍没有收齐或解码成功所有的数据包,则对视频帧解码失败,解码器可以丢弃已经收到的数据包;如果在上述扩展时延预算的限制内,收齐或解码成功所有的数据包,则对视频帧解码成功。当然可以理解的是,以上收齐或解码成功可以替换为向上层递交。
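The decoder-side behaviour described above (timing starts at the frame's first packet; the frame is dropped if the spread delay budget is exceeded before all packets arrive) can be sketched in Python. This is a minimal, hypothetical model — the class and parameter names (`SpreadBudgetDecoder`, `budget_ms`) are illustrative and not from the patent:

```python
class SpreadBudgetDecoder:
    """Hypothetical model of a decoder enforcing a spread delay budget."""

    def __init__(self, budget_ms, total_packets):
        self.budget_ms = budget_ms          # spread delay budget for one frame
        self.total_packets = total_packets  # packets the frame was split into
        self.first_arrival_ms = None
        self.received = []

    def on_packet(self, seq, now_ms):
        """Returns 'decoded', 'dropped', or 'waiting' for each arriving packet."""
        if self.first_arrival_ms is None:
            self.first_arrival_ms = now_ms  # timing starts at the first packet
        if now_ms - self.first_arrival_ms > self.budget_ms:
            self.received.clear()           # budget exceeded: discard what was received
            return "dropped"
        self.received.append(seq)
        if len(self.received) == self.total_packets:
            return "decoded"                # all packets arrived within the budget
        return "waiting"
```

For a typical I frame split into 64 IP packets, `total_packets` would be 64 and the window would equal the configured spread delay budget.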
在一种可能的实现方式中,上述扩展时延预算可以为预配置的,或者,协议规定的等,不作限定。如果为预配置的,通信装置可接收第一信息,根据第一信息,确定所述扩展时延预算。所述第一信息可直接指示扩展时延预算的大小。或者,所述第一信息可间接指示扩展时延预算的大小。例如,上述第一信息可指示业务类型,通信装置根据业务类型与扩展时延预算的对应关系,确定扩展时延预算。比如,动画类视频对应的扩展时延预算可以为20ms,风景类视频对应的扩展时延预算可以为15ms,动作片类视频对应的扩展时延预算可以为10ms。或者,上述第一信息可指示视频帧的解码类型,不同解码类型可对应不同的扩展时延预算。通信装置可根据视频帧的解码类型与扩展时延预算的对应关系,确定上述扩展时延预算等,不作限定。其中,上述通信装置可为接入网设备,接入网设备可从核心网网元接收上述第一信息。例如,SMF网元可通过AMF网元将所述第一信息发送给接入网设备。或者,上述通信装置可以为终端设备,终端设备可从接入网设备接收上述第一信息。比如,接入网设备可先从核心网网元获取第一信息,之后透明传输给终端设备。可选的,上述接入网设备获取的第一信息可以是直接指示的,也可以是间接指示的;传输到终端设备的第一信息也可以是直接指示的,或者间接指示的等,不作限定。其中,第一信息可以携带于应用层数据包中,该应用层数据包由接入网设备发送给终端设备,终端设备通过解析该应用层数据包即可获取第一信息,该应用层数据包可以由应用层提供给接入层,接入层不做解析传输给终端设备。或者,第一信息可以 作为接入层控制信元,由接入网设备发送给终端设备等。
本实施例和以下实施例中的预配置,是指预先配置给通信装置,至于携带配置的内容的消息,本申请实施例不做任何限制。
步骤402,通信装置接收第一业务的数据包。例如,通信装置可接收同一个视频帧的N个数据包,所述N个数据包可以为同一个视频帧的全部或部分数据包。例如,对于基准帧,即I帧而言,设定一个I帧包括64个IP包,那么N的取值可以为小于或等于64的正整数。可选的,同一个视频帧的N个数据包,还可称为一簇(cluster)数据。所述数据包还可称为IP包,两者不作区分。
在本申请实施例中,通信装置可采用以下方式确定接收的第一数据包为视频帧的首数据包。例如,第一数据包中携带有帧起始标识,或者,通信装置可接收独立的帧起始标识和第一数据包,不限定帧起始标识和第一数据包的先后顺序。或者,第一数据包的前一个数据包中携带有帧结束标识,或者,通信装置可接收到独立的帧结束标识和前一帧的最后一个数据包。或者,通信装置在一段时间T内没有收到任何数据包,之后收到的数据包,可认为是第一数据包等。所述T的值可以是预配置的,或者协议规定的,或者通信装置自行确认的等。在以下描述中,帧起始标识,还可称为首数据包的指示信息,或者,第一指示信息等。帧结束标识,还可称为尾数据包的指示信息,或第二指示信息等,不作限定。通信装置除了接收视频帧的首数据包,还可以接收该视频帧中的其它数据包。
在一种可能的实现方式中,通信装置可接收第一数据包,称为数据包P1,所述数据包P1为N个数据包中的首个数据包,所述数据包P1中可携带有用于指示首个数据包的第一指示信息。通信装置在预设时间内接收到的其它数据包与数据包P1属于同一个视频帧。在一种可能的实现方式中,可以通过定时器实现上述功能。通信装置在接收到数据包P1时,可开启第一定时器。在第一定时器的运行期间所接收到的其它数据包可为N个数据包中的剩余数据包。所述定时器可以为协议规定的,或者,预配置的,不作限定。或者,上述数据包P1和第一指示信息两者可独立发送,两者的前后顺序不作限定。
在一种可能的实现方式中,通信装置接收第一数据包,称为数据包P1,数据包P1为N个数据包中的首个数据包,所述数据包P1中可携带有第一指示信息,第一指示信息用于指示所述数据包P1为N个数据包中的首个数据包。通信装置所接收到的预设数量的其它数据包与数据包P1属于同一视频帧,所述预设数量可以为协议规定的;或预配置的,例如利用应用层配置信息进行配置或利用接入层信息进行配置;或携带于首个数据包中,例如首个数据包包括指示预设数量的信息和第一指示信息,该信息和第一指示信息为独立的信元,或者第一指示信息即可以指示预设数量又可以指示视频帧的首包。例如,在一种可能的实现方式中,通信装置在接收到数据包P1之后,可再接收预设数量的其它数据包,认为该其它数据包与数据包P1属于同一个视频帧。例如,上述预配置的,或者,协议规定的预设数量可为N。通信装置在接收到同一视频帧的首个数据包即数据包P1之后,可根据上述预设数量再继续接收N-1个数据包等。同理,在该实现方式中,第一指示信息与数据包P1还可为相互独立发送的,且两者的先后顺序不作限定,此时,指示预设数量的信息可以跟第一指示信息携带在同一个消息中,或者第一指示信息既可以指示预设数量又可以指示视频帧的首包。
以上预设数量可以是包括首包的一个视频帧的总数量,或者,可以是一个视频帧中除去首包的数据包的数量。
在另一种可能的实现方式中,通信装置接收第一数据包,称为数据包P1,数据包P1为N个数据包中的首个数据包,所述数据包P1中可携带有第一指示信息,第一指示信息用于指示数据包P1为N个数据包中的首个数据包。通信装置所接收到的预设数据量的其它数据包与数据包P1属于同一视频帧,所述预设数据量可以为协议规定的;或预配置的,例如利用应用层配置信息进行配置或利用接入层信息进行配置;或携带于首个数据包中,例如首个数据包包括指示预设数据量的信息和第一指示信息,该信息和第一指示信息为独立的信元,或者第一指示信息即可以指示预设数据量又可以指示视频帧的首包,所述预设数据量的单位可以为字节(byte)等。例如,在一种可能的实现方式中,通信装置在接收到数据包P1之后,可再接收预设数据量的其它数据包,认为该预设数据量的其它数据包与数据包P1属于同一个视频帧。例如,一视频帧的数据量为1500字节,可将上述1500字节预配置给通信装置。通信装置在接收到一视频帧的首个数据包时,可确定首个数据包的数据量。之后,在接收到第二个数据包之后,确定首个数据包+第二个数据包的数据量之和。同理,在接收到第三个数据包之后,确定首个数据包+第二个数据包+第三个数据包之和。直至所接收到的多个数据包的数据量之和达到1500字节,则停止接收当前视频帧的数据包,且认为上述多个数据包是属于同一视频帧的。后续,通信装置可继续接收下一视频帧的数据包。同理,在该实现方式中,第一指示信息与数据包P1还可为相互独立发送的,且两者的先后顺序不作限定,此时,指示预设数据量的信息可以跟第一指示信息携带在同一个消息中,或者第一指示信息既可以指示预设数据量又可以指示视频帧的首包。
以上预设数据量可以是包括首包的一个视频帧的总数据量,或者,可以是一个视频帧中除去首包数据量的剩余数据量。
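The preset-data-amount variant above (the 1500-byte example) can be sketched as follows. The function name and the `(payload_len, payload)` tuple layout are assumptions for illustration, not the patent's normative procedure:

```python
def group_frame_by_bytes(packets, frame_bytes):
    """Collect packets for one frame until their sizes sum to frame_bytes.

    packets: iterable of (payload_len, payload) tuples, in arrival order.
    frame_bytes: preconfigured total frame size (1500 bytes in the example).
    """
    frame, total = [], 0
    for size, payload in packets:
        frame.append(payload)
        total += size
        if total >= frame_bytes:
            break             # frame complete; later packets belong to the next frame
    return frame, total
```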
在另一种可能的实现方式中,通信装置接收第一数据包,称为数据包P1,数据包P1为N个数据包中的首个数据包,数据包P1中可携带有第一指示信息,第一指示信息用于指示数据包P1为N个数据包中的首个数据包。通信装置在数据包P1与数据包P2(又可以称为第二数据包)期间所接收到的其它数据包与数据包P1属于同一个视频帧,数据包P2为下一视频帧的首个数据包。在该方式时,通信装置可通过指示每个视频帧的首数据包,区分不同的视频帧。同理,在该实现方式中,第一指示信息与数据包P1还可为相互独立发送的,且两者的先后顺序不作限定。
在另一种可能的实现方式中,通信装置接收数据包Pn(可以称为第三数据包),数据包Pn为N个数据中的最后一个数据包,数据包Pn中携带有第二指示信息,第二指示信息用于指示数据包Pn为N个数据包中的尾数据包,即最后一个数据包。通信装置在上一视频帧的尾数据包至当前视频帧的尾数据包(即数据包Pn)期间所接收到的其它数据包与数据包Pn属于同一个视频帧。在该方式中,通信装置可通过指示每个视频帧的尾数据包,区分不同的视频帧。
例如,通信装置可接收数据包Pm,所述数据包Pm中可携带有尾数据包的指示信息。之后,通信装置接收没有携带尾数据包指示信息的其它数据包。再之后,通信装置可接收数据包Pn,该数据包Pn中携带有尾数据包的指示信息。终端设备在可确定在上述数据包Pm与数据包Pn期间所接收的其它数据包,与上述数据包Pn属于同一个视频帧。可选 的,第二指示信息与数据包Pn还可单独发送,且不限定两者的先后顺序。
在另一种可能的实现方式中,通信装置接收同一帧的N个数据包,N个数据包中的每个数据包中均携带有对应视频帧的第三指示信息,其中携带有相同第三指示信息的数据包属于同一个视频帧。举例说明,通信装置接收到N1个数据包,该N1个数据包中每个数据包均携带有指示信息1,则通信装置根据指示信息1,确定上述N1个数据包属于同一个视频帧。之后,通信装置接收到N2个数据包,该N2个数据包中每个数据包中均携带有指示信息0,则通信装置可根据指示信息0,确定上述N2个数据包属于同一个视频帧。之后类推,通信装置接收N3个数据包,该N3个数据包中每个数据包均携带指示信息1等。
以上第一指示信息用于指示一个视频帧的首个数据包;第二指示信息用于指示一个视频帧的最后一个数据包,即尾数据包;第三指示信息可以理解为同一视频帧的指示,即携带有相同第三指示信息的数据包属于同一个视频帧。以上第一指示信息和第二指示信息可以结合使用,给出另一种实现方式,即在携带有第一指示信息的数据包到携带有第二指示信息的数据包之间(包括首数据包和尾数据包)的数据包属于同一个视频帧。
此外,以上第二指示信息可以替换第一指示信息,结合预设数量或预设数据量使用,例如在接收到尾数据包时,将该尾数据包之前预设数量的数据包或预设数据量的数据包理解为与该尾数据包属于同一个视频帧。
步骤403,通信装置根据扩展时延预算,处理所述数据包。例如,通信装置可根据N个数据包的数据量以及扩展时延预算,确定传输或递交N个数据包的时机。可选的,由于通信装置根据同一视频帧中N个数据包的数据量,确定传输时机。因此,可在上述N个数据包中的首个数据包中携带N个数据包数据量的指示信息。如此,通信装置在接收到每个视频帧的首个数据包时,即可确定整个视频帧的数据量,可开始寻找时机传输上述视频帧。或者,也可在上述每个视频帧的首个数据包中携带其所包括N数据包的平均数据量,通信装置根据N个数据包的平均数据量,也可确定整个视频帧的数据量大小。
可选的,也可以不发送该指示N个数据包数据量的指示信息,而由协议规定的,或通过独立于首个数据包的信息进行指示。
在一种可能的实现方式中,通信装置可以为接入网设备。上述图4所示的方案可应用于下行视频传输,上述403中,接入网设备可根据扩展时延预算和N个数据包的数据量,确定接入网设备向终端设备传输N个数据包的时机,在上述时机,向终端设备发送上述N个数据包,从而可减少终端设备接收或处理N个数据包的时延超过扩展时延预算,提高终端设备的解码成功率。或者,上述图4所示的方案可应用于上行视频传输,在上述403中,接入网设备可根据扩展时延预算和N个数据包的数据量,确定N个数据包的传输时机。且向终端设备发送调度信息,用于调度终端设备在上述传输时机上,向网络设备传输N个数据包,从而可减少接收或处理N个数据包的时延超过扩展时延预算,提高视频服务器的解码成功率。可选的,在上行视频传输方案中,终端设备可将上行视频业务传输给接入网设备,接入网设备将通过UPF网元将上行视频业务传输给视频服务器,由视频服务器对上行视频业务进行解码。
在另一种可能的实现方式中,上述通信装置可以为终端设备。终端设备的接入层可根据扩展时延预算,向上层递交N个数据包,从而减少上层的解码器处理N个数据包的时延超过扩展时延预算,提高解码成功的概率。
可选的,上述图4流程的方案,可应用于对包括多个数据包的视频帧进行处理。例如,对I帧、P帧或B帧等,不作限定。
图5示出了通信方法的一流程图,该流程可应用于下行视频传输中,包括但不限于:
步骤501,SMF网元向AMF网元发送第一信息,所述第一信息可用于直接指示第一业务的扩展时延预算大小;或者,所述第一信息用于指示第一业务的类型,或者,第一信息可用于指示第一业务的解码类型等,以间接指示第一业务的扩展时延预算大小。上述第一信息可以为SMF网元生成的,或者,上述第一信息可以是SMF网元从其它网元获取的。例如,UDM网元、PCF网元或视频应用服务器等。
步骤502,AMF网元向gNB发送第一信息。可选的,gNB根据第一信息,可确定第一业务的扩展时延预算。其中,第一信息可直接指示扩展时延预算的大小,或者,第一信息可间接指示扩展时延预算的大小。例如,第一信息可指示第一业务的类型,或者,可指示第一业务的解码类型等,不作限定。
步骤503,应用服务器向UPF网元发送同一个视频帧的N个数据包。在一种可能的实现方式中,应用服务器可通过DN向UPF网元发送同一个视频帧的N个数据包。
步骤504,UPF网元向gNB发送同一个视频帧的N个数据包。
在一种可能的实现方式中,每个视频帧的首个数据包中可携带有第一指示信息,所述第一指示信息用于指示一个视频帧的首个数据包。gNB可根据上述第一指示信息,区分不同视频帧所包括的数据包。可选的,第一指示信息和首数据包还可单独发送,且不限定两者的先后顺序。
1、gNB在接收到每个视频帧的首个数据包时,可启动定时器。所述定时器可为协议规定或预配置的。UPF网元在定时器运行期间,所接收到的来自UPF网元的其它数据包与上述首个数据包,属于同一个视频帧。即上述其它数据包与上述首个数据包,共同构成了同一个视频帧的N个数据包。
2、gNB除了接收到每个视频帧的首个数据包,还接收来自UPF网元预设数量或预设数据量的其它数据包,预设数量或预设数据量可以为协议规定的,或者,预配置的,或携带于首个数据包中等,不作限定。关于预设数量或预设数据量在所述首个数据包中的携带方式,可参见上述图4中的记载,在此不再说明。
3、gNB可确定两个首数据包之间的其它数据包,与前一个首数据包,构成同一个视频帧。举例来说,gNB接收来自UPF网元的第i视频帧的首个数据包。之后,接收来自UPF网元的其它数据包。再之后,接收来自UPF网元的第i+1视频帧的首个数据包。gNB可确定上述其它数据包与第i视频帧的首个数据包,共同构成第i视频帧,所述i为大于或等于1的正整数。
在另一种可能的实现方式中,每个视频帧的最后一个数据包中可携带有第二指示信息,所述第二指示信息用于指示一个视频帧的尾数据包。gNB可根据上述第二指示信息,区分不同视频帧所包括的数据包。
1、gNB可确定两个尾数据包之间的其它数据包,与后一个尾数据包,构成同一个视频帧。举例来说,gNB接收来自UPF网元的第i视频帧的尾数据包。之后,接收来自UPF网元的其它数据包。再之后,接收来自UPF网元的第i+1视频帧的尾数据包。gNB可确定上述其它数据包与第i+1视频帧的尾数据包,共同构成第i+1视频帧。
在另一种可能的实现方式中,不同视频帧的数据包中可携带不同的指示信息,用于 指示各自对应的视频帧。gNB可通过上述指示信息,确定每个视频帧所包括的数据包。
步骤505,gNB根据同一个视频帧N个数据包的数据量与扩展时延预算,确定开始传输N个数据包的传输时机。可选的,上述传输时机的数量可为一个或多个。当传输时机的数量为一个时,gNB在该传输时机上传输完N个数据包。而当传输时机的数量为多个时,gNB可将N个数据包拆分成多份,且在每个传输时机上分别传输对应的数据包。
在一种可能的实现方式中,gNB可根据扩展时延预算,安排数据调度,如此,gNB在接收到一个视频帧的尾包之后,计算该视频帧所有数据包的数据量之和。再安排一段连续或非连续的传输资源,传输上述视频帧的所有数据包。可选的,可在视频帧的数据包中,增加“视频帧尺寸大小的指示信息”,并将该指示信息通知给gNB。理论上,可在一视频帧的任一个数据包中增加上述“视频帧尺寸大小的指示信息”。可选的,可在视频帧的首包中增加上述“视频帧尺寸大小的指示信息”,这样gNB不必等到接收到一个视频帧的尾包之后,再计算该视频帧所有数据包的数据量之和。之后,再安排一段连续或非连续的传输资源,用于传输上述视频帧的所有数据包。而是gNB在接收到每个视频帧的首包时,即可确定每个视频帧的尺寸大小,即可开始寻找时机传输视频帧,提高了视频帧的传输效率。
可选的,上述“视频帧尺寸大小的指示信息”还可替代为“视频帧中所有数据包的平均尺寸大小的指示信息”。gNB根据数据包的平均尺寸大小以及视频帧中所包括数据包的数量,也可估算出视频帧的总尺寸。
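The scheduling decision in steps 504 and 505 might be modeled as below: knowing the frame size (e.g. from a size indication in the first packet) and the spread delay budget, the base station checks whether enough transmission occasions fit inside the budget. The fixed per-slot capacity and all names are illustrative assumptions, not the patent's actual scheduler:

```python
import math

def plan_occasions(frame_bytes, budget_ms, slot_ms, bytes_per_slot):
    """Return transmission occasions (slot start times in ms) that fit the
    whole frame inside the spread delay budget, or None if it cannot fit."""
    slots_needed = math.ceil(frame_bytes / bytes_per_slot)
    slots_available = int(budget_ms // slot_ms)
    if slots_needed > slots_available:
        return None                          # frame cannot meet the budget
    return [i * slot_ms for i in range(slots_needed)]
```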
图6示出了通信方法的另一流程,该流程同样可应用于下行视频传输中。与上述图5流程不同的是,由视频数据源(例如应用服务器)通知gNB扩展时延预算。
步骤601,视频接收方向视频数据源上报以下参数中的任一个或多个:视频接收方缓存尺寸、视频接收方缓存时间、压缩解压缩算法、压缩解压缩参数,视频类型(例如,动画,风景,人物等)。可选的,对于下行传输方案,上述视频接收方可具体为UE。
步骤602,视频数据源根据上述参数,确定视频业务的扩展时延预算。
在一种可能的实现方式中,若视频数据源需要根据上述“视频接收方缓存尺寸”,确定上述视频业务的扩展时延预算。则视频接收方可按照以下方式上报上述视频接收方缓存尺寸。
1、为每个业务配置不同的接收方缓存尺寸。视频接收方,例如UE,可获取当前业务配置的接收方缓存尺寸。在上述步骤601中上报该尺寸。
2、为每个业务配置相同的接收方缓存尺寸。由于每个视频业务的接收方缓存尺寸都相同。那么接收方可预先将上述尺寸上报给视频数据源。无论何种类型的业务,视频数据源均按照上述尺寸,确定扩展时延预算。
3、配置一个统一的接收方缓存尺寸,多个业务共享。视频数据源根据视频接收方同时接收的视频业务数量,确定每一个视频业务的缓存尺寸,保证不大于总的尺寸限制就可以。视频数据源在确定每个业务的扩展时延预算之前,可能需要与视频接收方进行商榷。通过商榷,视频数据源可获取接收方共享的缓存尺寸,以及当前同时接收视频业务的数量。根据两者,视频数据源确定一个合适的缓存尺寸。之后,根据该合适的缓存尺寸,确定当前业务的扩展时延预算。
步骤603,视频数据源向gNB发送第一信息,第一信息可直接指示扩展时延预算的大小。或者,第一信息可间接指示扩展时延预算的大小。例如,第一信息可指示视频业务类型或解码类型等。视频业务类型或解码类型可与扩展时延预算存在对应关系。gNB可根据视频业务类型或解码类型,确定扩展时延预算。可选的,除了视频业务类型,终端设备可能还要考虑终端设备能力。例如,终端设备缓存尺寸,缓存时间等。终端设备结合视频业务类型和终端设备能力,共同决定扩展时延预算。
步骤604,UPF网元向gNB发送同一视频帧的N个数据包。关于gNB接收来自UPF网元的同一个视频帧的N个数据包的方式,可参见上述图5或图4中的记载,不再说明。
步骤605,gNB根据扩展时延预算以及N个数据包的数据量,确定传输N个数据包的时机,从而使得视频帧的首数据包至尾数据包间的时延不超过扩展时延预算的限制。
上述图5和图6所示的方法,是针对下行视频传输的。在实际网络中,大部分视频传输是下行的,但也存在上行视频传输。比如,网络直播业务,主播需要将实时视频上传到视频服务器,视频服务器再将直播视频下传到各个观众。对上行视频业务,可能也存在扩展时延预算的问题。参见图7,提供通信方法的一具体流程,该流程可应用于上行视频传输,包括但不限于:
步骤701,SMF网元向AMF网元发送第一信息。第一信息可直接指示扩展时延预算的大小,或者间接指示扩展时延预算的大小,不作限定。可选的,扩展时延预算可为上行扩展时延预算,例如某个会话(session)数据对应的上行扩展时延预算等。
步骤702,AMF网元向gNB发送第一信息。
可选的,在图7的流程中,是以核心网网元通知gNB扩展时延预算为例进行说明的。除此之外,还可采用图6中视频数据源通知gNB扩展时延预算的方式,不作限定。
步骤703,UE向gNB发送通知消息,所述通知消息用于通知gNB有上行视频帧待传输。可选的,通知消息中还可包括一个上行视频帧的数据量的指示信息。可选的,UE上层可通知接入层有视频帧待传输,然后由接入层向gNB发送通知消息。或者,UE的上层可直接将视频帧的首包传输到接入层,接入层在接收到视频帧的首包时,向gNB发送上述通知消息。
在一种可能的实现方式中,UE可通过缓存状态报告(buffer status report,BSR)通知gNB。例如,BSR中可携带有通知消息。或者,UE可通过除BSR外的,其它媒体接入控制控制元素(media access control control element,MAC CE)通知gNB,所述MAC CE可携带有通知消息。或者,UE可通过SDAP层、PDCP层、或RLC层生成控制信令,通知gNB。或者,UE可在传输其它数据时,通过SDAP层、PDCP层或RLC层的协议数据单元(protocol data unit,PDU)头的保留字段,通知gNB。即上述保留字段中可携带有通知消息。在一种更具体的实现方式中,UE中存在多个待传输的视频帧,不同视频帧的扩展时延预算可相同或不同。当上述多个待传输视频帧的扩展时延预算不同时,UE可通过不同的BSR上报不同视频帧的通知消息。例如,通过BSR上报通知消息的一示例为:“有一5000字节的视频帧要传输,且希望在30ms的时延内传输完上述视频帧”。gNB可根据UE的要求以及该视频帧对应的扩展时延预算,为该视频帧分配传输资源。
步骤704,gNB收到上述通知消息,确定调度行为,确保在扩展时延预算内为UE分配足够的上行资源,完成视频帧的上行数据传输。例如,gNB在接收到通知消息时,可根据上行视频帧的数据量以及扩展时延预算,为UE分配合适的时机。且gNB向UE发送下行控制信息(downlink control information,DCI),所述DCI用于为UE分配上行数据传输的资源。UE根据DCI分配的上行资源,向gNB传输上行视频帧。
可选的,上述DCI中可包括上行授权(uplink grant,UL grant)。gNB可通过一个或多个UL grant为UE分配上行资源。当gNB通过多个UL grant为UE分配上行授权时,在首个UL grant中可携带有指示信息,该指示信息可指示后续会继续为UE分配上行资源。例如,所述指示可具体指示:在未来T时间内,gNB还会分配X比特资源给UE。进一步的,UE接收到该指示信息时,可优化逻辑信道优先级划分(logical channel prioritization,LCP)行为。比如,针对该视频业务对应的逻辑信道,UE可暂时提高该视频业务的保证比特速率(guaranteed bit rate,GBR)的值,或者暂时提高该视频业务的优先级,保证在LCP时尽量多传输一些当前视频帧的数据,进一步确保当前视频帧的时延不超过上行扩展时延预算的限制。
进一步的,若UE决定优化当前视频业务的传输。例如,暂时提高当前视频业务的GBR值,或暂时提高当前视频业务的优先级。则何时结束这种优待,有以下几种实现方式:UE决定。例如,UE可在传输完当前帧的视频业务时,即结束上述优待。或者,gNB决定。例如,gNB为UE分配上行资源时,可增加说明。例如,当前分配的上行资源,希望倾向传输视频业务。如此,UE做LCP时,可优待该视频业务的传输。或者,gNB可通过指示信息,通知UE结束上述优待。所述指示信息可携带在DCI、MAC CE或无线资源控制(radio resource control,RRC)中。例如,若gNB采用动态调度的方式为UE分配上行资源,则gNB可通过DCI通知UE,结束优待。或者,若gNB采用半静态调度的方式为UE分配上行资源,则gNB可通过RRC或MAC CE通知UE,结束优待。
通过上述图7描述的方法,UE通知gNB,待传输上行视频数据的数据量,且上述视频数据属于同一帧视频业务。则gNB在调度时可参考上述信息。保证上行视频帧在传输中不超过扩展时延预算的限制。
图8示出了视频接收方(例如,终端设备)协议栈的一示意图。在一种可能的实现方式中,视频接收方的协议栈,由上至下可包括:应用(application,APP)层、业务数据适配协议(service data adapt protocol,SDAP)层、分组数据汇聚协议(packet data convergence protocol,PDCP)层、无线链路控制(radio link control,RLC)层、媒体接入控制(media access control,MAC)层和物理(physical,PHY)层。可选的,本申请实施例所提及的“接入层”可包括:SDAP层、PDCP层、RLC层、MAC层或PHY层中的一个或多个;上层可包括APP层。接入层和上层之间可为相邻的协议层,也可为非相邻的协议层,不作限定。比如,在上层和接入层之间还可包括IP层、TCP层或用户数据报协议(user datagram protocol,UDP)层等,不作限定。
如图9所示,提供一种通信方法的流程图。可选的,可在视频接收方(例如,终端设备)执行该流程的方法。包括但不限于:
步骤901,接入层接收第一数据包,所述第一数据包为视频帧的首个数据包。
在本申请实施例中,接入层可采用以下方式确定接收的第一数据包为视频帧的首数据包。例如,第一数据包中携带有帧起始标识,或者,接入层收到独立的帧起始标识和第一数据包,不限定帧起始标识和第一数据包的先后顺序。或者,第一数据包的前一个数据包中携带有帧结束标识,或者,接入层收到独立的帧结束标识和前一帧的最后一个数据包,不限定两者的先后顺序。或者,接入层在一段时间T内没有收到任何数据包,之后收到的数据包,可认为是第一数据包。所述T的值可以是预配置的,或者,协议规定的,或者UE自行确认的等。
步骤902,接入层接收与第一数据包属于同一个视频帧的其它数据包。
步骤903,接入层在预设时间到期或属于同一视频帧的N个数据包都被接收到时,向上层递交N个数据包,所述N个数据包中包括第一数据包和其它数据包。
在一种可能的实现方式中,可预配置上述预设时间,或者,协议规定上述预设时间。可选的,可通过定时器实现上述方案。比如,接入层在接收到一视频帧的首包时,即开启定时器;在定时器结束时,即向上层递交所接收到的数据包。或者,接入层可接收接入网设备,核心网网元或视频数据源发送的指示信息,根据该指示信息,确定预设时间。该指示信息可指示具体的预设时间大小。或者,该指示信息可指示扩展时延预算。根据该扩展时延预算,可确定预设时间,预设时间可小于或等于扩展时延预算等。在一种方案中,接入层每接收到一个数据包,即递交到上层。这样可能会导致上层接收视频帧的首包至尾包的时间间隔超过扩展时延预算的限制,使得上层解码器解码失败。而在该实现方式中,接入层不再每接收到一个数据包即向上层递交,而是将一段时间(即预设时间)连续所接收到的数据包,统一递交到上层,这样可减少上层中每个视频帧的时延超过扩展时延预算的概率。当然,如果上述预设时间设置得合适,上层可将一个视频帧的全部数据包统一递交到上层,这样就可以保证上层每个视频帧的扩展时延均不超过扩展时延预算,保证上层解码器成功解码。
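The timer-based delivery above can be sketched as the following hypothetical access-stratum model: packets are buffered from the frame's first packet onward, and everything received so far is delivered to the upper layer in one batch when the preset time expires. Class and method names are illustrative:

```python
class AsReassembler:
    """Hypothetical access-stratum buffer delivering one batch per preset time."""

    def __init__(self, preset_ms):
        self.preset_ms = preset_ms
        self.buffer = []
        self.t0 = None

    def on_packet(self, pkt, now_ms, is_first=False):
        """Returns a batch of packets to deliver, or None to keep buffering."""
        if is_first:
            self.t0 = now_ms          # timer starts on the frame's first packet
            self.buffer = []
        self.buffer.append(pkt)
        if self.t0 is not None and now_ms - self.t0 >= self.preset_ms:
            out, self.buffer, self.t0 = self.buffer, [], None
            return out                # deliver the whole batch at once
        return None
```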
本实施例中,确定数据包是否属于同一个视频帧的方式可以采用以上实施例的任一种方式,再此不再赘述。
在另一种可能的实现方式中,接入层可在N个数据包均被接收到时,统一向上层递交N个数据包。所述N可小于或等于一个视频帧所包括数据包的数量。例如,一个视频帧包括64个数据包,则N的取值可小于或等于64等。在该实现方式中,接入层统一向上层递交N个数据包。相对于接入层一个个向上层递交数据包的方式,同样可减少上层视频帧的扩展时延超出扩展时延预算限制的概率,降低解码失败的概率。当然,如果N的取值等于一个视频帧全部数据包的数量,那么接入层可将一视频帧的全部数据包统一递交到上层,可以保证上层每个视频帧的扩展时延不超过扩展时延预算,使得上层解码器成功解码。
在另一种可能的实现方式中,当接入层接收到一视频帧的尾包时,可认为本视频帧的全部数据包均收到,且向上层递交全部数据包,否则不向上层递交数据包。接入层判断一数据包为尾包的方式很多,比如,可在尾包中携带视频帧结束标识,或者,可在尾包中携带尾数据包的指示信息等,或者,可单独发送视频帧结束标识或尾数据包的指示信息等,不作限定。可选的,在另一种方式中,UE可依靠基站PDCP层为每个数据包分配的PDCP序列号(serial number,SN),判断本视频帧所有数据包是否都收齐。可选的,如果UE始终未能收齐第i视频帧的所有数据包,且收到第i+1帧的首数据包,则UE可丢弃第i帧的数据包,不再向上层递交。
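The PDCP-SN completeness check and the discard rule above (frame i is dropped once frame i+1's first packet arrives while frame i is still incomplete) might be modeled as below. SN wrap-around is ignored and all names are illustrative assumptions:

```python
def frame_complete(received_sns, first_sn, last_sn):
    """Check via PDCP sequence numbers whether every packet of a frame
    (SNs first_sn..last_sn inclusive) has arrived. Ignores SN wrap-around."""
    return set(range(first_sn, last_sn + 1)) <= set(received_sns)

def deliver_or_discard(frame_sns, first_sn, last_sn, next_frame_started):
    """Deliver a complete frame; discard an incomplete one once the next
    frame's first packet has arrived; otherwise keep waiting."""
    if frame_complete(frame_sns, first_sn, last_sn):
        return "deliver"
    return "discard" if next_frame_started else "wait"
```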
可选的,上述图9所示的方法,与上述图4至图7所示的方法,可相结合使用。例如,在一种可能的实现方式中,gNB可根据扩展时延预算,为N个数据包分配传输时机,以使得N个数据包的传输满足时延扩展预算的要求。进一步的,UE的接入层在接收到N个数据包时,接入层可将N个数据包,统一递交到上层。
如图10所示,提供一种通信方法的流程图,包括但不限于:
步骤1000,SMF网元向UE发送第一信息。可选的,SMF可通过AMF网元、gNB 向UE发送第一信息。第一信息可直接指示扩展时延预算的大小,或者,间接指示扩展时延预算的大小。
步骤1001,UPF网元接收来自同一视频帧的N个数据包。例如,视频数据源可通过DN向UPF网元发送同一视频帧的N个数据包。
步骤1002,UPF网元向gNB发送同一视频帧的N个数据包。关于gNB确定同一视频帧的N个数据包的方式,可参见上述记载。
步骤1003,gNB向UE发送同一视频帧的N个数据包。
步骤1004,UE的接入层在等到N个数据包全部收齐时,再向上层统一递交。
同理,gNB向UE发送的同一视频帧的N个数据包中携带有首包的指示信息,或者,携带有尾包的指示信息,或者,携带有不同视频帧的指示信息。因此,UE的接入层在接收到数据包时,可区分每个视频帧包括哪些数据包。不同与现有方案的是,UE的接入层将同一视频帧的所有数据包,统一递交给上层。在一种具体的实现方式中,UE的接入层在接收到一个数据包时。如果发现是属于当前视频帧内的包,则暂时不向上层递交,而是缓存起来。等到收到当前视频帧的全部数据包,再集中向上层递交。可选的,如果UE采用指示信息用于不同视频帧的数据包,那么UE可确定携带相同指示信息的数据包属于同一个视频帧。而当UE接收到不同指示信息的数据包时,UE可确定当前视频帧的数据包已经全部收到,此时可向上层统一递交。
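The per-frame indication described above (all packets of one frame carry the same flag, toggling 1/0/1/... between consecutive frames) can be sketched as a simple grouping function; this is an illustrative model, not the patent's normative procedure:

```python
def split_frames_by_indication(packets):
    """Group packets into frames by the shared per-frame indication bit.

    packets: list of (indication, payload) tuples in arrival order.
    A change of indication marks the boundary to the next frame.
    """
    frames, current, last_ind = [], [], None
    for ind, payload in packets:
        if last_ind is not None and ind != last_ind:
            frames.append(current)    # indication toggled: previous frame complete
            current = []
        current.append(payload)
        last_ind = ind
    if current:
        frames.append(current)        # flush the final (possibly partial) frame
    return frames
```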
在本申请实施例中,为了防止扩展时延预算超时,视频接收方(例如UE)收齐同一视频帧的全部数据包,再集中向上层递交。
可以理解的是,上述流程中的方案可各自单独使用,也可相结合使用,不作限定。比如,在一种可能的方式中,在下行视频传输中,gNB可根据扩展时延预算为不同的视频帧,分配不同的传输时机,且利用上述传输时机,向UE发送不同的视频帧。而UE侧在接收到视频帧之后,可确定每个视频帧所包括的数据包,将每个视频帧的所有数据包统一递交到上层。
以上结合图1至图10详细说明了本申请实施例所提供的方法。以下结合图11和图12详细说明本申请实施例所提供的装置。应理解,装置与方法实施例的描述相互对应,装置中未详细描述的内容可参见上文方法实施例中的描述。
图11是本申请实施例提供的装置1100的示意性框图,用于实现上述方法中接入网设备或终端设备的功能。例如,该装置可以为软件单元或芯片系统。所述芯片系统可以由芯片构成,也可以包括芯片和其它分立器件。该装置包括通信单元1101,还可包括处理单元1102。通信单元1101,可以与外部进行通信。处理单元1102,用于进行处理。通信单元1101,还可以称为通信接口、收发单元、输入/输出接口等。
在一种示例中,装置1100可实现上述方法实施例中接入网设备执行的步骤,所述装置1100可以是接入网设备,或者配置于接入网设备中的芯片或电路。通信单元1101执行上文方法实施例中接入网设备的收发操作,处理单元1102用于执行上文方法实施例中接入网设备侧的处理相关操作。或者,装置1100可实现上文方法实施例中由终端设备执行的步骤,所述装置1100可以是终端设备,或者配置于终端设备中的芯片或电路。通信单元1101执行上文方法实施例中终端设备的收发操作,处理单元1102用于执行上文方法实施例中终端设备的处理相关操作。
比如,处理单元1102,用于获取扩展时延预算,所述扩展时延预算用于所述通信装 置对一个视频帧的所有数据包的处理进行时间限制;通信单元1101,用于接收第一业务的数据包;处理单元1102,还用于根据所述扩展时延预算,处理所述数据包。
可选的,所述扩展时延预算为预配置的,或者协议规定的。例如,通信单元1101可从核心网网元接收用于指示所述扩展时延预算的第一信息;或者,从接入网设备接收用于指示所述扩展时延预算的第一信息。
可选的,所述第一信息用于指示所述第一业务的业务类型时,处理单元1102获取扩展时延预算,包括:根据所述第一信息,确定所述第一业务的业务类型;根据业务类型与扩展时延预算的对应关系,确定所述扩展时延预算。
可选的,通信单元1101接收第一业务的数据包,包括:接收所述第一业务中属于同一视频帧的N个数据包,所述N为大于1的整数;处理单元1102根据所述扩展时延预算,处理所述数据包,包括:根据所述N个数据包的数据量以及所述第一业务的扩展时延预算,确定传输或递交所述N个数据包的时机。
可选的,通信单元1101接收所述第一业务中属于同一视频帧的N个数据包,包括:
接收第一数据包,所述第一数据包中携带有第一指示信息,所述第一指示信息用于指示所述第一数据包为所述N个数据包中的首个数据包;在预设时间内接收到的其它数据包与所述第一数据包属于同一视频帧。
可选的,通信单元1101接收所述第一业务中属于同一视频帧的N个数据包,包括:
接收第一数据包,所述第一数据包中携带有第一指示信息,所述第一指示信息用于指示所述第一数据包为所述N个数据包中的首个数据包。所接收的预设数量或预设数据量的其它数据包与所述第一数据包属于同一视频帧。
可选的,通信单元1101接收所述第一业务中属于同一视频帧的N个数据包,包括:
接收第一数据包,所述第一数据包中携带有第一指示信息,所述第一指示信息用于指示所述第一数据包为所述N个数据包中的首个数据包;在所述第一数据包与第二数据包期间所接收的其它数据包与所述第一数据包属于同一个视频帧,所述第二数据包为下一视频帧的首个数据包。
可选的,通信单元1101接收所述第一业务中属于同一个视频帧的N个数据包,包括:接收第三数据包,所述第三数据包中携带有第二指示信息,所述第二指示信息用于指示所述第三数据包为所述N个数据包中的尾数据包;在上一帧的尾数据包与所述第三数据包期间所接收到的其它数据包与所述第三数据包属于同一个视频帧。
可选的,所述N个数据包中的首个数据包中携带有第三指示信息,所述第三指示信息用于指示所述第一业务中视频帧的数据量大小;或者,所述N个数据包中的首个数据包中携带有第四指示信息,所述第四指示信息用于指示所述第一业务中视频帧所包括数据包的平均数据量大小。
可选的,所述第一业务的扩展时延预算是根据以下参数中的一个或多个所确定的:视频接收方的缓冲空间,视频接收方的缓存时长,视频接收方的解压缩算法,视频接收方的解压缩参数,所述第一业务的视频类型。
本申请实施例中对单元的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,另外,在本申请各个实施例中的各功能单元可以集成在一个处理器中,也可以是单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
可以理解的是,上述实施例中的通信单元的功能可以由收发器实现,处理单元的功能可以由处理器实现。收发器可以包括发射器和/或接收器等,分别用于实现发送单元和/或接收单元的功能。以下结合图12举例进行说明。
图12是本申请实施例提供的装置1200的示意性框图,图12所示的装置1200可以为图11所示的装置的一种硬件电路的实现方式。该装置可执行上述方法实施例中接入网设备或终端设备的功能。为了便于说明,图12仅示出该通信装置的主要部件。
图12所示的通信装置1200包括至少一个处理器1201。通信装置1200还可以包括至少一个存储器1202,用于存储程序指令和/或数据。存储器1202和处理器1201耦合。本申请实施例中的耦合是装置、单元或模块之间的间接耦合或通信连接,可以是电性、机械性或其它的形式,用于装置、单元或模块之间的信息交互。处理器1201可以和存储器1202协同操作,处理器1201可以执行存储器1202中存储的程序指令,所述至少一个存储器1202中的至少一个可以包括于处理器1201中。
装置1200还可以包括通信接口1203,用于通过传输介质和其它设备进行通信,从而用于通信装置1200可以和其它设备进行通信。在本申请实施例中,通信接口可以是收发器、电路、总线、模块或其它类型的通信接口。在本申请实施例中,通信接口为收发器时,收发器可以包括独立的接收器、独立的发射器;也可以集成收发功能的收发器、或者是接口电路。
应理解,本申请实施例中不限定上述处理器1201、存储器1202以及通信接口1203之间的连接介质。本申请实施例在图12中以存储器1202、处理器1201以及通信接口1203之间通过通信总线1204连接,总线在图12中以粗线表示,其它部件之间的连接方式,仅是示意性说明,并不作为限定。所述总线可以包括地址总线、数据总线、控制总线等。为了便于表示,图12中仅用一条粗线表示,但并不表示仅有一根总线或一种类型的总线等。
在一种示例中,装置1200用于实现上文方法实施例中接入网设备执行的步骤。通信接口1203用于执行上文方法实施例中接入网设备的收发相关操作,处理器1201用于执行上文方法实施例中接入网设备侧的处理相关操作。或者,装置1200用于实现上文方法实施例中终端设备执行的步骤。通信接口1203用于执行上文方法实施例中终端设备的收发相关操作,处理器1201用于执行上文方法实施例中终端设备侧的处理相关操作。
例如,处理器1201,用于获取扩展时延预算,所述扩展时延预算用于所述通信装置对一个视频帧的所有数据包的处理进行时间限制;通信接口1203,用于接收第一业务的数据包;处理器1201,还用于根据所述扩展时延预算,处理所述数据包。
可选的,所述扩展时延预算为预配置的;例如,通信接口1203,可从核心网网元接收用于指示所述扩展时延预算的第一信息,或者从接入网设备接收用于指示所述扩展时延预算的第一信息。
可选的,所述第一信息用于指示所述第一业务的业务类型时,处理器1201获取扩展时延预算,包括:根据所述第一信息,确定所述第一业务的业务类型;根据业务类型与扩展时延预算的对应关系,确定所述扩展时延预算。
可选的,通信接口1203接收第一业务的数据包,包括:接收所述第一业务中属于同一视频帧的N个数据包,所述N为大于1的整数;处理器1201根据所述扩展时延预算,处理所述数据包,包括:根据所述N个数据包的数据量以及所述第一业务的扩展时延预 算,确定传输或递交所述N个数据包的时机。
可选的,通信接口1203接收所述第一业务中属于同一视频帧的N个数据包,包括:接收第一数据包,所述第一数据包中携带有第一指示信息,所述第一指示信息用于指示所述第一数据包为所述N个数据包中的首个数据包;在预设时间内接收到的其它数据包与所述第一数据包属于同一视频帧。
可选的,通信接口1203接收所述第一业务中属于同一视频帧的N个数据包,包括:接收第一数据包,所述第一数据包中携带有第一指示信息,所述第一指示信息用于指示所述第一数据包为所述N个数据包中的首个数据包;所接收的预设数量或预设数据量的其它数据包与所述第一数据包属于同一视频帧。
可选的,通信接口1203接收所述第一业务中属于同一视频帧的N个数据包,包括:接收第一数据包,所述第一数据包中携带有第一指示信息,所述第一指示信息用于指示所述第一数据包为所述N个数据包中的首个数据包;在所述第一数据包与第二数据包期间所接收的其它数据包与所述第一数据包属于同一个视频帧,所述第二数据包为下一视频帧的首个数据包。
可选的,通信接口1203接收所述第一业务中属于同一个视频帧的N个数据包,包括:接收第三数据包,所述第三数据包中携带有第二指示信息,所述第二指示信息用于指示所述第三数据包为所述N个数据包中的尾数据包;在上一帧的尾数据包与所述第三数据包期间所接收到的其它数据包与所述第三数据包属于同一个视频帧。
可选的,所述N个数据包中的首个数据包中携带有第三指示信息,所述第三指示信息用于指示所述第一业务中视频帧的数据量大小;或者,所述N个数据包中的首个数据包中携带有第四指示信息,所述第四指示信息用于指示所述第一业务中视频帧所包括数据包的平均数据量大小。
可选的,所述第一业务的扩展时延预算是根据以下参数中的一个或多个所确定的:视频接收方的缓冲空间,视频接收方的缓存时长,视频接收方的解压缩算法,视频接收方的解压缩参数,所述第一业务的视频类型。
进一步的,本申请实施例还提供一种装置,所述装置用于执行上文方法实施例中的方法。一种计算机可读存储介质,包括程序,当所述程序被处理器运行时,上文方法实施例中的方法被执行。一种计算机程序产品,所述计算机程序产品包括计算机程序代码,当所述计算机程序代码在计算机上运行时,使得计算机实现上文方法实施例中的方法。一种芯片,包括:处理器,所述处理器与存储器耦合,所述存储器用于存储程序或指令,当所述程序或指令被所述处理器执行时,使得装置执行上文方法实施例中的方法。一种系统,包括执行上文方法实施例的接入网设备、终端设备、核心网网元或应用服务器中的至少一个。
本申请实施例中,处理器可以是通用处理器、数字信号处理器、专用集成电路、现场可编程门阵列或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件,可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件处理器执行完成,或者用处理器中的硬件及软件模块组合执行完成。
在本申请实施例中,存储器可以是非易失性存储器,比如硬盘(hard disk drive,HDD)或固态硬盘(solid-state drive,SSD)等,还可以是易失性存储器(volatile  memory),例如随机存取存储器(random-access memory,RAM)。存储器是能够用于携带或存储具有指令或数据结构形式的期望的程序代码并能够由计算机存取的任何其他介质,但不限于此。本申请实施例中的存储器还可以是电路或者其它任意能够实现存储功能的装置,用于存储程序指令和/或数据。
本申请实施例提供的方法中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本发明实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、网络设备、用户设备或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一个计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、服务器或数据中心通过有线(例如同轴电缆、光纤、数字用户线(digital subscriber line,简称DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、服务器或数据中心进行传输。所述计算机可读存储介质可以是计算机可以存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。所述可用介质可以是磁性介质(例如,软盘、硬盘、磁带)、光介质(例如,数字视频光盘(digital video disc,简称DVD))、或者半导体介质(例如,SSD)等。
显然,本领域的技术人员可以对本申请进行各种改动和变型而不脱离本申请的范围。这样,倘若本申请的这些修改和变型属于本申请权利要求及其等同技术的范围之内,则本申请也意图包含这些改动和变型在内。

Claims (14)

  1. 一种通信方法,其特征在于,包括:
    通信装置获取扩展时延,所述扩展时延用于所述通信装置对一个视频帧的所有数据包的处理进行时间限制;
    所述通信装置接收第一业务的数据包;
    所述通信装置根据所述扩展时延,处理所述数据包。
  2. 如权利要求1所述的方法,其特征在于,所述扩展时延为预配置的;或者
    所述通信装置为接入网设备,所述接入网设备从核心网网元接收用于指示所述扩展时延的第一信息;或者
    所述通信装置为终端设备,所述终端设备从接入网设备接收用于指示所述扩展时延的第一信息。
  3. 如权利要求2所述的方法,其特征在于,所述第一信息用于指示所述第一业务的业务类型,所述通信装置获取扩展时延,包括:
    所述通信装置根据所述第一信息,确定所述第一业务的业务类型;
    所述通信装置根据业务类型与扩展时延的对应关系,确定所述扩展时延。
  4. 如权利要求1至3中任一项所述的方法,其特征在于,所述通信装置接收第一业务的数据包,包括:所述通信装置接收所述第一业务中属于同一视频帧的N个数据包,所述N为大于1的整数;
    所述通信装置根据所述扩展时延,处理所述数据包,包括:所述通信装置根据所述N个数据包的数据量以及所述第一业务的扩展时延,确定传输或递交所述N个数据包的时机。
  5. 如权利要求4所述的方法,其特征在于,所述通信装置接收所述第一业务中属于同一视频帧的N个数据包,包括:
    所述通信装置接收第一数据包,所述第一数据包中携带有第一指示信息,所述第一指示信息用于指示所述第一数据包为所述N个数据包中的首个数据包;
    所述通信装置在预设时间内接收到的其它数据包与所述第一数据包属于同一视频帧。
  6. 如权利要求4所述的方法,其特征在于,所述通信装置接收所述第一业务中属于同一视频帧的N个数据包,包括:
    所述通信装置接收第一数据包,所述第一数据包中携带有第一指示信息,所述第一指示信息用于指示所述第一数据包为所述N个数据包中的首个数据包;
    所述通信装置所接收的预设数量或预设数据量的其它数据包与所述第一数据包属于同一视频帧。
  7. 如权利要求4所述的方法,其特征在于,所述通信装置接收所述第一业务中属于同一视频帧的N个数据包,包括:
    所述通信装置接收第一数据包,所述第一数据包中携带有第一指示信息,所述第一指示信息用于指示所述第一数据包为所述N个数据包中的首个数据包;
    所述通信装置在所述第一数据包与第二数据包期间所接收的其它数据包与所述第一 数据包属于同一个视频帧,所述第二数据包为下一视频帧的首个数据包。
  8. 如权利要求4所述的方法,其特征在于,所述通信装置接收所述第一业务中属于同一视频帧的N个数据包,包括:
    所述通信装置接收第三数据包,所述第三数据包中携带有第二指示信息,所述第二指示信息用于指示所述第三数据包为所述N个数据包中的尾数据包;
    所述通信装置在上一帧的尾数据包与所述第三数据包期间所接收到的其它数据包与所述第三数据包属于同一个视频帧。
  9. 如权利要求4至8中任一项所述的方法,其特征在于,所述N个数据包中的首个数据包中携带有第三指示信息或第四指示信息,所述第三指示信息用于指示所述N个数据包的数据量大小,所述第四指示信息用于指示所述N个数据包平均数据量的大小。
  10. 如权利要求1至9中任一项所述的方法,其特征在于,所述第一业务的扩展时延是根据以下参数中的一个或多个所确定的:视频接收方的缓存空间,视频接收方的缓存时长,视频接收方的解压缩算法,视频接收方的解压缩参数,所述第一业务的视频类型。
  11. 一种装置,其特征在于,包括用于执行权利要求1至10任一项所述的方法的各步骤的单元。
  12. 一种装置,其特征在于,包括至少一个处理器和接口电路,所述至少一个处理器用于通过所述接口电路与其它装置通信,并执行权利要求1至10任一项所述的方法。
  13. 一种装置,其特征在于,包括处理器,用于调用存储器中存储的程序,以执行如权利要求1至10任一项所述的方法。
  14. 一种计算机可读存储介质,其特征在于,包括程序,当所述程序被处理器运行时,如权利要求1至10中任一项所述的方法被执行。
PCT/CN2021/092358 2020-06-05 2021-05-08 Communication method and apparatus WO2021244218A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010504771.XA CN113766567A (zh) 2020-06-05 2020-06-05 Communication method and apparatus
CN202010504771.X 2020-06-05

Publications (1)

Publication Number Publication Date
WO2021244218A1 true WO2021244218A1 (zh) 2021-12-09

Family

ID=78784003

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/092358 WO2021244218A1 (zh) 2020-06-05 2021-05-08 Communication method and apparatus

Country Status (2)

Country Link
CN (1) CN113766567A (zh)
WO (1) WO2021244218A1 (zh)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023226879A1 * 2022-05-25 2023-11-30 Huawei Technologies Co., Ltd. Communication method and apparatus
WO2023241446A1 * 2022-06-13 2023-12-21 Vivo Mobile Communication Co., Ltd. Information processing method and communication device
WO2024067424A1 * 2022-09-30 2024-04-04 China Mobile Communication Co., Ltd. Research Institute Data processing method and apparatus, communication device, and storage medium
WO2024099239A1 * 2022-11-11 2024-05-16 Shanghai Langbo Communication Technology Co., Ltd. Method and apparatus in a communication node used for wireless communication

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023109743A1 * 2021-12-17 2023-06-22 Huawei Technologies Co., Ltd. Data transmission method and communication apparatus
CN118235470A (zh) * 2022-03-15 2024-06-21 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Wireless communication method and device
CN118525581A (zh) * 2022-03-15 2024-08-20 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Wireless communication method and device
CN114726513A (zh) * 2022-03-18 2022-07-08 Alibaba (China) Co., Ltd. Data transmission method, device, medium, and product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007138243A1 (en) * 2006-05-26 2007-12-06 British Telecommunications Public Limited Company Video processing
CN101271720A * 2008-04-22 2008-09-24 ZTE Corporation Method for synchronizing audio and video of mobile phone streaming media
CN102497578A * 2011-11-25 2012-06-13 Wuhan University Real-time mobile audio and video communication method in a 3G network environment
CN106331820A * 2015-06-29 2017-01-11 Chengdu TD Tech Ltd. Audio and video synchronization processing method and device
CN110351201A * 2018-04-04 2019-10-18 Huawei Technologies Co., Ltd. Data processing method and apparatus


Also Published As

Publication number Publication date
CN113766567A (zh) 2021-12-07

Similar Documents

Publication Publication Date Title
WO2021244218A1 (zh) Communication method and apparatus
JP6907444B2 (ja) データ伝送方法、通信デバイス、端末、および基地局
US20200068652A1 (en) Data transmission processing method and apparatus
WO2021259112A1 (zh) Service transmission method and apparatus
US20230354334A1 (en) Communication method and apparatus
US20240031870A1 (en) Media data transmission method and communication apparatus
WO2019206322A1 (zh) 能力开放方法、相关装置及系统
US20230090232A1 (en) Terminal device and network device
US20230050923A1 (en) Media packet transmission method, apparatus, and system
US20230231787A1 (en) Communication method and an apparatus
US20240314637A1 (en) Data transmission method and communication apparatus
US20240340693A1 (en) Communication method and communication apparatus
US12047806B2 (en) Interface between a radio access network and an application
WO2023087145A1 (en) Methods and apparatuses for pdcp reordering management
WO2021218593A1 (zh) Communication method and apparatus
WO2024067374A1 (zh) Communication method and apparatus
WO2023109743A1 (zh) Data transmission method and communication apparatus
WO2024055871A1 (zh) Method for transmitting data in communication system and communication apparatus
WO2023070392A1 (zh) Data transmission method, device, and storage medium
KR20150040080A (ko) 통신 시스템에서 트래픽 오프로딩 방법 및 장치
WO2023173292A1 (zh) Wireless communication method and device
WO2023185608A1 (zh) Data transmission method and communication apparatus
US12058039B2 (en) Packet validity time enhancement for quality of service flows
WO2024169477A1 (zh) Method and apparatus for sending uplink control information
US20240305574A1 (en) Quality of service qos management method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21817332

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21817332

Country of ref document: EP

Kind code of ref document: A1