WO2012116563A1 - Video processing method and device (视频处理方法和设备) - Google Patents

Video processing method and device (视频处理方法和设备)

Info

Publication number
WO2012116563A1
Authority
WO
WIPO (PCT)
Prior art keywords
codec unit
video data
decoded
unit
encoder
Application number
PCT/CN2011/083526
Other languages
English (en)
French (fr)
Inventor
李波杰
彭程晖
张锦芳
张伟
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Publication of WO2012116563A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements

Definitions

  • The embodiments of the present invention relate to the field of video technologies, and in particular to a video processing method and device.
  • Background
  • With the development of communication technology, video is used more and more widely in mobile networks. While providing rich information, the video service also occupies most of the system bandwidth and has become a major burden for operators, especially mobile operators. Because bandwidth is limited, video services are prone to causing congestion in mobile networks, degrading the experience of video users and affecting other voice/data users.
  • Existing video coding formats distinguish between intra-coded frames and inter-coded frames. An intra-coded frame is an independently encoded image: the receiving end can decode it without relying on any other frame. Decoding an inter-coded frame, by contrast, depends on previously received frames; if a previously received frame is lost or transmitted with errors, every subsequent image that depends on that frame will decode incorrectly until the next intra-coded frame is decoded normally. Because an intra-coded frame references no other frames, its encoded data volume is generally much larger than that of an inter-coded frame.
  • Using I to denote an intra-coded frame, P a forward inter-frame predictive coded frame, and B a bidirectional inter-frame predictive coded frame (generally used for non-real-time video), many existing videos, especially real-time videos, exhibit a periodic IPPP... format (possibly with some frames in between coded as B frames). Some coding standards call such a periodic sequence an image group, and call the first I frame in the image group a key frame.
  • I, P, and B can also denote the coding type of a macroblock, with the same meaning as the frame coding types, where a macroblock is the coding unit of a frame. All macroblocks in an I frame are coded as I macroblocks; a P frame may contain macroblocks of the two types I and P; and a B frame may contain macroblocks of the three types I, P, and B.
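The image-group structure described above can be sketched as follows. This is an illustrative model only; the function names, the fixed period, and the data representation are assumptions, not part of the patent.

```python
# Hypothetical sketch of an IPPP... image group (GOP) and of the macroblock
# types each frame type may contain, per the description above: an I frame
# holds only I macroblocks, a P frame holds I and P, a B frame holds I, P, B.
ALLOWED_MACROBLOCKS = {
    "I": {"I"},
    "P": {"I", "P"},
    "B": {"I", "P", "B"},
}

def gop_frame_types(gop_length):
    """Return the frame types of one IPPP... image group.

    The first frame (the key frame) is intra-coded; the rest are
    forward-predicted P frames.
    """
    return ["I"] + ["P"] * (gop_length - 1)

def is_key_frame(index_in_gop):
    """The first frame of each image group is the key frame."""
    return index_in_gop == 0
```

A B-frame variant would simply replace some of the P entries, which the patent notes is allowed "in between" the P frames.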
  • However, for video data with such a periodic I-frame interval (IPPP..., possibly with some B frames in between), the data volume is excessive, considerable bandwidth resources are occupied, and the user experience suffers. In addition, for I macroblocks, one current practice is to randomly insert I macroblocks into P or B frames to reduce the impact of transmission errors; however, these randomly inserted I macroblocks are unnecessary when no error occurs, likewise occupy extra bandwidth resources, and accordingly yield a poor user experience.
  • The embodiments of the present invention provide a video processing method and device, so as to reduce the bandwidth resources occupied by video data and improve the user experience.
  • the embodiment of the invention provides a video processing method, including:
  • A first device encodes an original codec unit or a decoded codec unit of video data into a P or B type codec unit, with reference to a codec unit in the reference image buffer of its own encoder that can be correctly decoded by a second device;
  • the codec unit includes a frame or a macroblock;
  • the first device sends the encoded codec unit to the second device.
  • the embodiment of the invention further provides a first device, including:
  • an encoding module, configured to encode an original codec unit or a decoded codec unit of video data into a P or B type codec unit with reference to a codec unit in the reference image buffer of its own encoder that can be correctly decoded by the second device, where the codec unit includes a frame or a macroblock; and
  • a sending module configured to send the codec unit that is encoded by the encoding module to the second device.
  • With the embodiments of the present invention, the first device can encode the original codec unit or the decoded codec unit of the video data into a P or B type codec unit with reference to a codec unit in the reference image buffer of its own encoder that can be correctly decoded by the second device, and then send the encoded codec unit to the second device. Because a P or B type codec unit contains a relatively small amount of data, encoding the original or decoded codec unit of the video data as a P or B type codec unit reduces the bandwidth resources occupied by the video data and thus improves the user experience.
  • FIG. 1 is a flowchart of Embodiment 1 of a video processing method according to the present invention;
  • FIG. 2 is a schematic diagram of Embodiment 1 of an application scenario of the present invention;
  • FIG. 3 is a flowchart of Embodiment 2 of a video processing method according to the present invention;
  • FIG. 4 is a schematic diagram of Embodiment 2 of an application scenario of the present invention;
  • FIG. 5 is a schematic diagram of Embodiment 3 of an application scenario of the present invention;
  • FIG. 6 is a flowchart of Embodiment 3 of a video processing method according to the present invention;
  • FIG. 7 is a schematic structural diagram of Embodiment 1 of a first device of the present invention;
  • FIG. 8 is a schematic structural diagram of Embodiment 2 of the first device of the present invention.
  • Detailed description
  • FIG. 1 is a flowchart of Embodiment 1 of a video processing method according to the present invention. As shown in FIG. 1, the video processing method may include:
  • Step 101: The first device encodes the original codec unit or the decoded codec unit of the video data into a P or B type codec unit, with reference to a codec unit in the reference image buffer of its own encoder that can be correctly decoded by the second device, where the codec unit may be a frame or a macroblock.
  • Step 102: The first device sends the encoded codec unit to the second device.
  • In this embodiment, before the first device encodes the original codec unit or the decoded codec unit of the video data into a P or B type codec unit with reference to a codec unit in the reference image buffer of its own encoder that can be correctly decoded by the second device, the first device may further maintain a correspondence between transmitted data units and the codec units in the reference image buffer of its own encoder, and mark, according to transmission error feedback between the first device and the second device, the codec units in the reference image buffer of its own encoder that can be correctly decoded by the second device.
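The bookkeeping just described can be sketched as follows. All class and attribute names are assumptions made for illustration; the sketch only shows the idea of mapping transmitted data units onto codec units and promoting a codec unit to "safe reference" once every data unit carrying it is acknowledged without an error report.

```python
# Minimal sketch of the sender-side bookkeeping: a map from transmitted data
# units to codec units in the encoder's reference image buffer, plus marking
# based on per-data-unit transmission feedback (e.g. ACK/NACK).
class ReferenceBufferTracker:
    def __init__(self):
        self.unit_of_data_unit = {}  # data-unit id -> codec-unit id
        self.pending = {}            # codec-unit id -> set of unacknowledged data units
        self.decodable = set()       # codec units the receiver can correctly decode

    def record_transmission(self, data_unit_id, codec_unit_id):
        self.unit_of_data_unit[data_unit_id] = codec_unit_id
        self.pending.setdefault(codec_unit_id, set()).add(data_unit_id)

    def on_feedback(self, data_unit_id, ok):
        codec_unit_id = self.unit_of_data_unit[data_unit_id]
        if not ok:
            # A transmission error means this codec unit is not a safe reference.
            self.pending.pop(codec_unit_id, None)
            self.decodable.discard(codec_unit_id)
            return
        unacked = self.pending.get(codec_unit_id)
        if unacked is not None:
            unacked.discard(data_unit_id)
            if not unacked:  # every data unit of this codec unit arrived intact
                self.decodable.add(codec_unit_id)

    def safe_references(self):
        return set(self.decodable)
```

Real HARQ/ARQ feedback is of course richer than a boolean, but the marking rule reduces to this shape.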
  • In this embodiment, before the first device sends the encoded codec unit to the second device, when there is no codec unit in the reference image buffer of its own encoder that can be correctly decoded by the second device, the first device may encode the original codec unit or the decoded codec unit of the video data into an I type codec unit.
  • Further, after the first device sends the encoded codec unit to the second device, the second device may decode and re-encode the encoded codec unit, and, during re-encoding, encode some codec units into image-group key frame codec units, for example I type codec units, according to a predetermined period.
  • In this embodiment, before the first device encodes the decoded codec unit of the video data into a P or B type codec unit with reference to a codec unit in the reference image buffer of its own encoder that can be correctly decoded by the second device, the first device may further receive the video data from the network and decode the video data to obtain the decoded codec unit.
  • Further, the first device may also mark the codec units in the reference image buffer of the first device's encoder that can be correctly decoded by the second device, according to whether error concealment techniques were used in the process of decoding the video data and according to the transmission error feedback between the first device and the second device.
  • In addition, after the first device receives the video data from the network and before it decodes the video data, the first device may further determine the service flow corresponding to the video data according to at least one of an Internet Protocol (hereinafter: IP) address, a port number, a protocol number, and a network internal label of the video data, and acquire the parameters required to decode or re-encode the video data.
  • In this embodiment, the first device may decode only the I type codec units in the received video data to obtain decoded codec units, and then encode those decoded codec units into P or B type codec units with reference to the codec units in the reference image buffer of its own encoder that can be correctly decoded by the second device. For the P or B type codec units in the video data, the first device may send them directly to the second device.
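This selective path can be sketched as below. The unit representation and the re-encode stand-in (relabeling an I unit to "P") are assumptions for illustration; real re-encoding would run a predictive encoder against a safe reference.

```python
# Hedged sketch of the selective transcoding path: only I type codec units
# are decoded and re-encoded predictively; P or B units pass through unchanged.
def forward_stream(units, safe_reference_available=True):
    """units: list of dicts like {"id": 3, "type": "I"}.

    Returns the list of codec units to send to the second device.
    """
    out = []
    for unit in units:
        if unit["type"] == "I" and safe_reference_available:
            # Decode the I unit and re-encode it predictively against a
            # reference the receiver is known to decode correctly.
            out.append({"id": unit["id"], "type": "P"})
        else:
            # P/B units (or I units with no safe reference) pass through.
            out.append(unit)
    return out
```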
  • In the foregoing embodiment, the first device encodes the original codec unit or the decoded codec unit of the video data into a P or B type codec unit with reference to a codec unit in the reference image buffer of the first device's encoder that can be correctly decoded by the second device, and then sends the encoded codec unit to the second device. Because a P or B type codec unit contains a relatively small amount of data, this reduces the bandwidth resources occupied by the video data and thus improves the user experience.
  • FIG. 2 is a schematic diagram of Embodiment 1 of an application scenario of the present invention. The scenario shown in FIG. 2 is a downlink real-time video service scenario. As shown in FIG. 2, in this scenario the transport layer generally does not adopt end-to-end retransmission, so packet loss and packet errors are allowed to occur. The video source sends the downlink video data to a network side device. The network side network element may be a base station, a gateway, a core network device, or the like, which is not limited in this embodiment of the present invention.
  • FIG. 3 is a flowchart of Embodiment 2 of a video processing method according to the present invention. The video processing method provided in this embodiment can be applied to the application scenario shown in FIG. 2; a network side network element that is a base station is taken as an example for description. As shown in FIG. 3, the video processing method may include:
  • Step 301: The base station receives downlink video data, where the downlink video data contains I and P type frames or macroblocks, and may also contain B type frames or macroblocks.
  • Step 302: The base station determines the service flow corresponding to the downlink video data, and acquires the parameters required for decoding or re-encoding the downlink video data.
  • Specifically, the base station may determine the service flow corresponding to the downlink video data according to at least one of an IP address, a port, a protocol number, and a network internal label;
  • the base station may obtain the parameters required to decode or re-encode the downlink video data by intercepting and recording the video parameters carried in the service flow, or by decoding, or by interacting with the video source, or by interacting with the receiving device, or by configuration.
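Classifying packets into a service flow by header fields can be sketched as follows. The field names and the choice of a destination-based key are assumptions; the patent only requires that at least one of IP address, port, protocol number, or network internal label be used.

```python
# Illustrative sketch of step 302: map a packet to its service flow by an
# (IP address, port, protocol number) tuple, and look up the codec
# parameters recorded for that flow.
def flow_key(packet):
    return (packet["dst_ip"], packet["dst_port"], packet["protocol"])

class FlowTable:
    def __init__(self):
        self.params = {}  # flow key -> parameters needed to decode/re-encode

    def learn(self, packet, params):
        # Parameters may come from inspecting the flow, decoding, interacting
        # with the video source or receiver, or static configuration.
        self.params[flow_key(packet)] = params

    def lookup(self, packet):
        return self.params.get(flow_key(packet))
```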
  • Step 303: The base station decodes the received downlink video data, and marks the correctly decoded frames or macroblocks according to whether error concealment techniques were used in the decoding process.
  • Specifically, if error concealment techniques were used when decoding a frame or macroblock, the frame or macroblock is not correctly decoded; likewise, if any data referenced by a frame or macroblock was not correctly decoded, the frame or macroblock is considered not correctly decoded. Conversely, if the decoding of a frame or macroblock and of all the data it references used no error concealment techniques, the frame or macroblock is marked as a correctly decoded frame or macroblock.
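The step-303 marking rule propagates through the reference chain, which can be sketched as below. The data structures are assumptions for illustration: a frame counts as correctly decoded only if its own decoding used no error concealment and everything it references was itself correctly decoded.

```python
# Sketch of the step-303 rule: a frame is marked correctly decoded only if
# no error concealment was used for it AND all frames it references were
# themselves correctly decoded.
def mark_correctly_decoded(used_concealment, references):
    """used_concealment: {frame_id: bool}; references: {frame_id: [frame_id, ...]}.

    Returns the set of frame ids marked as correctly decoded.
    """
    marked = {}

    def ok(frame_id):
        if frame_id in marked:
            return marked[frame_id]
        marked[frame_id] = False  # guard against reference cycles
        good = not used_concealment[frame_id] and all(
            ok(ref) for ref in references.get(frame_id, [])
        )
        marked[frame_id] = good
        return good

    return {f for f in used_concealment if ok(f)}
```

Note how a single concealed frame taints every frame downstream of it, matching the error-propagation behavior of inter-coded frames described in the background.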
  • Step 304: The base station re-encodes the decoded frames or macroblocks into P or B type frames or macroblocks; the reference frames or macroblocks used in the encoding are frames or macroblocks in the base station encoder's reference image buffer that can be correctly decoded by the terminal.
  • Specifically, before re-encoding, the base station maintains in advance a correspondence between transmitted data units (such as hybrid automatic repeat request (hereinafter: HARQ) blocks, media access control (hereinafter: MAC) packets, automatic repeat request (hereinafter: ARQ) blocks, or IP packets) and the frames or macroblocks in the base station encoder's reference image buffer, and marks the frames or macroblocks in that buffer that can be correctly decoded by the terminal according to the marking result of step 303 and the transmission feedback (for example, HARQ feedback or ARQ feedback) for data already sent to the terminal. For example, if a frame or macroblock marked as correctly decoded in step 303 referenced only frames or macroblocks that can be correctly decoded by the terminal, and was itself correctly transmitted to the terminal (which can be determined, for example, by the absence of HARQ or ARQ error feedback), then the frame or macroblock can be considered correctly decodable at the terminal. The base station mainly marks the frames or macroblocks placed in its own encoder's reference image buffer.
  • In particular, if there is no suitable frame or macroblock in the base station encoder's reference image buffer, the base station may re-encode the originally decoded frame or macroblock into an I type frame or macroblock. In that case, once a frame or macroblock marked as correctly decoded in step 303 has been re-encoded into an I type frame or macroblock and correctly transmitted to the terminal, it is considered correctly decodable at the terminal.
  • Step 305: The base station sends the re-encoded video data to the terminal.
  • Alternatively, in step 304 only the I type frames or macroblocks in the received downlink video data may be re-encoded into P or B type frames or macroblocks, and the remaining data in the downlink video data may be sent directly to the terminal.
  • This embodiment takes a network side network element that is a base station as an example. When the network side network element is a gateway or a core network device, in step 304 the transmission feedback between the network side network element and the terminal relies on the base station reporting the air-interface transmission feedback (for example, HARQ feedback or ARQ feedback) to the network side network element; in step 305, the network side network element first transmits the re-encoded video data to the base station over a wired link, and the base station then sends it to the terminal over the air interface.
  • In the foregoing embodiment, the network side network element re-encodes the received video data, whose frames or macroblocks are of types I and P (optionally including B), using P or B type frames or macroblocks as much as possible during re-encoding, and then sends the encoded downlink video data to the terminal device. Because a P or B type codec unit contains a relatively small amount of data, using P or B type frames or macroblocks as much as possible during re-encoding reduces the bandwidth resources occupied by the video data and also improves the user experience.
  • FIG. 4 is a schematic diagram of Embodiment 2 of an application scenario of the present invention. The scenario shown in FIG. 4 is a non-real-time downlink scenario. As shown in FIG. 4, in this scenario errors may be eliminated by end-to-end retransmission between the video source and the terminal device; alternatively, the video source establishes a TCP connection with the network side network element, the network side network element establishes a TCP connection with the terminal device, and segmented TCP transmission is used to eliminate transmission errors.
  • In this embodiment, the network side network element may be some network element of the access network or the core network. The network side network element may convert the I frames in received downlink video data of an IPPP... periodic structure (the period may be fixed or unfixed, and some frames in between may be coded as B frames) into P or B frames, and may also re-encode the I macroblocks in the received video data into P or B macroblocks, and then send the re-encoded downlink video data to the terminal device.
  • Because transmission over different paths or retransmission of erroneous packets may cause the downlink video data received by the network side network element to be out of order, the network side network element may use any one of the following three methods to re-encode the I frames or macroblocks in the downlink video data into P or B type frames or macroblocks:
  • 1) the network side network element may convert the I frames or macroblocks in the received downlink video data into P or B type frames or macroblocks with reference to frames that the network side network element has correctly received and decoded;
  • 2) when the network side network element determines that the frames that an I frame or macroblock to be converted could reference are not in the network side network element's decoding buffer, the network side network element may refrain from converting that I frame or macroblock into a P or B type frame or macroblock;
  • 3) the network side network element waits for the suitable required reference frames or macroblocks to arrive and then converts the I frame or macroblock into a P or B type frame or macroblock.
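The three options above can be expressed as a small dispatcher. The option labels and the decision inputs are illustrative assumptions; the patent leaves the choice among the three methods open.

```python
# Hypothetical dispatcher over the three handling methods for an I frame
# arriving out of order: convert it using an already-decoded reference,
# wait for the reference to arrive, or leave it unconverted.
def handle_i_frame(reference_available, can_wait):
    if reference_available:
        return "convert"       # method 1: re-encode I -> P/B against a decoded frame
    if can_wait:
        return "wait"          # method 3: buffer until the needed reference arrives
    return "pass-through"      # method 2: do not convert this I frame
```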
  • Likewise, in this embodiment, the processing logic of the network side network element applies to a specific service flow. Therefore, before processing the received downlink video data, the network side network element needs to first determine the service flow corresponding to the downlink video data and acquire the parameters required to decode or re-encode the downlink video data. Specifically, the network side network element may determine the service flow corresponding to the downlink video data according to at least one of an IP address, a port, a protocol number, and a network internal label; the network side network element may obtain the parameters required to decode or re-encode the downlink video data by intercepting and recording the video parameters carried in the service flow, or by decoding, or by interacting with the video source, or by interacting with the receiving device, or by configuration.
  • In addition, to support the terminal device's control of local video cassette recorder (hereinafter: VCR) functions such as playback or skipping, the network side network element may decode the received video data and re-encode it so that key frames of an image group recur with a certain period, for example re-encoding it into a video sequence with an IPPP... periodic structure (possibly with some frames coded as B frames in between).
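Periodic key-frame placement of this kind can be sketched in one line. The fixed period is an assumption made for the sketch; the patent also allows unfixed periods.

```python
# Sketch of re-encoding with a periodic GOP structure so a player can seek:
# every `period`-th frame becomes an I key frame, the rest become P frames.
def reencode_types(frame_count, period):
    return ["I" if i % period == 0 else "P" for i in range(frame_count)]
```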
  • FIG. 5 is a schematic diagram of Embodiment 3 of an application scenario of the present invention. The scenario shown in FIG. 5 is an uplink real-time video service scenario. As shown in FIG. 5, the terminal device acts as the video source and sends real-time uplink video data, and the transport layer does not use end-to-end retransmission, so packet loss and packet errors are allowed. The network side (access network or core network) network element needs to deploy video transcoding.
  • FIG. 6 is a flowchart of Embodiment 3 of a video processing method according to the present invention. The video processing method provided in this embodiment may be applied to the application scenario shown in FIG. 5. As shown in FIG. 6, the video processing method may include:
  • Step 601: Except for the first frame of the uplink video data, the terminal encodes the video frames or macroblocks into P or B type frames or macroblocks; the reference frames or macroblocks used in the encoding are frames or macroblocks in the terminal encoder's reference image buffer that can be correctly decoded by the network side network element.
  • Specifically, the terminal maintains in advance a correspondence between transmitted data units (for example, HARQ blocks, MAC packets, ARQ blocks, or IP packets) and the frames or macroblocks in the terminal encoder's reference image buffer, and marks the frames or macroblocks in that buffer that can be correctly decoded by the network side according to the transmission feedback (for example, HARQ feedback or ARQ feedback). For example, if a frame or macroblock referenced only frames or macroblocks that can be correctly decoded by the network side network element, and was itself correctly transmitted to the network side network element (which can be determined, for example, by the absence of HARQ or ARQ error feedback), then the frame or macroblock can be considered correctly decodable at the network side network element. The terminal mainly marks the frames or macroblocks placed in its own encoder's reference image buffer.
  • In particular, when there is no suitable frame or macroblock in the terminal encoder's reference image buffer, the terminal encodes the frame or macroblock to be encoded into an I type frame or macroblock. An I type frame or macroblock is considered correctly decodable by the network side network element as long as it is correctly transmitted to the network side network element.
  • Step 602: The terminal sends the encoded video data to the network side network element.
  • Step 603: After receiving the video data sent by the terminal, the network side network element decodes and re-encodes it, and during re-encoding it re-encodes some frames into key frames of an image group according to a certain period, for example re-encoding into a video sequence with an IPPP... periodic structure (possibly with some frames coded as B frames).
  • Likewise, the network side network element's processing of the uplink video data applies to a specific service flow, so before decoding and re-encoding the received uplink video data, the network side network element needs to determine the service flow corresponding to the uplink video data and acquire the parameters required to decode or re-encode the uplink video data. Specifically, the network side network element may determine the service flow corresponding to the uplink video data according to at least one of an IP address, a port, a protocol number, and a network internal label; it may obtain the parameters required to decode or re-encode the uplink video data by intercepting and recording the video parameters carried in the service flow, or by decoding, or by interacting with the video source, or by interacting with the receiving device, or by configuration.
  • In the foregoing embodiment, the terminal device uses P or B type frames or macroblocks as much as possible when encoding the video data, thereby reducing the bandwidth resources occupied by the video data and improving the user experience.
  • The foregoing storage medium includes any medium that can store program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
  • FIG. 7 is a schematic structural diagram of Embodiment 1 of the first device of the present invention. The first device in this embodiment can implement the procedure of the embodiment shown in FIG. 1 of the present invention. As shown in FIG. 7, the first device may include:
  • an encoding module 71, configured to encode the original codec unit or the decoded codec unit of the video data into a P or B type codec unit with reference to a codec unit in the reference image buffer of its own encoder that can be correctly decoded by the second device, where the codec unit may be a frame or a macroblock; and
  • a sending module 72, configured to send the codec unit encoded by the encoding module 71 to the second device.
  • In this embodiment, the encoding module 71 encodes the original codec unit or the decoded codec unit of the video data into a P or B type codec unit with reference to the codec units in the reference image buffer of its own encoder that can be correctly decoded by the second device, and the sending module 72 then sends the encoded codec unit to the second device. Because a P or B type codec unit contains a relatively small amount of data, encoding the original or decoded codec unit as a P or B type codec unit reduces the bandwidth resources occupied by the video data and, in turn, improves the user experience.
  • FIG. 8 is a schematic structural diagram of Embodiment 2 of the first device of the present invention. It differs from the first device shown in FIG. 7 in that the first device shown in FIG. 8 may further include: a maintenance module 73, configured to maintain a correspondence between transmitted data units and the codec units in the reference image buffer of its own encoder; and
  • a marking module 74, configured to mark, according to transmission error feedback between the first device and the second device, the codec units in the reference image buffer of its own encoder that can be correctly decoded by the second device.
  • In this embodiment, when there is no codec unit in the reference image buffer of its own encoder that can be correctly decoded by the second device, the encoding module 71 may further encode the original codec unit or the decoded codec unit of the video data into an I type codec unit.
  • Further, the first device may further include:
  • a receiving module 75, configured to receive video data from a network; and
  • a decoding module 76, configured to decode the video data received by the receiving module 75 to obtain a decoded codec unit.
  • Further, the marking module 74 may also mark the codec units in the encoder's reference image buffer that can be correctly decoded by the second device, according to whether the decoding process of the decoding module 76 used error concealment techniques and according to the transmission error feedback between the first device and the second device.
  • In this embodiment, the decoding module 76 may decode only the I type codec units in the video data to obtain decoded codec units, and the sending module 72 may send the P or B type codec units in the video data directly to the second device.
  • Further, the first device may further include:
  • a determining module 77, configured to determine the service flow corresponding to the video data according to at least one of an IP address, a port number, a protocol number, and a network internal label of the video data; and
  • an obtaining module 78, configured to acquire the parameters required for decoding or re-encoding the video data.
  • In the foregoing embodiment, the encoding module 71 re-encodes the received video data, whose frames or macroblocks are of types I and P (optionally including B), using P or B types as much as possible during re-encoding, and the sending module 72 then sends the encoded downlink video data to the second device. Because a P or B type codec unit contains a relatively small amount of data, this reduces the bandwidth resources occupied by the video data and also improves the user experience.
  • The modules in the apparatuses of the embodiments may be distributed in the apparatus of the embodiment as described, or may, with corresponding changes, be located in one or more apparatuses different from those of the embodiment. The modules of the above embodiments may be combined into one module, or may be further split into a plurality of sub-modules.


Description

视频处理方法和设备
技术领域
本发明实施例涉及视频技术领域, 尤其涉及一种视频处理方法和设 备。 背景技术
随着通信技术的发展, 视频在移动网络中的应用越来越广泛。 但视频 业务在提供丰富信息的同时, 也占用了大部分的系统带宽, 成为运营商的 一大压力, 特别是对于移动运营商。 由于带宽有限, 视频业务容易造成移 动网络拥塞, 视频用户的体验下降, 同时给其他语音 /数据用户造成影响。
现有的视频编码格式中, 存在帧内编码帧和帧间编码帧的区分。 帧内 编码帧为独立编码的图像, 接收端不依赖于其他帧解码; 而接收端对帧间 编码帧的解码则需要依赖于之前收到的帧, 如果之前收到的帧发生丟失或 者传输错误, 则后续依赖这一帧解码的图像都会发生错误, 直至下一个帧 内编码帧正常解码。 帧内编码帧由于没有参考其他帧, 其编码后的数据量 一般比帧间编码帧的数据量大的多。 用 I代表帧内编码帧, P代表前向帧 间预测编码帧,则现有的许多视频,尤其是实时视频,都表现为 IPPP…(或 者中间有一些编码为 B帧, B帧为双向帧间预测编码, 一般用于非实时视 频) 的周期格式, 有些编码标准中称这种周期序列为图像组, 称图像组中 第一个 I帧为关键帧。 另外, I、 P和 B也可以代表宏块的编码类型, 含义 与帧的编码类型一样, 其中宏块是帧的编码单位。 I 帧中所有的宏块都编 码为 I宏块, 而 P帧中可以有 I和 P两种类型的宏块, B帧中可以有 I、 P 和 B三种类型的宏块。
但是, 对于 IPPP ... (或者中间有一些编码为 B帧)这种 I帧周期间隔 的视频数据, 存在视频数据量过大, 占用带宽资源较多, 用户体验不好的 问题。 另外, 对于 I宏块, 目前有些做法是在 P帧或 B帧中随机地插入 I 宏块, 以减小传输错误带来的影响; 但是, 这些随机插入的 I宏块在没有 错误发生时是不必要的, 同样存在占用带宽资源较多的问题, 相应地用户 体验度不高。 发明内容
本发明实施例提供一种视频处理方法和设备, 以减少视频数据占用的 带宽资源, 以及提高用户体验度。
本发明实施例提供一种视频处理方法, 包括:
第一设备参考自身编码器的参考图像緩冲区中能被第二设备正确解 码的编解码单元, 将视频数据的原始编解码单元或者解码后编解码单元编 码为 P或 B类型编解码单元; 所述编解码单元包括帧或宏块;
所述第一设备将编码后的编解码单元发送给所述第二设备。
本发明实施例还提供一种第一设备, 包括:
编码模块, 用于参考自身编码器的参考图像緩冲区中能被第二设备正 确解码的编解码单元, 将视频数据的原始编解码单元或者解码后编解码单 元编码为 P或 B类型编解码单元; 所述编解码单元包括帧或宏块;
发送模块, 用于将所述编码模块编码后的编解码单元发送给所述第二 设备。
通过本发明实施例, 第一设备可以参考自身编码器的参考图像緩冲区 中能被第二设备正确解码的编解码单元, 将视频数据的原始编解码单元或 者解码后编解码单元编码为 P或 B类型编解码单元, 然后将编码后的编解 码单元发送给第二设备。 由于 P或 B类型编解码单元包含的数据量较小, 因此将视频数据的原始编解码单元或者解码后编解码单元编码为 P或 B类 型编解码单元可以减少视频数据占用的带宽资源, 进而可以提高用户体验 度。 附图说明
为了更清楚地说明本发明实施例或现有技术中的技术方案, 下面将对实 施例或现有技术描述中所需要使用的附图作一简单地介绍, 显而易见地, 下 面描述中的附图是本发明的一些实施例, 对于本领域普通技术人员来讲, 在 不付出创造性劳动的前提下, 还可以根据这些附图获得其他的附图。
图 1为本发明视频处理方法实施例一的流程图;
图 2为本发明应用场景实施例一的示意图;
图 3为本发明视频处理方法实施例二的流程图; 图 4为本发明应用场景实施例二的示意图;
图 5为本发明应用场景实施例三的示意图;
图 6为本发明视频处理方法实施例三的流程图;
图 7为本发明第一设备实施例一的结构示意图;
图 8为本发明第一设备实施例二的结构示意图。 具体实施方式
为使本发明实施例的目的、 技术方案和优点更加清楚, 下面将结合本 发明实施例中的附图, 对本发明实施例中的技术方案进行清楚、 完整地描 述, 显然, 所描述的实施例是本发明一部分实施例, 而不是全部的实施例。 基于本发明中的实施例, 本领域普通技术人员在没有做出创造性劳动的前 提下所获得的所有其他实施例, 都属于本发明保护的范围。
图 1为本发明视频处理方法实施例一的流程图, 如图 1所示, 该视频 处理方法可以包括:
步骤 1 01 , 第一设备参考自身编码器的参考图像緩冲区中能被第二设 备正确解码的编解码单元, 将视频数据的原始编解码单元或者解码后编解 码单元编码为 P或 B类型编解码单元, 其中, 上述编解码单元可以为帧或 宏块。
步骤 1 02 , 第一设备将编码后的编解码单元发送给第二设备。
本实施例中, 在第一设备参考自身编码器的参考图像緩冲区中能被第 二设备正确解码的编解码单元, 将视频数据的原始编解码单元或者解码后 编解码单元编码为 P或 B类型编解码单元之前, 该第一设备还可以维护已 发送数据单元和自身编码器的参考图像緩冲区中编解码单元之间的对应 关系, 并根据第一设备与第二设备之间的传输错误反馈标记自身编码器的 参考图像緩冲区中能被第二设备正确解码的编解码单元。
本实施例中, 在第一设备将编码后的编解码单元发送给第二设备之 前, 当自身编码器的参考图像緩冲区中没有能被第二设备正确解码的编解 码单元时, 该第一设备可以将该视频数据的原始编解码单元或者解码后编 解码单元编码为 I类型编解码单元。
进一步地, 在第一设备将编码后的编解码单元发送给第二设备之后, 第二设备可以对编码后的编解码单元进行解码和重新编码, 在重新编码过 程中按照预定周期将部分编解码单元编码为图像组关键帧编解码单元, 例 如: I类型编解码单元。
本实施例中, 在第一设备参考自身编码器的参考图像緩冲区中能被第 二设备正确解码的编解码单元, 将视频数据的解码后编解码单元编码为 P 或 B类型编解码单元之前, 第一设备还可以从网络接收上述视频数据, 并 对该视频数据进行解码, 获得解码后编解码单元。
进一步地, 第一设备还可以根据对所述视频数据进行解码的过程是否 使用错误隐藏技术以及第一设备与第二设备之间的传输错误反馈, 标记第 一设备编码器的参考图像緩冲区中能被第二设备正确解码的编解码单元。
另外, 第一设备从网络接收视频数据之后, 对该视频数据进行解码之 前,第一设备还可以根据上述视频数据的因特网协议( Interne t Protoco l ; 以下简称: IP )地址、 端口号、 协议号和网络内部标签中的至少一个确定 该视频数据对应的业务流; 并获取对该视频数据进行解码或重编码所需的 参数。
本实施例中, 第一设备可以只对接收的视频数据中的 I类型编解码单 元进行解码, 获得解码后编解码单元, 然后将该解码后编解码单元参考自 身编码器的参考图像緩冲区中能被第二设备正确解码的编解码单元编码 为 P或 B类型编解码单元。而对于视频数据中的 P或 B类型的编解码单元, 第一设备可以将该视频数据中的 P或 B类型的编解码单元直接发送给第二 设备。
上述实施例中, 第一设备将视频数据的原始编解码单元或者解码后编 解码单元参考该第一设备编码器的参考图像緩冲区中可以被第二设备正 确解码的编解码单元编码为 P或 B类型编解码单元, 然后将编码后的编解 码单元发送给第二设备。 由于 P或 B类型编解码单元包含的数据量较小, 因此将视频数据的原始编解码单元或者解码后编解码单元编码为 P或 B类 型编解码单元可以减少视频数据占用的带宽资源, 进而可以提高用户体验 度。
图 2为本发明应用场景实施例一的示意图, 图 2所示的应用场景为下 行实时视频业务场景, 如图 2所示, 本场景中, 传输层一般不采用端到端 重传的方式, 允许有丟包 /错包的发生。 视频源将下行视频数据发送给网 备。 网络侧网元可以是基站、 网关或者核心网设备等, 本发明实施例对此 不作限定。
图 3为本发明视频处理方法实施例二的流程图, 本实施例提供的视频 处理方法可以应用于图 2所示的应用场景中, 本实施例以网络侧网元为基 站为例进行说明。
如图 3所示, 该视频处理方法可以包括:
步骤 301 , 基站接收下行视频数据, 该下行视频数据包含 I和 P类型 的帧或宏块, 也可能包含 B类型帧或宏块。
步骤 302 , 基站确定下行视频数据对应的业务流, 并获取对该下行视 频数据进行解码或重编码所需的参数。
具体地, 基站可以根据 IP地址、 端口、 协议号和网络内部标签中的 至少一个确定下行视频数据对应的业务流;
基站可以通过截取并记录业务流中传递的视频参数, 或通过解码, 或 通过与视频源交互, 或通过与接收端设备交互, 或通过设置的方式获得对 该下行视频数据进行解码或重编码所需的参数。
步骤 303 , 基站对接收的下行视频数据进行解码, 并根据解码过程是 否使用错误隐藏技术标记被正确解码的帧或宏块。
具体地, 如果一个帧或宏块在解码时使用了错误隐藏技术, 则该帧或 宏块没有正确解码; 或者如果一个帧或宏块所参考的数据没有正确解码, 则认为该帧或宏块也没有正确解码。 相反地, 如果一个帧或宏块的解码以 及其所有参考的数据都没有使用错误隐藏技术, 则标记该帧或宏块为正确 解码的帧或宏块。
Step 304: The base station re-encodes the decoded frames or macroblocks into P- or B-type frames or macroblocks, using as reference frames or macroblocks those in the base station encoder's reference picture buffer that can be correctly decoded by the terminal.

Specifically, before re-encoding, the base station maintains in advance the correspondence between sent data units (such as Hybrid Automatic Repeat Request (HARQ) blocks, Media Access Control (MAC) packets, Automatic Repeat Request (ARQ) blocks, or IP packets) and the frames or macroblocks in the base station encoder's reference picture buffer, and marks the frames or macroblocks in the base station encoder's reference picture buffer that can be correctly decoded by the terminal, according to the marking result of step 303 and the transmission feedback for the data sent between the base station and the terminal (for example, HARQ feedback or ARQ feedback). For example, if a frame or macroblock marked as correctly decoded in step 303 referenced frames or macroblocks that can be correctly decoded by the terminal, and was itself correctly transmitted to the terminal — which can be determined, for example, from the absence of HARQ or ARQ error feedback — then that frame or macroblock can be considered correctly decodable at the terminal. The base station mainly marks the frames or macroblocks placed in its own encoder's reference picture buffer.

In particular, if there is no suitable frame or macroblock in the base station encoder's reference picture buffer, the base station may re-encode the previously decoded frame or macroblock into an I-type frame or macroblock. In this case, after a frame or macroblock marked as correctly decoded in step 303 is re-encoded into an I-type frame or macroblock and correctly transmitted to the terminal, it is considered correctly decodable at the terminal.
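Step 304 and its I-type fallback can be condensed into a single decision. This is an illustrative sketch with invented names, not the patented procedure itself:

```python
def choose_encoding(ref_buffer, decodable_at_terminal):
    """Pick a reference for P-type re-encoding from the encoder's
    reference picture buffer; fall back to an I-type unit when no
    buffered reference is known to be decodable at the terminal."""
    usable = [u for u in ref_buffer if u in decodable_at_terminal]
    if usable:
        return ("P", usable[-1])  # most recent usable reference
    return ("I", None)            # no suitable reference: encode as I
```

Preferring the most recent usable reference keeps the temporal distance — and thus the residual to encode — small.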
Step 305: The base station sends the re-encoded video data to the terminal.

Alternatively, in step 304 only the I-type frames or macroblocks in the received downlink video data may be re-encoded into P- or B-type frames or macroblocks, while the remaining data in the downlink video data is sent directly to the terminal.

This embodiment is described taking a base station as the network-side network element. When the network-side network element is a gateway or a core-network device, in step 304 the transmission feedback between the network-side network element and the terminal relies on the base station reporting the air-interface transmission feedback (for example, HARQ feedback or ARQ feedback) to the network-side network element; in step 305 the network-side network element first transmits the re-encoded video data to the base station over a wired link, and the base station then sends it to the terminal over the air interface.

In the above embodiment, the network-side network element re-encodes video data containing I- and P-type (and optionally B-type) frames or macroblocks, using P- or B-type frames or macroblocks as much as possible during re-encoding, and then sends the encoded downlink video data to the terminal device. Because P- and B-type codec units carry less data, using P- or B-type frames or macroblocks as much as possible during re-encoding reduces the bandwidth consumed by the video data and also improves the user experience.

FIG. 4 is a schematic diagram of the second application-scenario embodiment of the present invention. The scenario shown in FIG. 4 is a non-real-time downlink scenario. In this scenario, the video source and the terminal device may eliminate errors through end-to-end retransmission; alternatively, the video source establishes a TCP connection with the network-side network element, the network-side network element establishes a TCP connection with the terminal device, and transmission errors are eliminated through segmented TCP transmission.

In this embodiment, the network-side network element may be a network element of the access network or the core network. The network-side network element may convert the I frames in received downlink video data with an IPPP... periodic structure (the period may be fixed or variable, and some intermediate frames may be encoded as B frames) into P or B frames, or re-encode the I macroblocks in the received video data into P or B macroblocks, and then send the re-encoded downlink video data to the terminal device.
Because multi-path transmission or retransmission of erroneous packets may cause the network-side network element to receive downlink video data out of order, the network-side network element may use any of the following three approaches to re-encode the I frames or macroblocks in the downlink video data into P- or B-type frames or macroblocks:

1) The network-side network element may convert the I frames or macroblocks in the received downlink video data into P- or B-type frames or macroblocks with reference to the frames it has correctly received and decoded;

2) When the network-side network element determines that the frames that an I frame or macroblock to be converted could reference are not in its decoding buffer, it may refrain from converting that I frame or macroblock into a P- or B-type frame or macroblock;

3) The network-side network element waits for suitable required reference frames or macroblocks to arrive before converting the I frame or macroblock into a P- or B-type frame or macroblock.
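The three options above can be sketched as one dispatch function; the policy names and buffer layout are invented for illustration:

```python
def handle_i_unit(decoded_buffer, policy="convert"):
    """Decide what to do with an incoming I frame/macroblock when data
    may arrive out of order. decoded_buffer holds dicts with 'id' and
    'ok' (correctly received and decoded)."""
    refs = [u for u in decoded_buffer if u["ok"]]
    if refs:
        return ("P_or_B", refs[-1]["id"])  # option 1: convert using a decoded reference
    if policy == "wait":
        return ("defer", None)             # option 3: wait for the reference to arrive
    return ("pass_through", None)          # option 2: leave the I unit unconverted
```

Deferring trades latency for bandwidth, while passing the I unit through does the opposite; which policy applies would depend on the service's delay budget.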
Likewise, in this embodiment the processing logic of the network-side network element applies to one specific service flow. Therefore, before processing the received downlink video data, the network-side network element needs to determine the service flow corresponding to the downlink video data and obtain the parameters required for decoding or re-encoding the downlink video data. Specifically, the network-side network element may determine the service flow corresponding to the downlink video data according to at least one of the IP address, port, protocol number, and network-internal label; the network-side network element may obtain the parameters required for decoding or re-encoding the downlink video data by intercepting and recording the video parameters carried in the service flow, by decoding, by interacting with the video source, by interacting with the receiving device, or by configuration.

In addition, in this embodiment the terminal device may optionally, for local Video Cassette Recorder (VCR)-style control needs such as playback and seeking, re-encode some frames of the received and decoded video data into group-of-pictures key frames at a certain period, for example into a video sequence with an IPPP... periodic structure (possibly with some intermediate frames encoded as B frames). FIG. 5 is a schematic diagram of the third application-scenario embodiment of the present invention. The scenario shown in FIG. 5 is an uplink real-time video service scenario. In this scenario, the terminal device acts as the video source and sends real-time uplink video data; the transport layer does not use end-to-end retransmission, and packet loss or packet errors are allowed. The network-side (access-network or core-network) network element needs to deploy a video transcoding function.
FIG. 6 is a flowchart of the third embodiment of the video processing method of the present invention. The method provided in this embodiment may be applied to the application scenario shown in FIG. 5 of the present invention. As shown in FIG. 6, the video processing method may include:

Step 601: Except for the first frame of the uplink video data, the terminal encodes video frames or macroblocks into P- or B-type frames or macroblocks, using as reference frames or macroblocks those in the terminal encoder's reference picture buffer that can be correctly decoded by the network-side network element.
Specifically, before encoding, the terminal maintains in advance the correspondence between sent data units (for example, HARQ blocks, MAC packets, ARQ blocks, or IP packets) and the frames or macroblocks in the terminal encoder's reference picture buffer, and marks the frames or macroblocks in the terminal encoder's reference picture buffer that can be correctly decoded by the network side according to the transmission feedback for the sent data (for example, HARQ feedback or ARQ feedback). For example, if a frame or macroblock referenced frames or macroblocks that can be correctly decoded by the network-side network element, and was itself correctly transmitted to the network-side network element — which can be determined, for example, from the absence of HARQ or ARQ error feedback — then that frame or macroblock is considered correctly decodable at the network-side network element. The terminal mainly marks the frames or macroblocks placed in its own encoder's reference picture buffer.
In particular, if there is no suitable frame or macroblock in the terminal encoder's reference picture buffer, the terminal encodes the frame or macroblock to be encoded into an I-type frame or macroblock. In this case, once an I-type frame or macroblock has been correctly transmitted to the network-side network element, it is considered correctly decodable at the network-side network element.

Step 602: The terminal sends the encoded video data to the network-side network element.
Step 603: After receiving the video data sent by the terminal, the network-side network element decodes and re-encodes it, and during re-encoding re-encodes some frames into group-of-pictures key frames at a certain period, for example into a video sequence with an IPPP... periodic structure (possibly with some intermediate frames encoded as B frames).
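At its simplest, the periodic key-frame insertion in step 603 reduces to a fixed group-of-pictures schedule; the period value below is an arbitrary illustration:

```python
def gop_type(frame_index, gop_size=30):
    """Frame type under a fixed IPPP... period: an I frame at the start
    of each group of pictures, P frames otherwise."""
    return "I" if frame_index % gop_size == 0 else "P"
```

A variable period, or B frames between the P frames, would replace the modulus test with a more elaborate schedule.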
Likewise, in this embodiment the processing logic of the network-side network element for the uplink video data applies to one specific service flow. Therefore, before decoding and re-encoding the received uplink video data, the network-side network element needs to determine the service flow corresponding to the uplink video data and obtain the parameters required for decoding or re-encoding the uplink video data. Specifically, the network-side network element may determine the service flow corresponding to the uplink video data according to at least one of the IP address, port, protocol number, and network-internal label; the network-side network element may obtain the parameters required for decoding or re-encoding the uplink video data by intercepting and recording the video parameters carried in the service flow, by decoding, by interacting with the video source, by interacting with the receiving device, or by configuration.

In this embodiment, the terminal device uses P- or B-type frames or macroblocks as much as possible when encoding the video data, which reduces the bandwidth consumed by the video data and also improves the user experience.

Persons of ordinary skill in the art may understand that all or part of the steps of the above method embodiments may be implemented by hardware instructed by a program. The foregoing program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The foregoing storage medium includes various media capable of storing program code, such as a ROM, a RAM, a magnetic disk, or an optical disc.
FIG. 7 is a schematic structural diagram of the first embodiment of the first device of the present invention. The first device in this embodiment may implement the procedure of the embodiment shown in FIG. 1 of the present invention. As shown in FIG. 7, the first device may include:

an encoding module 71, configured to encode an original codec unit or a decoded codec unit of video data into a P- or B-type codec unit with reference to the codec units in the reference picture buffer of its own encoder that can be correctly decoded by a second device, where a codec unit may be a frame or a macroblock; and a sending module 72, configured to send the codec unit encoded by the encoding module 71 to the second device.

In the above first device, the encoding module 71 encodes the original or decoded codec units of the video data into P- or B-type codec units with reference to the codec units in the reference picture buffer of its own encoder that can be correctly decoded by the second device, and the sending module 72 then sends the encoded codec units to the second device. Because P- and B-type codec units carry less data, the encoding module 71 encoding the original or decoded codec units of the video data into P- or B-type codec units reduces the bandwidth consumed by the video data and thereby improves the user experience.
FIG. 8 is a schematic structural diagram of the second embodiment of the first device of the present invention. Compared with the first device shown in FIG. 7, the difference is that the first device shown in FIG. 8 may further include: a maintenance module 73, configured to maintain the correspondence between sent data units and the codec units in the reference picture buffer of its own encoder; and

a marking module 74, configured to mark, according to the transmission error feedback between the first device and the second device, the codec units in the reference picture buffer of its own encoder that can be correctly decoded by the second device.

In this embodiment, when there is no codec unit in the reference picture buffer of its own encoder that can be correctly decoded by the second device, the encoding module 71 may also encode the original codec unit or the decoded codec unit of the video data into an I-type codec unit.

Further, the first device may also include:

a receiving module 75, configured to receive the video data from the network; and

a decoding module 76, configured to decode the video data received by the receiving module 75 to obtain the decoded codec unit.

In this embodiment, the marking module 74 may also mark the codec units in the reference picture buffer of its own encoder that can be correctly decoded by the second device, according to whether the decoding module 76 uses error concealment techniques in decoding the video data and according to the transmission error feedback between the first device and the second device.

Specifically, the decoding module 76 may decode only the I-type codec units in the video data to obtain the decoded codec units. In this case, the sending module 72 may send the P- or B-type codec units in the video data directly to the second device.

Further, the first device may also include:

a determining module 77, configured to determine the service flow corresponding to the video data according to at least one of the IP address, port number, protocol number, and network-internal label of the video data; and

an obtaining module 78, configured to obtain the parameters required for decoding or re-encoding the video data.

In the above first device, the encoding module 71 re-encodes received video data containing I- and P-type (and optionally B-type) frames or macroblocks, using P or B types as much as possible during re-encoding, and the sending module 72 then sends the encoded downlink video data to the second device. Because P- and B-type codec units carry less data, the encoding module 71 encoding the original or decoded codec units of the video data into P- or B-type codec units reduces the bandwidth consumed by the video data and also improves the user experience.
Persons skilled in the art may understand that the accompanying drawings are merely schematic diagrams of preferred embodiments, and that the modules or procedures in the drawings are not necessarily required for implementing the present invention.

Persons skilled in the art may understand that the modules in the apparatus of an embodiment may be distributed in the apparatus of the embodiment as described, or may be correspondingly changed and located in one or more apparatuses different from those of this embodiment. The modules of the above embodiments may be combined into one module, or further split into multiple sub-modules.

Finally, it should be noted that the above embodiments are merely intended to illustrate, rather than limit, the technical solutions of the present invention. Although the present invention has been described in detail with reference to the foregoing embodiments, persons of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some technical features therein, and that such modifications or replacements do not depart the essence of the corresponding technical solutions from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims
1. A video processing method, characterized by comprising:

encoding, by a first device, an original codec unit or a decoded codec unit of video data into a P- or B-type codec unit with reference to codec units in a reference picture buffer of the first device's own encoder that can be correctly decoded by a second device, wherein the codec unit comprises a frame or a macroblock; and

sending, by the first device, the encoded codec unit to the second device.
2. The method according to claim 1, characterized in that before the first device encodes the original codec unit or the decoded codec unit of the video data into the P- or B-type codec unit with reference to the codec units in the reference picture buffer of its own encoder that can be correctly decoded by the second device, the method further comprises:

maintaining, by the first device, a correspondence between sent data units and the codec units in the reference picture buffer of its own encoder, and marking, according to transmission error feedback between the first device and the second device, the codec units in the reference picture buffer of its own encoder that can be correctly decoded by the second device.

3. The method according to claim 2, characterized in that before the first device sends the encoded codec unit to the second device, the method further comprises:

when there is no codec unit in the reference picture buffer of its own encoder that can be correctly decoded by the second device, encoding, by the first device, the original codec unit or the decoded codec unit of the video data into an I-type codec unit.
4. The method according to claim 1, 2, or 3, characterized in that after the first device sends the encoded codec unit to the second device, the method further comprises:

decoding and re-encoding, by the second device, the encoded codec unit, and during re-encoding, encoding some codec units into group-of-pictures key-frame codec units at a predetermined period.

5. The method according to claim 2 or 3, characterized in that before the first device encodes the decoded codec unit of the video data into the P- or B-type codec unit with reference to the codec units in the reference picture buffer of its own encoder that can be correctly decoded by the second device, the method further comprises:

receiving, by the first device, the video data from a network, and decoding the video data to obtain the decoded codec unit.
6. The method according to claim 5, characterized by further comprising: marking, by the first device, the codec units in the reference picture buffer of its own encoder that can be correctly decoded by the second device, according to whether error concealment techniques are used in decoding the video data and according to the transmission error feedback between the first device and the second device.

7. The method according to claim 5, characterized in that the decoding the video data to obtain the decoded codec unit comprises:

decoding, by the first device, I-type codec units in the video data to obtain the decoded codec unit.

8. The method according to claim 7, characterized by further comprising:

sending, by the first device, P- or B-type codec units in the video data directly to the second device.
9. A first device, characterized by comprising:

an encoding module, configured to encode an original codec unit or a decoded codec unit of video data into a P- or B-type codec unit with reference to codec units in a reference picture buffer of the first device's own encoder that can be correctly decoded by a second device, wherein the codec unit comprises a frame or a macroblock; and

a sending module, configured to send the codec unit encoded by the encoding module to the second device.
10. The device according to claim 9, characterized by further comprising:

a maintenance module, configured to maintain a correspondence between sent data units and the codec units in the reference picture buffer of its own encoder; and

a marking module, configured to mark, according to transmission error feedback between the first device and the second device, the codec units in the reference picture buffer of its own encoder that can be correctly decoded by the second device.

11. The device according to claim 10, characterized in that

the encoding module is further configured to: when there is no codec unit in the reference picture buffer of its own encoder that can be correctly decoded by the second device, encode the original codec unit or the decoded codec unit of the video data into an I-type codec unit.
12. The device according to claim 10 or 11, characterized by further comprising: a receiving module, configured to receive the video data from a network; and a decoding module, configured to decode the video data received by the receiving module to obtain the decoded codec unit.

13. The device according to claim 12, characterized in that

the marking module is further configured to mark the codec units in the reference picture buffer of its own encoder that can be correctly decoded by the second device, according to whether the decoding module uses error concealment techniques in decoding the video data and according to the transmission error feedback between the first device and the second device.

14. The device according to claim 12, characterized in that the decoding module is specifically configured to decode I-type codec units in the video data to obtain the decoded codec unit.

15. The device according to claim 14, characterized in that

the sending module is further configured to send P- or B-type codec units in the video data directly to the second device.
PCT/CN2011/083526 2011-03-03 2011-12-06 Video processing method and device WO2012116563A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201110050993.XA CN102655604B (zh) 2011-03-03 2011-03-03 Video processing method and device
CN201110050993.X 2011-03-03

Publications (1)

Publication Number Publication Date
WO2012116563A1 true WO2012116563A1 (zh) 2012-09-07

Family

ID=46731121

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2011/083526 WO2012116563A1 (zh) 2011-03-03 2011-12-06 Video processing method and device

Country Status (2)

Country Link
CN (1) CN102655604B (zh)
WO (1) WO2012116563A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170188220A1 (en) * 2014-05-30 2017-06-29 Nokia Solutions And Networks Oy Proximity-based communications, network assisted device discovery
CN110636332A (zh) * 2019-10-21 2019-12-31 山东小桨启航科技有限公司 一种视频处理方法、装置及计算机可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1470133A (zh) * 2000-08-14 2004-01-21 ��˹��ŵ�� 视频编码
CN1606885A (zh) * 2001-12-20 2005-04-13 汤姆森特许公司 Mpeg视频记录介质和再现装置
CN101193312A (zh) * 2006-11-22 2008-06-04 中兴通讯股份有限公司 基于反馈的自适应错误恢复装置、视频通信系统和方法
CN101589616A (zh) * 2007-01-22 2009-11-25 高通股份有限公司 用以区分反向链路与前向链路视频数据错误的错误过滤器


Also Published As

Publication number Publication date
CN102655604B (zh) 2016-06-22
CN102655604A (zh) 2012-09-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11859909

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11859909

Country of ref document: EP

Kind code of ref document: A1